---
abstract: 'Emission-line variability data on NGC 5548 argue strongly for the existence of a mass of order $7 \times 10^7\,M_\odot$ within the inner few light days of the nucleus of the Seyfert 1 galaxy NGC 5548. The time-delayed response of the emission lines to continuum variations is used to infer the size of the line-emitting region, and these determinations are combined with measurements of the Doppler widths of the variable line components to estimate a virial mass. The data for several different emission lines spanning an order of magnitude in distance from the central source show the expected $V \propto r^{-1/2}$ correlation and are consistent with a single value for the mass.'
author:
- 'Bradley M. Peterson and Amri Wandel'
title: 'Keplerian Motion of Broad-Line Region Gas as Evidence for Supermassive Black Holes in Active Galactic Nuclei'
---
Introduction
============
Since the earliest days of quasar research, supermassive black holes (SBHs) have been considered to be a likely, if not the most likely, principal agent of the activity in these sources. Evidence for the existence of SBHs in active galactic nuclei (AGNs), and indeed in non-active nuclei as well, has continued to accumulate (e.g., Kormendy & Richstone 1995). In the specific case of AGNs, probably the strongest evidence to date for SBHs has been Keplerian motions of megamaser sources in the Seyfert galaxy NGC 4258 (Miyoshi et al. 1995) and asymmetric Fe K$\alpha$ emission in the X-ray spectra of AGNs (e.g., Tanaka et al. 1995), though the latter is still somewhat controversial as the origin of the Fe K$\alpha$ emission has not been settled definitively.
The kinematics of the broad-line region (BLR) potentially provide a means of measuring the central masses of AGNs. A virial estimate of the central mass, $M \approx r \sigma^2/G$, can be made by using the line velocity width $\sigma$, which is typically several thousands of kilometers per second, and the size of the emission-line region $r$. For this to be meaningful, we must know that the BLR gas motions are dominated by gravity, and we must have some reliable estimate of the BLR size. The size of the BLR can be measured by reverberation mapping (Blandford & McKee 1982), and this has been done for more than two dozen AGNs. Whether or not the broad emission-line widths actually reflect virial motion is still somewhat problematic: while the relative response time scales for the blueshifted and redshifted wings of the lines reveal no strong signature of outflow, there are still viable models with non-gravitationally driven cloud motions. However, if the kinematics of the BLR can be proven to be gravitationally dominated, then the BLR provides an even more definitive demonstration of the existence of SBHs than megamaser kinematics because the BLR is more than two orders of magnitude closer to the central source than the megamaser sources. Recent investigations of AGN virial mass estimates based on BLR sizes have been quite promising (e.g., Wandel 1997; Laor 1998) and suggest that this method ought to be pursued.
In this Letter, we argue that the broad emission-line variability data on one of the best-studied AGNs, the Seyfert 1 galaxy NGC 5548, demonstrate that the BLR kinematics are Keplerian, i.e., that the emission-line cloud velocities are dominated by a central mass of order $7 \times 10^{7}\,M_\odot$ within the inner few light days ($r \lesssim 5 \times 10^{15}$ cm). We believe that this strongly supports the hypothesis that SBHs reside in the nuclei of active galaxies.
Methodology
===========
Measurement of virial masses from emission lines requires (1) determination of the BLR size, (2) measurement of the emission-line velocity dispersion, and (3) a demonstration that the kinematics are gravitationally dominated. A correlation between the BLR size and line-width of the form $r \propto \sigma^{-2}$ is consistent with a wide variety of gravitationally dominated kinematics. It thus provides good evidence for such a dynamical scenario, although alternative pictures which contrive to produce a similar result cannot be ruled out. Indeed, the absence of such a relationship has been regarded as the missing item in AGN SBH measurements (Richstone et al. 1998).
For gravitationally dominated dynamics, the size–line-width relationship must hold for all lines at all times. To test this, we consider the case of NGC 5548, which has been the subject of extensive UV and optical monitoring campaigns by the International AGN Watch consortium[^1] (Alloin et al. 1994) for more than ten years. The data are from UV monitoring programs undertaken with the [*International Ultraviolet Explorer (IUE)*]{} in 1989 (Clavel et al. 1991) and with [*IUE*]{} and [*Hubble Space Telescope (HST)*]{} in 1993 (Korista et al. 1995), plus ground-based spectroscopy from 1989 to 1996 (Peterson et al. 1999 and references therein). We consider the response of a variety of lines in two separate observing seasons (1989 and 1993) and the response of H$\beta$ over an eight-year period.
Cross-correlation of the continuum and emission-line light curves yields a time delay or “lag” that is interpreted as the light-travel time across the BLR. Specifically, the centroid of the cross-correlation function (CCF) $\tau_{\rm cent}$ times the signal propagation speed $c$ is the responsivity-weighted mean radius of the BLR for that particular emission line (Koratkar & Gaskell 1991).
We have measured $\tau_{\rm cent}$ for various emission lines using light curves of NGC 5548 in the AGN Watch data base and the interpolation cross-correlation method as described by White & Peterson (1994). The UV measurements for 1989 are the GEX values from Clavel et al. (1991). The UV measurements for 1993 are taken from Tables 12–14 and 16–17 of Korista et al. (1995). The optical data for 1989–1993 are from Wanders & Peterson (1996) and from Peterson et al. (1999) for 1994–1996. Uncertainties in these values were determined as described by Peterson et al. (1998b). The results are given in Table 1, in which columns (1) and (2) give the epoch of the observations and the emission line, respectively. Column (3) gives the lag $\tau_{\rm cent}$ and its associated uncertainties.
Emission-line widths are not simple to measure on account of contamination by emission from the narrow-line region, and in some cases, contamination from other broad lines. We have circumvented this problem by using a large number of individual spectra to compute mean and root-mean-square (rms) spectra, and we measure the width of the emission features in the rms spectrum. The advantage of this approach is that constant or slowly varying components of the spectrum do not appear in the rms spectrum, and the emission features in the rms spectrum accurately represent the parts of the emission line that are varying, and for which the time delays are measured (Peterson et al. 1998a). This technique requires very homogeneous spectra: for the 1989 UV spectrum, we used the GEX-extracted SWP spectra. For the 1993 UV spectrum, we used the [*HST*]{} FOS spectra, excluding those labeled “dropouts” by Korista et al. (1995) which were not optimally centered in the FOS aperture. For the optical spectra through 1993, we used the homogeneous subset analyzed by Wanders & Peterson (1996), and a similar subset for 1994–1996. In each rms spectrum, we determined the full-width at half-maximum (FWHM) of each measurable line, with a range of uncertainty estimated by the highest and lowest plausible settings of the underlying continuum. The line widths are given as line-of-sight Doppler widths in kilometers per second in column (4) of Table 1.
Each emission line provides an independent measurement of the virial mass of the AGN in NGC 5548 by combining the emission-line lag with its Doppler width in the rms spectrum. Column (5) of Table 1 gives a virial mass estimate $M = f r_{\rm BLR} \sigma_{\rm rms}^2/G$ for each line, where $\sigma_{\rm rms} = \sqrt{3}\,V_{\rm FWHM}/2$ (Netzer 1990) and $r_{\rm BLR} = c \tau_{\rm cent}$. The factor $f$ depends on the details of the geometry, kinematics, and orientation of the BLR, as well as the emission-line responsivity of the individual clouds, and is expected to be of order unity. Uncertainty in this factor limits the accuracy of our mass estimate to about an order of magnitude (see §[3]{}). Neglecting the systematic uncertainty in $f$, the unweighted mean of all these mass estimates is $6.8\,(\pm 2.1) \times 10^{7}\,M_\odot$. To within the quoted uncertainties, all of the mass measurements are consistent. The large systematic uncertainty should not obscure the key result, namely that the quantity $r_{\rm BLR} \sigma_{\rm rms}^2/G$ is constant and argues strongly for a central mass of order $7 \times 10^{7}\,M_\odot$.
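As a concrete illustration of the arithmetic (a sketch, not the authors' code: `virial_mass` is a hypothetical helper, $f=1$ is assumed, and the input values are the 1989 H$\beta$ entry of Table 1):

```python
# Virial mass from an emission-line lag and rms line width:
# M = f * r_BLR * sigma_rms^2 / G, with r_BLR = c * tau_cent
# and sigma_rms = sqrt(3) * V_FWHM / 2 (Netzer 1990).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
DAY = 86400.0        # s

def virial_mass(tau_days, v_fwhm_kms, f=1.0):
    """Virial mass in solar masses; f ~ 1 absorbs geometry/inclination."""
    r_blr = c * tau_days * DAY                  # responsivity-weighted radius, m
    sigma = (3**0.5 / 2.0) * v_fwhm_kms * 1e3   # rms velocity dispersion, m/s
    return f * r_blr * sigma**2 / G / M_SUN

# 1989 H-beta: tau_cent = 19.7 days, V_FWHM = 4250 km/s
print(f"M = {virial_mass(19.7, 4250.0):.2e} M_sun")  # ~5.2e7, as in Table 1
```

Running the same helper over the other rows of Table 1 reproduces column (5) to the quoted precision.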
In Fig. 1, we show the measured emission-line lag $\tau_{\rm cent}$, plotted as a function of the width of the line in the rms spectrum for various broad emission lines in NGC 5548. Within the measurement uncertainties, all the lines yield identical values for the central mass. A weighted fit to the relationship $\log (\tau_{\rm cent}) = a + b\log (V_{\rm FWHM})$ yields $b=-1.96\pm0.18$, consistent with the expected value $b=-2$, although the somewhat high reduced $\chi^2_{\nu}$ value of 1.70 (compared with $\chi^2_{\nu} = 2.14$ for a forced $b = -2$ fit as shown in the figure) suggests that there may be additional sources of scatter in this relationship beyond random error.
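A minimal version of this fit (unweighted ordinary least squares rather than the paper's weighted fit, using the multi-line lag/width pairs transcribed from Table 1) can be sketched as:

```python
import math

# (tau_cent [days], V_FWHM [km/s]) pairs from Table 1
data = [
    (12.0, 6320), (9.5, 5520), (3.0, 8810),
    (27.9, 4330), (8.5, 8880), (19.7, 4250),   # 1989 lines
    (4.6, 9060), (6.8, 8950), (2.0, 13130),    # 1993 lines
]

# OLS slope of log(tau_cent) on log(V_FWHM); the virial
# expectation tau ~ V^-2 corresponds to a slope of -2.
x = [math.log10(v) for _, v in data]
y = [math.log10(t) for t, _ in data]
xm, ym = sum(x) / len(x), sum(y) / len(y)
b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) \
    / sum((xi - xm) ** 2 for xi in x)
print(f"slope b = {b:.2f}")  # close to the virial value -2
```

The unweighted slope comes out near $-2$; the weighted value quoted in the text differs slightly because it accounts for the asymmetric uncertainties on each point.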
If our virial hypothesis is indeed correct, we should measure the same mass using independent data obtained at different times. The H$\beta$ emission line in NGC 5548 is the only line for which reverberation measurements have been made for multiple epochs. In Fig. 2a, we show the measured H$\beta$ lag as a function of the width of the line in the rms spectrum for the six years listed in Table 1. The relationship is shallower than that seen in the multiple-line data shown in Fig. 1 ($b=-0.72\pm0.29$ with $\chi^2_{\nu} = 0.79$), and indeed is poorly fit with the expected virial slope (for the $b=-2$ fit shown in the figure, $\chi^2_{\nu} = 3.71$, although more than 50% of the contribution to $\chi^2_{\nu}$ is due to the single data point from 1996). Note that data from two years, 1993 and 1995, have not been included in this plot because the rms spectra for these two years have a strong double-peaked structure that we are unable to account for at present. We also note that a rather better relationship between the time lag and rms line width is found if we use the CCF peak rather than the centroid for the BLR size, as shown in Fig. 2b ($b=-1.47\pm0.21$ with $\chi^2_{\nu} = 0.59$, and for the $b=-2$ fit shown in the figure, $\chi^2_{\nu} = 1.58$). The CCF centroid represents the responsivity-weighted mean radius of the line-emitting region, but the CCF peak has no similarly obvious interpretation, though in some geometries the cross-correlation peak is a probe of the emission-line gas closest to the central source. In any case, the virial mass we infer from the mean of the data is the same within the uncertainties regardless of whether the CCF centroid ($7.3\,(\pm2.0)\times 10^{7}\,M_\odot$) or peak ($6.8\,(\pm1.0)\times 10^{7}\,M_\odot$) is used to infer the BLR size.
There are a number of possible reasons for the large $\chi^2$ values for the virial fits; it is important to remember that both the lag and line width are dynamic quantities that are dependent on the mean continuum flux, which can change significantly over the course of an observing season. We attempted to test this by isolating individual “events” in the light curves and repeating the analysis. Unfortunately, the relatively few spectra in each event significantly degraded the quality of both the lag and line-width measurements and thus proved to be unenlightening.
A diagram similar to our Fig. 1 was published by Krolik et al. (1991) for NGC 5548. We believe that our improved treatment, plus additional data, makes the case more compelling primarily because we measured the broad-line widths from the variable part of the spectrum only (i.e., the rms spectrum) rather than by multiple-component fitting of the broad-line profiles. Also, we included only lines for which we could determine both accurate lags and line widths in the rms spectra, thus excluding Ly$\alpha$ $\lambda1215$ because of contamination by geocoronal Ly$\alpha$ in the rms spectrum, N[v]{} $\lambda1240$ because it is weak and badly blended with Ly$\alpha$, and O[i]{} $\lambda1304$ on account of its low contrast in the rms spectrum. We excluded Mg[ii]{} $\lambda2798$ because of its poorly defined time lag; the response of this line is long enough for aliasing to be a problem. Finally, we also included optical lines (H$\beta$ and He[ii]{}$\lambda4686$) not included by Krolik et al., plus additional UV measurements from the 1993 monitoring campaign.
An obvious question to ask is whether or not it is possible to [*directly*]{} determine the BLR kinematics by differential time delays between various parts of emission lines (e.g., in the case of purely radial infall, the redshifted side of an emission line should respond to continuum changes before the blueshifted side). In general, cross-correlations of emission-line fluxes in restricted Doppler-velocity ranges have failed to yield significant time lags in the several AGNs tested to date (e.g., Korista et al. 1995), consistent with, although not proving, the virial hypothesis.
Discussion
==========
We have shown that the emission-line time-lag/velocity-width relationship argues very strongly for an SBH of mass $\sim7\times10^7\,M_\odot$ in the nucleus of NGC 5548. The accuracy of this determination is limited by unknown systematics involving the geometry, kinematics, and line reprocessing physics of the BLR. As a simple illustration, we consider C[iv]{}$\lambda1549$ line emission from a BLR consisting of clouds in a Keplerian disk with radial responsivity proportional to $r^{-2.5}$ (which is steep enough to make the results fairly insensitive to the outer radius of the disk) and inner radius $R_{\rm in} = 3$ light days. A relatively low central mass ($5\times10^6\,M_\odot$) with a high-inclination ($i=90^{\circ}$) disk and asymmetric line emission can fit the 1989 results in Table 1. At the other extreme, a larger mass ($1.1\times10^8\,M_\odot$) is required for a lower inclination ($i=20^{\circ}$) and isotropic line emission. For further comparison, the specific model of Wanders et al. (1995), based on anisotropically illuminated clouds in randomly inclined Keplerian orbits, requires $M = 3.8 \times 10^{7}\,M_\odot$, and extrapolation to the BLR of the Fe K$\alpha$ disk model of Nandra et al. (1997) requires $M = 3.4\times10^{7}\,M_\odot$.
As shown by Peterson et al. (1999), the emission-line lag varies with continuum flux, though as with the results discussed here, the correlation shows considerable scatter, probably because of the dynamic nature of the quantities being measured. But it seems clear that as the continuum luminosity increases, greater response is seen from gas at larger distances from the central source. We argue here that this also results in a change in the emission-line width; as the response becomes dominated by gas further away from the central source, the Doppler width of the responding line becomes narrower. This demonstrates that the different widths of the various emission lines are related to the radial distribution of the line-emitting gas: high-ionization lines arise at small distances and have large widths, and low-ionization lines arise primarily at larger radii and are thus narrower.
While this accounts for some important characteristics of AGN emission lines and their variability, it is nevertheless clear that this is not the entire story; there is still scatter in the relationships that is unaccounted for by these correlations, and there are other phenomena that are not accounted for in this simple interpretation. For example, for central masses as large as reported here, observable relative shifts in the positions of the emission lines are expected from differential gravitational redshifts. The gravitational redshift for each line in NGC 5548 is given by $$\Delta V = \frac{GM}{c\,r_{\rm BLR}} \approx
\frac{1160\ \mbox{km s$^{-1}$}}{r_{\rm BLR}\ \mbox{(light days)}}.$$ This clearly predicts that high-ionization lines ought to be redshifted relative to the low-ionization lines, when in fact the opposite is observed in higher-luminosity objects (Gaskell 1982; Wilkes 1984). However, the gravitational redshift in NGC 5548 should apply to the [*variable*]{} component of the emission line only and would be sufficiently small to be unobservable in our rms spectra. The occasional appearance of double-peaked rms profiles is yet another complication. As noted earlier, in two of the eight years of optical data on NGC 5548, the H$\beta$ profile in the rms spectrum is strongly double-peaked. We do not see an obvious explanation for why the emission line should be single-peaked on some occasions and double-peaked on others.
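The numerical coefficient in the gravitational-redshift expression can be checked directly (a sketch, not from the paper; it assumes the mean mass $6.8 \times 10^7\,M_\odot$ quoted in §2 and standard physical constants):

```python
# Gravitational redshift Delta V = G M / (c r_BLR), evaluated for
# M = 6.8e7 M_sun with r_BLR expressed in light days.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
LIGHT_DAY = c * 86400  # m

M = 6.8e7 * M_SUN

def delta_v_kms(r_light_days):
    """Gravitational redshift in km/s for a BLR radius in light days."""
    return G * M / (c * r_light_days * LIGHT_DAY) / 1e3

print(f"{delta_v_kms(1.0):.0f} km/s")  # ~1160 km/s at r_BLR = 1 light day
```

The redshift falls off as $1/r_{\rm BLR}$, so for the $\sim10$–20 light day H$\beta$ radius it is only $\sim 60$–120 km s$^{-1}$, consistent with the statement that it is unobservably small in the rms spectra.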
Summary
=======
We have shown that in the case of the Seyfert 1 galaxy NGC 5548, emission-line variability data yield a consistent virial mass estimate $M \approx 7 \times 10^7\,M_\odot$, though systematic uncertainties about the BLR geometry, kinematics, and line-reprocessing physics limit the accuracy of the mass determination to about an order of magnitude. Data on multiple emission lines spanning a factor of ten or more in distance from the central source show the $r_{\rm BLR} \propto V_{\rm FWHM}^{-2}$ correlation expected for virialized BLR motions. The time delay of H$\beta$ emission is known to vary by at least a factor of two over a decade (Peterson et al. 1999), and we show here that the line-width variations are anticorrelated with the time-delay variations. The central mass is concentrated inside a few light days, which corresponds to about 250 Schwarzschild radii ($R_{\rm S} = 2 GM/c^2$) for the mass we infer, which argues very strongly for the existence of an SBH in NGC 5548.
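The Schwarzschild-radius comparison is easy to verify (an illustrative sketch; the inner scale $5 \times 10^{15}$ cm is the value quoted in §1):

```python
# R_S = 2 G M / c^2 for the inferred mass, compared with the
# few-light-day scale on which the mass is concentrated.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

M = 7e7 * M_SUN
R_S = 2 * G * M / c**2       # Schwarzschild radius, m
r_inner = 5e15 * 1e-2        # 5e15 cm -> m

print(f"R_S = {R_S:.2e} m, r_inner/R_S = {r_inner / R_S:.0f}")  # ~250
```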
Alloin, D., Clavel, J., Peterson, B.M., Reichert, G.A., & Stirpe, G.M. 1994, in Frontiers of Space and Ground-Based Astronomy, ed. W. Wamsteker, M.S. Longair, & Y. Kondo (Dordrecht: Kluwer), p. 423
Blandford, R.D., & McKee, C.F. 1982, ApJ, 255, 419
Clavel, J., et al. 1991, ApJ, 366, 64
Gaskell, C.M. 1982, ApJ, 263, 79
Koratkar, A.P., & Gaskell, C.M. 1991, ApJS, 75, 719
Kormendy, J., & Richstone, D. 1995, ARAA, 33, 581
Korista, K.T., et al. 1995, ApJS, 97, 285
Krolik, J.H., Horne, K., Kallman, T.R., Malkan, M.A., Edelson, R.A., & Kriss, G.A. 1991, ApJ, 371, 541
Laor, A. 1998, ApJ, 505, L83
Miyoshi, M., Moran, J., Herrnstein, J., Greenhill, L., Nakai, N., Diamond, P., & Inoue, E. 1995, Nature, 373, 127
Nandra, K., George, I.M., Mushotzky, R.F., Turner, T.J., & Yaqoob, T. 1997, ApJ, 477, 602
Netzer, H. 1990, in Active Galactic Nuclei, ed. R.D. Blandford, H. Netzer, & L. Woltjer (Berlin: Springer-Verlag), p. 137
Peterson, B.M., et al. 1999, ApJ, 510, 659
Peterson, B.M., Wanders, I., Bertram, R., Hunley, J.F., Pogge, R.W., & Wagner, R.M. 1998a, ApJ, 501, 82
Peterson, B.M., Wanders, I., Horne, K., Collier, S., Alexander, T., & Kaspi, S. 1998b, PASP, 110, 660
Richstone, D., et al. 1998, Nature, 395, A14
Tanaka, Y., et al. 1995, Nature, 375, 659
Wandel, A. 1997, ApJ, 490, L131
Wanders, I., et al. 1995, ApJ, 453, L87
Wanders, I., & Peterson, B.M. 1996, ApJ, 466, 174 (Erratum: 1997, ApJ, 477, 990)
White, R.J., & Peterson, B.M. 1994, PASP, 106, 879
Wilkes, B.J. 1984, MNRAS, 207, 73
\[tab:sourcetab\]

Table 1: Reverberation results for NGC 5548.

| Epoch | Line | $\tau_{\rm cent}$ (days) | $V_{\rm FWHM}$ (km s$^{-1}$) | $M$ ($10^7\,M_\odot$) |
|------|------|------|------|------|
| 1989 | Si[iv]{}$\lambda1400$ + O[iv]{}\]$\lambda1402$ | $12.0^{+4.4}_{-2.6}$ | $6320 \pm 1470$ | $7.0^{+4.2}_{-3.6}$ |
| 1989 | C[iv]{}$\lambda1549$ | $9.5^{+2.6}_{-1.0}$ | $5520 \pm 380$ | $4.2^{+1.3}_{-0.7}$ |
| 1989 | He[ii]{}$\lambda1640$ | $3.0^{+2.9}_{-1.1}$ | $8810 \pm 1800$ | $3.4^{+3.6}_{-1.9}$ |
| 1989 | C[iii]{}\]$\lambda1909$ | $27.9^{+6.0}_{-5.5}$ | $4330 \pm 770$ | $7.7^{+3.2}_{-3.1}$ |
| 1989 | He[ii]{}$\lambda4686$ | $8.5^{+3.4}_{-3.4}$ | $8880 \pm 1510$ | $9.8^{+5.2}_{-5.2}$ |
| 1989 | H$\beta\,\lambda4861$ | $19.7^{+2.0}_{-1.4}$ | $4250 \pm 240$ | $5.2^{+0.8}_{-0.7}$ |
| 1990 | H$\beta\,\lambda4861$ | $19.3^{+1.9}_{-3.0}$ | $4850 \pm 300$ | $6.6^{+1.0}_{-1.3}$ |
| 1991 | H$\beta\,\lambda4861$ | $16.4^{+3.8}_{-3.3}$ | $5700 \pm 480$ | $7.8^{+2.2}_{-2.0}$ |
| 1992 | H$\beta\,\lambda4861$ | $11.4^{+2.3}_{-2.3}$ | $5830 \pm 300$ | $5.7^{+1.3}_{-1.3}$ |
| 1993 | Si[iv]{}$\lambda1400$ + O[iv]{}\]$\lambda1402$ | $4.6^{+0.8}_{-1.4}$ | $9060 \pm 2320$ | $5.5^{+3.0}_{-3.3}$ |
| 1993 | C[iv]{}$\lambda1549$ | $6.8^{+1.1}_{-1.1}$ | $8950 \pm 570$ | $8.0^{+1.6}_{-1.6}$ |
| 1993 | He[ii]{}$\lambda1640$ | $2.0^{+0.3}_{-0.4}$ | $13130 \pm 4500$ | $5.1^{+3.5}_{-3.6}$ |
| 1994 | H$\beta\,\lambda4861$ | $15.5^{+2.3}_{-6.1}$ | $6860 \pm 420$ | $10.7^{+2.1}_{-4.4}$ |
| 1996 | H$\beta\,\lambda4861$ | $16.8^{+1.4}_{-1.4}$ | $5700 \pm 420$ | $8.0^{+1.4}_{-1.4}$ |
[^1]: Information about the International AGN Watch and copies of published data can be obtained on the World-Wide Web at URL [http://www.astronomy.ohio-state.edu/$\sim$agnwatch/]{}.
[ **Quenching the Kitaev honeycomb model** ]{}
L. Rademaker^1,2\*^
[**1**]{} Department of Theoretical Physics, University of Geneva, 1211 Geneva, Switzerland\
[**2**]{} Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada\
\* [email protected]
Abstract {#abstract .unnumbered}
========
[**I studied the non-equilibrium response of an initial Néel state under time evolution with the Kitaev honeycomb model. With isotropic interactions ($J_x = J_y = J_z$) the system quickly loses its antiferromagnetic order and crosses over into a steady state valence bond solid, which can be inferred from the long-range dimer correlations. There is no signature of a dynamical phase transition. Upon including anisotropy ($J_x = J_y \neq J_z$), an exponentially long prethermal regime appears with persistent magnetization oscillations whose period derives from an effective toric code.** ]{}
------------------------------------------------------------------------
Introduction
============
Quantum spin liquids[@Savary:2016fk] are intriguing forms of matter characterized by the absence of magnetic order and the presence of long-range entanglement. A defining feature is that they cannot be transformed smoothly into a non-entangled magnetic product state, such as the Néel antiferromagnet. One might wonder whether these opposite extremes can be connected under a [*rapid*]{} change of external parameters.
Such a rapid change is known as a quench[@Polkovnikov:2011iu; @2016JSMTE..06.4002E], and this set-up has led to the prediction and observation of dynamical phase transitions.[@Heyl:2013fy; @Jurcevic:2017be; @Heyl:2018fv] For example, in the transverse field Ising model, time evolution of an initial magnetic state under a Hamiltonian with a trivial paramagnetic ground state leads to nonanalytic behavior in the return amplitude at certain times after the quench.[@Heyl:2013fy] The opposite quench, starting from a paramagnetic spin liquid and time-evolving with a Hamiltonian that does not support spin liquid behavior, has also been studied[@Tsomokos:2009cr]. In this work, I combine these two lines of inquiry to answer the question: what happens when a magnetic state is time-evolved with a Hamiltonian that has a spin liquid ground state? Will we see a dynamical phase transition or a crossover into a spin liquid regime at some finite timescale? Or will signatures of the initial magnetic order remain?
![\[Fig3Magnetization\] The staggered magnetization $m=(-1)^i \langle \sigma^z_i (t) \rangle$ after the quench for various $J_{xy}$, fixed $J_z=1$, $N_{mc} = 2000$, and system size $L=8$. While the magnetization vanishes quickly in the isotropic model, the response is exponentially slower when $J_{xy} < 1$. For completeness, the system size dependence of the staggered magnetization for $J_{xy} = 0.2J_z$ is shown in Fig. \[Fig6Prethermal\]. [*Inset:*]{} Typical timescales as a function of anisotropy. Shown here are the times it takes for the system to lose 80% and $99.9$% of its staggered magnetization, as well as the time where the free energy density reaches its steady state value.](Fig3Magnetization.pdf){width="0.7\columnwidth"}
The Kitaev honeycomb model[@Kitaev:2006ik] provides an ideal playground to answer this question since it is exactly solvable. A slow ramp in this model has been studied before[@Sengupta:2008cma; @Mondal:2008hma], but there the dynamics started from an initial spin liquid state. Here, I start from an antiferromagnetic Néel state, the simplest possible non-entangled magnetically ordered state, and time evolve with the Kitaev Hamiltonian with both isotropic ($J_x = J_y = J_z$) and anisotropic interactions ($J_x = J_y \neq J_z$). In order to time evolve with the Kitaev model, I first express the Néel state as a superposition of different gauge field configurations. Within each gauge sector, I then compute the exact time evolution of the free Majorana fermions.
As expected, the initial staggered magnetization vanishes (see Fig. \[Fig3Magnetization\]) after the quench. Unlike quenches in the transverse field Ising model, however, there seem to be no signatures of a dynamical phase transition. At long times the system becomes a steady state valence bond solid. More surprisingly, an exponentially long prethermal regime appears when the interactions are anisotropic, as seen for example in the time evolution of the magnetization (Fig. \[Fig3Magnetization\]). This prethermal regime is governed by an effective high-temperature toric code.
In Sec. \[Sec:Model\] I will first present the Kitaev honeycomb model, its exact solution and an outline of the quench method. The results are discussed in Sec. \[Sec:Results\], with a special emphasis on the question of a dynamical phase transition (\[Subsec:DPT\]), the prethermal regime (\[Subsec:Prethermal\]) and the final steady state (\[Subsec:SteadyState\]). I conclude with a brief discussion on entanglement and experimental realizations in Sec. \[Sec:Conclusion\].
![\[Fig1KitModel\] The Kitaev honeycomb model on a lattice. The unit cell with sites $A$ and $B$ is shown, together with the three inequivalent bonds labeled $\alpha = x,y,z$. The vectors $\delta_{\alpha}$ indicate the nearest neighbor position relative to an A site. In the middle of the lattice I indicate how a spin can be split up into four Majorana operators $b^\alpha$ and $c$. On the right a visualization of the initial Néel state.](Fig1honeycomb.pdf){width="\columnwidth"}
Model, initial state and method {#Sec:Model}
===============================
Before presenting the results in detail, let me introduce the set-up of the quench. Consider spin-$\frac{1}{2}$ degrees of freedom $\sigma_i$ on a honeycomb lattice. The unit cell has two sites, which I will label as the $A$ and $B$ site, shown in Fig. \[Fig1KitModel\]. The initial state will be a perfect Néel state polarized along the $z$-direction, which is an unentangled product state $|\psi_0 \rangle = \prod_i | \uparrow_{iA}\rangle \otimes | \downarrow_{iB}\rangle$. Starting from this initial state I will compute the time evolution using the Kitaev honeycomb model. In this model the bonds between lattice sites are divided into three types, depending on their direction, as shown in Fig. \[Fig1KitModel\]. Each bond-type has an Ising spin interaction along a different spin orientation, $$H = \sum_{i} \left( J_x \sigma^x_{iA} \sigma^x_{i+\delta_x,B}
+ J_y \sigma^y_{iA} \sigma^y_{i+\delta_y,B}
+ J_z \sigma^z_{iA} \sigma^z_{iB} \right).$$ Kitaev’s key insight was that one can solve this model exactly by representing each spin by four Majorana operators $b^x, b^y, b^z$ and $c$. This enlarges the Hilbert space, and in the enlarged Hilbert space we can define ’enlarged’ spin operators $\widetilde{\sigma}^x = i b^x c,
\widetilde{\sigma}^y = i b^y c,$ and $
\widetilde{\sigma}^z = i b^z c.$ The projection operator onto the real, physical, subspace is $P = \frac{1}{2} \left( 1 + b^x b^y b^z c \right)$. Therefore, the physical spins are given by $\sigma^\alpha = P \widetilde{\sigma}^\alpha P$, which implies $\sigma^x = \frac{i}{2} \left( b^x c - b^y b^z \right),
\sigma^y = \frac{i}{2} \left( b^y c - b^z b^x \right),$ and $
\sigma^z = \frac{i}{2} \left( b^z c - b^x b^y \right)$. In the following, I will use that within the physical subspace, the real spins can also be represented by the operators of the form $\sigma^z = -i b^x b^y$ and similar expressions hold for $\sigma^x$ and $\sigma^y$.
In terms of the new Majorana operators, the Hamiltonian reads $$H = i \sum_{j,\alpha} J_{\alpha} u_{j\alpha} c_{jA} c_{j+\delta_\alpha,B}$$ where $j$ sums over unit cells and $u_{j\alpha} = i b^{\alpha}_{jA} b^{\alpha}_{j+\delta_\alpha,B}=\pm 1$ is a static $Z_2$ gauge field living on the $\alpha=x,y,z$ bond. The product of $Z_2$ gauge fields around a plaquette is gauge-invariant and defines the ’flux’ $w_p = \sigma^x_1 \sigma^y_2 \sigma^z_3 \sigma^x_4 \sigma^y_5 \sigma^z_6$. The remaining $c$-Majoranas are called ’matter’ fermions and are noninteracting.
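In the zero-flux sector the matter Hamiltonian is diagonal in momentum space, with single-particle energies $\varepsilon(\mathbf{k}) = 2|J_x e^{i\theta_1} + J_y e^{i\theta_2} + J_z|$, where $\theta_{1,2} = \mathbf{k}\cdot\mathbf{n}_{1,2}$ are the phases along the two lattice vectors. A small numerical sketch (not from the paper; the lattice-vector conventions are folded into the two independent phases) verifies that the isotropic point is gapless while the anisotropic couplings used below ($J_{xy} = 0.2 J_z$) lie deep in the gapped phase:

```python
import cmath

def matter_gap(jx, jy, jz, n=300):
    """Minimum of eps(k) = 2|jx e^{i t1} + jy e^{i t2} + jz| on an n x n grid."""
    gap = float("inf")
    for a in range(n):
        t1 = 2 * cmath.pi * a / n
        for b in range(n):
            t2 = 2 * cmath.pi * b / n
            f = jx * cmath.exp(1j * t1) + jy * cmath.exp(1j * t2) + jz
            gap = min(gap, abs(f))
    return 2 * gap

print(matter_gap(1, 1, 1))        # ~0: Dirac points, gapless spin liquid
print(matter_gap(0.2, 0.2, 1.0))  # finite gap: toric-code (gapped) phase
```

The gapped anisotropic phase is what gives rise to the effective toric code governing the prethermal regime discussed later.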
The spin liquid ground state of the Kitaev honeycomb model lies in the zero-flux sector, meaning $w_p = +1$ on every plaquette; a representative gauge choice has all gauge fields $u_{j\alpha}$ equal. In contrast, the Néel state, when expressed in terms of gauge and matter fields, is in a superposition of different flux configurations, since $\langle \psi_0 | w_p| \psi_0 \rangle = 0$ where $w_p$ is the plaquette flux operator. We can show which flux configurations are included in this superposition by repeated use of the fact that the Néel state is an eigenstate of the physical operator $\sigma^z_i$.
A good basis for describing the Néel state is obtained by pairing the remaining matter Majoranas along the $z$-bonds within one unit cell, $v_{j} = i c_{jA} c_{jB} = \pm 1$. Any possible state in the enlarged Hilbert space can be written as a superposition of $u,v$-configurations, $| \psi \rangle = \sum_{ \left\{ u_{j \alpha}, v_{j} \right\} } c_{\left\{ u_{j \alpha}, v_{j} \right\} } | \left\{ u_{j \alpha}, v_{j} \right\} \rangle$, and our task is to find the weight constants $c_{\left\{ u_{j \alpha}, v_{j} \right\}}$. The fact that the Néel state is physical, and therefore must satisfy $P_j | \psi_0 \rangle = |\psi_0 \rangle$, and that it is an eigenstate of $\sigma^z_j$ for every $j$, leads to two constraints on the possible $u,v$-configurations. On a lattice consisting of $L_x \times L_y$ unit cells with periodic boundary conditions, we have periodic chains of $xy$-bonds. The product of all $2L_x$ $z$-spins along such an $xy$-chain equals $(-1)$ times the product of all $x$ and $y$ gauge fields. Therefore, this product of gauge fields must equal $(-1)^{L_x+1}$. Consequently, the Néel state is an equal-weight superposition of all $N_c=2^{3L_xL_y-L_y}$ possible $u_{j \alpha}$ gauge field configurations that satisfy this constraint. The matter content $v_j$ is fixed by the constraint $\sigma^z_{jA} \sigma^z_{jB} = -1 $ within each unit cell, which implies $u_{jz} = v_j$. The relative phases between different $\left\{ u_{j \alpha}, v_{j} \right\}$-configurations are fixed by the expectation values of the $\sigma^z_j$ operators, and are multiples of $i$.
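The counting $N_c = 2^{3L_xL_y - L_y}$ can be checked by brute force on tiny lattices (an illustrative sketch, not the paper's sampler; it assumes only that each of the $L_y$ chains imposes one independent parity constraint on its $2L_x$ $x$- and $y$-bonds, while the $L_xL_y$ $z$-bonds remain free):

```python
from itertools import product

def sign_prod(seq):
    """Product of a sequence of +-1 values."""
    p = 1
    for s in seq:
        p *= s
    return p

def count_allowed(Lx, Ly):
    """Count u = +-1 configurations on 3*Lx*Ly bonds such that, for each
    of the Ly xy-chains, the product of its 2*Lx x- and y-gauge fields
    equals (-1)**(Lx + 1)."""
    target = (-1) ** (Lx + 1)
    # configurations of one chain's xy-bonds satisfying its parity constraint
    per_chain = sum(
        1 for cfg in product((1, -1), repeat=2 * Lx)
        if sign_prod(cfg) == target
    )
    return per_chain ** Ly * 2 ** (Lx * Ly)   # z-bonds are unconstrained

for Lx, Ly in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    assert count_allowed(Lx, Ly) == 2 ** (3 * Lx * Ly - Ly)
print("counting formula verified")
```

(Exactly half of the $2^{2L_x}$ sign strings on a chain have a given parity, i.e. $2^{2L_x-1}$ per chain, which reproduces the closed form.)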
Note that in principle the gauge freedom allows us to construct the same Néel state with a different set of gauge field configurations. However, the current choice is extremely transparent since it represents the Néel state as an equal superposition of all allowed configurations. This in turn makes the calculation of observables straightforward.
Because the gauge fields are integrals of motion only the matter fields will be changing over time, $$|\psi(t) \rangle = \frac{1}{\sqrt{N_c}} \sum_{\{ u_{j\alpha} \}}
| \{ u_{j\alpha} \} \rangle \otimes
e^{-i H^{\{ u_{j\alpha} \} } t}
| \psi_0^{\{ u_{j\alpha} \} } \rangle
\label{TimeEvo}$$ where $\left\{ u_{j\alpha} \right\}$ represents a gauge field configuration that respects the aforementioned constraints, $| \psi_0^{ \left\{ u_{j\alpha} \right\} } \rangle$ is the initial matter field configuration determined by $v_j = u_{jz}$ and $H^{\left\{ u_{j\alpha} \right\} } $ is a free matter Majorana Hamiltonian with hoppings depending on the $Z_2$ gauge fields. The magnetization on an $A$ lattice site $m_{jA}(t) = \langle \psi(t) | \sigma^z_{jA} | \psi(t) \rangle$ can be found using the gauge-field-only representation of spin, $\sigma^z_{jA} = -i b^x_{j} b^y_j$. Therefore, the magnetization can be written as the return amplitude with [*two*]{} matter Hamiltonians, $$m(t) = \frac{1}{N_c} \sum_{\{ u_{j \alpha} \} }
\langle \psi_0^{\{ u_{j\alpha} \} }|
e^{i H^{\{ u_{j\alpha}' \} } t}
e^{-i H^{\{ u_{j\alpha} \} } t}
| \psi_0^{\{ u_{j\alpha} \} } \rangle
\label{Magnet}$$ where the configurations $\{ u_{j\alpha}' \}$ and $\{ u_{j\alpha} \}$ differ only by the flip of the two gauge fields $u^x_j$ and $u^y_j$. The sum over exponentially many gauge field configurations can be replaced by Monte Carlo sampling over all configurations[@2017arXiv170104748S; @2017arXiv170509143S] that satisfy the constraints relevant for the initial Néel state. For each such configuration I need to compute these generalized return amplitudes for the matter Hamiltonians, which can be done efficiently using the Balian-Brezin decomposition as outlined in Appendix \[AppendixBB\].[@Balian:1969bs; @2017arXiv170707178N] Note that an alternative way of deriving my results is by using the 'brick wall' representation of the Kitaev honeycomb model.[@2008JPhA...41g5001C]
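The sampling step can be sketched as follows: a minimal Python illustration with assumed linear sizes $L_x = L_y = 4$, where one repairs a single $y$-bond per periodic $xy$-chain so that the chain constraint on the gauge field product holds.

```python
import numpy as np

rng = np.random.default_rng(0)
Lx, Ly = 4, 4  # assumed linear sizes of the lattice

def sample_gauge_config():
    """Draw random Z2 gauge fields u = +/-1 (one x-, y- and z-bond per unit
    cell) obeying the per-chain constraint prod(ux) * prod(uy) = (-1)^(Lx+1)."""
    ux = rng.choice([-1, 1], size=(Ly, Lx))
    uy = rng.choice([-1, 1], size=(Ly, Lx))
    uz = rng.choice([-1, 1], size=(Ly, Lx))  # also fixes the matter content v_j = u_jz
    target = (-1) ** (Lx + 1)
    for y in range(Ly):
        # repair the last y-bond of each periodic xy-chain so that the
        # product of all x- and y-gauge fields in the chain equals target
        uy[y, -1] *= target * ux[y].prod() * uy[y].prod()
    return ux, uy, uz
```

With $L_y$ constraints on $3 L_x L_y$ bonds, this reproduces the counting $N_c = 2^{3L_xL_y - L_y}$ quoted above.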
Note that in the basis we use, where we pair Majorana matter particles along the $z$-bonds to create complex fermions, it is most natural to compute expectation values of $S^z$ and correlations thereof. In contrast, the computation of correlation functions containing $S^x$ or $S^y$ is more involved: such correlation functions can no longer be expressed purely in terms of the gauge fields $b$, and would require keeping track of the time evolution of the Majorana matter fermions. Since we start with a $z$-polarized Néel state, in this manuscript we focus only on the dynamics of correlation functions involving $S^z$ operators.
Finally, it is worth mentioning that the sampling over all gauge fields is specific to our choice of initial state. In the extreme case that the initial state lies purely within one flux sector, the corresponding quench dynamics is that of a noninteracting fermion model. This is the situation in, for example, Refs. [@Sengupta:2008cma; @Mondal:2008hma], which study a ramp within the zero-flux sector. In that case the dynamics follow the general lore of dynamical phase transitions[@Heyl:2018fv; @Jurcevic:2017be; @Heyl:2017ds; @Heyl:2013fy]. Many of the results in the following section deviate from this precisely because the initial Néel state intertwines various flux sectors.
![\[Fig2Loschmidt\] [**Left:**]{} The nonequilibrium free energy $f(t) = -\frac{1}{N} \log \mathcal{G}(t)$ for various $J_{xy}$, fixed $J_z=1$, $N_{mc} = 20000$, and system sizes $L=6$ (thin lines) and $L=8$ (thick lines), where $L_x = L_y = L$. No dynamical phase transition is observed in the short times before a steady state plateau emerges. A prethermalization regime appears with increasing anisotropy. [**Right:**]{} The nonequilibrium free energy for the isotropic case $J_z = J_{xy}$ as a function of system size $L$, averaged over $N_{mc} = 500,000$ gauge configurations. I extrapolated these results to $L=\infty$, which still does not show any signatures of nonanalytic behavior.](Fig2Loschmidt.pdf "fig:"){width="0.51\columnwidth"} ![\[Fig2Loschmidt\] [**Left:**]{} The nonequilibrium free energy $f(t) = -\frac{1}{N} \log \mathcal{G}(t)$ for various $J_{xy}$, fixed $J_z=1$, $N_{mc} = 20000$, and system sizes $L=6$ (thin lines) and $L=8$ (thick lines), where $L_x = L_y = L$. No dynamical phase transition is observed in the short times before a steady state plateau emerges. A prethermalization regime appears with increasing anisotropy. [**Right:**]{} The nonequilibrium free energy for the isotropic case $J_z = J_{xy}$ as a function of system size $L$, averaged over $N_{mc} = 500,000$ gauge configurations. I extrapolated these results to $L=\infty$, which still does not show any signatures of nonanalytic behavior.](Fig2aLoschmidt.pdf "fig:"){width="0.49\columnwidth"}
Results {#Sec:Results}
=======
I will now show results for a quench from an initial Néel antiferromagnet to the Kitaev honeycomb model. I consider quenches both to the isotropic case ($J_x = J_y = J_z$) and to the anisotropic Kitaev model ($J_x = J_y \neq J_z$). The anisotropy is characterized by the ratio $J_z / J_{xy}$, where $J_{xy} \equiv J_x = J_y$.
Phase transition or crossover? {#Subsec:DPT}
------------------------------
The dynamics studied here can be viewed as a quench through a quantum critical point separating an antiferromagnetic phase and a paramagnetic spin liquid phase. It has been suggested that a quench from the ferromagnetic to the paramagnetic phase leads to nonanalytic behavior of the return amplitude at specific times.[@Heyl:2018fv; @Jurcevic:2017be; @Heyl:2017ds; @Heyl:2013fy] Specifically, consider the nonequilibrium free energy density $$f(t) = -\frac{1}{N} \log |\mathcal{G}(t)|$$ where $N = L_x L_y = L^2$, and $\mathcal{G}(t)$ is the return amplitude or [*Loschmidt echo*]{} $$\mathcal{G}(t) = \langle \psi(t) | \psi_0 \rangle.$$ In the transverse field Ising model, this quantity is nonanalytic at a sequence of times after the quench. Such nonanalytic points are called [*dynamical phase transitions*]{}, in analogy with thermal phase transitions, where the free energy becomes nonanalytic.
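To make these definitions concrete, the following sketch computes $f(t)$ for a small transverse field Ising chain by exact diagonalization; the parameters $L=8$, $g_0=0.5$, $g_1=2.0$ are illustrative choices, not values used elsewhere in this work.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site, L):
    """Embed a single-site operator at position `site` of an L-site chain."""
    return reduce(np.kron, [single if i == site else I2 for i in range(L)])

def tfim(L, g):
    """H = -sum_i sx_i sx_{i+1} - g sum_i sz_i with periodic boundaries."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L):
        H -= op(sx, i, L) @ op(sx, (i + 1) % L, L)
        H -= g * op(sz, i, L)
    return H

L, g0, g1 = 8, 0.5, 2.0                  # illustrative quench parameters
_, V0 = np.linalg.eigh(tfim(L, g0))
psi0 = V0[:, 0]                          # pre-quench ground state
E1, V1 = np.linalg.eigh(tfim(L, g1))
c = V1.conj().T @ psi0                   # overlaps with post-quench eigenstates

def rate(t):
    """Nonequilibrium free energy density f(t) = -log|G(t)| / L."""
    G = np.sum(np.abs(c)**2 * np.exp(-1j * E1 * t))
    return -np.log(np.abs(G)) / L
```

At $t=0$ the echo is unity and $f$ vanishes; at finite times the dephasing of the overlaps makes $f(t) > 0$.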
In order to study the possible appearance of a dynamical phase transition in our quench model, I computed the nonequilibrium free energy density $f(t)$. The results are shown in Fig. \[Fig2Loschmidt\]. On the left-hand side I show the free energy for $J_z = 1$ and various $J_{xy}$, averaged over $N_{mc} = 20000$ gauge configurations. Independent of $J_{xy}$, there is an initial growth of the free energy. Subsequently, a plateau (discussed in Sec. \[Subsec:Prethermal\]) appears for the anisotropic cases. At later times there are strong finite-size fluctuations, and it is not apparent whether or not a true nonanalyticity appears.
On the right of Fig. \[Fig2Loschmidt\] I show the system-size dependence of the free energy for the isotropic case, computed using many more gauge configurations ($N_{mc} = 500,000$). Around the time $t = 10$ a steady state plateau develops in the free energy, which is strongly system-size dependent. The infinite-$L$ limit, however, does not suggest any nonanalytic behavior. The evolution of the free energy is thus more accurately described as a crossover.
It is important to note that many dynamical phase transitions have been found in models that are noninteracting, such as the transverse field Ising model,[@Heyl:2018fv; @Jurcevic:2017be; @Heyl:2017ds; @Heyl:2013fy] the XY model,[@Vajna:2014fy] or fermionic band insulators.[@Budich:2016be] Even though the Kitaev model is exactly solvable, it is not a noninteracting theory. This might be the reason why I do not observe any dynamical phase transition.
![\[Fig6Prethermal\] In the case of large anisotropy the decay of the magnetization is extremely slow, shown here for $J_{xy} = 0.2 J_z$ with $N_{mc} = 2000$ and various system sizes extrapolated to $L=\infty$. Between $t_1 \sim 1$ and $t_2 \sim 10^{3.5} J_z^{-1}$ the magnetization persists, due to the high return amplitude visible in Fig. \[Fig2Loschmidt\]. After this the system is dominated by large magnetization oscillations that finally disappear around $t_3 \sim 10^{5.5} J_z^{-1}$. ](Fig6PrethermalPlateau.pdf){width="0.7\columnwidth"}
![\[Fig4Static\] The static spin-spin correlations $\langle \sigma^z_i \sigma^z_j \rangle$ for various short-range spins, in the isotropic model, with $L=8$ and $N_{mc} = 2000$. After a short time all spin correlations vanish except the nearest-neighbor correlation along a $z$-bond. The long-range dimer-dimer correlation function, here measured at the longest possible distance between two sets of $z$-bonds, also obtains a nonzero steady state value. These correlations are indicative of a valence bond solid phase.[@Sandvik:2007dt]](Fig4Static.pdf){width="0.7\columnwidth"}
Prethermalization {#Subsec:Prethermal}
-----------------
Upon increasing the anisotropy $J_{z}/J_{xy}$, a plateau emerges in the free energy (Fig. \[Fig2Loschmidt\]) that lasts exponentially long in the anisotropy ratio $J_z/J_{xy}$. During this exponentially long timescale, persistent oscillations in the staggered magnetization $m (t) = \sum_j (-1)^j \langle \sigma^z_j(t) \rangle$ appear. To emphasize this behavior, we show in Fig. \[Fig6Prethermal\] the staggered magnetization for $J_{xy} = 0.2J_z$ as a function of system size, including an $L = \infty$ extrapolation. Even though the anisotropy is only $J_z/J_{xy}=5$, the time-scale over which the magnetization persists is about $10^5$ times longer than in the isotropic case. Different measures of a typical time-scale, namely the onset of the free energy plateau or the time at which the magnetization drops below a 0.2 or 0.001 threshold, all display an approximately exponential dependence on the anisotropy, $t^* \sim e^{c J_z/J_{xy}}$, as shown in the inset of Fig. \[Fig3Magnetization\].
Both of these phenomena, the long time-window $t^*$ and the persistent magnetization oscillations, can be understood within the framework of [*prethermalization*]{}.[@Berges:2004ef; @Gagel:2014bb; @Bertini:2015gf; @Abanin:2017hp; @2017arXiv170408703E; @Marcuzzi:2013de]
Let us first consider the length of the time-scale $t^*$. In typical quenched systems, dynamical time-scales would depend in a power-law fashion on an anisotropy parameter. For example, in a quench of the transverse field Ising model from the ferromagnetic to the paramagnetic phase, the typical time-scale is set by $t^* = \pi / \epsilon_{k^*}(g_1)$, where $\epsilon_k(g) = \sqrt{(g- \cos k)^2 + \sin^2 k}$, $\cos k^* = \frac{1 + g_0 g_1}{g_0 + g_1}$ and $g_0$,$g_1$ are the values of the transverse field before and after the quench.[@Heyl:2013fy] This timescale diverges as $g_1$, the post-quench transverse field, becomes close to the critical value $g_c = 1$. It is easy to show that this divergence indeed follows a power-law, $t^* \sim (g_1 - g_c)^{-1/2}$.
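This timescale and its power-law divergence are easy to evaluate numerically. A sketch, where $g_0 = 0.5$ is an arbitrary pre-quench value and the formula applies to quenches across the critical point $g_c = 1$:

```python
import numpy as np

def eps(k, g):
    # single-mode dispersion of the transverse field Ising chain
    return np.sqrt((g - np.cos(k))**2 + np.sin(k)**2)

def t_star(g0, g1):
    # timescale t* = pi / eps_{k*}(g1), with the critical mode k*
    # fixed by cos k* = (1 + g0*g1) / (g0 + g1)
    kstar = np.arccos((1 + g0 * g1) / (g0 + g1))
    return np.pi / eps(kstar, g1)
```

Halving the distance $g_1 - g_c$ increases $t^*$ by a factor $\sqrt{2}$, consistent with the square-root power law.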
So why is the time-scale $t^*$ so much longer in the case of quenching the anisotropic Kitaev model? The answer lies in the concept of prethermalization[@Berges:2004ef; @Gagel:2014bb; @Bertini:2015gf; @Abanin:2017hp; @2017arXiv170408703E; @Marcuzzi:2013de] that occurs in systems close to integrability. In particular, the dynamics here can be understood using the framework of Ref. [@Abanin:2017hp]. For the anisotropic Kitaev model, the coupling along the $z$-bonds is significantly stronger than along the $x,y$-bonds. We can therefore treat the $x,y$-bond coupling as a perturbation. The Kitaev model is written as $$\hat{H} = - J_z \hat{N} + J_{xy} \hat{Y}$$ where $\hat{N}$ is the sum of all the $z$-bond couplings, and $\hat{Y}$ contains the couplings along the $x,y$-bonds. The term $\hat{N}$ is trivially integrable: it is just a sum of local commuting terms with integer eigenvalues. Following Ref. [@Abanin:2017hp], for $J_{xy} < J_z$, we can perform a unitary transformation such that the Hamiltonian becomes $$\hat{H} = -J_z \hat{N} + \hat{H}'_{\mathrm{eff}} + \mathcal{O}(e^{-J_z/J_{xy}})
\label{Eq:PrethermalTrafo}$$ where the new term $\hat{H}'_{\mathrm{eff}}$ commutes with $\hat{N}$. This means that $\hat{H}'_{\mathrm{eff}}$ does not affect the relative orientation of two spins along a $z$-bond. More importantly, the remaining term is exponentially small, meaning that for an [*exponentially long time*]{} the dynamics preserve the spin correlations along each $z$-bond. To summarize, an exponentially long timescale $t^* \sim e^{c J_z/J_{xy}}$ appears because after a suitable unitary transformation the Hamiltonian effectively acquires [*new conservation laws*]{} that constrain the dynamics for a long time.
The fact that the spin correlations are locked along a $z$-bond can be inferred from the static spin correlation function $S^{zz}_{ij}(t)= \langle \psi(t) | \sigma^z_{i} \sigma^z_j | \psi(t) \rangle$. As shown in Fig. \[Fig4Static\], even in the isotropic case the relative orientation of spins along a $z$-bond remains nonzero in the infinite-time limit. This can be further corroborated by computing the static spin correlations in the diagonal ensemble (see Appendix \[AppendixDE\]), which indeed yields a steady state with zero magnetization but nonzero spin correlations along the $z$-bonds. Notice that all other static spin correlations vanish.
Having established the $z$-bond spin-lock for an exponentially long time, we can investigate how this leads to persistent magnetization oscillations during the prethermal regime. The key lies in the effective Hamiltonian $\hat{H'}_{\mathrm{eff}}$ in Eq. , which describes the effective interaction between the locked spins along a $z$-bond. At each $z$-bond, the configuration must be antiferromagnetic, meaning that only the spin configurations $\uparrow \downarrow$ and $\downarrow \uparrow$ are allowed. These two states constitute a 'new spin' $\tau$, and using Kitaev's fourth-order perturbation theory[@Kitaev:2006ik] the effective Hamiltonian becomes $$\hat{H}'_{\mathrm{eff}} = - \frac{J_{xy}^4}{16 J_z^3} \sum_{p}
\tau^y_{p,\mathrm{left}}
\tau^z_{p,\mathrm{top}}
\tau^y_{p,\mathrm{right}}
\tau^z_{p,\mathrm{bottom}}
+\ldots
\label{Eq:ToricCode}$$ where $p$ runs over the plaquettes of the honeycomb lattice, and left/right/top/bottom refers to the $z$-bond on the corresponding side of the plaquette. Note that this model is equivalent to the toric code.[@Kitaev:2003ul] Attempts to understand the dynamics in terms of topology are futile, however: the quench dynamics takes place far from the ground state, and signatures of topology, such as anyonic excitations, are defined only close to the ground state.
![\[FigNewOscillations\] In the anisotropic limit $J_{xy} \ll J_z$, the exponentially long prethermal regime is governed by an effective toric code. This causes persistent oscillations in the magnetization with a period $T(J_{xy})$ given by Eq. . Here we show the magnetization oscillations in the range $J_{xy}/J_z = 0.05 - 0.3$ for linear system size $L=6$, with $N_{mc}= 2000$. The time (horizontal axis) is rescaled for each value of $J_{xy}$ to correspond to exactly four oscillation periods. We find that indeed for small $J_{xy}$, we approach the predicted periodicity.](FigNewOscillations.pdf){width="0.7\columnwidth"}
Nonetheless, a simple calculation shows what happens if one starts with an initial Néel state (meaning $\uparrow \downarrow$ on every $z$-bond) and lets it evolve under the Hamiltonian of Eq. . In such a quench, the staggered magnetization oscillates according to $m(t) = \cos^2 (2 J_\mathrm{eff} t)$ where $J_\mathrm{eff} = \frac{J_{xy}^4}{16 J_z^3}$. Similarly, ignoring higher-order corrections to Eq. , in the prethermal regime the effective toric code will cause [*persistent oscillations*]{} with period $$T = \frac{8 \pi J_z^3}{J_{xy}^4}.
\label{Eq:Period}$$ The theory of prethermalization thus predicts that, upon quenching the anisotropic Kitaev model, there is an exponentially long timescale $t^* \sim e^{c J_z/J_{xy}}$ during which one observes persistent magnetization oscillations between nonzero $m$ and $0$ with period $T \sim J_z^3/J_{xy}^4$.
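Ignoring higher-order corrections, the predicted oscillation is simple enough to check directly; the sketch below verifies that $m(t) = \cos^2(2 J_{\mathrm{eff}} t)$ repeats with period $T = 8\pi J_z^3 / J_{xy}^4$.

```python
import numpy as np

def m_prethermal(t, Jxy, Jz=1.0):
    """Staggered magnetization predicted by the effective toric code."""
    J_eff = Jxy**4 / (16 * Jz**3)
    return np.cos(2 * J_eff * t)**2

def period(Jxy, Jz=1.0):
    """Oscillation period T = 8 pi Jz^3 / Jxy^4."""
    return 8 * np.pi * Jz**3 / Jxy**4
```

One full period takes the magnetization from $1$ through $1/2$ at $T/4$ and $0$ at $T/2$ back to $1$.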
To confirm this prediction, we analyzed how the period of persistent oscillations varies with $J_{xy}/J_z$. The results are shown in Fig. \[FigNewOscillations\] for $L=6$, $N_{mc} = 2000$ and varying small $J_{xy}$. Whenever $J_{xy} < 0.2 J_z$, the magnetic oscillations have a period that is well approximated by Eqn. . We conclude that indeed, for an exponentially long time, the system undergoes magnetic oscillations as dictated by the toric code.
![\[Fig5Dynamic\] The dynamical two-time response in the isotropic quench, as a function of frequency $\omega$ for various waiting times $t$. Directly after the quench ($t=0$) the peak at $\omega = 0$ indicates strong antiferromagnetic correlations. This peak is suppressed as time progresses, consistent with the loss of magnetization, see Fig. \[Fig3Magnetization\]. After $t \sim 0.3 J^{-1}$ the small-frequency peak remains constant, and a dynamic magnetization reversal occurs in the frequency range between $4J$ and $6J$, where the flux-averaged Majorana density of states (see inset) is highest, corresponding to the triplet excitation in the valence bond solid.](Fig5Dynamic.pdf){width="0.7\columnwidth"}
Steady state valence bond solid {#Subsec:SteadyState}
-------------------------------
In the isotropic case there is no signature of prethermalization, and after a short time of order unity the system equilibrates. While there is zero net staggered magnetization in this steady state, there are remnant nearest-neighbor spin correlations along the $z$-bond, as shown in Fig. \[Fig4Static\]. This suggests that the steady state is a valence bond solid with the singlets oriented along the $z$-bonds. To further corroborate this claim, I studied the dimer-dimer correlation function $D_{ij}^{zz}(t) = \langle \sigma^z_i \sigma^z_{i+\delta_z} \sigma^z_j \sigma_{j+\delta_z}^z \rangle$, which has been used before as an indicator of valence bond order.[@Sandvik:2007dt] Indeed, I find long-range dimer order, even though the state is at a relatively high effective temperature. Notice that this state does break rotational invariance, since the singlet bonds live on the $z$-bonds, which are inequivalent to the $x,y$-bonds of the lattice.
Another way to quantify the steady state is through the dynamic two-time spin correlation function $S^{zz}_{j}(t,t')= \langle \psi(t) | \sigma^z_{jA} (t')\sigma^z_{jB} | \psi(t) \rangle$.[@Knolle:2014iwa; @Zschocke:2015hm; @Knolle:2015csa] The Fourier transform with respect to $t'-t$ can be interpreted as an AC spin conductivity. Specifically, the DC ($\omega = 0$) response measures the antiferromagnetic correlations along a $z$-bond. For small $\omega$ the correlations are reduced over a frequency scale set by the flux-averaged Majorana density of states, see Fig. \[Fig5Dynamic\]. At later times these correlations get suppressed in the frequency range between $0$ and $6J$, which is the flux-averaged bandwidth of the matter Majoranas. Interestingly, for times $t>0.5$ a reversal of the dynamic correlations appears for $4J < \omega < 6J$. This is a signature of the elementary triplet excitation of the valence bond solid.
We thus find a dynamic crossover from a Néel state to a valence bond solid. In equilibrium, this transition is known as a deconfined quantum phase transition and falls outside the usual Landau classification of continuous phase transitions.[@2004Sci...303.1490S] The absence of any finite-time singularity is due to the fact that the location of the valence bonds is determined by the orientation of the initial Néel state. There is no dynamical spontaneous symmetry breaking: a Néel state polarized along the $x$-axis would give rise to a valence bond solid with singlets along the $x$-bonds, and so forth. It is an interesting open question what happens when an initial Néel state is not aligned along one of the principal spin axes.
Conclusion and Discussion {#Sec:Conclusion}
=========================
I showed that, starting from a Néel state, time evolution with the Kitaev honeycomb model leads to a crossover to a steady state valence bond solid. When the interactions are anisotropic ($J_{z}/J_{xy} \neq 1$), an exponentially long prethermal regime appears whose dynamics are effectively described by a toric code. Note that similar results are expected if one starts with an initial ferromagnetic product state rather than an antiferromagnetic one.
To what extent these results remain valid beyond the exactly solvable model, for example by introducing a small Heisenberg term, is an open question. Based on the proof of Ref. [@Abanin:2017hp] I expect that the prethermal regime will persist even in the presence of such perturbations, but quantifying this requires new computational techniques beyond the ones used in this work.
An interesting aspect that was not included in this study is the topological nature of the Kitaev honeycomb model. The ground state of the model has nontrivial entanglement entropy,[@Yao:2010iw; @2017arXiv171001926D] a topological ground-state degeneracy, and anyonic excitations.[@Kitaev:2006ik] It is hard, however, to see any of these topological signatures in the quench set-up. After all, the Néel state has a high energy density with respect to the Kitaev honeycomb model, meaning that the quench dynamics take place far from the ground state. The generated entanglement is therefore volume-law, making it almost impossible to detect a possible topological entanglement entropy. The same holds for anyonic excitations, which are well-defined only close to the ground state. Interestingly, topological degeneracy might be observable. On a torus, the Kitaev honeycomb model in the toric code regime has a fourfold degeneracy, meaning that there are orthogonal ground states that are locally indistinguishable. Acting with a string of spin operators allows one to move from one ground state to another. Now, in the final valence bond steady state, acting with a vertical string of $\sigma^x$ and $\sigma^y$ operators will create an orthogonal state that is indistinguishable from the original state on the level of single-spin measurements. I leave it for future work to investigate whether this indeed amounts to a topological degeneracy, which might be relevant for quantum computation.
Finally, in recent years several material systems have been proposed as experimental realizations of the Kitaev honeycomb model.[@Jackeli:2009hz] Though straining these materials is unlikely to give rise to the anisotropy needed to observe a prethermal regime, it might be possible to chemically engineer these systems to obtain the desired anisotropic interactions. It will also be interesting to study the dynamic response after a quench from an initial state resembling the spiral magnetic order found in these materials.[@Biffin:2014jz; @2013PhRvL.110i7204C]
Acknowledgements {#acknowledgements .unnumbered}
================
We thank Tim Hsieh, Khadijeh (Sona) Najafi, Leon Balents, Hae-Young Kee, Yong Baek Kim and Zohar Nussinov for useful discussions.
#### Funding information
This research was supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science.
Matter Hamiltonian time evolution {#AppendixBB}
=================================
As described in the main text, the time evolution of the Kitaev honeycomb model is carried entirely by the matter Majorana fermions. In each unit cell $j$, which contains a $z$-link, we identify an $A$ and a $B$ sublattice site. The $c$-Majoranas in Kitaev's notation are then paired along the $z$-link to form complex fermions, $$\begin{aligned}
c_{jA} & = & a_j + a_j^\dagger, \\
c_{jB} & = & -i (a_j - a_j^\dagger ).
\label{DefineComplexFermions}\end{aligned}$$ The matter Hamiltonian on the full honeycomb lattice reads $$\begin{aligned}
H^{\{u\}} &=&
- \sum_j \left\{ (J_z u^z_j) (2 a^\dagger_j a_j - 1)
\right. \nonumber \\ && \left.
+ (J_x u^x_j)(a_j + a_j^\dagger)(a_{j+\delta_x} - a_{j+\delta_x}^\dagger)
+ (J_y u^y_j)(a_j + a_j^\dagger)(a_{j+\delta_y} - a_{j+\delta_y}^\dagger) \right\}\end{aligned}$$ where $j$ labels a unit cell, and the vectors $\delta_x = \frac{1}{2} ( -\sqrt{3} \hat{x} - 3 \hat{y})$ and $\delta_y=\frac{1}{2} ( \sqrt{3} \hat{x} - 3 \hat{y})$ connect to the neighboring unit cells.
In each gauge sector, the required initial state is the product state where unit cells with $u^z_j = 1$ are occupied with a complex matter fermion. For later purposes it is practical to perform a particle-hole transformation on ‘occupied’ sites, so that the Hamiltonian becomes $$\begin{aligned}
H^{\{u\}} &=&
\sum_j \left\{ J_z (2 a^\dagger_j a_j - 1)
+ (J_x u^z_{j+\delta_x} u^x_j)(a_j + a_j^\dagger)(a_{j+\delta_x} - a_{j+\delta_x}^\dagger)
\right. \nonumber \\ && \left.
+ (J_y u^z_{j+\delta_y} u^y_j)(a_j + a_j^\dagger)(a_{j+\delta_y} - a_{j+\delta_y}^\dagger) \right\}.
\label{HafterPHT}\end{aligned}$$ With the Hamiltonian Eqn. (\[HafterPHT\]), the initial matter state is nothing but the $a$-vacuum $|0 \rangle$, defined by $a_j | 0 \rangle = 0$. This matter Hamiltonian can be brought into a canonical Bogoliubov-De Gennes (BdG) format, $$H^{ \left\{ u_{j\alpha} \right\} }
=\frac{1}{2}
\begin{pmatrix}
\mathbf{a}^\dagger & \mathbf{a}
\end{pmatrix}
\begin{pmatrix}
H_d & \Delta \\
-\Delta & - H_d
\end{pmatrix}
\begin{pmatrix}
\mathbf{a}\\ \mathbf{a}^\dagger
\end{pmatrix}
\label{BdGForm}$$ where $H_d$ is a real-valued symmetric matrix, $\Delta$ a real-valued antisymmetric matrix, and the vector $\begin{pmatrix}
\mathbf{a}^\dagger & \mathbf{a}
\end{pmatrix}$ contains all creation and annihilation operators for all unit cells. The $2N \times 2N$ BdG matrix in Eqn. (\[BdGForm\]) can be diagonalized, $H_{BdG} = V \Lambda V^\intercal$, with real eigenvalues $\Lambda = \mathrm{diag} \left( \epsilon_1, \epsilon_{2}, \ldots, \epsilon_N, - \epsilon_1, \ldots, -\epsilon_N \right)$ and $V$ a real orthogonal matrix of the form $V = \begin{pmatrix}
Q & R \\
R & Q
\end{pmatrix}$. This diagonalization allows us to compute the Balian-Brezin decomposition of the time evolution operator,[@Balian:1969bs; @2017arXiv170707178N] $$e^{-iHt} = e^{\frac{1}{2} a^\dagger X a^\dagger} e^{a^\dagger Y a} e^{\frac{1}{2} a Z a}
\det \left[ R e^{-i \Lambda t/2} + Q e^{i \Lambda t/2} \right]
\label{BBdec}$$ where $A = Q e^{i \Lambda t} Q^\intercal + R e^{-i \Lambda t} R^\intercal$, $B = Q e^{-i \Lambda t} R^\intercal + R e^{i \Lambda t} Q^\intercal$, $X = BA^{-1}$, $ e^{-Y^\intercal} = A, $ and $Z = A^{-1} B^*$.
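The claimed block form of $V$ follows from particle-hole symmetry: if $(\mathbf{u}, \mathbf{v})$ is an eigenvector of the BdG matrix with energy $+\epsilon$, then $(\mathbf{v}, \mathbf{u})$ is one with energy $-\epsilon$. A small numerical sketch with randomly generated blocks (not the actual Kitaev matrices) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
M = rng.normal(size=(N, N))
Hd = (M + M.T) / 2                    # real symmetric hopping block
D = rng.normal(size=(N, N))
Delta = (D - D.T) / 2                 # real antisymmetric pairing block
HBdG = np.block([[Hd, Delta], [-Delta, -Hd]])

w, U = np.linalg.eigh(HBdG)
pos = w > 0
eps_vals = w[pos]                     # the N positive Bogoliubov energies
Q, R = U[:N, pos], U[N:, pos]         # columns (u_n, v_n) of the +eps_n eigenvectors

# particle-hole symmetry: (v_n, u_n) is an eigenvector with energy -eps_n,
# so the full Bogoliubov matrix takes the block form V = [[Q, R], [R, Q]]
PH = np.vstack([R, Q])
V = np.block([[Q, R], [R, Q]])
```

The orthogonality of $V$ and the pairing of $\pm\epsilon$ eigenvectors hold exactly for any symmetric $H_d$ and antisymmetric $\Delta$.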
The simplest quantity to compute is the overlap of the initial state with the time-evolved state, known as the return amplitude $\mathcal{G}(t) = \langle \psi(t) | e^{-iHt} | \psi_0 \rangle$. Because different flux sectors are orthogonal to one another, the total return amplitude is a sum of matter Majorana return amplitudes in each gauge sector, $$\mathcal{G}(t) = \frac{1}{N_c} \sum_{\{ u_{j\alpha} \}}
\langle \psi_0^{ \{ u_{j\alpha} \} } |
e^{-i H^{\{ u_{j\alpha} \} } t}
| \psi_0^{ \{ u_{j\alpha} \} } \rangle.
\label{ReturnAmp}$$ Note that due to the particle-hole transformation, the state $| \psi_0^{ \{ u_{j\alpha} \} } \rangle$ is equal to the $a$-vacuum, so $a_j | \psi_0^{ \{ u_{j\alpha} \} } \rangle = 0$ for every $j$. To simplify notation, from now on I will write $|0 \rangle$ for the initial state.
The return amplitude for a single free Majorana Hamiltonian follows directly from the Balian-Brezin decomposition Eqn. (\[BBdec\]), $$\langle 0 |
e^{-i H^{\{ u_{j\alpha} \} } t}
| 0 \rangle
= \det \left[ R e^{-i \Lambda t/2} + Q e^{i \Lambda t/2} \right].$$ Since the number of gauge field configurations scales exponentially with system size, it is impossible to compute the sum in Eqn. (\[ReturnAmp\]) exactly. Instead, I averaged over $N_{mc}$ random gauge field configurations that satisfy the constraints set by the initial state. It turns out that $N_{mc} = 1000$ yields sufficient accuracy for the system sizes considered.
The staggered magnetization, defined as $m(t) = \frac{1}{2N} \sum_{j} \langle \psi(t) | \sigma^z_{jA} - \sigma^z_{jB} | \psi(t) \rangle$, will decay over time starting from $m(t=0)=1$. Using the representation $\sigma^z_{jA} = -i b^x_j b^y_j$, valid within the physical subspace, we see that the magnetization can be computed as a sum over return amplitudes involving two Hamiltonians, $$R_2(t) = \langle 0 | e^{i H_2 t} e^{-i H_1 t} | 0 \rangle$$ where $H_1$ and $H_2$ differ only through a flip of the $u_{jx}$ and $u_{jy}$ gauge fields neighboring the spin that we want to measure. I proceed by making the Balian-Brezin decomposition for both $H_1$ and $H_2$, $$R_2(t) =
\det \left[ R_2 e^{i \Lambda_2 t/2} + Q_2 e^{-i \Lambda_2 t/2} \right]
\det \left[ R_1 e^{-i \Lambda_1 t/2} + Q_1 e^{i \Lambda_1 t/2} \right]
\langle 0 | e^{\frac{1}{2} a Z^*_2 a} e^{\frac{1}{2} a^\dagger X_1 a^\dagger} | 0 \rangle.$$ The remaining part can be brought again in the Balian-Brezin form, $$\begin{aligned}
\langle 0 | e^{\frac{1}{2} a Z^*_2 a} e^{\frac{1}{2} a^\dagger X_1 a^\dagger} | 0 \rangle
&=& \sqrt{ \det \left[ Z^*_2 X_1 + I \right] }
\\ && \times \nonumber
\langle 0 | e^{\frac{1}{2} a^\dagger X_1 (Z_2^* X_1 + I)^{-1} a^\dagger}
e^{ a^\dagger (- \log (Z_2^* X_1 + I) )^\intercal a}
e^{\frac{1}{2} a (Z_2^*X_1 + I)^{-1} Z_2^* } | 0 \rangle
\\ &=&
\sqrt{ \det \left[ Z^*_2 X_1 + I \right] }\end{aligned}$$ The square root can be avoided by observing that both $Z$ and $X$ are skew-symmetric; using Sylvester's determinant lemma we find $$\sqrt{ \det \left[ Z^*_2 X_1 + I \right] } =
\mathrm{Pf} \left[ \begin{pmatrix}
X_1 & -I \\
I & Z^*_2
\end{pmatrix}\right]$$ where $\mathrm{Pf}[..]$ refers to the Pfaffian of that matrix. In my numerical simulations, I use the software from Ref. [@Wimmer:2012ac] to compute the Pfaffians.
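As background, the Pfaffian of an antisymmetric matrix squares to its determinant, $\mathrm{Pf}[M]^2 = \det M$; for a $4\times 4$ matrix it even has a short closed form. A sketch of this identity, independent of the library cited above:

```python
import numpy as np

def pf4(A):
    """Pfaffian of a 4x4 real or complex antisymmetric matrix."""
    return A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]

rng = np.random.default_rng(3)
B = rng.normal(size=(4, 4))
A = B - B.T                            # random antisymmetric test matrix
```

For larger matrices one uses a numerically stable algorithm, such as the one implemented in the software of Ref. [@Wimmer:2012ac].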
In conclusion, the Balian-Brezin decomposition yields for the return amplitude with two Hamiltonians $$\begin{aligned}
&R_2(t) = \det \left[ R_2 e^{i \Lambda_2 t/2} + Q_2 e^{-i \Lambda_2 t/2} \right]
\det \left[ R_1 e^{-i \Lambda_1 t/2} + Q_1 e^{i \Lambda_1 t/2} \right]
\mathrm{Pf} \left[ \begin{pmatrix}
X_1 & -I \\
I & Z^*_2
\end{pmatrix}\right].\end{aligned}$$ Note that the static correlations $S^{zz}_{ij}(t) = \langle \psi(t) | \sigma^z_{i} \sigma^z_j | \psi(t) \rangle$ can be computed using the same formule, where now the gauge fields need to be flipped on the $x,y$-bonds adjacent to both sites $i$ and $j$.
Finally, I can compute the dynamic two-time correlation function $$S^{zz}_{ij}(t,t')= \langle \psi(t) | \sigma_i^z (t') \sigma_j^z | \psi(t) \rangle.$$ This requires the computation of a return amplitude of time evolution with three different Hamiltonians. Using repeatedly the Balian-Brezin trick this can be expressed as $$\begin{aligned}
R_3 (t,t') &=& \langle 0 | e^{i H_3 (t+t')} e^{-i H_2 t'} e^{-iH_1 t} | 0 \rangle \\
&=&
\det \left[ R_3 e^{i \Lambda_3 (t+t')/2} + Q_3 e^{-i \Lambda_3 (t+t')/2} \right]
\det \left[ R_2 e^{-i \Lambda_2 t'/2} + Q_2 e^{i \Lambda_2 t'/2} \right]
\nonumber \\ && \times
\det \left[ R_1 e^{-i \Lambda_1 t/2} + Q_1 e^{i \Lambda_1 t/2} \right]
\nonumber \\ && \times
\langle 0 | e^{\frac{1}{2} a Z^*_3 a}
e^{\frac{1}{2} a^\dagger X_2 a^\dagger} e^{a^\dagger Y_2 a} e^{\frac{1}{2} a Z_2 a}
e^{\frac{1}{2} a^\dagger X_1 a^\dagger} | 0 \rangle
\\ &=&
\det \left[ R_3 e^{i \Lambda_3 (t+t')/2} + Q_3 e^{-i \Lambda_3 (t+t')/2} \right]
\left( \det \left[ R_2 e^{-i \Lambda_2 t'/2} + Q_2 e^{i \Lambda_2 t'/2} \right] \right)^{-1}
\nonumber \\ && \times
\det \left[ R_1 e^{-i \Lambda_1 t/2} + Q_1 e^{i \Lambda_1 t/2} \right]
\nonumber \\ && \times
\mathrm{Pf} \left[ \begin{pmatrix}
X_2 & -I \\
I & Z^*_3
\end{pmatrix}\right]
\mathrm{Pf} \left[ \begin{pmatrix}
Z_2 & -I \\
I & X_1
\end{pmatrix}\right]
\nonumber \\ && \times
\mathrm{Pf} \left[ \begin{pmatrix}
((Z_3^*)^{-1} + X_2)^{-1} & -A_2 \\
A_2 & (X_1^{-1} + Z_2)^{-1}
\end{pmatrix}\right].\end{aligned}$$
Diagonal ensemble {#AppendixDE}
=================
The diagonal ensemble is defined as follows. Our initial state is given by $|\psi \rangle = \sum_n c_n |n \rangle$ where the $| n \rangle$ form an orthonormal set of eigenstates. Strictly speaking, the time evolution of our state is then $|\psi (t) \rangle = \sum_n c_n e^{-i E_n t} |n \rangle$. The diagonal ensemble is a density matrix composed of the time-independent diagonal of the initial state density matrix, $$\rho_D = \sum_n |c_n|^2 | n \rangle \langle n |.$$ In our case, the eigenstates of our system have the form $|\{ u \} \rangle \otimes | \{ f \} \rangle$ where $|\{ f \} \rangle$ are Fock states composed of the single-particle wavefunctions diagonalizing the matter BdG Hamiltonian. Since the flux sectors are orthogonal we can construct a diagonal ensemble within each flux sector. The trace carries over to the extended Hilbert space provided we use the physical subspace projector and our initial state is completely embedded in the physical subspace.
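As a sanity check of this definition, the sketch below verifies, for a random dense Hamiltonian rather than the Kitaev model itself, that the exact finite-time average of $\langle \psi(t) | O | \psi(t) \rangle$ approaches the diagonal-ensemble value $\mathrm{Tr}\, O \rho_D$:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8                                  # toy Hilbert space dimension
M = rng.normal(size=(d, d))
H = (M + M.T) / 2                      # random nondegenerate Hamiltonian
E, V = np.linalg.eigh(H)

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)             # normalized initial state
c = V.conj().T @ psi                   # overlaps c_n = <n|psi>

Omat = rng.normal(size=(d, d))
Omat = (Omat + Omat.T) / 2             # a Hermitian observable
O_eig = V.conj().T @ Omat @ V          # observable in the eigenbasis

# diagonal-ensemble expectation value Tr[O rho_D]
O_diag = np.sum(np.abs(c)**2 * np.diag(O_eig).real)

# exact time average of <psi(t)|O|psi(t)> over [0, T]:
# (1/T) int_0^T e^{i w t} dt = (e^{i w T} - 1) / (i w T) for w != 0
T = 1e7
w = E[:, None] - E[None, :]
avg_phase = np.ones((d, d), dtype=complex)
nz = np.abs(w) > 1e-12
avg_phase[nz] = (np.exp(1j * w[nz] * T) - 1) / (1j * w[nz] * T)
O_avg = np.einsum('n,m,nm,nm->', c.conj(), c, O_eig, avg_phase).real
```

The off-diagonal contributions dephase as $1/(\omega_{nm} T)$, so for large $T$ only the diagonal ensemble survives.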
Any operator that changes the flux sector, such as an isolated $\sigma^z_j$, must have a zero expectation value in the diagonal ensemble. The only (possibly) nonzero expectation values of two-spin operators are of the form $\sigma^z_{jA} \sigma^z_{jB}$ along a $z$-bond. Using $\sigma^z_A \sigma^z_B = i b^z_A c_A i b^z_B c_B = - i u^z c_A c_B$, and the particle-hole transformation defined above, I find $$\begin{aligned}
\mathrm{Tr} \; \sigma^z_{jA} \sigma^z_{jB} \, \rho_D
&=& 1 - 2 \mathrm{Tr} \; a^\dagger_j a_j \, \rho_D
\\ &=&
1 - 2 \frac{1}{N_c} \sum_{\{ u \}} \sum_{m=1}^L \left( Q_{jm} Q^\dagger_{mj} (R^\intercal R)_{mm} + R_{jm} R^\dagger_{mj} (Q^\intercal Q)_{mm} \right)\end{aligned}$$ where I used the diagonalization of the matter Hamiltonian defined in the previous section.
L. Savary and L. Balents, *[Quantum spin liquids: a review]{}*, Rep. Prog. Phys. **80**(1), 016502 (2016).
A. Polkovnikov, K. Sengupta, A. Silva and M. Vengalattore, *[Colloquium: Nonequilibrium dynamics of closed interacting quantum systems]{}*, Rev. Mod. Phys. **83**(3), 863 (2011).
F. H. L. Essler and M. Fagotti, *[Quench dynamics and relaxation in isolated integrable quantum spin chains]{}*, J. Stat. Mech. **06**(6), 064002 (2016).
M. Heyl, A. Polkovnikov and S. Kehrein, *[Dynamical Quantum Phase Transitions in the Transverse-Field Ising Model]{}*, Phys. Rev. Lett. **110**(13), 135704 (2013).
P. Jurcevic, H. Shen, P. Hauke, C. Maier, T. Brydges, C. Hempel, B. P. Lanyon, M. Heyl, R. Blatt and C. F. Roos, *[Direct Observation of Dynamical Quantum Phase Transitions in an Interacting Many-Body System]{}*, Phys. Rev. Lett. **119**(8), 080501 (2017).
M. Heyl, *[Dynamical quantum phase transitions: a review]{}*, Rep. Prog. Phys. **81**(5), 054001 (2018).
D. I. Tsomokos, A. Hamma, W. Zhang, S. Haas and R. Fazio, *[Topological order following a quantum quench]{}*, Phys. Rev. A **80**(6), 060302 (2009).
A. Kitaev, *[Anyons in an exactly solved model and beyond]{}*, Annals of Physics **321**(1), 2 (2006).
K. Sengupta, D. Sen and S. Mondal, *[Exact Results for Quench Dynamics and Defect Production in a Two-Dimensional Model]{}*, Phys. Rev. Lett. **100**(7), 077204 (2008).
S. Mondal, D. Sen and K. Sengupta, *[Quench dynamics and defect production in the Kitaev and extended Kitaev models]{}*, Phys. Rev. B **78**(4), 045101 (2008).
A. Smith, J. Knolle, D. L. Kovrizhin and R. Moessner, *[Disorder-free localization]{}*, Phys. Rev. Lett. **118**, 266601 (2017).
A. Smith, J. Knolle, R. Moessner and D. L. Kovrizhin, *[Absence of Ergodicity without Quenched Disorder: from Quantum Disentangled Liquids to Many-Body Localization]{}*, Phys. Rev. Lett. **119**, 176601 (2017).
R. Balian and E. Brezin, *[Nonunitary Bogoliubov transformations and extension of Wick's theorem]{}*, Nuovo Cimento B (1965-1970) **64**(1), 37 (1969).
K. Najafi and M. A. Rajabpour, *[On the possibility of complete revivals after quantum quenches to a critical point]{}*, Phys. Rev. B **96**, 014305 (2017).
H.-D. Chen and Z. Nussinov, *[Exact results on the Kitaev model on a hexagonal lattice: spin states, string and brane correlators, and anyonic excitations]{}*, Journal of Physics A: Mathematical and Theoretical **41**(7), 075001 (2008).
M. Heyl, *[Quenching a quantum critical state by the order parameter: Dynamical quantum phase transitions and quantum speed limits]{}*, Phys. Rev. B **95**(6), 060504 (2017).
S. Vajna and B. D[ó]{}ra, *[Disentangling dynamical phase transitions from equilibrium phase transitions]{}*, Phys. Rev. B **89**, 161105(R) (2014).
J. C. Budich and M. Heyl, *[Dynamical topological order parameters far from equilibrium]{}*, Phys. Rev. B **93**(8), 085416 (2016).
A. W. Sandvik, *[Evidence for Deconfined Quantum Criticality in a Two-Dimensional Heisenberg Model with Four-Spin Interactions]{}*, Phys. Rev. Lett. **98**(22), 227202 (2007).
J. Berges, S. Bors[á]{}nyi and C. Wetterich, *[Prethermalization]{}*, Phys. Rev. Lett. **93**(14), 142002 (2004).
P. Gagel, P. P. Orth and J. Schmalian, *[Universal Postquench Prethermalization at a Quantum Critical Point]{}*, Phys. Rev. Lett. **113**(22), 220401 (2014).
B. Bertini, F. H. L. Essler, S. Groha and N. J. Robinson, *[Prethermalization and Thermalization in Models with Weak Integrability Breaking]{}*, Phys. Rev. Lett. **115**(18), 180601 (2015).
D. Abanin, W. de Roeck, W. W. Ho and F. Huveneers, *[A Rigorous Theory of Many-Body Prethermalization for Periodically Driven and Closed Quantum Systems]{}*, Comm. Math. Phys. **354**(3), 809 (2017).
D. V. Else, P. Fendley, J. Kemp and C. Nayak, *[Prethermal Strong Zero Modes and Topological Qubits]{}*, Phys. Rev. X **7**, 041062 (2017).
M. Marcuzzi, J. Marino, A. Gambassi and A. Silva, *[Prethermalization in a Nonintegrable Quantum Spin Chain after a Quench]{}*, Phys. Rev. Lett. **111**(19), 197203 (2013).
A. Kitaev, *[Fault-tolerant quantum computation by anyons]{}*, Annals of Physics **303**, 2 (2003).
J. Knolle, D. L. Kovrizhin, J. T. Chalker and R. Moessner, *[Dynamics of a Two-Dimensional Quantum Spin Liquid: Signatures of Emergent Majorana Fermions and Fluxes]{}*, Phys. Rev. Lett. **112**(20), 207203 (2014).
F. Zschocke and M. Vojta, *[Physical states and finite-size effects in Kitaev’s honeycomb model: Bond disorder, spin excitations, and NMR lineshape]{}*, Phys. Rev. B **92**, 014403 (2015).
J. Knolle, D. L. Kovrizhin, J. T. Chalker and R. Moessner, *[Dynamics of fractionalization in quantum spin liquids]{}*, Phys. Rev. B **92**(11), 115127 (2015).
T. Senthil, A. Vishwanath, L. Balents, S. Sachdev and M. P. A. Fisher, *[Deconfined Quantum Critical Points]{}*, Science **303**(5), 1490 (2004).
H. Yao and X.-L. Qi, *[Entanglement Entropy and Entanglement Spectrum of the Kitaev Model]{}*, Phys. Rev. Lett. **105**(8), 080501 (2010).
B. D[ó]{}ra and R. Moessner, *[Gauge field entanglement of Kitaev’s honeycomb model]{}*, Phys. Rev. B **97**, 035109 (2018).
G. Jackeli and G. Khaliullin, *[Mott Insulators in the Strong Spin-Orbit Coupling Limit: From Heisenberg to a Quantum Compass and Kitaev Models]{}*, Phys. Rev. Lett. **102**(1), 017205 (2009).
A. Biffin, R. D. Johnson, I. Kimchi, R. Morris, A. Bombardi, J. G. Analytis, A. Vishwanath and R. Coldea, *[Noncoplanar and Counterrotating Incommensurate Magnetic Order Stabilized by Kitaev Interactions in $\gamma$-Li$_2$IrO$_3$]{}*, Phys. Rev. Lett. **113**(19), 197201 (2014).
J. Chaloupka, G. Jackeli and G. Khaliullin, *[Zigzag Magnetic Order in the Iridium Oxide Na$_2$IrO$_3$]{}*, Phys. Rev. Lett. **110**(9), 097204 (2013).
M. Wimmer, *[Efficient numerical computation of the Pfaffian for dense and banded skew-symmetric matrices]{}*, ACM Trans. Math. Software **38**, 30 (2012).
---
abstract: 'Recent improved determinations of the mass density $\rho_{\rm BH}$ of supermassive black holes (SMBHs) in the local universe have allowed accurate comparisons of $\rho_{\rm BH}$ with the amount of light received from past quasar activity. These comparisons support the notion that local SMBHs are “dead quasars” and yield a value $\epsilon \gsim 0.1$ for the average radiative efficiency of cosmic SMBH accretion. BH coalescences may represent an important component of the quasar mass assembly and yet not produce any observable electromagnetic signature. Therefore, ignoring gravitational wave (GW) emission during such coalescences, which reduces the amount of mass locked into remnant BHs, results in an overestimate of $\epsilon$. Here, we put constraints on the magnitude of this bias. We calculate the cumulative mass loss to GWs experienced by a representative population of BHs during repeated cosmological mergers, using loss prescriptions based on detailed general relativistic calculations. Despite the possibly large number of mergers in the assembly history of each individual SMBH, we find that near–equal mass mergers are rare, and therefore the cumulative loss is likely to be modest, amounting at most to an increase by 20 percent of the inferred $\epsilon$ value. Thus, recent estimates of $\epsilon \gsim 0.1$ appear robust. The space interferometer [*LISA*]{} should provide empirical constraints on the dark side of quasar evolution, by measuring the masses and rates of coalescence of massive BHs to cosmological distances.'
author:
- 'Kristen Menou & Zoltán Haiman'
title: On The Dark Side of Quasar Evolution
---
Introduction
============
It is now widely accepted that quasar activity is powered by accretion onto supermassive black holes (SMBHs). From the active phases of accretion which characterize luminous, high-redshift quasars, one expects remnant SMBHs to be present at the centers of nearby galaxies (Lynden-Bell 1969; Soltan 1982; Rees 1990). The evidence for such a population of dead quasars has been growing over the years (see Kormendy & Richstone 1995 for a review) and it is now compelling (Magorrian et al. 1998).
Dynamical studies of nearby massive galaxies indicate that a close link exists between the masses of dead quasar SMBHs and the properties of their host galaxies, including the spheroid’s mass (Magorrian et al. 1998; Haering & Rix 2004), velocity dispersion (Ferrarese & Merritt 2000; Gebhardt et al. 2000; Tremaine et al. 2002) and the total galactic mass (Ferrarese 2002). These empirically-determined correlations allow accurate tests of the idea that the amount of mass locked into SMBHs in nearby dead quasars should be comparable to that inferred from the amount of light received from past quasar activity, with a radiative efficiency $\epsilon \sim 10\%$, since the latter is a tracer of BH mass build-up via accretion (Soltan 1982; Chokshi & Turner 1992). Recent comparisons do find a good agreement between the mass density in dead quasar SMBHs and the integrated light from optically-bright quasars, provided that the radiative efficiency of BH accretion in luminous quasars is $\epsilon \gsim 0.1$ (Yu & Tremaine 2002; Aller & Richstone 2002; Haiman, Ciotti & Ostriker 2004). [The luminosity density of the quasar population can also be inferred from the X-ray bands. This has led to suggestions that optical quasar surveys may be missing some of the quasar emission (because of obscuration; Fabian & Iwasawa 1999; Barger et al. 2001), which may be indicative of radiatively more efficient accretion onto fast-spinning BHs (Elvis, Risaliti & Zamorani 2002). However, recent work using the soft X-ray luminosity function of Miyaji et al. (2001) has found a low efficiency of $\epsilon\sim 0.05$ (Haiman, Ciotti & Ostriker 2004). Soft X-ray bands miss the most highly obscured sources, but the efficiency is increased further only by a factor of $\sim$ two when hard X-ray sources (with the luminosity function from Ueda et al. 2003) are added in the comparison (Marconi et al. 2004). ]{}
In the present study, we investigate the possibility that cumulative mass-energy losses to gravitational waves (GWs) during repeated BH binary coalescences, in the context of standard cosmological hierarchical structure formation models, may significantly reduce the amount of mass currently locked into BHs, and thus effectively bias the comparison between active and dead quasars toward larger values of the radiative efficiency, $\epsilon$. The role of GW losses for the quasar population has already been considered by Yu & Tremaine (2002), but only with an idealized description of cosmological mergers and for maximized “adiabatic” losses to GWs (see also Ciotti & van Albada 2001; Volonteri et al. 2003; Koushiappas et al. 2004). In a companion paper (Menou & Haiman 2004; hereafter paper I), we have reconsidered this issue with a more realistic description of cosmological BH mergers. Our results suggested that, while the mass loss in a single merger event is small, after numerous repeated mergers over cosmic times, adiabatic losses can result in a substantial and astrophysically important reduction of the BH mass density. However, the adiabatic assumption provides only an upper bound on the mass-energy carried away by GWs, and thus largely overestimates realistic losses. Here, we use an improved prescription for GW losses, based on the latest available general relativistic calculations, to provide more accurate constraints on the possible role of GWs in modifying the mass budget of merging quasars.
Models
======
Merger History of Massive Black Holes
-------------------------------------
Our description of the cosmological merger history of massive BHs follows very closely that presented in paper I (see also Menou, Haiman & Narayanan 2001 for details). We use a dark matter halo merger tree with a standard $\Lambda$CDM cosmology to evolve the population of massive BHs and their host galaxies. Only galaxies with a total mass exceeding a virial temperature equivalent of $10^4$ K are described by the tree, since these are the galaxies with efficient enough baryon cooling to allow BH formation (smaller objects rely on ${\rm H_2}$ cooling and are subject to disruptive feedback; Oh & Haiman 2004). It is assumed by default in our models that all potential host galaxies do harbor a massive BH, although we have also explored models in which massive BHs are ten times rarer and are initially confined to the $10 \%$ most massive galaxies (as described in paper I).
Recent quasar evolutionary studies indicate that the majority of the mass currently locked into SMBHs was accreted between redshifts $z
\simeq 3$ and $z=0$ (e.g. Yu & Tremaine 2002; Marconi et al. 2004). Since most of the losses due to mergers are expected to occur while most of the BH mass is being assembled, there is no need to extend our models much beyond redshift $z \sim 3$ in order to estimate the GW losses. We assume that the same relation between SMBHs and their host galaxy velocity dispersion exists at $z \simeq 3$ as it does locally (Shields et al. 2003) and adopt a mass ratio between BHs and their parent halos given by (Ferrarese 2002; Wyithe & Loeb 2004): $$M_{\rm bh}= 10^9 M_\odot \left( \frac{M_{\rm halo}}{1.5 \times 10^{12}
M_\odot}\right)^{5/3} \left( \frac{1+z}{7} \right)^{5/2},$$ where $M_{\rm halo}$ is the mass of the dark matter halo associated with each galaxy. This relation may result from the BH mass being limited (at least initially, during the luminous quasar phase) by feedback from the quasar’s radiation (Silk & Rees 1998; Wyithe & Loeb 2003). We have found in exploratory models (see paper I) that the shape of this $M_{\rm bh}-M_{\rm halo}$ relation is not strongly modified by the redistribution of BHs in galaxies due to cosmological mergers. This provides additional motivation for setting up BH masses according to equation (1) at $z \simeq 3$ and neglecting the role that accretion may subsequently have in modifying them over cosmic times (modulo an overall scaling factor). Below, we will express all mass deficits in evolutionary models with GW losses relative to a no-loss model, thus effectively scaling out the exact $\rho_{\rm BH}$ value from the loss problem.
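Equation (1) translates directly into code. The following minimal sketch is ours (the function name is illustrative, not from the paper); masses are in solar units.

```python
def mbh_from_halo(m_halo, z):
    """M_bh--M_halo relation of Eq. (1) (Ferrarese 2002; Wyithe & Loeb 2004).

    m_halo: dark matter halo mass in solar masses; z: redshift.
    Returns the black hole mass in solar masses.
    """
    return 1e9 * (m_halo / 1.5e12) ** (5.0 / 3.0) * ((1.0 + z) / 7.0) ** 2.5

# By construction, a 1.5e12 Msun halo at z = 6 hosts a 1e9 Msun BH:
print(mbh_from_halo(1.5e12, 6.0))   # -> 1000000000.0

# The steep 5/3 slope: a ten times lighter halo hosts a ~46 times lighter BH.
print(mbh_from_halo(1.5e11, 6.0))
```

The strong redshift dependence, $(1+z)^{5/2}$, means the same halo hosts a considerably lighter BH when the relation is applied at lower redshift, consistent with feedback-limited BH growth during the luminous quasar phase.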
Starting at $z \simeq 3$, we let the BH population evolve through a series of cosmological mergers up until $z=0$. We assume that each time two galaxies merge, the two BHs they were hosting coalesce rapidly. [In doing so, we ignore complications related to the “last parsec” problem for BH coalescences (Begelman, Blandford & Rees 1980; see Milosavljevic & Merritt 2003 for a recent discussion) and effectively maximize BH merger rates in our models. Rapid coalescences may be induced by effects due to the triaxiality of galaxies (Yu 2002) or the presence of surrounding gas (Gould & Rix 2000; Armitage & Natarajan 2002; Escala et al. 2004). We do account for the inefficiency of dynamical friction in initially bringing the two BHs together, however,]{} by assuming, following Yu (2002), that BH binaries do not form for mass ratios $q < 10^{-3}$ (see also paper I). Finally, we emphasize that the above model, which ignores gas accretion, is not intended to yield a realistic description of the quasar BH population. Rather, our limited goal here is to quantify the effect of mergers alone on the remnant BH mass budget.
Mass Loss to Gravitational Waves
--------------------------------
  BH Spin Limit   Inspiral                    Plunge                    Ringdown
  (1)             (2)                         (3)                       (4)
  -------------   -------------------------   -----------------------   ---------------------------
  Slow Spin       $\alpha_{\rm ins}=0.06$     $\alpha_{\rm plu}=0.01$   $\alpha_{\rm rin}=10^{-5}$
  Fast Spin       $\alpha_{\rm ins}=0.42$     $\alpha_{\rm plu}=0.10$   $\alpha_{\rm rin}=0.03$

  \[tab:one\]
In their final stages of coalescence, energy and momentum are extracted from massive BH binaries by emission of GWs. As a result, the BH merger remnant has a mass which is less than that of its two progenitors. This mass loss to GWs, and its cumulative effect on the global BH mass density through repeated cosmological mergers, is the main focus of our study.
Rather than adopting simple GW loss prescriptions as in paper I, we wish to obtain more accurate constraints based on the latest available general relativistic calculations. This is no easy task, however, because the general relativistic BH coalescence problem has not been solved in full generality (see Baumgarte & Shapiro 2003 for a review of numerical progress) and approximate analytical solutions exist only for some limiting cases. Here, we will use such approximate solutions and extrapolate them whenever necessary.
Let us denote by $m$ and $M$ the masses of the two BHs involved in a coalescence, with $m \leq M$. The mass ratio is $q=m/M \leq 1$ and the reduced mass is defined as $\mu=mM/(m+M)$. In the test particle limit ($q \ll 1$), the coalescence can be decomposed into three successive phases: (i) a slow inspiral phase during which the two BHs spiral in quasi-adiabatically on nearly circular orbits, (ii) a plunge phase due to the existence of an innermost stable circular orbit (ISCO) past which the two BHs are brought together via a dynamical instability, and (iii) a final ringdown phase during which the perturbed merger remnant relaxes to a stationary Kerr BH. The separation between the plunge and ringdown phases is somewhat arbitrary. In addition, it is possible that no ISCO exists for some combinations of BH masses and spins when $q \rightarrow 1$. Clearly then, the decomposition into three successive phases must be used with caution. It is useful, however, in that approximate solutions for GW losses have been derived in some limiting cases for some of these phases.
We consider the two extreme limits for the spins of BHs involved in coalescences. In the slow-spin limit, BHs are assumed to have no spin. In the fast-spin limit, BHs are assumed to be maximally rotating (with a spin parameter $a \equiv J_{\rm bh}/M_{\rm bh}=1$, in $c=G=1$ units). In a given evolutionary model, we assume for simplicity that all the BHs involved satisfy one or the other spin limits. If we generically write a mass loss from the BH binary as $\Delta(m+M)$, then the losses that we have adopted in our models for the inspiral, plunge and ringdown phases are, respectively, $$\begin{aligned}
\Delta(m+M)_{\rm ins} & = &- \alpha_{\rm ins} \mu,\\
\Delta(m+M)_{\rm plu} & = &- \alpha_{\rm plu} M q^2,\\
\Delta(m+M)_{\rm rin} & = &- \alpha_{\rm rin} M_{coa} q^2,\end{aligned}$$ where the “coalesced” mass (before ringdown starts) is $M_{\rm coa}
= m+M-\Delta(m+M)_{\rm ins}-\Delta(m+M)_{\rm plu}$. The values of the loss coefficients $\alpha$ are given in Table \[tab:one\] for the two spin limits. Justifications for these prescriptions follow.
Losses to GWs during the inspiral phase have been discussed extensively. They involve calculating the binding energy at the location of the ISCO, since the quasi-adiabatic inspiral experienced by the binary means an efficient loss of this binding energy to GWs via a succession of nearly circular orbits. In the test particle limit ($m \ll M$), it is well known that the loss during inspiral is $\sim 6
\%$ of $mc^2$ in the slow spin limit, and $\sim 42 \%$ of $mc^2$ in the fast (prograde) spin limit (as is the case for accretion efficiency; see, e.g., Shapiro & Teukolsky 1983). In the equal-mass binary limit ($m = M$), results have been derived under a variety of approximations. For non-spinning, equal-mass BHs, the binding energy per unit reduced mass at the ISCO is roughly consistent with the test particle result (see Table 1 in Gammie, Shapiro & McKinney 2004). For spinning, equal-mass BHs, the analysis of Pfeiffer, Teukolsky & Cook (2000; their Table 1) indicates somewhat larger binding energies (per unit reduced mass) at the ISCO than for the test particle case, for a few moderate spin configurations. On the other hand, post-Newtonian calculations (e.g. Blanchet 2002) suggest somewhat lower binding energies per unit reduced mass at the ISCO (A. Buonanno; private communication). Based on these results and on the limit $\mu
\rightarrow m$ for test particles (when $m \rightarrow 0$), we have chosen to express inspiral losses in units of the reduced mass, $\mu$, with magnitudes identical to those of the test particle cases, irrespective of the BH mass combinations encountered in our models (see Eq. \[2\] and Table \[tab:one\]).
Losses to GWs during the plunge phase are much less well understood. An exact result for the combined plunge + ringdown phase exists for the test particle case, in the absence of any spin or orbital angular momentum (Davis et al. 1971): it amounts to a loss of $\sim 0.01 M
q^2c^2$. We adopt this minimal loss for the plunge phase in our slow-spin models. Orbital angular momentum should always be important in astrophysical BH coalescences and it is likely that plunge losses will then become substantially larger. For definiteness, we adopt ten times larger losses during plunge for the fast-spin models (see, e.g., Nakamura, Oohara & Kojima 1987). In the absence of analytical results on the plunge phase for equal-mass binaries, we further assume that the above test particle $q^2$ mass scaling is valid for any BH mass combination (see Eq. \[3\]). Finally, we add a contribution to GW losses from the ringdown phase. Our prescription is adapted from the results of Khanna et al. (1999; extrapolated at large spin values), with the same assumed $q^2$ mass scaling as before (see Eq. \[4\]).[^1]
Results
=======
The evolution of the distributions of BH and galaxy masses in our evolutionary models has been discussed extensively in paper I. We use Monte-Carlo realizations of the merger tree of dark matter halos, starting with $N \simeq 4.6 \times 10^4$ halos at $z=3$ with masses in the range $10^{8.6}$–$10^{12.1}$ M$_\odot$. Our merger tree database effectively describes a fixed comoving volume $\sim 1.7 \times
10^4$ Mpc$^{3}$. It is then straightforward to calculate the comoving mass density in BHs, $\rho_{\rm BH}$, and to monitor its evolution: we simply follow the merger history of BHs and subtract, at each merger event, the mass–energy lost to GWs. In models without any GW losses, $\rho_{\rm BH} \simeq 1.4 \times 10^5$ M$_\odot$ Mpc$^{-3}$ and it does not evolve with cosmic times. In models including GW losses, however, a small fraction of $\rho_{\rm BH}$ is lost each time two BHs coalesce, leading to a growing cumulative deficit.
Figure \[fig:one\] shows the evolution of the deficit in $\rho_{\rm
BH}$ from $z=3$ to $z=0$ in the slow- and fast-spin models of Table \[tab:one\]. The cumulative deficit is relatively small in the slow-spin model ($\sim 3 \%$ of the initial $\rho_{\rm BH}$ value) but it reaches $\sim 20 \%$ in the fast spin-model. Models with a ten times rarer population of massive BHs (initially confined to the $10
\%$ most massive galaxies) give essentially identical results (dotted lines in Fig. \[fig:one\]). This is because most of the BH mass loss occurs at the largest masses (see paper I). The cumulative deficit does depend on the value of the redshift at which cosmological evolution is initiated, however, as shown by the dashed line in Figure \[fig:one\] (a fast-spin model starting at $z=2$). This is a consequence of the reduced total number of BH mergers. We also note that the cumulative $\rho_{\rm BH}$ deficits shown in Figure \[fig:one\] are slightly smaller than the corresponding values for the simpler slow- and fast-spin models discussed in paper I. This results simply from the more realistic GW loss prescription adopted here.
Figure \[fig:two\] shows, for the fast-spin model, how various sub-categories of GW losses contribute to the $\rho_{\rm BH}$ deficit. Little evolution with redshift is seen except early on, when the initial BH masses are redistributed in galaxies. Inspiral losses largely dominate the overall mass loss budget, while plunge and ringdown losses contribute little. A combination of the adopted BH masses and of the cosmological merger history experienced by BHs results in most of the inspiral mass loss being due to BH binaries with mass ratios $q < 0.5$ (compare solid and long-dashed lines in Fig. \[fig:two\]; see also Fig. 5 in Menou 2003 for distributions of BH mass ratios comparable to those found in our models). This is important because inspiral losses, in the limit $q \ll 1$, are the best known of all. Since the low $q$ limit is still relatively accurate up to mass ratios $q \lsim 0.5$ according to post-Newtonian calculations (see, e.g., discussion in Hughes & Blandford 2003), this indicates that our results may not be critically sensitive to various uncertainties affecting our GW loss prescriptions for the other regimes (see §2.2). Cumulative losses at $z=0$ correspond to fractions $\sim \alpha_{\rm ins}/2$ in both the slow- and fast-spin models (compare Fig. \[fig:one\] and Table \[tab:one\]). This shows that a substantial fraction of the final mass density has been assembled through mergers of BH binaries with $q < 0.5$. The exact contribution to the mass assembly is difficult to estimate from losses alone, however, because our prescription for inspiral losses (written in units of reduced mass in Eq. \[2\]) effectively reduces the losses per unit “real” mass for large mass ratios ($\mu \rightarrow m/2$ in the limit $q \rightarrow 1$).
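The point about losses per unit "real" mass can be made explicit. With Eq. (2), the fraction of the total binary mass radiated during inspiral is $\alpha_{\rm ins}\,\mu/(m+M) = \alpha_{\rm ins}\, q/(1+q)^2$, which grows with $q$ and saturates at $\alpha_{\rm ins}/4$ for equal masses (where $\mu/(m+M)=1/4$). A quick numerical check (illustrative; the function name is ours):

```python
def inspiral_fraction(q, alpha_ins=0.42):
    """Inspiral loss of Eq. (2) as a fraction of the total binary mass:
    alpha_ins * mu / (m + M) = alpha_ins * q / (1 + q)**2."""
    return alpha_ins * q / (1.0 + q) ** 2

for q in (0.01, 0.1, 0.5, 1.0):
    print(q, inspiral_fraction(q))
# The fraction rises with q and peaks at q = 1, where at most
# alpha_ins / 4 = 0.105 of the binary mass is radiated.
```

This is why near-equal mass mergers dominate the loss budget per merger, while the preponderance of small-$q$ events keeps the cumulative deficit modest.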
Discussion and Conclusion
=========================
Cumulative mass loss to GWs during repeated cosmological BH coalescences from $z \simeq 3$ to $z=0$ could reduce the amount of BH mass locked into nearby dead quasars by up to $\sim$ 20 percent, according to our fast-spin model (Fig. \[fig:one\]). This reduction in the local BH mass density would effectively lead to a similar fractional increase in the value of the radiative efficiency for cosmic BH accretion, $\epsilon$, if a comparison between dead and active quasars is attempted without accounting for GW losses. Each individual SMBH experiences numerous repeated mergers in its assembly history (especially BHs in the most massive halos). However, our detailed study of the merger history shows that the majority of these mergers have small mass ratios, for which losses to GWs are equally small (see Eqs. \[2-4\]; note that the fraction of mergers with $q \sim
1$ can be significantly higher at $z\gsim 6$, where the effective slope of the power spectrum at the mass–scale of collapsing objects is shallower; Haiman 2004).
It is important to emphasize that our models are highly idealized and that a number of effects ignored in our calculation are likely to mitigate the already small magnitude of the $\rho_{\rm BH}$ deficit. Except for the role of inefficient dynamical friction, we have assumed maximally efficient BH coalescences and have thus maximized GW losses in our models. The “last parsec” problem and gravitational radiation recoil effects (Milosavljevic & Merritt 2003; Favata, Hughes & Holz 2004), for example, will only make BH coalescences less frequent than assumed here.[^2] We have also neglected the role of orbital configurations in our loss prescriptions. For randomly oriented BH encounters, some will be retrograde spin-orbit configurations and lead to smaller inspiral losses than assumed in our slow-spin model, even when BHs are spinning fast (e.g., $\alpha_{\rm
ins} \simeq 0.04$ for a maximally rotating retrograde configuration in the test particle limit; see also Kojima & Nakamura 1984). A more accurate calculation should therefore account for the distribution of orbital parameters of coalescing BHs and would probably find losses intermediate between those predicted by our slow- and fast-spin models. A proper calculation should also account for the growth of $\rho_{\rm BH}$ with cosmic time due to accretion. We have effectively maximized fractional losses by assuming that a given mass density is in place at $z=3$ and that losses occur after this redshift without any subsequent increase in $\rho_{\rm BH}$. Finally, typical BH spins may be moderated by coalescences and accretion (Hughes & Blandford 2003; Gammie et al. 2004), and this could easily bring losses closer to the predictions of our slow-spin model.
Another model uncertainty is the limited range of masses described by our merger tree. Losses in our models are dominated by the few most massive BHs that happen to be present in our Monte-Carlo realizations of the cosmological merger tree. These massive BHs still have lower masses than the $> 10^8$ M$_\odot$ BHs of interest when comparing dead and active quasars (see paper I and, e.g., Yu & Tremaine 2002). We have argued in paper I that including more massive BHs would increase the value of the cumulative $\rho_{\rm BH}$ deficit, but this increase is likely to remain modest. For example, a simple extrapolation with BH mass of the $\rho_{\rm BH}$ deficit value predicted at $z=0$ shows a small ($\ll \times 2$) increase of the fractional loss (shown in Fig. 1) up to BH masses $\sim 10^9$ M$_\odot$.
Given the above arguments, the magnitude of $\rho_{\rm BH}$ deficits shown in Figure \[fig:one\] cannot be taken at face value and it appears likely that the losses amounting to $\sim 10$–$20 \%$ of $\rho_{\rm BH}$ are only conservative upper limits to more realistic values. The corresponding bias on the value of the radiative efficiency of cosmic BH accretion, $\epsilon$, would also be $\lsim
10$–$20 \%$ and thus well within error bars of current estimates (e.g. Aller & Richstone 2002; Elvis et al. 2002; Yu & Tremaine 2002). Therefore, inferences that $\epsilon \gsim 0.1$ appear robust and may indeed indicate radiatively efficient accretion onto fast-spinning BHs. In the future, it is likely that the space interferometer [*LISA*]{} will offer us some of the best empirical constraints on the dark side of quasar evolution. Even though the typical BH masses probed by [*LISA*]{} are smaller than those of luminous quasars (e.g. Hughes 2002; Menou 2003), measurements of the cosmological rate of massive BH coalescences and constraints on the masses, and perhaps the spins, of these BHs will prove very useful to clarify many of the uncertainties we have highlighted above. A pulsar timing array may also put interesting constraints on the magnitude of the stochastic GW background generated by cosmological BH mergers (e.g. Jaffe & Backer 2003).
Acknowledgments {#acknowledgments .unnumbered}
===============
K.M. thanks Alessandra Buonanno and Scott Hughes for helpful discussions on general relativistic calculations, as well as the Department of Astronomy at the University of Virginia for their hospitality. Z.H. was supported in part by NSF through grants AST-0307200 and AST-0307291.
Aller, M.C. & Richstone, D. 2002, AJ, 124, 3035 Armitage, P.J. & Natarajan, P. 2002, ApJ, 567, L9 Barger, A.J. et al. 2001, AJ, 122, 2177 Baumgarte, T.W. & Shapiro, S.L. 2003, Phys. Rep., 376, 41 Begelman, M. C., Blandford, R. D. & Rees, M. J. 1980, Nature, 287, 307 Blanchet, L. 2002, Phys. Rev. D, 65, 124009 Chokshi, A. & Turner, E.L. 1992, MNRAS, 259, 421 Ciotti, L. & van Albada, T.S. 2001, ApJ, 552, L13 Davis, M. et al. 1971, PRL, 27, 1466 Elvis, M., Risaliti, G. & Zamorani, G. 2002, ApJ, 565, L75 Escala, A., Larson, R.B., Coppi, P.S. & Mardones, D. 2004, ApJ, submitted, astro-ph/0310851 Fabian, A. & Iwasawa, K. 1999, MNRAS, 303, 34 Favata, M., Hughes, S.A. & Holz, D.E. 2004, ApJL submitted, astro-ph/0402056 Ferrarese, L. 2002, ApJ, 578, 90 Ferrarese, L. & Merritt, D. 2000, ApJ, 539, L9 Gammie, C.F., Shapiro, S.L. & McKinney, J.C. 2004, ApJ, 602, 312 Gebhardt, K. et al. 2000, ApJ, 539, L13 Gould, A. & Rix, H.-W. 2000, ApJ, 532, L29 Haering, N. & Rix, H.-W. 2004, ApJ, 604, L89 Haiman, Z. 2004, ApJL, submitted, astro-ph/0404196 Haiman, Z., Ciotti, L., & Ostriker, J. P. 2004, ApJ, in press, astro-ph/0304129 Hughes, S. A. 2002, MNRAS, 331, 805 Hughes, S. A. & Blandford, R. D. 2003, ApJ, 585, 101 Jaffe, A. H. & Backer, D. C. 2003, ApJ, 583, 616 Khanna, G. et al. 1999, PRL, 83, 3581 Kojima, Y. & Nakamura, T. 1984, Prog. Theor. Phys., 71, 79 Kormendy, J. & Richstone, D. 1995, ARA&A, 33, 581 Koushiappas, S.M., Bullock, J.S. & Dekel, A. 2004, MNRAS, submitted, astro-ph/0311487 Lynden-Bell, D. 1969, Nature, 223, 690 Madau, P. & Quataert, E. 2004, ApJ, 606, L17 Magorrian, J., et al. 1998, AJ, 115, 2285 Marconi, A. et al. 2004, MNRAS, in press, astro-ph/0311619 Menou, K. 2003, Classical and Quantum Gravity, 20, S37 (astro-ph/0301397) Menou, K. & Haiman, Z. 2004, in Proceedings of “Black Hole Astrophysics 2004,” Pohang, S. Korea. Journal of the Korean Physical Society (Special Issue), in press, astro-ph/0405334 (paper I) Menou, K., Haiman, Z., & Narayanan, V. K. 
2001, ApJ, 558, 535 Miyaji, T., Hasinger, G., & Schmidt, M. 2001, A&A, 369, 49 Milosavljevic, M., & Merritt, D. 2003, ApJ, 596, 860 Nakamura, T., Oohara, K. & Kojima, Y. 1987, Prog. Theor. Phys. Supp., 90, 135S Oh, S. P., & Haiman, Z. 2004, MNRAS, 346, 456 Pfeiffer, H.P., Teukolsky, S.A. & Cook, G.B. 2000, Phys. Rev. D, 62, 104018 Rees, M.J. 1990, Science, 247, 817 Shapiro, S.L. & Teukolsky, S.A. 1983, Black Holes, White Dwarfs, and Neutron Stars: The physics of compact objects (1983) Shields, G. A. 2003, ApJ, 583, 124 Silk, J., & Rees, M. J. 1998, A&A, 331, L1 Soltan, A. 1982, MNRAS, 200, 115 Tremaine, S. et al. 2002, ApJ, 574, 740 Ueda, Y., Akiyama, M., Ohta, K., Miyaji, T. 2003, ApJ, 598, 886 Volonteri, M., Haardt, F., & Madau, P. 2003, ApJ, 582, 559 Wyithe, J.S.B. & Loeb, A. 2003, ApJ, 595, 614 Wyithe, J.S.B. & Loeb, A. 2004, Nature, 427, 815 Yu, Q. 2002, MNRAS, 331, 935 Yu, Q. & Tremaine, S. 2002, MNRAS, 335, 965
[^1]: A mass scaling with the “reduced mass ratio,” $\eta = \mu/(m+M)$, replacing $q$ in Eqs. (3) and (4) may be more accurate in the limit $q \rightarrow 1$, according to post-Newtonian calculations (S. Hughes, private communication). This would reduce the importance of plunge and ringdown losses in our models, since $\eta \rightarrow q$ when $q \rightarrow 0$ but $\eta \rightarrow 1/4$ when $q \rightarrow 1$.
[^2]: [Note that, by displacing or ejecting BHs from galactic centers (e.g. Madau & Quataert 2004), gravitational radiation recoil could also lead to an underestimate of the mass density in quasar remnants.]{}
---
abstract: 'Suppose that we wish to estimate a finite-dimensional summary of one or more function-valued features of an underlying data-generating mechanism under a nonparametric model. One approach to estimation is by plugging in flexible estimates of these features. Unfortunately, in general, such estimators may not be asymptotically efficient, which often makes these estimators difficult to use as a basis for inference. Though there are several existing methods to construct asymptotically efficient plug-in estimators, each such method either can only be derived using knowledge of efficiency theory or is only valid under stringent smoothness assumptions. Among existing methods, sieve estimators stand out as particularly convenient because efficiency theory is not required in their construction, their tuning parameters can be selected data adaptively, and they are universal in the sense that the same fits lead to efficient plug-in estimators for a rich class of estimands. Inspired by these desirable properties, we propose two novel universal approaches for estimating function-valued features that can be analyzed using sieve estimation theory. Compared to traditional sieve estimators, these approaches are valid under more general conditions on the smoothness of the function-valued features by utilizing flexible estimates that can be obtained, for example, using machine learning.'
author:
- Hongxiang Qiu
- Alex Luedtke
- Marco Carone
bibliography:
- 'references.bib'
title: |
Universal sieve-based strategies for efficient estimation\
using machine learning tools
---
Introduction
============
Motivation
----------
A common statistical problem consists of using available data in order to learn about a summary of the underlying data-generating mechanism. In many cases, this summary involves function-valued features of the distribution that are difficult to infer about nonparametrically — for example, a regression function or the density function of the distribution. Examples of useful summaries involving such features include average treatment effects [@Rubin1974], average derivatives [@Hardle1989], moments of the conditional mean function [@Shen1997], coefficients in additive partially linear models [@Fan1998], variable importance measures [@Williamson2017] and treatment effect heterogeneity measures [@Levy2018]. For ease of implementation and interpretation, in traditional approaches to estimation, these features have typically been restricted to have simple forms encoded by parametric or restrictive semiparametric models. However, when these models are misspecified, both the interpretation and validity of subsequent inferences can be compromised. To circumvent this difficulty, investigators have increasingly relied on machine learning (ML) methods to flexibly estimate these function-valued features.
Once estimates of the function-valued features are obtained, it is natural to consider plug-in estimators of the summary of interest. However, in general, such estimators are not root-$n$-consistent and asymptotically normal (CAN), and hence not asymptotically efficient (referred to as *efficient* henceforth). Lacking this property is problematic since it often forms the basis for constructing valid confidence intervals and hypothesis tests [@Bickel2003; @Newey2004]. When the function-valued features are estimated by ML methods, in order for the plug-in estimator to be CAN, the ML methods must not only estimate the involved function-valued features well, but must also satisfy a small-bias property with respect to the summary of interest [@Newey2004; @VanderLaan2018]. Unfortunately, because ML methods generally seek to optimize out-of-sample performance, they seldom satisfy the latter property.
Existing methodological frameworks {#section: intro existing methods}
----------------------------------
The targeted minimum loss-based estimation (TMLE) framework provides a means of constructing efficient plug-in estimators [@VanderLaan2006; @VanderLaan2018]. Given an (almost arbitrary) initial ML fit that provides a good estimate of the function-valued features involved, TMLE produces an adjusted fit such that the resulting plug-in estimator has reduced bias and is efficient. This adjustment process is referred to as targeting since a generic estimate of the function-valued features is modified to better suit the goal of estimating the summary of interest. Though TMLE provides a general template for constructing efficient estimators, its implementation requires specialized expertise, namely knowledge of the analytic expression for an influence function of the summary of interest. Influence functions arise in semiparametric efficiency theory and are key to establishing efficiency, but can be difficult to derive. Furthermore, even when an influence function is known analytically, additional expertise is needed to construct a TMLE for a given problem.
Alternative approaches for constructing efficient plug-in estimators have been proposed in the literature, including the use of undersmoothing [@Newey1998], higher-order kernels [@Bickel2003], twicing kernels [@Newey2004], and sieves [@Chen2007; @Newey1997; @Shen1997]. These methods neither require knowing an influence function nor performing any targeting of the function-valued feature estimates. Hence, the same fits can be used to simultaneously estimate different summaries of the data-generating distribution, even if these summaries were not pre-specified when obtaining the fit. These approaches also circumvent the difficulties in obtaining an influence function.
The idea behind undersmoothing [@Newey1998] is to deliberately tune the ML fit to achieve the small-bias property, at the expense of suboptimal out-of-sample performance. Generic windows of rates at which the tuning parameter (e.g., bandwidth in kernel methods) should change with sample size to ensure adequate undersmoothing have been proposed. However, selecting an actual tuning parameter value for a given data set remains challenging — for example, cross-validation (CV) often fails due to its focus on optimizing out-of-sample performance [@Hall2004; @Li2004; @Sheather1991; @Vanderlaan2003cv].
The twicing kernel method [@Newey2004] was also developed to obtain an efficient plug-in estimator. Given an arbitrary kernel, a twicing kernel is obtained as twice the kernel minus its self-convolution, and any kernel-based estimation strategy may then be used. This construction is simple, and the bandwidth can be selected via CV to obtain an efficient plug-in estimator. However, in practice, it is common to use second-order kernels (e.g., nonnegative symmetric kernels), while the order of the twicing kernel is higher than that of the given kernel, and higher-order kernels have been shown to perform poorly in small to moderate samples [@Marron1994].
Sieve estimation {#section: intro sieve}
----------------
In contrast to undersmoothing and other kernel-based approaches that do not require the knowledge of an influence function, under some conditions, sieve estimation can produce a flexible fit with the optimal out-of-sample performance while also yielding an efficient — and therefore root-$n$-consistent and asymptotically normal — plug-in estimator [@Shen1997]. In this paper, we focus on extensions of this approach. In sieve estimation, we first assume that the unknown function falls in a rich function space, and construct a sequence of approximating subspaces indexed by sample size that increase in complexity as sample size grows. We require that, in the limit, the functions in the subspaces can approximate any function in the rich function space arbitrarily well. These approximating subspaces are referred to as *sieves*. By using an ordinary fitting procedure that optimizes the estimation of the function-valued feature within the sieve, the bias of the plug-in estimator can decrease sufficiently fast as the sieve grows in order for that estimator to be efficient. Thus sieve estimation requires no explicit targeting for the summary of interest.
The series estimator is one of the best known and most widely used sieve techniques. These sieves are taken as the span of the first finitely many terms in a basis that is chosen by the user to approximate the true function well. Common choices of the basis include polynomials, splines, trigonometric series and wavelets, among others. However, series estimators usually require strong smoothness assumptions on the unknown function in order for the flexible fit to converge at a sufficient rate to ensure the resulting plug-in estimator is efficient. As the dimension of the problem increases, the smoothness requirement may become prohibitive. Moreover, even if the smoothness assumption is satisfied, a prohibitively large sample size may be needed for some series estimators to produce a good fit. For example, if the unknown function is smooth but is a constant over a region, estimation based on a polynomial series can perform poorly in small to moderate samples.
Series estimators may also require the user to choose the number of terms in the series in such a way as to ensure a sufficient convergence rate. The rates at which the number of terms should grow with sample size have been thoroughly studied (e.g. [@Chen2007; @Newey1997; @Shen1997]). However, these results only provide minimal guidance for applications because there is no indication on how to select the actual number of terms for a given sample size. In practice, the number of terms in the series is often chosen by CV. Upper bounds on the convergence rate of the series estimator as a function of sample size and the number of terms have been derived, and it has been shown that the optimal number of terms that minimizes the bound can also lead to an efficient plug-in estimator [@Shen1997]. However, CV tends to select the number of terms that optimizes the actual convergence rate [@Vanderlaan2003cv], which may differ from the number of terms minimizing the derived bound on the convergence rate. Even though the use of CV-tuned sieve estimators has achieved good numerical performance, to the best of our knowledge, there is no theoretical guarantee that they lead to an efficient plug-in estimator.
Two variants of traditional series estimators were proposed in [@Bickel2003]. These methods can use two bases to approximate the unknown function-valued features and the corresponding gradient separately, whereas in traditional series estimators, only one basis is used for both approximations. Consequently, these variants may be applied to more general cases than traditional series estimators. However, like traditional series estimators, they also suffer from the inflexibility of the pre-specified bases.
Contributions and organization of this article
----------------------------------------------
In this paper we present two approaches that can partially overcome these shortcomings.
1. *Estimating the unknown function with Highly Adaptive Lasso (HAL)* [@benkeser2016; @VanderLaan2017].\
If we are willing to assume the unknown functions have a finite variation norm, then they may be estimated via HAL. If the tuning parameter is chosen carefully, then we may obtain an efficient plug-in estimator. This method can help overcome the stringent smoothness assumptions that are required by existing series estimators, as we discussed earlier.
2. *Using data-adaptive series based on an initial ML fit*.\
As long as the initial ML algorithm converges to the unknown function at a sufficient rate, we show that, for certain types of summaries, it is possible to obtain an efficient plug-in estimator with a particular data-adaptive series. The smoothness assumption on the unknown function can be greatly relaxed due to the introduction of the ML algorithm into the procedure. Moreover, for summaries that are highly smooth, we show that the number of terms in the series can be selected by CV.
Although the first approach is not an example of sieve estimation, both approaches are motivated by the sieve literature and can be shown to lead to asymptotically efficient plug-in estimators using the sieve estimation theory derived in [@Shen1997]. The flexible fits of the functional features from both approaches can be plugged in for a rich class of estimands.
We remark that, although we do not have to restrict ourselves to the plug-in approach in order to construct an asymptotically efficient estimator, other estimators do not overcome the shortcomings described in Sections \[section: intro existing methods\] and \[section: intro sieve\] and can have other undesirable properties. For example, the popular one-step correction approach (also called debiasing in the recent literature on high-dimensional statistics) [@Pfanzagl1982] constructs efficient estimators by adding a bias reduction term to the plug-in estimator. Thus, it is not a plug-in estimator itself, and as a consequence, one-step estimators may not respect known bounds or shape-constraints on the summary of interest. This drawback is also typical for other non-plug-in estimators, such as those derived via estimating equations [@VanderLaan2003] and double machine learning [@Chernozhukov2017; @Chernozhukov2018]. Additionally, as with the other procedures described above, the one-step correction approach requires the analytic expression of an influence function.
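For concreteness, the one-step construction referenced above takes the standard form: given the plug-in estimator and an estimate $\widehat{\text{IF}}$ of the influence function, the one-step estimator is
$$\Psi_n^{\text{os}} := \Psi(\hat{\theta}_n) + \frac{1}{n}\sum_{i=1}^n \widehat{\text{IF}}(V_i),$$
where the added empirical mean of $\widehat{\text{IF}}$ serves as a first-order bias correction. Because this correction term is added after the fact, $\Psi_n^{\text{os}}$ need not lie in the range of $\Psi$, which is precisely the sense in which one-step estimators may fail to respect known bounds or shape constraints on the summary of interest.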
Our paper is organized as follows. We introduce the problem setup and notation in Section \[section: setup\]. We consider plug-in estimators based on HAL in Section \[section: hal\], data-adaptive series in Section \[section: simple case\], and its generalized version that is applicable to more general summaries in Section \[section: general case\]. Section \[section: discussion\] concludes with a discussion. Technical proofs of lemmas and theorems (Appendix \[appendix: proof\]), simulation details (Appendix \[appendix: simulation\]) and other additional details are provided in the Appendix.
Problem setup and traditional sieve estimation review {#section: setup}
=====================================================
Suppose we have independent and identically distributed observations $V_1,\ldots,V_n$ drawn from $P_0$. Let $\Theta$ be a class of functions, and denote by $\theta_0 \in \Theta$ a (possibly vector-valued) functional feature of $P_0$ — for example, $\theta_0$ may be a regression function. Throughout this paper we assume that the generic data unit is $V=(X,Z) \sim P_0$, where $X$ is a (possibly vector-valued) random variable corresponding to the argument of $\theta_0$, and $Z$ may also be a vector-valued random variable. In some cases $V=X$ and $Z$ is trivial. We use $\mathcal{X}$ to denote the support of $X$. The estimand of interest is a finite-dimensional summary $\Psi(\theta_0)$ of $\theta_0$. We consider a plug-in estimator $\Psi(\hat{\theta}_n)$, where $\hat{\theta}_n$ is an estimator of $\theta_0$, and aim for this plug-in estimator to be asymptotically linear, in the sense that $\Psi(\hat{\theta}_n)=\Psi(\theta_0) + n^{-1} \sum_{i=1}^n \text{IF}(V_i) + o_p(n^{-1/2})$ with $\text{IF}$ an influence function satisfying ${\mathbb{E}}_{P_0}[\text{IF}(V)]=0$ and ${\mathbb{E}}_{P_0}[\text{IF}(V)^2]<\infty$. This estimator is efficient under a nonparametric model if the estimator is also regular. By the central limit theorem and Slutsky’s theorem, it follows that $\Psi(\hat{\theta}_n)$ is a CAN estimator of $\Psi(\theta_0)$, and therefore, $\sqrt{n} [\Psi(\hat{\theta}_n) - \Psi(\theta_0)] \overset{d}{\rightarrow} N(0, {\mathbb{E}}_{P_0}[\text{IF}(V)^2])$. This provides a basis for constructing valid confidence intervals for $\Psi(\theta_0)$.
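As an illustration of why asymptotic linearity is useful, the following sketch (hypothetical function names; it assumes influence-function values $\text{IF}(V_i)$ have somehow been estimated) constructs the resulting Wald-type confidence interval:

```python
import numpy as np

def wald_ci(psi_hat, if_values, z=1.96):
    """Wald interval psi_hat +/- z * sd(IF)/sqrt(n), justified by the CLT and
    Slutsky's theorem when psi_hat is asymptotically linear with influence
    function IF."""
    if_values = np.asarray(if_values, dtype=float)
    se = if_values.std(ddof=1) / np.sqrt(if_values.size)
    return psi_hat - z * se, psi_hat + z * se

# Toy usage with stand-in values for the estimated IF(V_i):
rng = np.random.default_rng(0)
if_vals = rng.normal(size=500)
lo, hi = wald_ci(0.0, if_vals)
```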
We now list some examples of such problems.
\[example: moments\] Moments of the conditional mean function [@Shen1997]: Let $\theta_0: x \mapsto {\mathbb{E}}_{P_0}[Z|X=x]$ be the conditional mean function. The $\kappa$-th moment of $\theta_0(X)$, $X \sim P_0$, namely $\Psi_\kappa(\theta_0)={\mathbb{E}}_{P_0}[\theta_0^\kappa(X)]$, can be a summary of interest. The values of $\Psi_1(\theta_0)$ and $\Psi_2(\theta_0)$ are useful for defining the proportion of ${\textnormal{Var}}_{P_0}(Z)$ that is explained by $X$, namely ${\textnormal{Var}}_{P_0}(\theta_0(X))/{\textnormal{Var}}_{P_0}(Z)$. This proportion is a measure of variable importance [@Williamson2017]. Generally, we may consider $\Psi(\theta_0)={\mathbb{E}}_{P_0}[f(\theta_0(X))]$ for a fixed function $f$.
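A minimal numerical sketch of the naive plug-in for $\Psi_2$ follows (with a simple kernel regression standing in, purely for illustration, for a modern ML fit; as discussed above, such a plug-in need not be asymptotically linear without further care):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.uniform(-1.0, 1.0, size=n)
Z = X ** 2 + rng.normal(scale=0.1, size=n)   # theta_0(x) = E[Z | X = x] = x^2

def nw_fit(x_eval, X, Z, h=0.1):
    """Nadaraya-Watson estimate of the conditional mean E[Z | X = x]."""
    w = np.exp(-0.5 * ((x_eval[:, None] - X[None, :]) / h) ** 2)
    return (w @ Z) / w.sum(axis=1)

# Naive plug-in for Psi_2(theta_0) = E[theta_0(X)^2]; the truth here is
# E[X^4] = 1/5 for X uniform on [-1, 1].
theta_hat = nw_fit(X, X, Z)
psi2_hat = np.mean(theta_hat ** 2)
```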
\[example: average derivative\] Average derivative [@Hardle1989]: Let $X$ follow a continuous distribution on ${\mathbb{R}}^d$ and $\theta_0: x \mapsto {\mathbb{E}}_{P_0}[Z|X=x]$ be the conditional mean function. Let $\theta_0'$ denote the vector of partial derivatives of $\theta_0$. Then $\Psi(\theta_0)={\mathbb{E}}_{P_0}[\theta_0'(X)]$ summarizes the overall (adjusted) effect of each component of $X$ on $Z$. Under certain conditions, we can rewrite $\Psi(\theta_0)={\mathbb{E}}_{P_0}[\theta_0(X) p_0'(X)/p_0(X)]$, where $p_0$ is the Lebesgue density of $X$ and $p_0'$ is the vector of partial derivatives of $p_0$. This expression clearly shows the important role of the Lebesgue density of $X$ in this summary.
\[example: ATE\] Mean counterfactual outcome [@Rubin1974]: Suppose that $Z=(A,Y)$ where $A$ is a binary treatment indicator and $Y$ is the outcome of interest. Let $\theta_0: x \mapsto {\mathbb{E}}_{P_0}[Y|A=1,X=x]$ be the outcome regression function under treatment value 1. Under causal assumptions, the mean counterfactual outcome corresponding to the intervention that assigns treatment 1 to the entire population can be nonparametrically identified by the G-computation formula $\Psi(\theta_0)={\mathbb{E}}_{P_0}[\theta_0(X)]$.
\[example: treatment effect heterogeneity\] Treatment effect heterogeneity measures in randomized controlled trials [@Levy2018]: As in Example \[example: ATE\], suppose that $Z=(A,Y)$, where $A$ is the randomly assigned binary treatment and $Y$ is the outcome of interest. Let $\theta_0=(\mu_{00},\mu_{01})$, where $\mu_{0a}: x \mapsto {\mathbb{E}}_{P_0}[Y|A=a,X=x]$ is the outcome regression function for treatment arm $a\in\{0,1\}$. Then, $\Psi(\theta_0)={\textnormal{Var}}_{P_0}(\mu_{01}(X)-\mu_{00}(X))$ is an overall summary of treatment effect heterogeneity.
To obtain an asymptotically linear plug-in estimator, $\hat{\theta}_n$ must converge to $\theta_0$ at a sufficiently fast rate and approximately solve an estimating equation to achieve the small bias property with respect to the summary of interest [@Newey2004; @VanderLaan2017; @VanderLaan2018]. For simplicity, we assume the estimand to be scalar-valued — when the estimand is vector-valued, we can treat each entry as a separate estimand, and the plug-in estimators of all entries are jointly asymptotically linear if each estimator is asymptotically linear. Therefore, this leads to no loss in generality if the same fits are used for all entries in the summary of interest.
Sieve estimation allows us to obtain an estimator $\Psi(\hat{\theta}_n)$ with the small bias property with respect to $\Psi(\theta_0)$ while maintaining the optimal convergence rate of $\hat{\theta}_n$ [@Chen2007; @Shen1997]. The construction of sieve estimators is based on a sequence of approximating spaces $\Theta_n$ to $\Theta$. These approximating spaces are referred to as *sieves*. Usually $\Theta_n$ is much simpler than $\Theta$ to avoid over-fitting but complex enough to avoid under-fitting. For example, $\Theta_n$ can be the space of all polynomials with degree $K$ or splines with $K$ knots with $K=K(n) \rightarrow \infty$ as $n \rightarrow \infty$. In this paper, with a loss function $\ell$ such that $\theta_0 \in \operatorname*{argmin}_{\theta \in \Theta} {\mathbb{E}}_{P_0}[\ell(\theta)(V)]$, we consider estimating $\theta_0$ by minimizing an empirical risk based on $\ell$, i.e., $\hat{\theta}_n \in \operatorname*{argmin}_{\theta \in \Theta_n} n^{-1} \sum_{i=1}^{n} \ell(\theta)(V_i)$. Under some conditions, the growth rate of $\Theta_n$ can be carefully chosen so that $\Psi(\hat{\theta}_n)$ is an asymptotically linear estimator of $\Psi(\theta_0)$ while $\hat{\theta}_n$ converges to $\theta_0$ at the optimal rate.
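The following toy sketch implements a polynomial series sieve by empirical risk minimization under the squared-error loss (the growth rate $K(n) \asymp n^{1/3}$ is chosen for illustration only and is not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
X = rng.uniform(-1.0, 1.0, size=n)
Z = np.sin(2.0 * X) + rng.normal(scale=0.2, size=n)

# Sieve Theta_n: polynomials of degree K(n); under the squared-error loss,
# least squares is exactly the empirical risk minimizer over this sieve.
K = int(np.ceil(n ** (1.0 / 3.0)))             # illustrative growth rate K ~ n^{1/3}
basis = np.vander(X, K + 1, increasing=True)   # columns 1, x, ..., x^K
beta, *_ = np.linalg.lstsq(basis, Z, rcond=None)
theta_hat = basis @ beta

# Plug-in estimate of Psi_1(theta_0) = E[theta_0(X)]; the truth is
# E[sin(2X)] = 0 by symmetry.
psi1_hat = np.mean(theta_hat)
```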
Throughout this paper, for a probability distribution $P$ and an integrable function $f$ with respect to $P$, we define $Pf := \int f(v) dP(v)={\mathbb{E}}_P[f(V)]$. We use $P_n$ to denote the empirical distribution. We take $\langle \cdot, \cdot \rangle$ to be the $L^2(P_0)$-inner product, i.e., $\langle \theta_1, \theta_2 \rangle=P_0 (\theta_1 \theta_2)$, where $L^2(P_0)$ is the set of real-valued $P_0$-squared-integrable functions defined on the support of $P_0$. When the functions are vector-valued, we take $\langle \theta_1, \theta_2 \rangle=P_0 (\theta_1^\top \theta_2)$. We use $\| \cdot \|$ to denote the induced norm of $\langle \cdot, \cdot \rangle$. We assume that $\Theta \subseteq L^2(P_0)$. We remark that we have committed to a specific choice of inner product and norm to fix ideas; other inner products can also be adopted, and our results will remain valid upon adaptation of our upcoming conditions. We discuss this explicitly via a case study in Appendix \[appendix: change norm\].
For the methods we propose in this article, we assume that $\Theta$ is convex. Throughout this paper, we will further require a set of conditions similar to those in [@Shen1997]. For any $\theta \in \Theta$, let $\ell_{0}'[\theta-\theta_0](v) := \lim_{\delta \rightarrow 0} [\ell(\theta_0 +\delta(\theta-\theta_0))(v) - \ell(\theta_0)(v)]/\delta$ be the Gâteaux derivative of $\ell$ at $\theta_0$ in the direction $\theta-\theta_0$ and $r[\theta-\theta_0](v) := \ell(\theta)(v) - \ell(\theta_0)(v) - \ell_{0}'[\theta-\theta_0](v)$ be the corresponding remainder.
\[Adloss\] For all $\theta \in\Theta$, $\ell_{0}'[\theta-\theta_0]$ exists and $\ell_{0}'[\theta-\theta_0](v) - P_0 \ell_{0}'[\theta-\theta_0]$ is linear and bounded in $\theta-\theta_0$.
\[Aquadraticloss\] There exists a constant $\alpha_{0,\ell} \in (0,\infty)$ such that, for all $\theta \in \Theta$ such that $P_0 \{ \ell(\theta) - \ell(\theta_0) \}$ or $\| \theta - \theta_0 \|$ is sufficiently small, it holds that $P_0 \{ \ell(\theta) - \ell(\theta_0) \} = \alpha_{0,\ell} \| \theta - \theta_0 \|^2 /2 + o(\| \theta - \theta_0 \|^2)$.
We now present an equivalent form of \[Aquadraticloss\] that may be easier to verify in practice. For all $\theta\in \Theta\backslash\{\theta_0\}$, define $h_\theta:=(\theta-\theta_0)/\|\theta-\theta_0\|$ and $a_{\theta}:= \left. \frac{d^2}{d\delta^2} P_0 \ell(\theta_0 + \delta h_\theta) \right|_{\delta=0}$. Requiring Condition \[Aquadraticloss\] is equivalent to requiring that $a_{\theta_1}=a_{\theta_2}$ for all $\theta_1,\theta_2\in\Theta\backslash\{\theta_0\}$ and that $$\begin{aligned}
\sup_{\theta\in\Theta} \left|P_0 \ell(\theta_0 + \delta h_\theta) - P_0 \ell(\theta_0) - \frac{a_\theta}{2}\delta^2\right|&= o(\delta^2).\end{aligned}$$ Moreover, if \[Aquadraticloss\] holds, then, for any $\theta\in\Theta\backslash\{\theta_0\}$, it is true that $\alpha_{0,\ell}=a_\theta$.
A large class of loss functions satisfies Conditions \[Adloss\] and \[Aquadraticloss\]. For example, in the regression setting where $Z$ is the outcome, the squared-error loss $\ell(\theta): v \mapsto [z-\theta(x)]^2$ and the logistic loss $\ell(\theta): v \mapsto -z \theta(x) + \log\{1+\exp(\theta(x))\}$ both satisfy these conditions; a negative working log-likelihood usually also satisfies these conditions.
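As a quick check of these claims for the squared-error loss: writing $\theta_0(x)={\mathbb{E}}_{P_0}[Z|X=x]$ and using the tower property, the cross term vanishes and
$$P_0 \{ \ell(\theta)-\ell(\theta_0) \} = P_0\big[(Z-\theta(X))^2 - (Z-\theta_0(X))^2\big] = P_0\big[(\theta-\theta_0)^2(X)\big] = \|\theta-\theta_0\|^2,$$
so Condition \[Aquadraticloss\] holds exactly with $\alpha_{0,\ell}=2$ and a vanishing remainder. The corresponding Gâteaux derivative in Condition \[Adloss\] is $\ell_{0}'[\theta-\theta_0](v)=-2[z-\theta_0(x)][\theta(x)-\theta_0(x)]$.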
\[AdPsi\] $\Psi_{\theta_0}'[\theta-\theta_0] := \lim_{\delta \rightarrow 0} [\Psi(\theta_0+\delta (\theta-\theta_0)) - \Psi(\theta_0)]/\delta$ exists for all $\theta \in \Theta$ and is a linear bounded operator.
If Condition \[AdPsi\] holds, then, by the Riesz representation theorem, $\Psi_{\theta_0}'[\theta-\theta_0] = \langle \theta-\theta_0, \dot{\Psi} \rangle$ for a gradient function $\dot{\Psi}=\dot{\Psi}_{\theta_0}$ in the completion of the space spanned by $\Theta-\theta_0 := \{x \mapsto \theta(x)-\theta_0(x): \theta \in \Theta\}$.
\[APsiremainder\] There exists a constant $C>0$ so that, for all $\theta$ with sufficiently small $\| \theta - \theta_0 \|$, it holds that $|\Psi(\theta)-\Psi(\theta_0)-\Psi_{\theta_0}'[\theta-\theta_0]| \leq C \| \theta - \theta_0 \|^2$.
The above condition states that the remainder of the linear approximation to $\Psi$ is locally bounded by a quadratic function.
Estimation with Highly Adaptive Lasso {#section: hal}
=====================================
Estimation with an oracle tuning parameter
------------------------------------------
Recently, the Highly Adaptive Lasso (HAL) was proposed as a flexible ML algorithm that only requires a mild smoothness condition on the unknown function and has a well-described implementation [@benkeser2016; @VanderLaan2017]. For ease of presentation, for the moment, we assume that $\theta_0$ is real-valued. In this method, $\theta_0$ is assumed to fall in the class of càdlàg functions (right-continuous with left limits) defined on $\mathcal{X} \subseteq {\mathbb{R}}^d$ with variation norm bounded by a finite constant $M$. In this section, we denote this function class by $\Theta_M$. The variation norm of a càdlàg function $\theta$, denoted by $\| \theta \|_{{{\mathrm{v}}}}$, characterizes the total variability of $\theta$ as its argument ranges over the domain, so $\| \cdot \|_{{{\mathrm{v}}}}$ is a global smoothness measure and $\Theta_M$ is a large function class that even contains functions with discontinuities. Notably, this notion of variation norm coincides with that of Hardy and Krause [@owen2005]. A brief review of this concept is provided in Appendix \[appendix: varnorm\]. Fig. \[Fcadlag\] presents some examples of univariate càdlàg functions with finite variation norms for illustration. Because $\Theta_M$ is a rich class, it can be plausible that $\theta_0 \in \Theta_M$ for some $M < \infty$. Under this assumption, it has been shown that $\| \hat{\theta}_n - \theta_0 \| = o_p(n^{-1/4})$ regardless of the dimension of $X$ under additional mild conditions [@VanderLaan2017]. Thus, estimation with HAL replaces the usual stringent smoothness requirement of traditional series estimators by the requirement that $\theta_0\in\Theta_M$ for some $M$.
![Examples of univariate càdlàg functions with finite variation norms. The top-left, top-right, bottom-left and bottom-right plots present the standard normal density function, a minimax concave penalty function [@Zhang2010], a step function and the real part of a Morlet wavelet [@Mallat2009] respectively.[]{data-label="Fcadlag"}](Fcadlag.pdf)
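To make the HAL construction concrete in one dimension, the sketch below expands $\theta$ over indicator basis functions with knots at the observed points, so that the $\ell_1$-norm of the coefficients approximates the variation norm of the fit. This is an illustration only (a full implementation is available in the `hal9001` R package), and the lasso solver here is a bare-bones coordinate descent using the penalized rather than constrained formulation:

```python
import numpy as np

def lasso_cd(H, z, lam, n_sweeps=100):
    """Bare-bones coordinate-descent lasso:
    minimize (1/2n) ||z - H b||^2 + lam * ||b||_1."""
    n, p = H.shape
    b, r = np.zeros(p), z.copy()
    col_sq = (H ** 2).mean(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            rho = H[:, j] @ r / n + col_sq[j] * b[j]
            b_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += H[:, j] * (b[j] - b_new)  # keep the residual in sync
            b[j] = b_new
    return b

rng = np.random.default_rng(3)
n = 500
X = rng.uniform(0.0, 1.0, size=n)
Z = (X > 0.5).astype(float) + rng.normal(scale=0.2, size=n)  # cadlag theta_0

# One-dimensional HAL basis: indicator knots at the observed points,
# h_j(x) = 1{x >= X_j}; the l1-norm of the fitted coefficients approximates
# the variation norm of the fitted function.
H = (X[:, None] >= X[None, :]).astype(float)
Hc, Zc = H - H.mean(axis=0), Z - Z.mean()
beta = lasso_cd(Hc, Zc, lam=0.01)
theta_hat = Z.mean() + Hc @ beta
var_norm_hat = np.abs(beta).sum()   # fitted variation norm (intercept excluded)
```

By lasso duality, sweeping the penalty `lam` traces out the same fits as sweeping the variation-norm bound $M$, which is why the penalized form is used here.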
For ease of illustration, for the rest of this section, we consider scalar-valued $\Psi$, and will discuss vector-valued $\Psi$ only at the end of this subsection. We further introduce the following conditions needed to establish that the HAL-based plug-in estimator is efficient.
\[Acadlag\] $\theta_0$ and $\dot{\Psi}$ are càdlàg.
\[AM\] For some $M<\infty$, $\| \theta_0 \|_{{{\mathrm{v}}}} + \| \dot{\Psi} \|_{{{\mathrm{v}}}} \leq M$.
Condition \[AM\] ensures that certain perturbations of $\theta_0$ still lie in $\Theta_M$, a crucial requirement for proving the asymptotic linearity of our proposed plug-in estimator.
In this section, we fix an $M$ that satisfies Condition \[AM\]. Additional technical conditions can be found in Appendix \[section: HAL additional regularity conditions\]. Let $\hat{\theta}_n=\hat{\theta}_{n,M} \in \operatorname*{argmin}_{\theta \in \Theta_M} n^{-1} \sum_{i=1}^n \ell(\theta)(V_i)$ denote the HAL fit obtained using the bound $M$ in Condition \[AM\]. We present a result on the asymptotic linearity and efficiency of the plug-in estimator based on $\hat{\theta}_n$.
\[THALefficiency\] Under Conditions \[Adloss\]–\[APsiremainder\] and \[Acadlag\]–\[AHALfinitevar\], $\Psi(\hat{\theta}_n)$ is an asymptotically linear estimator of $\Psi(\theta_0)$ with influence function $v \mapsto \alpha_{0,\ell}^{-1} \left\{-\ell_{0}'[\dot{\Psi}] (v) + {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\Psi}] (V) \right]\right\}$, that is, $$\Psi(\hat{\theta}_n) = \Psi(\theta_0) + \frac{1}{n} \sum_{i=1}^n \alpha_{0,\ell}^{-1} \left\{ -\ell_{0}'[\dot{\Psi}] (V_i) + {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\Psi}] (V) \right] \right\}+o_p(n^{-1/2}).$$ As a consequence, $\sqrt{n} [\Psi(\hat{\theta}_n)-\Psi(\theta_0)] \overset{d}{\rightarrow} \text{N}(0, \xi^2 )$ with $\xi^2 := {\textnormal{Var}}_{P_0}(\ell_{0}'[\dot{\Psi}] (V))/\alpha_{0,\ell}^2$. In addition, under Conditions \[Acloseminimizer\] and \[Aempiricalprocesslike\] in Appendix \[section: regularity additional conditions\], $\Psi(\hat{\theta}_n)$ is efficient under a nonparametric model.
We note that, for HAL to achieve the optimal convergence rate, we only need that $M \geq \| \theta_0 \|_{{{\mathrm{v}}}}$ [@benkeser2016; @VanderLaan2017]. The requirement of a larger $M$ imposed by Condition \[AM\] resembles undersmoothing [@Newey1998], as using a larger $M$ would result in a fit that is less smooth than that based on the CV-selected bound. The $L^2(P_0)$-convergence rate of the flexible fit using the larger bound remains the same, but the leading constant may be larger. This is in contrast to traditional undersmoothing, which leads to a fit with a suboptimal rate of convergence.
Under some conditions, the following lemma provides a loose bound on $\| \dot{\Psi} \|_{{{\mathrm{v}}}}$ in the case that $\dot{\Psi}$ has a particular structure.
\[Lvarnorm\] Suppose that $\dot{\Psi}=\dot{\psi} \circ \theta_0$, where $\dot{\psi}: {\mathbb{R}}\rightarrow {\mathbb{R}}$ is differentiable. Let $x^{(\ell)}=\sup \{x: P_0(X \geq x) = 1\}$ where $\sup$ and $\geq$ are entrywise. Assume that $\theta_0$ is differentiable. If each of $\| \theta_0 \|_{{{\mathrm{v}}}}$, $|\dot{\Psi}(x^{(\ell)})|$ and $B := \sup_{z: |z| \leq \| \theta_0 \|_{{{\mathrm{v}}}}} |\dot{\psi}'(z)|$ is finite, then $\| \dot{\Psi} \|_{{{\mathrm{v}}}} \leq B \| \theta_0 \|_{{{\mathrm{v}}}} + |\dot{\Psi}(x^{(\ell)})|$. Hence, $\| \theta_0 \|_{{{\mathrm{v}}}} + \| \dot{\Psi} \|_{{{\mathrm{v}}}} \leq (B+1) \| \theta_0 \|_{{{\mathrm{v}}}} + |\dot{\Psi}(x^{(\ell)})| < \infty$.
When $\theta_0$ is ${\mathbb{R}}^q$-valued, $\theta_0$ can often be viewed as a collection of $q$ real-valued variation-independent functions $\eta_{10}, \ldots, \eta_{q0}$. In this case, we can define $\Theta_M = \{ (\eta_1, \ldots, \eta_q): \eta_j \text{ is c\`{a}dl\`{a}g}, \|\eta_j\|_{{{\mathrm{v}}}} \leq M_j, j=1,\ldots,q \}$ for a positive vector $M=(M_1,\ldots,M_q)$. The subsequent arguments follow analogously, where now each $\eta_j$ is treated as a separate function.
We remark that an undersmoothing condition such as \[AM\] appears to be necessary for a HAL-based plug-in estimator to be efficient. We illustrate this numerically in Section \[section: hal\_sim\]. The choice of a sufficiently large bound $M$ required by Theorem \[THALefficiency\] is by no means trivial, since this choice requires knowledge that the user may not have. Nevertheless, this result forms the basis of the data-driven method that we propose in Section \[section: hal\_sim\] for choosing $M$. We finally remark that, if we wish to plug the same HAL-based $\hat{\theta}_n$ into a rich class of estimands, the chosen bound $M$ needs to be sufficiently large for all estimands of interest.
Data-adaptive selection of the tuning parameter {#section: hal_sim}
-----------------------------------------------
Since it is hard to prespecify a bound $M$ on the variation norm that is sufficiently large to satisfy Condition \[AM\] but also sufficiently small to avoid overfitting for a given data set, it is desirable to select $M$ in a data-adaptive manner. A seemingly natural approach makes use of $k$-fold CV. In particular, for each candidate bound $M$, partition the data into $k$ folds of approximately equal size ($k$ is fixed and does not depend on $n$), in each fold evaluate the performance of the HAL estimator fitted on all other folds based on this candidate $M$, and use the candidate bound $M_n$ with the best average performance across all folds to obtain the final fit. It has been shown that $\hat{\theta}_{n,M_n}$ can achieve the optimal convergence rate under mild conditions [@Vanderlaan2003cv], but $M_n$ appears not to satisfy Condition \[AM\] in general. In particular, the derived bound on $\| \hat{\theta}_n - \theta_0 \|$ relies on an empirical process term, namely $\sup_{\theta \in \Theta_M} |(P_n-P_0) \{ \ell(\theta) - \ell(\theta_0) \}|$, and a larger $M$ implies a larger space $\Theta_M$. Therefore, the bound on $\| \hat{\theta}_n - \theta_0 \|$ grows with $M$. Because $k$-fold CV seeks to optimize out-of-sample performance, $M_n$ generally appears to be close to $\| \theta_0 \|_{{{\mathrm{v}}}}$ and not sufficiently large to obtain an efficient plug-in estimator.
To avoid this issue with the CV-selected bound, we propose a method that takes inspiration from $k$-fold CV, but modifies the bound so that it is guaranteed to yield an efficient plug-in estimator for $\Psi(\theta_0)$. This method may require the analytic expression for $\dot{\Psi}$. In Sections \[section: simple case\] and \[section: general case\], we present methods that do not require this knowledge.
1. Derive an upper bound on $\| \dot{\Psi} \|_{{{\mathrm{v}}}}$ that is a non-decreasing function of the variation norms of functions that can be learned from data (e.g., using Lemma \[Lvarnorm\]). In other words, find a non-decreasing function $F$ such that $\| \dot{\Psi} \|_{{{\mathrm{v}}}} \leq F(\| \eta_{10} \|_{{{\mathrm{v}}}}, \ldots, \| \eta_{q0} \|_{{{\mathrm{v}}}})$ for unknown functions $\eta_{10},\ldots,\eta_{q0}$ that can be assumed to be càdlàg with finite variation norm and can be estimated with HAL.
2. Estimate $\eta_{10},\ldots,\eta_{q0}$ by HAL with $k$-fold CV, and denote the CV-selected bounds for these functions by $M_{1n},\ldots,M_{qn}$.
3. For a small $\epsilon>0$, use the bound $F(M_{1n}+\epsilon,\ldots,M_{qn}+\epsilon)$ to estimate $\theta_0$ with HAL and plug in the fit. We refer to this step of slightly increasing the bounds as *$\epsilon$-relaxation*.
It follows from Lemma \[Lcvbound\] in the Appendix that this method would yield a sufficiently large bound with probability tending to one. In practice, it is desirable for the derived bound on $\| \dot{\Psi} \|_{{{\mathrm{v}}}}$ to be relatively tight to avoid choosing an overly large bound that leads to overfitting in small to moderate samples. We remark that multiplying by $1+\epsilon$ rather than adding $\epsilon$ to each argument also leads to a valid choice for the bound; that is, the bound $F(M_{1n} (1+\epsilon), \ldots, M_{qn} (1+\epsilon))$ is also sufficiently large with probability tending to one. In practice, the user may increase each CV-selected bound by, for example, 5% or 10%. Although it is more natural and convenient to directly use $F(M_{1n},\ldots,M_{qn})$ as the bound, we have only been able to prove the result with a small $\epsilon$-relaxation. However, if the bound is loose and $F$ is continuous, we can show that $\epsilon$-relaxation is unnecessary. The formal argument can be found after Lemma \[Lcvbound\] in the Appendix.
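The three-step procedure above can be sketched as follows. This is a minimal illustration rather than an implementation of the authors' method: we take $\Psi(\theta_0)=P_0\theta_0^2$, for which $F(M)=3M$; we emulate one-dimensional HAL by a lasso over indicator (zero-order spline) basis functions, whose coefficient $\ell_1$-norm plays the role of the variation-norm bound; and we use a single train/validation split in place of $k$-fold CV. All function and variable names are ours.

```python
# Sketch of the three-step bound-enlargement procedure for Psi = P_0 theta_0^2,
# where F(M) = 3M (so M.gcv = 3 x M.cv). A real analysis would use a full HAL
# implementation (e.g. the hal9001 R package); this is a crude 1-d emulation.
import numpy as np

def soft(a, t):
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def lasso_fit(Phi, z, lam, n_pass=50):
    # cyclic coordinate descent for 0.5*mean((z - b0 - Phi@b)^2) + lam*||b||_1
    n, p = Phi.shape
    beta, b0 = np.zeros(p), z.mean()
    resid = z - b0
    css = (Phi ** 2).mean(axis=0)  # per-coordinate curvature (positive here)
    for _ in range(n_pass):
        b0 += resid.mean()
        resid -= resid.mean()
        for j in range(p):
            rho = Phi[:, j] @ resid / n + css[j] * beta[j]
            bj = soft(rho, lam) / css[j]
            resid -= Phi[:, j] * (bj - beta[j])
            beta[j] = bj
    return b0, beta

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-1.0, 1.0, n)
z = (x > 0.0) + 0.25 * rng.standard_normal(n)   # theta_0 = 1{x > 0}

# Step 2: select the variation-norm bound by validation (stand-in for k-fold CV)
tr = np.arange(n) < n // 2
knots = np.sort(x[tr])
Phi_tr = (x[tr][:, None] >= knots[None, :]).astype(float)
Phi_va = (x[~tr][:, None] >= knots[None, :]).astype(float)
fits = [lasso_fit(Phi_tr, z[tr], lam) for lam in np.geomspace(0.2, 1e-3, 10)]
b0_cv, beta_cv = fits[int(np.argmin(
    [np.mean((z[~tr] - b0 - Phi_va @ b) ** 2) for b0, b in fits]))]
M_cv = np.abs(beta_cv).sum()

# Steps 1 and 3: F(M) = 3M with a 5% relaxation, i.e. M.gcv+ = 3 * M_cv * 1.05
M_target = 3.0 * M_cv * 1.05

# Refit on all data, decreasing lam until the enlarged bound is (nearly) used up
knots_all = np.sort(x)
Phi = (x[:, None] >= knots_all[None, :]).astype(float)
for lam in np.geomspace(0.2, 1e-4, 12):
    b0, beta = lasso_fit(Phi, z, lam)
    if np.abs(beta).sum() >= M_target:
        break

psi_hat = np.mean((b0 + Phi @ beta) ** 2)  # plug-in estimate of P_0 theta_0^2
```

In a genuine application the penalized form of HAL makes this natural: the CV-selected penalty corresponds to a coefficient $\ell_1$-norm M.cv, and enlarging the bound corresponds to decreasing the penalty until the fit's variation norm reaches $F(\text{M.cv}(1+\epsilon))$.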
As for methods based on knowledge of an influence function, deriving $\dot{\Psi}$ and a bound for its variation norm requires some expertise, but in some cases this task can be straightforward. The derivation of an influence function is typically based on a fluctuation in the space of distributions, but in many cases, the relation between such fluctuations and the summary of interest is implicit and difficult to handle. In contrast, the derivation of $\dot{\Psi}$ is based on a fluctuation of $\theta_0$, and the summary of interest explicitly depends on $\theta_0$. As a consequence, it can be simpler to derive $\dot{\Psi}$ than to derive an influence function. For example, for the summary $\Psi_\kappa(\theta_0)=P_0 \theta_0^\kappa$ in Example \[example: moments\], we find that $\dot{\Psi}_\kappa=\kappa \theta_0^{\kappa-1}$ by straightforward calculation, whereas the influence function given in Theorem \[THALefficiency\] is more difficult to directly derive analytically.
We illustrate the fact that $M_n$ may not be sufficiently large and show that our proposed method resolves this issue via a simulation study in which $\theta_0: x \mapsto {\mathbb{E}}_{P_0}[Y|X=x]$ and $\Psi: \theta_0 \mapsto P_0 \theta_0^2$. We compare the performance of the plug-in estimators based on the 10-fold CV-selected bound on variation norm (M.cv), the bound derived from the analytic expression of $\dot{\Psi}$ with and without $\epsilon$-relaxation (M.gcv+ and M.gcv respectively), and a sufficiently large oracle choice satisfying Condition \[AM\] (M.oracle). According to Lemma \[Lvarnorm\], M.oracle is $3 \| \theta_0 \|_{{{\mathrm{v}}}}$ and M.gcv is 3$\times$M.cv. We also investigate the performance of 95% Wald CIs based on the influence function. For each resulting plug-in estimator, we investigate the following quantities: $n \cdot \text{MSE}$, $\sqrt{n} \cdot |\text{bias}|$ and CI coverage. More details of this simulation are provided in Appendix \[appendix: simulation\]. In theory, for an efficient estimator, we should find that $n \cdot \text{MSE}$ tends to a constant (the variance of the influence function $\xi^2 := P_0 \text{IF}^2$), $\sqrt{n} \cdot |\text{bias}|$ tends to $0$, and 95% Wald CIs have approximately 95% coverage.
We report performance summaries in Fig \[Fmse\_bias\_hal\] and Table \[TableCI\_hal\]. Judged against these criteria, the plug-in estimators with M.oracle and M.gcv+ achieve efficiency, while the plug-in estimator based on M.cv does not. The desirable performance of M.oracle and M.gcv+ agrees with the available theory, whereas the poor performance of M.cv suggests that cross-validation may not yield a valid choice of the variation-norm bound in general. Interestingly, M.gcv performs similarly to M.oracle and M.gcv+. We conjecture that using an $\epsilon$-relaxation is unnecessary in this setting. In Fig \[FM\_hal\], we can also see that M.cv tends to $\| \theta_0 \|_{{{\mathrm{v}}}}$ and has a high probability of being less than M.oracle. Therefore, this simulation suggests that using a sufficiently large bound — in particular, a bound larger than the CV-selected bound — may be necessary and sufficient for the plug-in estimator to achieve efficiency.
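For concreteness, Wald CIs of the kind used in this simulation can be formed as in the following sketch (our notation, not the authors' code). Under squared-error loss, the influence term of Theorem \[THALefficiency\] works out to $2\theta_0(x)(z-\theta_0(x))$, and estimating the marginal distribution of $X$ by the empirical distribution contributes $\theta_0(x)^2 - \Psi(\theta_0)$, in line with the decomposition of Section \[section: generalized simple case\]; for illustration we evaluate the influence function at the true $\theta_0$, whereas in practice the HAL fit would be plugged in.

```python
# Wald CI for Psi(theta_0) = P_0 theta_0^2 from the estimated influence
# function; a sketch assuming squared-error loss and an empirical marginal.
import numpy as np

def plugin_with_wald_ci(x, z, theta_hat):
    t = np.asarray(theta_hat(x))
    psi = np.mean(t ** 2)                       # plug-in estimate
    if_vals = 2.0 * t * (z - t) + t ** 2 - psi  # estimated influence values
    se = if_vals.std(ddof=1) / np.sqrt(len(z))
    zq = 1.959963984540054                      # 97.5% standard normal quantile
    return psi, (psi - zq * se, psi + zq * se)

rng = np.random.default_rng(3)
n = 2000
x = rng.uniform(-1.0, 1.0, n)
z = x + 0.5 * rng.standard_normal(n)   # theta_0(x) = x, so Psi = E[X^2] = 1/3
psi_hat, (ci_lo, ci_hi) = plugin_with_wald_ci(x, z, lambda u: u)
```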
![The relative MSE, $n \cdot \text{MSE}/\xi^2$, and the relative absolute bias, $\sqrt{n} \cdot |\text{bias}/\Psi(\theta_0)|$, of the plug-in estimator of $\Psi(\theta_0)=P_0 \theta_0^2$ based on HAL for an oracle choice of the bound on variation norm (M.oracle), the 10-fold CV-selected bound (M.cv), a bound based on M.cv and analytic expression of $\dot{\Psi}$ without and with $\epsilon$-relaxation (M.gcv and M.gcv+ respectively). $\xi^2 := P_0 \text{IF}^2$ is the asymptotic variance that the $n \cdot \text{MSE}$ of an efficient estimator should converge to. Note that the $n \cdot \text{MSE}$ for M.oracle, M.gcv and M.gcv+ tends to $\xi^2$ but that for M.cv does not.[]{data-label="Fmse_bias_hal"}](Fmse_bias_hal.pdf)
n M.cv M.gcv M.gcv+ M.oracle
------- ------ ------- -------- ----------
500 0.87 0.96 0.96 0.97
1000 0.87 0.97 0.97 0.97
2000 0.90 0.95 0.95 0.96
5000 0.93 0.95 0.95 0.95
10000 0.89 0.95 0.95 0.95
: Coverage probability of 95% Wald CI of the plug-in estimator of $\Psi(\theta_0)=P_0 \theta_0^2$ based on HAL for an oracle choice of the bound on variation norm (M.oracle), the 10-fold CV-selected bound (M.cv), a bound based on M.cv and analytic expression of $\dot{\Psi}$ without and with $\epsilon$-relaxation (M.gcv and M.gcv+ respectively). The CI is constructed based on the influence function. The coverage for M.oracle, M.gcv and M.gcv+ is approximately 95%, but that for M.cv is not.[]{data-label="TableCI_hal"}
![A boxplot of the ratio of bounds based on 10-fold CV and M.oracle. The horizontal gray thick dashed lines are $1$ and $1/3$. The y-axis is log-scaled for readability. There is a high probability that M.cv is much smaller than M.oracle; M.cv tends to the variation norm of the function being estimated, $\| \theta_0 \|_{{{\mathrm{v}}}}$, corresponding to $1/3$ of M.oracle. Enlarging M.cv according to the analytic expression of $\dot{\Psi}$ with $\epsilon$-relaxation results in sufficiently large bounds. The enlargement without $\epsilon$-relaxation appears to have similar performance.[]{data-label="FM_hal"}](FM_hal.pdf)
Data-adaptive series {#section: simple case}
====================
Proposed method {#section: simple case method}
---------------
For ease of illustration, we consider the case that $\Psi$ is scalar-valued in this section. As we will describe next, our proposed estimation procedure for function-valued features does not rely on $\Psi$ and hence can be used for a class of summaries.
Suppose that $\Theta$ is a vector space of ${\mathbb{R}}^q$-valued functions equipped with the $L^2(P_0)$-inner product. Further, suppose that $\dot{\Psi}=\dot{\psi} \circ \theta_0$ for some function $\dot{\psi}: {\mathbb{R}}^q \rightarrow {\mathbb{R}}^q$. This holds, for example, when $\Psi: \theta \mapsto P_0 (f \circ \theta)$ for a fixed differentiable function $f$ in Example \[example: moments\]. In this case, $\dot{\Psi}=f' \circ \theta_0$ and hence $\dot{\psi}=f'$. Particularly useful examples include Examples \[example: moments\] and \[example: treatment effect heterogeneity\]. For now we assume that the marginal distribution of $X$ is known so that we only need to estimate $\theta_0$ for this summary. We will address the more difficult case in which the marginal distribution of $X$ is unknown in Section \[section: generalized simple case\].
Let $\theta_n^0$ be a given initial flexible ML fit of $\theta_0$ and consider the data-adaptive sieve-like subspaces based on $\theta_n^0$, $\Theta_n := \Theta_{n,\theta_n^0} := \text{Span}\{ \phi_1, \phi_2, \ldots, \phi_K \} \circ \theta_n^0$, where $\phi_1,\phi_2,\ldots$ are ${\mathbb{R}}^q$-valued basis functions in a series defined on ${\mathbb{R}}^q$ and $K=K(n)$ is a deterministic number of terms in the series — we will consider selecting $K$ via CV in Section \[section: CV\]. Let $\theta_n^*=\theta_n^*(\theta_n^0) \in \operatorname*{argmin}_{\theta \in \Theta_n} n^{-1} \sum_{i=1}^{n} \ell(\theta)(V_i)$ denote the series estimator within this data-adaptive sieve-like subspace that minimizes the empirical risk. We propose to use $\Psi(\theta_n^*)$ to estimate $\Psi(\theta_0)$.
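The construction above can be sketched in a few lines under squared-error loss, in which case the empirical risk minimizer over $\Theta_n$ is an ordinary least-squares fit. This is a minimal illustration: a quantile-binned-means regressor stands in for the initial ML fit $\theta_n^0$ (any flexible learner could be used), the basis $\phi_1,\ldots,\phi_K$ is a cosine series on the rescaled range of $\theta_n^0$, and all names are ours.

```python
# Minimal sketch of the data-adaptive series estimator under squared-error loss.
import numpy as np

def binned_mean_fit(x, z, n_bins=20):
    # crude stand-in for a flexible initial ML fit theta_n^0
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    means = np.array([z[idx == b].mean() for b in range(n_bins)])
    def predict(xnew):
        i = np.clip(np.searchsorted(edges, xnew, side="right") - 1, 0, n_bins - 1)
        return means[i]
    return predict

def data_adaptive_series(x, z, theta0_hat, K):
    # Theta_n = Span{phi_1, ..., phi_K} o theta_n^0 with a cosine basis;
    # under squared-error loss the empirical risk minimizer is OLS
    t = theta0_hat(x)
    lo, hi = t.min(), t.max()
    s = lambda u: (np.clip(u, lo, hi) - lo) / (hi - lo + 1e-12)
    basis = lambda u: np.column_stack([np.cos(j * np.pi * s(u)) for j in range(K)])
    beta, *_ = np.linalg.lstsq(basis(t), z, rcond=None)
    return lambda xnew: basis(theta0_hat(xnew)) @ beta

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(-1.0, 1.0, n)
z = x ** 2 + 0.2 * rng.standard_normal(n)   # theta_0(x) = x^2

theta_star = data_adaptive_series(x, z, binned_mean_fit(x, z), K=6)
psi_hat = np.mean(theta_star(x) ** 2)   # plug-in for Psi(theta_0) = P_0 theta_0^2
```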
Results for a deterministic number of terms {#section: simple case theory}
-------------------------------------------
Following [@Chen2007; @Shen1997], our proofs of the validity of our data-adaptive series approach make heavy use of projection operators. We use $\pi_n := \pi_{n,\theta_n^0}$ to denote the projection operator for functions in $\Theta$ onto $\Theta_n=\Theta_{n,\theta_n^0}$ with respect to $\langle \cdot, \cdot \rangle$. For any function $\theta \in \Theta$, let $\Pi_{n,\theta}$ denote the operator that takes as input a function $g: {\mathbb{R}}^q \rightarrow {\mathbb{R}}^q$ for which $g \circ \theta \in L^2(P_0)$ and outputs a function $\Pi_{n,\theta}(g): {\mathbb{R}}^q \rightarrow {\mathbb{R}}^q$ such that $\Pi_{n,\theta}(g) \circ \theta=\pi_{n,\theta}(g \circ \theta)$. In other words, letting $\beta_j$ be the quantity that depends on $g$ and $\theta$ such that $\pi_{n,\theta}(g \circ \theta)=\left( \sum_{j=1}^{K} \beta_j \phi_j \right) \circ \theta$, we define $\Pi_{n,\theta} (g) := \sum_{j=1}^K \beta_j \phi_j$. The operator $\Pi_{n,\theta}$ may also be interpreted as follows: letting $P_\theta$ be the distribution of $\theta(X)$ with $V=(X,Z) \sim P_0$, then $\Pi_{n,\theta}$ is the projection operator of functions ${\mathbb{R}}^q \rightarrow {\mathbb{R}}^q$ with respect to the $L^2(P_\theta)$-inner product. We use ${\mathcal{I}}$ to denote the identity function in ${\mathbb{R}}^q$.
We now present additional conditions we will require to ensure that $\Psi(\theta_n^*)$ is an efficient estimator of $\Psi(\theta_0)$.
\[Ainit\] $\| \theta_n^0 - \theta_0 \|=o_p(n^{-1/4})$.
\[Aestimation\] $\| \theta_n^* - \pi_n(\theta_0) \|=o_p(n^{-1/4})$.
\[Aapproxidentity\] $\| \theta_0 - \Pi_{n,\theta_0}({\mathcal{I}}) \circ \theta_0 \|=o(n^{-1/4})$.
\[Aapproxw\] $\| [\dot{\psi} - \Pi_{n,\theta_0}(\dot{\psi})] \circ \theta_0 \| \cdot \| \theta_n^* - \theta_0 \|=o_p(n^{-1/2})$.
Appendix \[section: series additional regularity conditions\] contains further technical conditions and Appendix \[section: condition discussion\] discusses their plausibility. As discussed in Appendix \[section: condition discussion\], Conditions \[Aestimation\]–\[Aapproxw\] typically imply restrictions on the growth rate of $K$: if $K$ grows too quickly with $n$, then Condition \[Aestimation\] may be violated; if $K$ instead grows too slowly, then Conditions \[Aapproxidentity\] and \[Aapproxw\] may be violated. We now present a theorem ensuring the asymptotic linearity and efficiency of the plug-in estimator based on $\theta_n^*$.
\[Tefficiency\] Under Conditions \[Adloss\]–\[APsiremainder\] and \[Ainit\]–\[Afinitevar\], $\Psi(\theta_n^*)$ is an asymptotically linear estimator of $\Psi(\theta_0)$ with influence function $v \mapsto \alpha_{0,\ell}^{-1} \left\{-\ell_{0}'[\dot{\psi} \circ \theta_0](v) + {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\psi} \circ \theta_0](V) \right]\right\}$, that is, $$\Psi(\theta_n^*) = \Psi(\theta_0) + \frac{1}{n} \sum_{i=1}^n \alpha_{0,\ell}^{-1} \left\{ -\ell_{0}'[\dot{\psi} \circ \theta_0](V_i) + {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\psi} \circ \theta_0](V) \right] \right\}+o_p(n^{-1/2}).$$ As a consequence, $\sqrt{n} [\Psi(\theta_n^*)-\Psi(\theta_0)] \overset{d}{\rightarrow} N(0, \xi^2)$ with $\xi^2 := {\textnormal{Var}}_{P_0}(\ell_{0}'[\dot{\psi} \circ \theta_0](V))/\alpha_{0,\ell}^2$. In addition, under Conditions \[Acloseminimizer\] and \[Aempiricalprocesslike\] in Appendix \[section: regularity additional conditions\], $\Psi(\theta_n^*)$ is efficient under a nonparametric model.
\[remark: targeted series\] Consider the general case in which it may not be true that $\dot{\Psi}$ can be represented as $\dot{\psi} \circ \theta_0$ for some $\dot{\psi}: {\mathbb{R}}^q \rightarrow {\mathbb{R}}^q$. If the analytic expression of $\dot{\Psi}$ can be derived and $\dot{\Psi}$ can be estimated by $\dot{\Psi}_n$ such that $\| \dot{\Psi}_n - \dot{\Psi} \| \cdot \| \theta_n^0 - \theta_0 \|=o_p(n^{-1/2})$, then our data-adaptive series can take a special form that is targeted towards $\Psi$. Specifically, letting $\vartheta_0:=(\theta_0,\dot{\Psi})^\top$ and $\varPsi(\vartheta_0):=\Psi(\theta_0)$, it is straightforward to show that the gradient of $\varPsi$ is $\dot{\varPsi}=(\dot{\Psi},0)^\top=(e_2,\bm{0})^\top \vartheta_0$ with $\bm{0}=(0,0)^\top$ and $e_2=(0,1)^\top$, which is a function composed with $\vartheta_0$. We can set $\vartheta_n^0=(\theta_n^0,\dot{\Psi}_n)^\top$ and $\Theta_n=\text{Span}\{\theta_n^0,\dot{\Psi}_n\}$ in our data-adaptive series. This approach does not have a growing number of terms in $\Theta_n$ and is not similar to sieve estimation, but can be treated as a special case of data-adaptive series. It can be shown that Conditions \[Ainit\]–\[Aapproxw\] are still satisfied for $\vartheta$ and $\varPsi$ with this choice of $\Theta_n$, and hence our data-adaptive series estimator leads to an efficient plug-in estimator. We remark that the introduction of $\vartheta$ and $\varPsi$ is a purely theoretical device, and this targeted approach to estimation is quite similar to that used in the context of TMLE [@VanderLaan2006; @VanderLaan2018].
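The targeted special case of Remark \[remark: targeted series\] admits a particularly short sketch: under squared-error loss, the empirical risk minimizer over $\Theta_n = \text{Span}\{\theta_n^0, \dot{\Psi}_n\}$ is a two-column least-squares fit. The initial fit $\theta_n^0$ (deliberately biased here) and the gradient estimate $\dot{\Psi}_n$ below are illustrative stand-ins, not the paper's constructions.

```python
# Targeted special case: empirical risk minimization over Span{theta_n^0, dPsi_n}
# reduces to a two-column OLS fit under squared-error loss.
import numpy as np

def targeted_span_fit(x, z, theta0_hat, dPsi_hat):
    D = np.column_stack([theta0_hat(x), dPsi_hat(x)])
    coef, *_ = np.linalg.lstsq(D, z, rcond=None)
    return lambda xnew: np.column_stack([theta0_hat(xnew), dPsi_hat(xnew)]) @ coef

rng = np.random.default_rng(4)
n = 1500
x = rng.uniform(-1.0, 1.0, n)
z = x + 0.2 * rng.standard_normal(n)  # theta_0(x) = x
theta0_hat = lambda u: u + 0.1        # initial fit with a constant bias
dPsi_hat = lambda u: 2.0 * u          # estimate of dPsi = 2 * theta_0

theta_star = targeted_span_fit(x, z, theta0_hat, dPsi_hat)
fit_mse = np.mean((theta_star(x) - x) ** 2)  # bias of theta_n^0 is corrected
psi_hat = np.mean(theta_star(x) ** 2)        # plug-in for P_0 theta_0^2
```

Because the span of the two columns here contains the truth, the refit removes the constant bias of $\theta_n^0$, mirroring how the targeted fluctuation corrects the plug-in bias for the summary of interest.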
Summaries involving the marginal distribution of $X$ {#section: generalized simple case}
----------------------------------------------------
We now generalize the setting considered thus far by allowing the parameter to depend both on $\theta_0$ and on $P_0$, i.e., estimating $\Psi(\theta_0, P_0)$. The example given at the beginning of Section \[section: simple case method\], namely that of estimating $\Psi(\theta_0)=P_0 (f \circ \theta_0)$, is a special case of this more general setting. In what follows, we will make use of the following conditions:
\[Apackage\] When we regard $\Psi(\theta_0, P_0)$ as the mapping $\theta \mapsto \Psi(\theta,P_0)$ evaluated at $\theta_0$, Conditions \[Adloss\]–\[APsiremainder\], \[Ainit\]–\[Aapproxw\] and \[ALipschitzidentity\]–\[Afinitevar\] are satisfied for estimating $\Psi(\theta_0, P_0)$.
\[AHadamard\] The mapping $P \mapsto \Psi(\theta_0,P)$ is Hadamard differentiable at $P_0$.
By the functional delta method, it follows that $\Psi(\theta_0,P_n)=\Psi(\theta_0,P_0) + P_n \text{IF}_0 + o_p(n^{-1/2})$ for a function $\text{IF}_0$ satisfying $P_0 \text{IF}_0=0$ and $P_0 \text{IF}_0^2 < \infty$.
\[Asecondorderdifference\] $$[\Psi(\theta_n^*,P_n) - \Psi(\theta_0,P_n)] - [\Psi(\theta_n^*,P_0) - \Psi(\theta_0,P_0)] = o_p(n^{-1/2}).$$
This condition usually holds, for example, when $\Psi(\theta_0, P_0)=P_0 (f \circ \theta_0)$, as in this case the left-hand side is equal to $(P_n-P_0) (f \circ \theta_n^* - f \circ \theta_0)$, which is $o_p(n^{-1/2})$ under empirical process conditions.
\[TefficiencyP\] Under Conditions \[Apackage\]–\[Asecondorderdifference\], $\Psi(\theta_n^*,P_n)$ is an asymptotically linear estimator of $\Psi(\theta_0,P_0)$ with influence function $$v \mapsto \alpha_{0,\ell}^{-1} \left\{-\ell_{0}'[\dot{\psi} \circ \theta_0](v) + {\mathbb{E}}_{P_0}[\ell_{0}'[\dot{\psi} \circ \theta_0](V)] \right\} + \mathrm{IF}_0(v),$$ that is, $$\Psi(\theta_n^*,P_n) = \Psi(\theta_0,P_0) + \frac{1}{n} \sum_{i=1}^n \left\{ -\alpha_{0,\ell}^{-1} \ell_{0}'[\dot{\psi} \circ \theta_0](V_i) + \alpha_{0,\ell}^{-1} {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\psi} \circ \theta_0](V) \right] + \mathrm{IF}_0(V_i) \right\}+o_p(n^{-1/2}).$$ As a consequence, $\sqrt{n} [\Psi(\theta_n^*,P_n)-\Psi(\theta_0,P_0)] \overset{d}{\rightarrow} N(0, \xi^2)$ with $\xi^2 := {\textnormal{Var}}_{P_0}(\alpha_{0,\ell}^{-1} \ell_{0}'[\dot{\psi} \circ \theta_0](V) + \mathrm{IF}_0(V))$.
This result is straightforward to verify by decomposing $\Psi(\theta_n^*,P_n) - \Psi(\theta_0,P_0)$ as $$\begin{aligned}
[\Psi(\theta_n^*,P_0) - \Psi(\theta_0,P_0)] + [\Psi(\theta_0,P_n) - \Psi(\theta_0,P_0)] + \{[\Psi(\theta_n^*,P_n) - \Psi(\theta_0,P_n)] - [\Psi(\theta_n^*,P_0) - \Psi(\theta_0,P_0)]\}.\end{aligned}$$ Moreover, under conditions similar to Conditions \[Acloseminimizer\] and \[Aempiricalprocesslike\] given in Appendix \[section: regularity additional conditions\], we can show that $\Psi(\theta_n^*,P_n)$ is efficient under a nonparametric model.
Conditions \[AHadamard\] and \[Asecondorderdifference\] can be relaxed. Specifically, if $\hat{P}_n$ is an estimator of $P_0$ that satisfies $\Psi(\theta_0,\hat{P}_n)=\Psi(\theta_0,P_0) + P_n \text{IF}_0 + o_p(n^{-1/2})$ for an influence function $\text{IF}_0$, and Condition \[Asecondorderdifference\] holds with $P_n$ replaced by $\hat{P}_n$, then $\Psi(\theta_n^*,\hat{P}_n)$ is an asymptotically linear estimator of $\Psi(\theta_0,P_0)$.
CV selection of the number of terms in data-adaptive series {#section: CV}
-----------------------------------------------------------
In the preceding subsections, we established the efficiency of the plug-in estimator based on suitable rates of growth for $K$ relative to the sample size $n$. In this subsection, we show that, under some conditions, such a $K$ can be selected by $k$-fold CV: after obtaining $\theta_n^0$, for each $K$ in a range of candidates, we can calculate the cross-validated risk from $k$ folds and choose the value of $K$ with the smallest CV risk. We denote the number of terms in the series that CV selects by $K^*$. In this section, we use $K$ in the subscripts for notation related to data-adaptive sieve-like spaces and projections; this represents a slight abuse of notation because, in Sections \[section: simple case method\] and \[section: simple case theory\], these subscripts were instead used for the sample size $n$. That is, we use $\Theta_{K,\theta}$ to denote $\text{Span}\{ \phi_1, \phi_2, \ldots, \phi_K \} \circ \theta$, $\pi_{K,\theta}$ to denote the projection onto $\Theta_{K,\theta}$, $\Pi_{K,\theta}$ to denote the operator such that $\Pi_{K,\theta} (g) \circ \theta=\pi_{K,\theta}(g \circ \theta)$ for all $g: {\mathbb{R}}^q \rightarrow {\mathbb{R}}^q$ with $g \circ \theta \in L^2(P_0)$, and $\theta_n^\sharp := \theta_{K^*}^*(\theta_n^0)$ to denote the data-adaptive series estimator based on $\theta_n^0$ and $K^*$.
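The $k$-fold CV selector for $K$ can be sketched as follows, again under squared-error loss with a cosine series composed with a fixed (here deliberately crude) initial fit $\theta_n^0$. All names and the toy data-generating process are illustrative.

```python
# Sketch of k-fold CV over the number of series terms K, followed by a refit.
import numpy as np

def cv_select_K(x, z, theta0_hat, K_grid, k=5):
    t_all = theta0_hat(x)
    lo, hi = t_all.min(), t_all.max()
    s = lambda u: (np.clip(u, lo, hi) - lo) / (hi - lo + 1e-12)
    basis = lambda u, K: np.column_stack(
        [np.cos(j * np.pi * s(u)) for j in range(K)])
    folds = np.arange(len(x)) % k          # fine when rows are in random order
    risks = []
    for K in K_grid:
        r = 0.0
        for f in range(k):
            trn, val = folds != f, folds == f
            beta, *_ = np.linalg.lstsq(basis(t_all[trn], K), z[trn], rcond=None)
            r += np.mean((z[val] - basis(t_all[val], K) @ beta) ** 2)
        risks.append(r / k)
    K_star = K_grid[int(np.argmin(risks))]  # CV-selected number of terms
    beta, *_ = np.linalg.lstsq(basis(t_all, K_star), z, rcond=None)
    return K_star, lambda xnew: basis(theta0_hat(xnew), K_star) @ beta

rng = np.random.default_rng(2)
n = 800
x = rng.uniform(-1.0, 1.0, n)
z = np.where(x > 0.0, 1.0, -1.0) + 0.3 * rng.standard_normal(n)  # discontinuous
theta0_hat = lambda u: np.where(u > 0.0, 0.9, -0.9)  # rough initial ML fit
K_star, theta_sharp = cv_select_K(x, z, theta0_hat, K_grid=[1, 2, 3, 5, 8])
psi_hat = np.mean(theta_sharp(x) ** 2)   # plug-in for P_0 theta_0^2 (truth: 1)
```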
\[Abadseries\] There exists a constant $C>0$ such that, with probability tending to one, $\| \dot{\psi} \circ \theta_n^0 - \Pi_{K,\theta_n^0}(\dot{\psi}) \circ \theta_n^0 \| \leq C \| \theta_n^0 - \Pi_{K,\theta_n^0}({\mathcal{I}}) \circ \theta_n^0 \|$ for all $K$.
This condition is equivalent to $\| \dot{\psi} - \Pi_{K,\theta_n^0}(\dot{\psi}) \|_{L^2(P_{\theta_n^0})} \leq C \| {\mathcal{I}}- \Pi_{K,\theta_n^0}({\mathcal{I}}) \|_{L^2(P_{\theta_n^0})}$ for all $K$ with probability tending to one. It does not permit the use of common series, such as polynomial and spline series, for general summaries. Indeed, with such series, ${\mathcal{I}}$ would be exactly contained in the span of $\phi_1,\ldots,\phi_K$ for sufficiently large $K$, in which case the right-hand side would be zero for all sufficiently large $K$, whereas the left-hand side is generally nonzero for all $K$. This condition does, however, permit the use of trigonometric series and wavelets. If such series are used, then this condition also imposes a strong smoothness condition on $\dot{\psi}$, requiring that it be not much rougher than the identity function ${\mathcal{I}}$. Nonetheless, this may not be stringent in many interesting examples. For example, if $\Psi(\theta)=P_0 (f \circ \theta)$ for a fixed function $f$, then $\dot{\psi}$ equals $f'$ and hence can be expected to satisfy this strong smoothness condition provided $f \in C^\infty$. In applications, meaningful estimands usually involve $f$ satisfying this smoothness condition.
The following theorem justifies the use of $k$-fold CV to select $K$ under appropriate conditions.
\[Tcv\] Assume that Conditions \[Adloss\]–\[APsiremainder\], \[Ainit\]–\[Aapproxidentity\], \[Abadseries\], \[Aempiricalprocess\] and \[Afinitevar\] hold for a deterministic $K=K(n)$, and that part \[ALipschitzw first half\] of Condition \[ALipschitzw\] holds. Then, with $\theta_n^\sharp:=\theta_{K^*}^*(\theta_n^0)$, $\Psi(\theta_n^\sharp)$ is an asymptotically linear estimator of $\Psi(\theta_0)$ with influence function $v \mapsto \alpha_{0,\ell}^{-1} \left\{-\ell_{0}'[\dot{\psi} \circ \theta_0](v) + {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\psi} \circ \theta_0](V) \right]\right\}$, that is, $$\Psi(\theta_n^\sharp) = \Psi(\theta_0) + \frac{1}{n} \sum_{i=1}^n \alpha_{0,\ell}^{-1} \left\{ -\ell_{0}'[\dot{\psi} \circ \theta_0](V_i) + {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\psi} \circ \theta_0](V) \right] \right\}+o_p(n^{-1/2}).$$ As a consequence, $\sqrt{n} [\Psi(\theta_n^\sharp)-\Psi(\theta_0)] \overset{d}{\rightarrow} N(0, \xi^2)$ with $\xi^2 := {\textnormal{Var}}_{P_0}(\ell_{0}'[\dot{\psi} \circ \theta_0](V))/\alpha_{0,\ell}^2$. In addition, under Conditions \[Acloseminimizer\] and \[Aempiricalprocesslike\] in Appendix \[section: regularity additional conditions\], $\Psi(\theta_n^\sharp)$ is efficient under a nonparametric model.
Simulation {#section: sim}
----------
### Demonstration of Theorem \[Tcv\] {#section: demonstrate data-adaptive series}
We illustrate our method in a simulation in which we take $\theta_0(x)={\mathbb{E}}_{P_0}[Z|X=x]$ and $\Psi(\theta_0)=P_0 \theta_0^2$. This is a special case of Example \[example: moments\]. The true function $\theta_0$ is chosen to be discontinuous, which violates the smoothness assumptions commonly required in traditional series estimation. In this case, $\dot{\psi}=2 {\mathcal{I}}$ and so the constant in Condition \[Abadseries\] is 2. We compare the performance of plug-in estimators based on three different nonparametric regressions: (i) polynomial regression with degree selected by 10-fold CV (poly), which results in a traditional sieve estimator, (ii) gradient boosting (xgb) [@Friedman2001; @Friedman2002; @Mason1999; @Mason2000], and (iii) data-adaptive trigonometric series estimation with gradient boosting as the initial ML fit and 10-fold CV to select the number of terms in the series (xgb.trig). We also compare these plug-in estimators with the one-step correction estimator [@Pfanzagl1982] based on gradient boosting (xgb.1step). Further details of this simulation can be found in Appendix \[appendix: simulation\].
Fig \[Fmse\_bias\] presents $n \cdot \text{MSE}$ and $\sqrt{n} \cdot |\text{bias}|$ for each estimator, whereas Table \[TableCI\] presents the coverage probability of 95% Wald CIs based on these estimators. We find that xgb.trig and xgb.1step estimators perform well, while poly and xgb plug-in estimators do not appear to be efficient. Since polynomial series estimators only work well when estimating smooth functions, in this simulation, we would not expect the fit from the polynomial series estimator to converge sufficiently fast, and consequently, we would not expect the resulting plug-in estimator to be efficient. In contrast, gradient boosting is a flexible ML method that can learn discontinuous functions, so we can expect an efficient plug-in estimator based on this ML method. However, gradient boosting is not designed to approximately solve the estimating equation that achieves the small-bias property for this particular summary, so we would not expect its naïve plug-in estimator to be efficient. Based on gradient boosting, our estimator and the one-step corrected estimator both appear to be efficient, but our method has the advantage of being a plug-in estimator. Moreover, the construction of our estimator does not require knowledge of the analytic expression of an influence function.
![The relative MSE, $n \cdot \text{MSE}/\xi^2$, and the relative absolute bias, $\sqrt{n} \cdot |\text{bias}/\Psi(\theta_0)|$, of estimators of $\Psi(\theta_0)=P_0 \theta_0^2$. $\xi^2 := P_0 \text{IF}^2$ is the asymptotic variance that the $n \cdot \text{MSE}$ of an efficient estimator should converge to. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The y-axis for relative MSE is log-scaled for readability. Note that the $n \cdot \text{MSE}$ for xgb.trig and xgb.1step tend to $\xi^2$, but those for poly and xgb do not.[]{data-label="Fmse_bias"}](Fmse_bias.pdf)
n poly xgb xgb.1step xgb.trig
------- ------ ------ ----------- ----------
500 0.90 0.90 0.95 0.95
1000 0.86 0.89 0.95 0.95
2000 0.74 0.88 0.96 0.96
5000 0.47 0.88 0.94 0.94
10000 0.16 0.87 0.95 0.96
20000 0.02 0.86 0.96 0.96
: Coverage probability of 95% Wald CI based on estimators of $\Psi(\theta_0)=P_0 \theta_0^2$. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The CI is constructed based on the influence function. The coverage probabilities for xgb.trig and xgb.1step are approximately 95%, but those for poly and xgb are not.[]{data-label="TableCI"}
We also investigate the effect of the choice of $K$ on the performance of our method. Fig \[FmseK\] presents $n \cdot \text{MSE}$ for the data-adaptive series estimator with different choices of $K$. We can see that our method is insensitive to the choice of $K$ in this simulation setting. Although a relatively small $K$ performs better, choosing a much larger $K$ does not appear to substantially harm the behavior of the estimator. This insensitivity to the selected tuning parameter suggests that in some applications, without using CV, an almost arbitrary choice of $K$ that is sufficiently large might perform well.
![$n \cdot \text{MSE}$ of estimators of $\Psi(\theta_0)=P_0 \theta_0^2$ based on data-adaptive series with different choices of $K$. The horizontal gray thick dashed line is the asymptotic variance that the $n \cdot \text{MSE}$ of an efficient estimator should converge to, $\xi^2 := P_0 \text{IF}^2$. Note that $n \cdot \text{MSE}$ is not sensitive to the choice of $K$ over a wide range of $K$.[]{data-label="FmseK"}](FmseK.pdf)
### Violation of Condition \[Abadseries\] {#section: rough Psi}
For the $k$-fold CV selection of $K$ in our method to yield an efficient plug-in estimator, $\Psi$ must be highly smooth in the sense that $\dot{\psi}$ can be approximated by the series about as well as can the identity function (see Condition \[Abadseries\]). Although we have argued that this condition is reasonable, in this section, we explore via simulation the behavior of our method based on CV when $\dot{\psi}$ is rough. We again take $\theta_0: x \mapsto {\mathbb{E}}_{P_0}[Z|X=x]$ and an artificial summary $\Psi(\theta_0)=P_0 (f \circ \theta_0)$, where $f$ is an element of $C^1[-1,1]$ but not of $C^2[-1,1]$. In this case, $\dot{\psi}=f'$ is very rough, so we do not expect it to be approximated by a trigonometric series as well as the identity function. However, it is sufficiently smooth to allow for the existence of a deterministic $K$ that achieves efficiency. Further simulation details are provided in Appendix \[appendix: simulation\].
Table \[Tablenonsmooth\] presents the performance of our estimator based on 10-fold CV. We note that it performs reasonably well in terms of the $n\cdot \text{MSE}$ criterion. However, it is unclear whether its scaled bias converges to zero as $n$ grows, so our method may retain non-negligible bias in this setting. The coverage of 95% Wald CIs is close to the nominal level, suggesting that the bias is fairly small relative to the standard error of the estimator at the sample sizes considered. One possible explanation for the good performance observed is that the $L^2(P_0)$-convergence rate of $\theta_n^*$ is much faster than $n^{-1/4}$, which allows for a slower convergence rate of the approximation error $\| \dot{\psi} \circ \theta_0 - \Pi_{n,\theta_0}(\dot{\psi}) \circ \theta_0 \|$ (see Appendix \[section: condition discussion\]). This simulation shows that our proposed method may still perform well even if Condition \[Abadseries\] is violated, especially when the initial ML fit is close to the unknown function.
n relative $\text{MSE}$ root-$n$ absolute relative bias 95% Wald CI coverage
------- ----------------------- --------------------------------- ----------------------
500 0.88 3.95 0.97
1000 0.89 3.73 0.96
2000 0.79 3.15 0.97
5000 0.78 2.02 0.97
10000 0.88 2.57 0.97
20000 0.88 1.75 0.96
: Performance of the plug-in estimator of $\Psi(\theta_0)=P_0 (f \circ \theta_0)$ based on data-adaptive series. Here $f$ is not infinitely differentiable. The relative MSE is $n \cdot \text{MSE}/\xi^2$ where $\xi^2 := P_0 \text{IF}^2$ is the asymptotic variance that the $n \cdot \text{MSE}$ of an efficient estimator should converge to; the root-$n$ abs relative bias is $\sqrt{n} |\text{bias}/\Psi(\theta_0)|$. The performance appears to be acceptable in view of the small $\text{MSE}$ and reasonable CI coverage.[]{data-label="Tablenonsmooth"}
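To make the procedure concrete, the following minimal sketch (with an artificial data-generating process and hypothetical choices of initial learner, basis, and $f$) illustrates the data-adaptive trigonometric series refit with $K$ selected by $k$-fold CV, followed by the plug-in estimate of $\Psi(\theta_0)=P_0 (f \circ \theta_0)$:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

def trig_basis(t, K):
    """Trigonometric basis with K frequencies evaluated at the values t
    of the initial fit: 1, cos(pi*k*t), sin(pi*k*t) for k = 1..K."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols += [np.cos(np.pi * k * t), np.sin(np.pi * k * t)]
    return np.column_stack(cols)

def refit(theta0_vals, z, K):
    """Least-squares refit of z on the basis composed with the initial fit."""
    beta, *_ = np.linalg.lstsq(trig_basis(theta0_vals, K), z, rcond=None)
    return beta

# artificial data with theta_0(x) = E[Z | X = x] = sin(3x)
n = 2000
x = rng.uniform(-1, 1, size=(n, 1))
z = np.sin(3 * x[:, 0]) + rng.normal(scale=0.5, size=n)

# initial flexible ML fit theta_n^0 (gradient boosting, one possible choice)
theta0_vals = GradientBoostingRegressor(random_state=0).fit(x, z).predict(x)

def cv_risk(K, folds=10):
    """k-fold CV estimate of the risk of the K-term series refit
    (for simplicity the initial fit is not cross-fitted here)."""
    risks = []
    for tr, te in KFold(folds, shuffle=True, random_state=0).split(x):
        pred = trig_basis(theta0_vals[te], K) @ refit(theta0_vals[tr], z[tr], K)
        risks.append(np.mean((z[te] - pred) ** 2))
    return np.mean(risks)

K_star = min(range(1, 11), key=cv_risk)     # CV-selected number of terms

# series estimator theta_n^* and plug-in estimate of Psi = P_0 (f o theta_0)
theta_star = trig_basis(theta0_vals, K_star) @ refit(theta0_vals, z, K_star)
f = np.square                               # a smooth f, for illustration only
psi_hat = np.mean(f(theta_star))
```

The refit is ordinary least squares here because the loss is squared error; for other losses the inner fit would be the corresponding empirical risk minimizer over the span.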
Generalized data-adaptive series {#section: general case}
================================
Proposed method {#section: general case method}
---------------
As in Section \[section: simple case\], we consider the case that $\Psi$ is scalar-valued in this section. The assumption that $\dot{\Psi}=\dot{\psi} \circ \theta_0$ may be too restrictive for general summaries such as those in Examples \[example: average derivative\] and \[example: ATE\], especially if $\dot{\Psi}$ is not derived analytically (see Remark \[remark: targeted series\]). In this section, we generalize the method in Section \[section: simple case\] to deal with these summaries. Letting ${\mathcal{I}}_x$ be the identity function defined on $\mathcal{X}$, we can readily generalize the above method to the case where $\dot{\Psi}$ can be represented as $\dot{\psi} \circ (\theta_0, {\mathcal{I}}_x)$ for a function $\dot{\psi}: {\mathbb{R}}^q \times \mathcal{X} \rightarrow {\mathbb{R}}^q$; that is, $\dot{\Psi}(x) = \dot{\psi}(\theta_0(x),x)$. This form holds trivially if we set $\dot{\psi}(t,x)=\dot{\Psi}(x)$, i.e., $\dot{\psi}$ is independent of its first argument, but we can utilize flexible ML methods if $\dot{\psi}$ is nontrivial. Again, we assume $\Theta$ is a vector space of ${\mathbb{R}}^q$-valued functions equipped with the $L^2(P_0)$-inner product. We assume $\dot{\psi}$ can be approximated well by a basis $\phi_1,\phi_2,\ldots: {\mathbb{R}}^q \times \mathcal{X} \rightarrow {\mathbb{R}}^q$, and consider the data-adaptive sieve-like subspace $\Theta_n := \Theta_{n,\theta_n^0} := \text{Span}\{ \phi_1, \ldots, \phi_K \} \circ (\theta_n^0,{\mathcal{I}}_x)$. We propose to use $\Psi(\theta_n^*)$ to estimate $\Psi(\theta_0)$, where $\theta_n^*=\theta_n^*(\theta_n^0) \in \operatorname*{argmin}_{\theta \in \Theta_n} n^{-1} \sum_{i=1}^{n} \ell(\theta)(V_i)$ denotes the series estimator within $\Theta_n$ minimizing the empirical risk.
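With squared-error loss, the empirical risk minimizer $\theta_n^*$ over $\Theta_n$ is simply least squares in the composed basis evaluated at $(\theta_n^0(x), x)$. A minimal sketch with a hypothetical joint trigonometric basis (note that including the coordinate $t$ itself keeps the identity function in the span):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1, 1, size=(n, 1))
theta_true = np.where(x[:, 0] > 0, 1.0, -1.0) * x[:, 0] ** 2   # artificial theta_0
z = theta_true + rng.normal(scale=0.4, size=n)

# initial flexible fit theta_n^0
theta0_vals = GradientBoostingRegressor(random_state=0).fit(x, z).predict(x)

def composed_basis(t, x1, K):
    """phi_1, phi_2, ... evaluated at (theta(x), x): trig terms in t = theta(x)
    and in x plus low-order interactions (a hypothetical joint basis).
    Including t itself keeps the identity function in the span."""
    cols = [np.ones_like(t), t, x1, t * x1]
    for k in range(1, K + 1):
        cols += [np.cos(np.pi * k * t), np.sin(np.pi * k * t),
                 np.cos(np.pi * k * x1), np.sin(np.pi * k * x1)]
    return np.column_stack(cols)

# Theta_n = Span{phi_1, ..., phi_K} o (theta_n^0, I_x); with squared-error loss,
# the empirical risk minimizer theta_n^* is ordinary least squares in this basis
K = 3
B = composed_basis(theta0_vals, x[:, 0], K)
beta, *_ = np.linalg.lstsq(B, z, rcond=None)
theta_star = B @ beta
```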
Results for proposed method {#section: general case theory}
---------------------------
With a slight abuse of notation, in this section we use ${\mathcal{I}}$ to denote the function $(t,x) \mapsto t$ where $t \in {\mathbb{R}}^q$ and $x \in \mathcal{X}$. Again, we use $\pi_n := \pi_{n,\theta_n^0}$ to denote the projection operator onto $\Theta_{n,\theta_n^0}$. Let $\Pi_{n,\theta}$ be defined such that, for any function $g: {\mathbb{R}}^q \times \mathcal{X} \rightarrow {\mathbb{R}}^q$ with $g \circ (\theta,{\mathcal{I}}_x) \in L^2(P_0)$, it holds that $\Pi_{n,\theta} (g) \circ (\theta,{\mathcal{I}}_x) =\pi_{n,\theta} (g \circ (\theta,{\mathcal{I}}_x))$; that is, letting $\beta_j$ be the quantity that depends on $g$ and $\theta$ such that $\pi_{n,\theta} (g \circ (\theta,{\mathcal{I}}_x))=\left( \sum_{j=1}^K \beta_j \phi_j \right) \circ (\theta,{\mathcal{I}}_x)$, we define $\Pi_{n,\theta} (g) := \sum_{j=1}^K \beta_j \phi_j$.
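Empirically, the operator $\Pi_{n,\theta}$ amounts to reading off the least-squares coefficients of $g \circ (\theta,{\mathcal{I}}_x)$ on the composed basis. A small sketch with a hypothetical four-function basis, where $g$ happens to lie in the span so the coefficients are recovered exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000
x = rng.uniform(-1, 1, size=n)
theta0_vals = np.sin(2 * x)               # stand-in for theta_0 evaluated at the data

# hypothetical composed basis phi_j o (theta, I_x) with four functions
def composed_basis(t, x1):
    return np.column_stack([np.ones_like(t), t, x1, t * x1])

def Pi_coefficients(g, t, x1):
    """Coefficients beta_j of Pi_{n,theta}(g): the L^2(P_n) projection of
    g o (theta, I_x) onto Span{phi_j} o (theta, I_x) is a least-squares fit,
    and Pi_{n,theta}(g) = sum_j beta_j phi_j."""
    beta, *_ = np.linalg.lstsq(composed_basis(t, x1), g(t, x1), rcond=None)
    return beta

# if g already lies in the span, the projection recovers it exactly
g = lambda t, x1: 2.0 * t + 0.5 * x1
beta = Pi_coefficients(g, theta0_vals, x)
```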
We introduce conditions and derive theoretical results that are parallel to those in Section \[section: simple case\].
[Aapproxidentity]{}\[Sufficiently small approximation error to ${\mathcal{I}}$ for $\Theta_{n,\theta_0}$\] \[Aapproxidentity2\] $\| \theta_0 - \Pi_{n,\theta_0}({\mathcal{I}}) \circ (\theta_0,{\mathcal{I}}_x) \|=o(n^{-1/4})$.
[Aapproxw]{}\[Sufficiently small approximation error to $\dot{\psi}$ for $\Theta_{n,\theta_0}$ and convergence rate of $\theta_n^*$\] \[Aapproxw2\] $\| [\dot{\psi} - \Pi_{n,\theta_0}(\dot{\psi})] \circ (\theta_0,{\mathcal{I}}_x) \| \cdot \| \theta_n^*-\theta_0 \|=o_p(n^{-1/2})$.
Additional regularity conditions can be found in Appendix \[section: general series additional regularity conditions\]. We now present a theorem that establishes the efficiency of the plug-in estimator based on $\theta_n^*$.
\[Tefficiency2\] Under Conditions \[Adloss\]–\[APsiremainder\], \[Ainit\], \[Aestimation\], \[Aapproxidentity2\], \[Aapproxw2\], \[ALipschitzidentity2\], \[ALipschitzw2\], \[Aempiricalprocess\] and \[Afinitevar\], $\Psi(\theta_n^*)$ is an asymptotically linear estimator of $\Psi(\theta_0)$ with influence function $v \mapsto \alpha_{0,\ell}^{-1} \left\{-\ell_{0}'[\dot{\psi} \circ (\theta_0, {\mathcal{I}}_x)](v) + {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\psi} \circ (\theta_0, {\mathcal{I}}_x)](V) \right]\right\}$, that is, $$\Psi(\theta_n^*) = \Psi(\theta_0) + \frac{1}{n} \sum_{i=1}^n \alpha_{0,\ell}^{-1} \left\{ -\ell_{0}'[\dot{\psi} \circ (\theta_0, {\mathcal{I}}_x)](V_i) + {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\psi} \circ (\theta_0, {\mathcal{I}}_x)](V) \right] \right\}+o_p(n^{-1/2}).$$ As a consequence, $\sqrt{n} [\Psi(\theta_n^*)-\Psi(\theta_0)] \overset{d}{\rightarrow} N(0, \xi^2)$ with $\xi^2 := {\textnormal{Var}}_{P_0}(\ell_{0}'[\dot{\psi} \circ (\theta_0, {\mathcal{I}}_x)](V))/\alpha_{0,\ell}^2$. In addition, under Conditions \[Acloseminimizer\] and \[Aempiricalprocesslike\] in Appendix \[section: regularity additional conditions\], $\Psi(\theta_n^*)$ is efficient under a nonparametric model.
When $\Psi$ depends on both $\theta_0$ and $P_0$, we can readily adapt this method as in Section \[section: generalized simple case\].
We now present a condition for selecting $K$ via $k$-fold CV, in parallel with Condition \[Abadseries\] from Section \[section: CV\].
[Abadseries]{}\[Bounded approximation error of $\dot{\psi}$ relative to ${\mathcal{I}}$\] \[Abadseries2\] There exists a constant $C>0$ such that, with probability tending to one, $\| \dot{\psi} \circ (\theta_n^0,{\mathcal{I}}_x) - \Pi_{K,\theta_n^0}(\dot{\psi}) \circ (\theta_n^0,{\mathcal{I}}_x) \| \leq C \| \theta_n^0 - \Pi_{K,\theta_n^0}({\mathcal{I}}) \circ (\theta_n^0,{\mathcal{I}}_x) \|$ for all $K$.
Condition \[Abadseries2\] is more stringent than Condition \[Abadseries\]. For the generalized data-adaptive series, $\dot{\Psi}$ may depend on components of $P_0$ other than $\theta_0$. Condition \[Abadseries2\] nonetheless requires $\dot{\psi}$ to be highly smooth, which essentially requires these other components of $P_0$ to be highly smooth as well.
The following theorem shows that $k$-fold CV can be used to select $K$ under certain conditions.
\[Tcv2\] Assume Conditions \[Adloss\]–\[APsiremainder\], \[Ainit\], \[Aestimation\], \[Aapproxidentity2\], \[ALipschitzidentity2\], \[ALipschitzw2\], \[Aempiricalprocess\] and \[Afinitevar\] hold for a deterministic $K=K(n)$. Suppose that part \[ALipschitzw2 first half\] of Condition \[ALipschitzw2\] holds. With $\theta_n^\sharp:=\theta_{K^*}(\theta_n^0)$, $\Psi(\theta_n^\sharp)$ is an asymptotically linear estimator of $\Psi(\theta_0)$ with influence function $v \mapsto \alpha_{0,\ell}^{-1} \left\{-\ell_{0}'[\dot{\psi} \circ (\theta_0,{\mathcal{I}}_x)](v) + {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\psi} \circ (\theta_0,{\mathcal{I}}_x)](V) \right]\right\}$, that is, $$\Psi(\theta_n^\sharp) = \Psi(\theta_0) + \frac{1}{n} \sum_{i=1}^n \alpha_{0,\ell}^{-1} \left\{ -\ell_{0}'[\dot{\psi} \circ (\theta_0,{\mathcal{I}}_x)](V_i) + {\mathbb{E}}_{P_0} \left[ \ell_{0}'[\dot{\psi} \circ (\theta_0,{\mathcal{I}}_x)](V) \right] \right\}+o_p(n^{-1/2}).$$ Therefore, $\sqrt{n} [\Psi(\theta_n^\sharp)-\Psi(\theta_0)] \overset{d}{\rightarrow} N(0, \xi^2 )$ with $\xi^2 := {\textnormal{Var}}_{P_0}(\ell_{0}'[\dot{\psi} \circ (\theta_0,{\mathcal{I}}_x)](V))/\alpha_{0,\ell}^2$. In addition, under Conditions \[Acloseminimizer\] and \[Aempiricalprocesslike\] in Appendix \[section: regularity additional conditions\], $\Psi(\theta_n^\sharp)$ is efficient under a nonparametric model.
Simulation {#section: sim2}
----------
In the following simulations, we consider the problem in Example \[example: treatment effect heterogeneity\]. As we show in Appendix \[appendix: change norm\], letting $g_0: x \mapsto P_0 (A=1|X=x)$ be the propensity score and setting $\theta=(\mu_0,\mu_1)$, with $\ell(\theta): v \mapsto a [z-\mu_1(x)]^2 + (1-a) [z-\mu_0(x)]^2$, the generalized data-adaptive series methodology may be used to obtain an efficient estimator. As in Section \[section: sim\], we conduct two simulation studies, the first demonstrating Theorem \[Tcv2\] and the other exploring the robustness of CV against violation of Condition \[Abadseries2\].
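The stated loss decouples across treatment arms, so minimizing it amounts to fitting $\mu_1$ on treated units and $\mu_0$ on controls. The sketch below (with an artificial data-generating process) shows this decoupled fitting step and the resulting naive plug-in estimate of ${\textnormal{Var}}_{P_0}(\mu_{0,1}(X)-\mu_{0,0}(X))$; the generalized data-adaptive series would additionally refit each arm in a series basis composed with these initial fits before plugging in.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 4000
x = rng.uniform(-1, 1, size=(n, 1))
g0 = 1 / (1 + np.exp(-x[:, 0]))            # smooth propensity score (assumed)
a = rng.binomial(1, g0)
mu0 = np.sin(2 * x[:, 0])                  # artificial mu_{0,0}
mu1 = mu0 + x[:, 0] ** 2                   # artificial mu_{0,1}
y = np.where(a == 1, mu1, mu0) + rng.normal(scale=0.3, size=n)

# the loss ell(theta)(v) = a[y - mu_1(x)]^2 + (1 - a)[y - mu_0(x)]^2 decouples,
# so we fit mu_1 on treated units and mu_0 on controls separately
fit1 = GradientBoostingRegressor(random_state=0).fit(x[a == 1], y[a == 1])
fit0 = GradientBoostingRegressor(random_state=0).fit(x[a == 0], y[a == 0])

# naive plug-in estimate of Psi(theta_0) = Var(mu_{0,1}(X) - mu_{0,0}(X))
tau = fit1.predict(x) - fit0.predict(x)
psi_hat = np.var(tau)
```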
### Demonstration of Theorem \[Tcv2\]
We choose $\theta_0$ to be a discontinuous function while $g_0$ is highly smooth. We compare the performance of plug-in estimators based on three different nonparametric regressions: (i) polynomial regression with the degree selected by 5-fold CV (poly), which results in a traditional sieve estimator, (ii) gradient boosting (xgb) [@Friedman2001; @Friedman2002; @Mason1999; @Mason2000], and (iii) generalized data-adaptive trigonometric series estimation with gradient boosting as the initial ML fit and 5-fold CV to select the number of terms in the series (xgb.trig). As a benchmark, we also include the one-step corrected (debiased) plug-in estimator based on gradient boosting (xgb.1step). Further details of the simulation setting are provided in Appendix \[appendix: simulation\].
Fig \[Fmse\_bias2\] presents $n \cdot \text{MSE}$ and $\sqrt{n} \cdot |\text{bias}|$ for each estimator, whereas Table \[TableCI2\] presents the coverage probability of 95% Wald CIs based on these estimators. A few runs in the simulation exhibited noticeably poor behavior, so we trimmed the most extreme values (1% of all Monte Carlo runs) when computing the MSE and bias in Fig \[Fmse\_bias2\]. The outliers may be caused by the performance of gradient boosting and the instability of 5-fold CV. In practice, the user may ensemble several ML methods and use 10-fold CV to mitigate such behavior. We note that the xgb.trig and xgb.1step estimators perform well, while the poly and xgb plug-in estimators do not appear to be efficient. Based on gradient boosting, our estimator and the one-step corrected estimator both appear to be efficient, but the construction of our estimator has the advantage of not requiring an analytic expression of the influence function.
![The relative MSE, $n \cdot \text{MSE}/\xi^2$, and the relative absolute bias, $\sqrt{n} \cdot |\text{bias}/\Psi(\theta_0)|$, of estimators of $\Psi(\theta_0)={\textnormal{Var}}_{P_0}(\mu_{0,1}(X)-\mu_{0,0}(X))$ where $\mu_{0,a}: x \mapsto {\mathbb{E}}_{P_0}[Y|A=a,X=x]$. $\xi^2 := P_0 \text{IF}^2$ is the asymptotic variance that the $n \cdot \text{MSE}$ of an efficient estimator should converge to. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The y-axis for relative MSE is scaled based on logarithm for readability. Note that the $n \cdot \text{MSE}$ for xgb.trig and xgb.1step tend to $\xi^2$, but those for poly and xgb do not.[]{data-label="Fmse_bias2"}](Fmse_bias2.pdf)
n poly xgb xgb.1step xgb.trig
------- ------ ------ ----------- ----------
500 0.85 0.76 0.89 0.90
1000 0.68 0.78 0.93 0.93
2000 0.44 0.81 0.93 0.92
5000 0.11 0.80 0.89 0.87
10000 0.00 0.79 0.92 0.90
20000 0.00 0.67 0.91 0.88
: Coverage probability of 95% Wald CI based on estimators of $\Psi(\theta_0)={\textnormal{Var}}_{P_0}(\mu_{0,1}(X)-\mu_{0,0}(X))$ where $\mu_{0,a}: x \mapsto {\mathbb{E}}_{P_0}[Y|A=a,X=x]$. poly: plug-in estimator based on polynomial sieve estimation. xgb: plug-in estimator based on gradient boosting. xgb.1step: one-step correction (debiasing) of the plug-in estimator based on gradient boosting. xgb.trig: data-adaptive series with trigonometric series composed with gradient boosting. All tuning parameters are CV-selected. The CI is constructed based on the influence function. The coverage probabilities for xgb.trig and xgb.1step are relatively close to 95%, but those for poly and xgb are not.[]{data-label="TableCI2"}
### Violation of Condition \[Abadseries2\]
We also study via simulation the behavior of our estimator when Condition \[Abadseries2\] is violated. We note that whether Condition \[Abadseries2\] holds depends on the smoothness of $g_0$. We choose $g_0$ to be rougher than ${\mathcal{I}}$: it is an element of $C^2[-1,1]$ but not of $C^3[-1,1]$. Consequently, $\dot{\Psi}$ cannot be approximated by our generalized data-adaptive series as well as ${\mathcal{I}}$ can, but it is sufficiently smooth for a deterministic $K$ to achieve efficiency. Appendix \[appendix: simulation\] describes further details of this simulation setting.
Table \[Tablenonsmooth2\] presents the performance of our estimator based on 5-fold CV. We observe that its scaled $\text{MSE}$ appears to converge to one, but it is unclear whether its scaled bias converges to zero for large $n$, so our method may be overly biased. The coverage of 95% Wald CIs is close to the nominal level, suggesting that the bias may be fairly small relative to the standard error of the estimator at the sample sizes considered. Therefore, according to this simulation, our generalized data-adaptive series methodology appears to be robust against violation of Condition \[Abadseries2\].
n relative $\text{MSE}$ root-$n$ absolute relative bias 95% Wald CI coverage
------- ----------------------- --------------------------------- ----------------------
500 1.02 0.28 0.92
1000 1.13 0.26 0.91
2000 1.10 0.19 0.94
5000 1.03 0.02 0.93
10000 0.96 0.23 0.95
20000 0.99 0.24 0.94
: Performance of the plug-in estimator of $\Psi(\theta_0)={\textnormal{Var}}_{P_0}(\mu_{0,1}(X)-\mu_{0,0}(X))$ where $\mu_{0,a}: x \mapsto {\mathbb{E}}_{P_0}[Y|A=a,X=x]$ based on data-adaptive series. Here the propensity score $g_0: x \mapsto {\mathbb{E}}_{P_0}[A|X=x]$ is rough. The relative MSE is $n \cdot \text{MSE}/\xi^2$ where $\xi^2 := P_0 \text{IF}^2$ is the asymptotic variance that the $n \cdot \text{MSE}$ of an efficient estimator should converge to; the root-$n$ abs relative bias is $\sqrt{n} |\text{bias}/\Psi(\theta_0)|$. The performance appears to be acceptable in view of the small $\text{MSE}$ and reasonable CI coverage.[]{data-label="Tablenonsmooth2"}
Discussion {#section: discussion}
==========
Numerous methods have been proposed to construct efficient estimators for statistical parameters under a nonparametric model, but each of them has one or more of the following undesirable limitations: (i) their construction may require specialized expertise that is not accessible to most statisticians; (ii) for any given data set, there may be little guidance, if any, on how to select a key tuning parameter; and (iii) they may require stringent smoothness conditions. In this paper, we propose two sieve-like methods that can partially overcome these difficulties.
Our first approach, namely that based on HAL, can be further generalized to the case in which the flexible fit is an empirical risk minimizer over a function class assumed to contain the unknown function. The key condition \[AM\] may be modified in that case as long as it ensures that certain perturbations of the unknown function still lie in that function class. We note that our methods may also be applied under semiparametric models.
A major direction for future work is to construct valid CIs without knowledge of the influence function of the resulting plug-in estimator. The nonparametric bootstrap is in general invalid when the overall summary is not Hadamard differentiable, especially when the method relies on CV [@bickel1997; @hall2013], but a model-based bootstrap is a possible solution (Chapter 28 of [@VanderLaan2018]). However, in many cases only certain components of the true data-generating distribution must be estimated to obtain a plug-in estimator, while its variance may depend on other components that are not explicitly estimated; generating valid model-based bootstrap samples is therefore generally difficult.
Our proposed sieve-like methods may be used to construct efficient plug-in estimators for new applications in which the relevant theoretical results are difficult to derive. They may also inspire new methods to construct such estimators under weaker conditions.
Review of variation norm {#appendix: varnorm}
========================
In this appendix, we briefly review the notation and definition of the variation norm of a càdlàg function $\theta: \left[ x^{(\ell)},x^{(u)} \right] \subseteq {\mathbb{R}}^d \rightarrow {\mathbb{R}}$. Here, $x^{(\ell)}$ and $x^{(u)}$ are vectors in ${\mathbb{R}}^d$; with $\leq$ taken entrywise, $\left[ x^{(\ell)},x^{(u)} \right] := \left\{x \in {\mathbb{R}}^d: x^{(\ell)} \leq x \leq x^{(u)} \right\}$. We refer to [@benkeser2016] and [@VanderLaan2017] for more details on the variation norm.
For any nonempty index set $s \subseteq \{1,2,\ldots,d\}$ and any $x=(x_1,x_2,\ldots,x_d) \in \left[ x^{(\ell)},x^{(u)} \right]$, we define $x_s := \{x_j: j \in s\}$ and $x_{-s} := \{x_j: j \in \{1,2,\ldots,d\} \setminus s \}$ to be the entries of $x$ with indices in and not in $s$, respectively. We define the $s$-section of $\theta$ as $\theta_s := \theta(x_1 {\mathbbm{1}}(1 \in s), x_2 {\mathbbm{1}}(2 \in s), \ldots, x_d {\mathbbm{1}}(d \in s))$. We can subsequently obtain the following representation of $\theta$ at any $x \in \left[ x^{(\ell)},x^{(u)} \right]$ in terms of sums and integrals of the variation of $s$-sections of $\theta$ [@gill1993]: $$\theta(x) = \theta \left( x^{(\ell)} \right) + \sum_{s \subseteq \{1,\ldots,d\}, s \neq \emptyset} \int_{\left( x^{(\ell)},x \right]} \theta_s(d \tilde{x}).$$ The variation norm is then defined as $$\| \theta \|_{{\mathrm{v}}}:= \left| \theta \left( x^{(\ell)} \right) \right| + \sum_{s \subseteq \{1,\ldots,d\}, s \neq \emptyset} \int_{\left( x^{(\ell)},x^{(u)} \right]} \left| \theta_s(d \tilde{x}) \right|.$$
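For intuition, in $d=1$ the sectional variation norm reduces to $|\theta(x^{(\ell)})|$ plus the total variation of $\theta$ over $\left[ x^{(\ell)},x^{(u)} \right]$. A small numerical sketch approximating it on a fine grid:

```python
import numpy as np

def variation_norm_1d(theta, grid):
    """Sectional variation norm in d = 1: |theta(x_l)| plus the total
    variation over [x_l, x_u], approximated on a fine grid."""
    vals = theta(grid)
    return abs(vals[0]) + np.sum(np.abs(np.diff(vals)))

# theta(x) = x^2 on [-1, 1]: |theta(-1)| = 1 and the total variation is
# 1 (decreasing on [-1, 0]) + 1 (increasing on [0, 1]) = 2, so the norm is 3
grid = np.linspace(-1.0, 1.0, 100001)
vn = variation_norm_1d(np.square, grid)
```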
Modification of chosen norm for evaluating the conditions: case study of mean counterfactual outcome {#appendix: change norm}
====================================================================================================
In this appendix, we consider a parameter that requires a modification of the chosen norm for evaluating the conditions. In particular, we discuss estimating the counterfactual mean outcome in Example \[example: ATE\].
Let $g_0: x \mapsto P_0(A=1|X=x)$ be the propensity score function. A natural choice of the loss function is $\ell(\theta): v \mapsto a[z-\theta(x)]^2$. Indeed, learning a function with this loss function is equivalent to fitting a function within the stratum of observations that received treatment 1. Unfortunately, this loss function does not satisfy Condition \[Aquadraticloss\] with the $L^2(P_0)$-norm, because $P_0 \{\ell(\theta) - \ell(\theta_0)\} = P_0 \{g_0 \cdot (\theta-\theta_0)^2\}$ cannot be well approximated by $\alpha_{0,\ell} P_0 \{(\theta-\theta_0)^2\}/2$ for any constant $\alpha_{0,\ell}>0$ unless $g_0$ is constant. One way to overcome this challenge is to choose the alternative inner product $\langle \theta_1, \theta_2 \rangle_{g_0} := P_0 \{g_0 \theta_1 \theta_2\}$ and its induced norm $\| \cdot \|_{g_0}$. In this case, Condition \[Aquadraticloss\] is satisfied once $\| \cdot \|$ is replaced by $\| \cdot \|_{g_0}$ in the condition statement. Under this choice, $\Psi'_{\theta_0}=P_0 (\theta-\theta_0)=\langle 1/g_0, \theta-\theta_0 \rangle_{g_0}$. We may redefine the corresponding $\dot{\Psi}$ similarly as the function that satisfies $\Psi'_{\theta_0} = \langle \dot{\Psi}, \theta-\theta_0 \rangle_{g_0}$, and it immediately follows that $\dot{\Psi}=1/g_0$. Moreover, under a strong positivity condition, namely that $g_0(X) \geq \delta_g >0$ a.s. for some $\delta_g$, which is a typical condition in the causal inference literature [@VanderLaan2018; @Yang2018], it is straightforward to show that $\delta_g \| \cdot \| \leq \| \cdot \|_{g_0} \leq \| \cdot \|$; that is, $\| \cdot \|_{g_0}$ is equivalent to the $L^2(P_0)$-norm. Using this fact, it can be shown that all other conditions with respect to the $L^2(P_0)$-inner product are equivalent to the corresponding conditions with respect to $\langle \cdot,\cdot \rangle_{g_0}$.
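The change of inner product is concrete to implement: projection onto a finite-dimensional span under $\langle \cdot, \cdot \rangle_{g_0}$ is weighted least squares with weights $g_0(X_i)$, and the norm equivalence can be checked numerically. A sketch with a hypothetical propensity score bounded in $[0.2, 0.8]$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(-1, 1, size=n)
g0 = 0.2 + 0.6 * (x + 1) / 2               # hypothetical propensity in [0.2, 0.8]

def project_weighted(h_vals, basis_matrix, weights):
    """Projection onto the span of the basis under <f1, f2>_{g0} = P_n{g0 f1 f2},
    i.e. weighted least squares with weights g0(X_i)."""
    w = np.sqrt(weights)
    beta, *_ = np.linalg.lstsq(basis_matrix * w[:, None], h_vals * w, rcond=None)
    return basis_matrix @ beta

B = np.column_stack([np.ones_like(x), x, x ** 2])
h = np.sin(2 * x)
resid = h - project_weighted(h, B, g0)

# empirical check of the norm equivalence: delta_g ||.||^2 <= ||.||_{g0}^2 <= ||.||^2
l2 = np.mean(resid ** 2)                   # squared L^2(P_n) norm of the residual
wl2 = np.mean(g0 * resid ** 2)             # squared ||.||_{g0} norm of the residual
```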
Therefore, the data-adaptive series can be applied to estimation of the counterfactual mean outcome under our conditions for the $L^2(P_0)$-inner product. If we use the targeted form in Remark \[remark: targeted series\], then we need a flexible estimator of $g_0$ and the procedure is almost identical to a TMLE [@VanderLaan2018]. If we use the generalized data-adaptive series, we would require a sufficient amount of smoothness of $g_0$. In the latter case, the change in norm when evaluating the conditions is a purely technical device, and the estimation procedure is the same as would have been used under the $L^2(P_0)$-norm. We also note that the same argument may be applied to Example \[example: treatment effect heterogeneity\], as we did in Section \[section: sim2\].
Additional conditions {#section: additional technical conditions}
=====================
Throughout the rest of this appendix, we use $C$ to denote a general absolute positive constant that can vary line by line.
HAL {#section: HAL additional regularity conditions}
---
\[Alossdist\] Let $M$ satisfy Condition \[AM\]. For any $\epsilon>0$, it holds that $\inf_{\theta\in\Theta_M : \|\ell(\theta)-\ell(\theta_0)\|>\epsilon} P_0 \ell(\theta) > P_0 \ell(\theta_0)$.
Note that Condition \[Alossdist\] implies that for any sequence $\theta_n \in \Theta_M$ with $P_0 \{ \ell(\theta_n)-\ell(\theta_0) \} \rightarrow 0$, it holds that $\| \ell(\theta_n) - \ell(\theta_0) \| \rightarrow 0$.
\[AHALempiricalprocess\] For any fixed $\vartheta \in \Theta_M$ and some $\Delta > 0$, it holds that $\ell(\theta)$, $\ell_{0}'[\theta-\theta_0]$ and $\{r[\theta-\theta_0] - r[\theta + \delta (\vartheta-\theta)-\theta_0]\}/\delta$ are càdlàg for all $\theta \in \Theta_M$ and all $\delta \in [0,\Delta]$. Moreover, the following terms are all finite: $$\sup_{\theta \in \Theta_M} \| \ell(\theta) \|_{{{\mathrm{v}}}}, \quad \sup_{\theta \in \Theta_M} \| \ell_{0}'[\theta-\theta_0] \|_{{{\mathrm{v}}}}, \quad \sup_{\theta \in \Theta_M, \delta \in [0,\Delta]} \left\| \frac{r[\theta-\theta_0] - r[\theta -\theta_0 + \delta (\vartheta-\theta)]}{\delta} \right\|_{{{\mathrm{v}}}}.$$ In addition, $\| \ell_0'[\hat{\theta}_n - \theta_0] \|$ and $\sup_{\delta \in [0,\Delta]} \left\| \{r[\hat{\theta}_n-\theta_0] - r[\hat{\theta}_n -\theta_0 + \delta (\vartheta-\hat{\theta}_n)]\}/\delta \right\|$ converge to 0 in probability.
\[AHALfinitevar\] $\xi^2 := {\textnormal{Var}}_{P_0} (\ell_{0}'[\dot{\Psi}](V))/\alpha_{0,\ell}^2 < \infty$.
Data-adaptive series {#section: series additional regularity conditions}
--------------------
\[ALipschitzidentity\] For sufficiently large $n$, $\| \Pi_{n,\theta_0}({\mathcal{I}}) \circ \theta - \Pi_{n,\theta_0}({\mathcal{I}}) \circ \theta_0 \| \leq C \| \theta - \theta_0\|$ for all $\theta \in \Theta$ with $\| \theta - \theta_0 \| \leq n^{-1/4}$.
\[ALipschitzw\] For sufficiently large $n$, for all $\theta \in \Theta$ with $\| \theta - \theta_0 \| \leq n^{-1/4}$,
1. \[ALipschitzw first half\] $\| \dot{\psi} \circ \theta - \dot{\psi} \circ \theta_0 \| \leq C \| \theta - \theta_0 \|$;
2. $\| \Pi_{n,\theta_0}(\dot{\psi}) \circ \theta - \Pi_{n,\theta_0}(\dot{\psi}) \circ \theta_0 \| \leq C \| \theta - \theta_0\|$.
\[Aempiricalprocess\] There exists some $\Delta>0$ such that $$\begin{aligned}
& \sup_{\delta \in [0,\Delta]} \left| (P_n-P_0) \left\{ \frac{r[\theta_n^*-\theta_0] - r[\pi_n((1-\delta) \theta_n^* + \delta (\pm \dot{\psi} \circ \theta_0 + \theta_0))-\theta_0]}{\delta} \right\} \right|=o_p(n^{-1/2}), \\
& (P_n-P_0) \ell_{0}'[(\pm \dot{\psi} \circ \theta_0 + \theta_0) - \pi_n(\pm \dot{\psi} \circ \theta_0 + \theta_0)]=o_p(n^{-1/2}), \\
& (P_n-P_0) \ell_{0}'[\theta_n^*-\theta_0]=o_p(n^{-1/2}).
\end{aligned}$$
\[Afinitevar\] $\xi^2 := {\textnormal{Var}}_{P_0} (\ell_{0}'[\dot{\Psi}](V))/\alpha_{0,\ell}^2 < \infty$.
Generalized data-adaptive series {#section: general series additional regularity conditions}
--------------------------------
[ALipschitzidentity]{}\[Lipschitz continuity of projected ${\mathcal{I}}$ for $\Theta_{n,\theta_0}$\] \[ALipschitzidentity2\] For sufficiently large $n$, $\| \Pi_{n,\theta_0}({\mathcal{I}}) \circ (\theta,{\mathcal{I}}_x) - \Pi_{n,\theta_0}({\mathcal{I}}) \circ (\theta_0,{\mathcal{I}}_x) \| \leq C \| \theta - \theta_0\|$ for all $\| \theta - \theta_0 \| \leq n^{-1/4}$.
[ALipschitzw]{}\[Lipschitz continuity of $\dot{\psi}$ and its projection for $\Theta_{n,\theta_0}$\] \[ALipschitzw2\] For sufficiently large $n$, for all $\| \theta - \theta_0 \| \leq n^{-1/4}$,
1. \[ALipschitzw2 first half\] $\| \dot{\psi} \circ (\theta,{\mathcal{I}}_x) - \dot{\psi} \circ (\theta_0,{\mathcal{I}}_x) \| \leq C \| \theta - \theta_0 \|$;
2. $\| \Pi_{n,\theta_0}(\dot{\psi}) \circ (\theta,{\mathcal{I}}_x) - \Pi_{n,\theta_0}(\dot{\psi}) \circ (\theta_0,{\mathcal{I}}_x) \| \leq C \| \theta - \theta_0\|$.
Conditions for efficiency of the plug-in estimator {#section: regularity additional conditions}
--------------------------------------------------
Define a collection of submodels $$\left\{ \{P_{H,\delta}: \delta \in B_H \subseteq {\mathbb{R}}\}: H \in \mathscr{H} \right\}$$ for which: (i) $\mathscr{H}$ is a subset of $L_0^2(P_0)$ and the $L_0^2(P_0)$-closure of its linear span is $L_0^2(P_0)$; and (ii) each $\{P_{H,\delta}: \delta \in B_H \subseteq {\mathbb{R}}\}$ is a regular univariate parametric submodel that passes through $P_0$ and has score $H$ for $\delta$ at $\delta=0$. For each $H \in \mathscr{H}$ and $\delta \in B_H$, we define $\theta_{H,\delta} \in \operatorname*{argmin}_{\theta \in \Theta} P_{H,\delta} \ell(\theta)$. In this appendix, for all small $o$ and big $O$ notations, we let $\delta \rightarrow 0$ with $H$ fixed.
\[Acloseminimizer\] For any given $H \in \mathscr{H}$, $\|\theta_{H,\delta} - \theta_0\| = o(\delta^{1/2})$.
\[Aempiricalprocesslike\] For any given $H \in \mathscr{H}$ and $\vartheta$, there exists positive $\delta'=o(\delta)$ such that $(P_{H,\delta} - P_0) \{ r[(1-\delta')(\theta_{H,\delta} - \theta_0) + \delta' \dot{\Psi}] - r[\theta_{H,\delta} - \theta_0] \}/\delta' = o(\delta)$.
Discussion of technical conditions for data-adaptive series and its generalization {#section: condition discussion}
==================================================================================
Theorem \[Tefficiency\]
-----------------------
[**Condition \[Aestimation\]**]{} usually imposes an upper bound on the growth rate of $K$. To see this, we show that Condition \[Aestimation\] is equivalent to a term being $o_p(n^{-1/4})$, and an upper bound of this term is controlled by $K$. Let $\theta_n^\dagger \in \operatorname*{argmin}_{\theta \in \Theta_n} P_0 \ell(\theta)$ be the true-risk minimizer in $\Theta_n$. Under Conditions \[Aquadraticloss\], \[Ainit\], \[Aapproxidentity\] and \[ALipschitzidentity\], by Lemma \[Lestimation\], it follows that Condition \[Aestimation\] is equivalent to requiring that $\| \theta_n^* - \theta_n^\dagger \|=o_p(n^{-1/4})$. Note that $\theta_n^*$ minimizes the empirical risk in $\Theta_n$, and M-estimation theory [@vandervaart1996] shows that $\| \theta_n^* - \theta_n^\dagger \|$ can be upper bounded by an empirical process term, whose magnitude is governed by the complexity of $\Theta_n$, namely how fast $K$ grows with sample size. To ensure this bound is $o_p(n^{-1/4})$, $K$ must not grow too quickly.
[**Condition \[Aapproxidentity\]**]{} assumes that the identity function can be well approximated by the series $\phi_k$ with the specified number of terms $K$ in the $L^2(P_{\theta_0})$ sense. If $\text{Span}\{\phi_1,\ldots,\phi_K\}$ does not contain ${\mathcal{I}}$ for any $K$, then sufficiently many terms must be included to satisfy this condition; that is, this condition imposes a lower bound on the rate at which $K$ should grow with $n$. Even if $\text{Span}\{\phi_1,\ldots,\phi_K\}$ does contain ${\mathcal{I}}$ for some finite $K$, this condition still requires that $K$ is not too small.
[**Condition \[Aapproxw\]**]{} is implied by the following condition in view of Lemma \[Lrate\]:
[Aapproxw]{} \[Aapproxwsufficient\] $\| [\dot{\psi} - \Pi_{n,\theta_0}(\dot{\psi})] \circ \theta_0 \|=o(n^{-1/4})$.
This condition is similar to Condition \[Aapproxidentity\]. However, in general, we do not expect $\dot{\psi}$ to be contained in $\text{Span}\{\phi_1,\ldots,\phi_K\}$ for any $K$, and hence this condition generally imposes a lower bound on the rate of $K$. Note that Condition \[Aapproxwsufficient\] is stronger than Condition \[Aapproxw\], and there are interesting examples where \[Aapproxw\] holds but \[Aapproxwsufficient\] fails to hold. Indeed, if $\theta_n^*$ converges to $\theta_0$ at a rate much faster than $n^{-1/4}$, then \[Aapproxw\] can be satisfied even if $\| [\dot{\psi} - \Pi_{n,\theta_0}(\dot{\psi})] \circ \theta_0 \|$ decays to zero in probability relatively slowly — that is, the convergence rate of $\theta_n^*$ can compensate for the approximation error of $\dot{\psi}$. This is one way in which we can benefit from using flexible ML algorithms to estimate $\theta_0$: if $\theta_n^0$ converges to $\theta_0$ at a fast rate, then we can expect $\theta_n^*$ to also have a fast convergence rate.
[**Conditions \[Aestimation\], \[Aapproxidentity\] and \[Aapproxw\]**]{} are not stringent provided that $\dot{\psi}$ is sufficiently smooth and a reasonable series is used. For example, as noted in [@Chen2007], when $\dot{\psi}$ has a bounded $p$-th order derivative and a polynomial, trigonometric, or spline series of degree at least $p+1$ is used, then if $K^2/n \rightarrow 0$ ($K^3/n \rightarrow 0$ for polynomial series), the term in Condition \[Aestimation\] is $O_p(\sqrt{K/n})$; the terms in Condition \[Aapproxidentity\] and the sufficient Condition \[Aapproxwsufficient\] are $O(K^{-p/q})$. Therefore, we can select $K$ to grow at a rate faster than $n^{q/(4p)}$ and slower than $n^{1/2}$ ($n^{1/3}$ for polynomial series). If $p$ is large, then this allows for a wide range of rates for $K$. Typically $\dot{\Psi}$ (and hence $\dot{\psi}$) depends only on the summary of interest $\Psi$ and not on the true function $\theta_0$. For example, for the summary $\Psi(\theta)=P_0 (f \circ \theta)$ at the beginning of Section \[section: simple case method\], $\dot{\psi}=f'$ is variation independent of $\theta_0$. It is often the case that $\Psi$ is smooth and so is $\dot{\psi}$, so $p$ is often large enough for this window to be wide.
[**Condition \[ALipschitzidentity\]**]{} is usually easy to satisfy. Since $\Pi_{n,\theta_0}({\mathcal{I}})$ is a linear combination of $\{\phi_k: k \in \{1,\ldots,K\} \}$ and is an approximation of a highly smooth function ${\mathcal{I}}$, if the series $\phi_k$ is smooth, then we can expect that $\Pi_{n,\theta_0}({\mathcal{I}})$ will be Lipschitz uniformly over $n$, that is, that Condition \[ALipschitzidentity\] holds. For example, using polynomial series, cubic splines or trigonometric series imply that this condition holds.
[**Condition \[ALipschitzw\]**]{} imposes Lipschitz continuity conditions on $\dot{\psi}$ and $\Pi_{n,\theta_0}(\dot{\psi})$ uniformly over $n$. The Lipschitz continuity of $\dot{\psi}$ has been discussed above. As for $\Pi_{n,\theta_0}(\dot{\psi})$, similarly to Condition \[ALipschitzidentity\], as long as the series $\phi_k$ being used is smooth, $\Pi_{n,\theta_0}(\dot{\psi})$ would be Lipschitz continuous uniformly over $n$.
Theorem \[Tefficiency2\]
------------------------
The conditions are similar to those in Theorem \[Tefficiency\]. However, Condition \[Aapproxw2\] can be more stringent than Condition \[Aapproxw\]. For generalized data-adaptive series, the dimension of the argument of the series is larger. Hence, as noted in [@Chen2007], \[Aapproxw2\] may require more smoothness of $\dot{\psi}$ in order that $\dot{\psi}$ can be well approximated by $\Pi_{n,\theta_0}(\dot{\psi})$. However, in general, we do not expect the smoothness of $\dot{\psi}$ to depend on $\Psi$ alone: it may also depend on components of $P_0$, so the amount of smoothness of $\dot{\psi}$ available in practice may be more limited.
It is also worth noting that, similarly to Theorem \[Tefficiency\], a sufficient condition for Condition \[Aapproxw2\] is the following:
[Aapproxw]{} \[Aapproxw2sufficient\] $\| [\dot{\psi} - \Pi_{n,\theta_0}(\dot{\psi})] \circ (\theta_0,{\mathcal{I}}_x) \| = o(n^{-1/4})$.
Lemmas and technical proofs {#appendix: proof}
===========================
Highly Adaptive Lasso (HAL) {#appendix: HAL}
---------------------------
Under Conditions \[Aquadraticloss\] and \[Acadlag\]–\[AHALempiricalprocess\], Lemma 1 and its corollary in [@VanderLaan2017] show that $\| \hat{\theta}_n - \theta_0 \|=o_p(n^{-1/4})$. Throughout the proofs in this appendix, $C$ denotes a generic positive constant whose value may change from line to line.
We now show that small perturbations of $\hat{\theta}_n$ in certain directions are contained in $\Theta_M$. Let $\vartheta_\delta=\hat{\theta}_n+\delta (\dot{\Psi}+\theta_0-\hat{\theta}_n)$ be a path indexed by $\delta$ $(0 \leq \delta < 1)$ that is a perturbation of $\hat{\theta}_n$. Note that for all $\delta$, $\vartheta_\delta$ is càdlàg by Condition \[Acadlag\] and we have that $$\| \vartheta_\delta \|_{{{\mathrm{v}}}} = \| (1-\delta) \hat{\theta}_n + \delta (\dot{\Psi} + \theta_0) \|_{{{\mathrm{v}}}} \leq (1-\delta) \| \hat{\theta}_n \|_{{{\mathrm{v}}}} + \delta (\| \dot{\Psi} \|_{{{\mathrm{v}}}} + \| \theta_0 \|_{{{\mathrm{v}}}}) \leq (1-\delta) M + \delta M =M$$ by Condition \[AM\]. Hence $\vartheta_\delta \in \Theta_M$. The same result holds for the path $\tilde{\vartheta}_\delta := \hat{\theta}_n+\delta (-\dot{\Psi}+\theta_0-\hat{\theta}_n)$.
Combining this observation with the $P_0$-Donsker property of $\Theta_{M'}$ for any fixed $M'>0$ and Conditions \[Adloss\]–\[Aquadraticloss\], \[AHALfinitevar\], we have that all of the conditions of Theorem 1 in [@Shen1997] are satisfied with all sieves being $\Theta_M$. The desired asymptotic linearity result follows. The efficiency result is shown in Appendix \[appendix: regular proof\].
Recall that $\mathcal{X} \subseteq {\mathbb{R}}^d$. Similarly to $x^{(\ell)}$, let $x^{(u)}=\inf \{x: P_0(X \leq x) = 1\}$, where $\inf$ and $\leq$ are entrywise. To avoid clumsy notation, in this proof we drop the subscript in $\theta_0$ and write $\theta$ instead. This should not introduce confusion because other functions (e.g., an estimator of $\theta_0$) are not involved in the statement or proof. Using the results reviewed in Appendix \[appendix: varnorm\], $$\begin{aligned}
\| \dot{\Psi} \|_{{{\mathrm{v}}}} &= | \dot{\Psi}(x^{(\ell)}) | + \sum_{s \subseteq \{1,2,\ldots,d\}, s \neq \emptyset} \int_{x^{(\ell)}_s}^{x^{(u)}_s} | \dot{\Psi}_s (du) | \\
&= | \dot{\Psi}(x^{(\ell)}) | + \sum_{s \subseteq \{1,2,\ldots,d\}, s \neq \emptyset} \int_{x^{(\ell)}_s}^{x^{(u)}_s} | \dot{\psi}'(z)| \Big|_{z=\theta_s(u)} | \theta_s (du) |.\end{aligned}$$ Since $$\begin{aligned}
|\theta(x)| &= \left| \theta(x^{(\ell)}) + \sum_{s \subseteq \{1,2,\ldots,d\}, s \neq \emptyset} \int_{x^{(\ell)}_s}^{x_s} \theta_s (du) \right| \\
&\leq | \theta(x^{(\ell)}) | + \sum_{s \subseteq \{1,2,\ldots,d\}, s \neq \emptyset} \int_{x^{(\ell)}_s}^{x_s} | \theta_s (du) | \\
&\leq | \theta(x^{(\ell)}) | + \sum_{s \subseteq \{1,2,\ldots,d\}, s \neq \emptyset} \int_{x^{(\ell)}_s}^{x^{(u)}_s} | \theta_s (du) | = \| \theta \|_{{{\mathrm{v}}}},\end{aligned}$$ we have $| \dot{\psi}'(z)| \Big|_{z=\theta_s(u)} \leq \sup_{z': |z'| \leq \| \theta_0 \|_{{{\mathrm{v}}}}} |\dot{\psi}'(z')| =B$ for all $x^{(\ell)} \leq u \leq x^{(u)}$, so $$\| \dot{\Psi} \|_{{{\mathrm{v}}}} \leq | \dot{\Psi}(x^{(\ell)}) | + \sum_{s \subseteq \{1,2,\ldots,d\}, s \neq \emptyset} \int_{x^{(\ell)}_s}^{x^{(u)}_s} B | \theta_s (du) | \leq | \dot{\Psi}(x^{(\ell)}) | + B \| \theta_0 \|_{{{\mathrm{v}}}}.$$
\[Lcvbound\] Suppose that Condition \[Acadlag\] holds, $\theta_0$ is càdlàg, $\| \theta_0 \|_{{{\mathrm{v}}}}<\infty$ and for any $M$, $\sup_{\theta \in \Theta_M} \| \ell(\theta) \| < \infty$. Let $M_n$ be a (possibly random) sequence such that $P_0 \{ \ell(\hat{\theta}_{n,M_n}) - \ell(\theta_0) \}=o_p(1)$. Then for any $\epsilon>0$, with probability tending to one, $M_n \geq \| \theta_0 \|_{{{\mathrm{v}}}} - \epsilon$. Therefore, for any fixed $\epsilon>0$, with probability tending to one, $M_n + \epsilon \geq (\| \theta_0 \|_{{{\mathrm{v}}}} - \epsilon) + \epsilon = \| \theta_0 \|_{{{\mathrm{v}}}}$.
The proof is by contradiction. Suppose the claim is false, i.e., there exist $\epsilon, \delta > 0$ such that $P(M_n < \| \theta_0 \|_{{{\mathrm{v}}}} - \epsilon) \geq \delta$ for all $n \in \mathcal{N}$, where $\mathcal{N}$ is an infinite set. Let $\theta_{0,M} \in \operatorname*{argmin}_{\theta \in \Theta_M} P_0 \ell(\theta)$. Then for all $n \in \mathcal{N}$, with probability at least $\delta$, $$\begin{aligned}
P_0 \{ \ell(\hat{\theta}_{n,M_n}) - \ell(\theta_0) \} &= P_0 \{ \ell(\hat{\theta}_{n,M_n}) - \ell(\theta_{0,M_n}) \} + P_0 \{ \ell(\theta_{0,M_n}) - \ell(\theta_0) \} \\
&\geq P_0 \{ \ell(\theta_{0,M_n}) - \ell(\theta_0) \} \\
&\geq P_0 \{ \ell(\theta_{0,\| \theta_0 \|_{{{\mathrm{v}}}} - \epsilon}) - \ell(\theta_0) \},
\end{aligned}$$ which is a positive constant since the function class $\Theta_{\|\theta_0\|_{{{\mathrm{v}}}} - \epsilon}$ does not contain $\theta_0$, so this term is a non-negligible bias. This contradicts the assumption that $P_0 \{ \ell(\hat{\theta}_{n,M_n}) - \ell(\theta_0) \}=o_p(1)$, and hence the desired result follows.
Therefore, if $\| \dot{\Psi} \|_{{{\mathrm{v}}}} \leq F(\| \theta_0 \|_{{{\mathrm{v}}}})$ for a known increasing function $F$, then with probability tending to one, $F(M_n+\epsilon)$ is a valid bound on $\| \hat{\theta}_n \|_{{\mathrm{v}}}$ that can be used to obtain an efficient plug-in estimator. Moreover, if the bound is loose, i.e. $\| \dot{\Psi} \|_{{{\mathrm{v}}}} < F(\| \theta_0 \|_{{{\mathrm{v}}}})$, and $F$ is continuous, then there exists some $\epsilon>0$ such that $\| \dot{\Psi} \|_{{{\mathrm{v}}}} \leq F(\| \theta_0 \|_{{{\mathrm{v}}}} - \epsilon)$ and hence $\| \dot{\Psi} \|_{{{\mathrm{v}}}} \leq F(M_n)$ with probability tending to one.
Note that this lemma only concerns learning a function-valued feature, not estimating $\Psi(\theta_0)$. There are examples where $\dot{\Psi}$ depends on components of $P_0$, say $\eta_0$, other than $\theta_0$. However, if $\eta_0$ can be learned via HAL, then Lemma \[Lcvbound\] can be applied. Therefore, if it is known that $\| \dot{\Psi} \|_{{{\mathrm{v}}}} \leq F(\| \theta_0 \|_{{{\mathrm{v}}}}, \| \eta_0 \|_{{{\mathrm{v}}}})$ for a known increasing function $F$, then we can use a bound on $\| \hat{\theta}_n \|_{{\mathrm{v}}}$ obtained in a similar fashion to the above from the sequence $M_n$ to construct an efficient plug-in estimator $\Psi(\hat{\theta}_n)$.
Now consider obtaining $M_n$ by $k$-fold CV from a set of candidate bounds. Then, under Conditions \[Acadlag\]–\[AHALempiricalprocess\], by (i) Lemma 1 and its corollary of [@VanderLaan2017], and (ii) the oracle inequality for $k$-fold CV in [@Vanderlaan2003cv], $P_0 \{ \ell(\hat{\theta}_{n,M_n}) - \ell(\theta_0) \}=o_p(n^{-1/4})$ if (i) one candidate bound is no smaller than $\| \theta_0 \|_{{\mathrm{v}}}$, and (ii) the number of candidate bounds is fixed. Therefore, the above results apply to this case.
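The selection of $M_n$ by $k$-fold CV described above can be sketched as follows. This is a toy illustration only: the helper `constant_fit` is a hypothetical stand-in for an actual HAL fit (for a constant function, the variation norm is simply the absolute value of the constant, so the constraint reduces to clipping), and all function names are ours.

```python
import numpy as np

def cv_select_bound(z, candidate_bounds, fit, k=5, seed=0):
    """Select a variation-norm bound M from a fixed candidate set by
    k-fold cross-validated squared-error risk."""
    rng = np.random.default_rng(seed)
    folds = rng.permutation(len(z)) % k  # balanced fold assignment
    risks = []
    for M in candidate_bounds:
        risk = 0.0
        for j in range(k):
            theta_hat = fit(z[folds != j], M)  # fit on training folds
            risk += np.mean((z[folds == j] - theta_hat) ** 2)
        risks.append(risk / k)
    return candidate_bounds[int(np.argmin(risks))]

# Toy stand-in for a HAL fit: the best constant subject to |c| <= M,
# i.e. the clipped sample mean (the variation norm of a constant is |c|).
def constant_fit(train, M):
    return float(np.clip(train.mean(), -M, M))
```

As long as one candidate bound exceeds the variation norm of the target, the CV selector will, with probability tending to one, avoid the badly biased (too small) candidates, mirroring the oracle-inequality argument above.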
Data-adaptive series estimation {#appendix: sieve}
-------------------------------
We first present and prove two lemmas that lead to Theorems \[Tefficiency\] and \[Tefficiency2\].
\[Lrate\] Under Conditions \[Ainit\], \[Aapproxidentity\] and \[ALipschitzidentity\], $\| \pi_n(\theta_0) - \theta_0 \|=o_p(n^{-1/4})$. Under the additional Condition \[Aestimation\], $\| \theta_n^* -\theta_0 \|=o_p(n^{-1/4})$.
By the triangle inequality, $\| \pi_n(\theta_0)-\theta_0 \| \leq \| \theta_0 - \theta_n^0 \| + \| \theta_n^0 - \pi_n(\theta_n^0) \| + \| \pi_n(\theta_n^0) - \pi_n(\theta_0) \|$. We bound these three terms separately.
[**Term 1**]{}: By Condition \[Ainit\], $\| \theta_0 - \theta_n^0 \|=o_p(n^{-1/4})$.
[**Term 2**]{}: By the definition of the projection operator, $$\| \theta_n^0 - \pi_n(\theta_n^0) \| = \| \theta_n^0 - \Pi_{n,\theta_n^0} ({\mathcal{I}}) \circ \theta_n^0 \| \leq \| \theta_n^0 - \Pi_{n,\theta_0} ({\mathcal{I}}) \circ \theta_n^0 \|.$$ We bound the right-hand side by showing that this term is close to $\| \theta_0 - \Pi_{n,\theta_0}({\mathcal{I}}) \circ \theta_0 \|$ up to an $o_p(n^{-1/4})$ term. By the reverse triangle inequality and the triangle inequality, $$\begin{aligned}
& \left| \| \theta_n^0 - \Pi_{n,\theta_0} ({\mathcal{I}}) \circ \theta_n^0 \| - \| \theta_0 - \Pi_{n,\theta_0} ({\mathcal{I}}) \circ \theta_0 \| \right| \\
&\quad \leq \| [\theta_n^0 - \Pi_{n,\theta_0} ({\mathcal{I}}) \circ \theta_n^0] - [\theta_0 - \Pi_{n,\theta_0} ({\mathcal{I}}) \circ \theta_0] \| \\
&\quad = \| [\theta_n^0 - \theta_0] - [\Pi_{n,\theta_0} ({\mathcal{I}}) \circ \theta_n^0 - \Pi_{n,\theta_0}({\mathcal{I}}) \circ \theta_0] \| \\
&\quad \leq \| \theta_n^0 - \theta_0 \| + \| \Pi_{n,\theta_0} ({\mathcal{I}}) \circ \theta_n^0 - \Pi_{n,\theta_0}({\mathcal{I}}) \circ \theta_0 \| \\
&\quad \leq \| \theta_n^0 - \theta_0 \| + C \| \theta_n^0 - \theta_0 \|, & \text{(Condition~\ref{ALipschitzidentity})}\end{aligned}$$ which is $o_p(n^{-1/4})$ by Condition \[Ainit\]. Therefore, by Condition \[Aapproxidentity\], $$\| \theta_n^0 - \pi_n(\theta_n^0) \| \leq \| \theta_n^0 - \Pi_{n,\theta_0} ({\mathcal{I}}) \circ \theta_n^0 \| \leq \| \theta_0 - \Pi_{n,\theta_0} ({\mathcal{I}}) \circ \theta_0 \| + o_p(n^{-1/4}) = o_p(n^{-1/4}).$$
[**Term 3**]{}: Since the projection operator $\pi_n$ is linear and non-expansive, by Condition \[Ainit\], $\| \pi_n(\theta_n^0) - \pi_n(\theta_0) \| \leq \| \theta_n^0 - \theta_0 \| = o_p(n^{-1/4})$.
[**Conclusion from the three bounds**]{}: $\| \pi_n(\theta_0) - \theta_0 \|=o_p(n^{-1/4})$.
If, in addition, Condition \[Aestimation\] also holds, then $\| \theta_n^* -\theta_0 \| \leq \| \pi_n(\theta_0) - \theta_0 \| + \| \theta_n^* - \pi_n(\theta_0) \| = o_p(n^{-1/4})$.
The same result holds for the generalized data-adaptive series under Conditions \[Ainit\], \[ALipschitzidentity2\], \[Aapproxidentity2\] and \[Aestimation\] (if relevant). The proof is almost identical and is therefore omitted.
\[Lapproxw\] Under Condition \[ALipschitzw\], $\| \dot{\psi} \circ \theta_0 - \pi_n (\dot{\psi} \circ \theta_0) \| \leq C \| \theta_n^0-\theta_0 \| + \| \dot{\psi} \circ \theta_0 - \Pi_{n,\theta_0}(\dot{\psi}) \circ \theta_0 \|$. Therefore, under Conditions \[Ainit\]–\[Aapproxw\], $\| \dot{\psi} \circ \theta_0 - \pi_n (\dot{\psi} \circ \theta_0) \| \cdot \| \theta_n^*-\theta_0 \|=o_p(n^{-1/2})$.
By the definition of the projection operator and the triangle inequality, $$\| \dot{\psi} \circ \theta_0 - \pi_n (\dot{\psi} \circ \theta_0) \| \leq \| \dot{\psi} \circ \theta_0 - \pi_n (\dot{\psi} \circ \theta_n^0) \| \leq \| \dot{\psi} \circ \theta_0 - \dot{\psi} \circ \theta_n^0 \| + \| \dot{\psi} \circ \theta_n^0 - \pi_n (\dot{\psi} \circ \theta_n^0) \|.$$ We bound the two terms on the right-hand side separately.
[**Term 1**]{}: By Condition \[ALipschitzw\], $\| \dot{\psi} \circ \theta_0 - \dot{\psi} \circ \theta_n^0 \| \leq C \| \theta_0 - \theta_n^0 \|$.
[**Term 2**]{}: This term can be bounded similarly to the corresponding term in the proof of Lemma \[Lrate\]. By the reverse triangle inequality and the triangle inequality, $$\begin{aligned}
& \left| \| \dot{\psi} \circ \theta_n^0 - \Pi_{n,\theta_0} (\dot{\psi}) \circ \theta_n^0 \| - \| \dot{\psi} \circ \theta_0 - \Pi_{n,\theta_0} (\dot{\psi}) \circ \theta_0 \| \right| \\
&\quad \leq \| [\dot{\psi} \circ \theta_n^0 - \Pi_{n,\theta_0} (\dot{\psi}) \circ \theta_n^0] - [\dot{\psi} \circ \theta_0 - \Pi_{n,\theta_0}(\dot{\psi}) \circ \theta_0] \| \\
&\quad = \| [\dot{\psi} \circ \theta_n^0 - \dot{\psi} \circ \theta_0] - [\Pi_{n,\theta_0} (\dot{\psi}) \circ \theta_n^0 - \Pi_{n,\theta_0}(\dot{\psi}) \circ \theta_0] \| \\
&\quad \leq \| \dot{\psi} \circ \theta_n^0 - \dot{\psi} \circ \theta_0 \| + \| \Pi_{n,\theta_0} (\dot{\psi}) \circ \theta_n^0 - \Pi_{n,\theta_0}(\dot{\psi}) \circ \theta_0 \| \\
&\quad \leq C \| \theta_n^0 - \theta_0 \| + C \| \theta_n^0 - \theta_0 \| & \text{(Condition~\ref{ALipschitzw})} \\
&\quad = 2C \| \theta_n^0 - \theta_0 \|.\end{aligned}$$ Therefore, by the definition of the projection operator and Condition \[ALipschitzw\], $$\begin{aligned}
\| \dot{\psi} \circ \theta_n^0 - \pi_n (\dot{\psi} \circ \theta_n^0) \| &\leq \| \dot{\psi} \circ \theta_n^0 - \Pi_{n,\theta_0} (\dot{\psi}) \circ \theta_n^0 \| \\
&\leq \| \dot{\psi} \circ \theta_0 - \Pi_{n,\theta_0} (\dot{\psi}) \circ \theta_0 \| + C \| \theta_n^0 - \theta_0 \|.\end{aligned}$$
[**Conclusion from the two bounds**]{}: $\| \dot{\psi} \circ \theta_0 - \pi_n (\dot{\psi} \circ \theta_0) \| \leq C \| \theta_n^0-\theta_0 \| + \| \dot{\psi} \circ \theta_0 - \Pi_{n,\theta_0}(\dot{\psi}) \circ \theta_0 \|$.
Under Conditions \[Ainit\]–\[Aapproxw\], using Lemma \[Lrate\], it follows that $\| \dot{\psi} \circ \theta_0 - \pi_n (\dot{\psi} \circ \theta_0) \| \cdot \| \theta_n^*-\theta_0 \|=o_p(n^{-1/2})$.
Note that $\pi_n$ is a linear operator. Lemmas \[Lrate\] and \[Lapproxw\], along with the other conditions, imply that the assumptions of Corollary 2 in [@Shen1997] are satisfied. Theorem \[Tefficiency\] follows.
The proof of Theorem \[Tefficiency2\] is almost identical.
Next, we present and prove a lemma that allows us to interpret Condition \[Aestimation\] as an upper bound on the rate of $K$.
\[Lestimation\] Under Conditions \[Aquadraticloss\], \[Ainit\], \[Aapproxidentity\] (\[Aapproxidentity2\] resp.) and \[ALipschitzidentity\] (\[ALipschitzidentity2\] resp.), $\| \pi_n(\theta_0) - \theta_n^\dagger \|=o_p(n^{-1/4})$.
By definition of $\theta_n^\dagger$ and Condition \[Aquadraticloss\], we have $$\| \theta_n^\dagger - \theta_0 \|^2 \leq C P_0 \{ \ell(\theta_n^\dagger) - \ell(\theta_0) \} \leq C P_0 \{ \ell(\pi_n(\theta_0)) - \ell(\theta_0) \} \leq C \| \pi_n(\theta_0) - \theta_0 \|^2,$$ the right-hand side of which is $o_p(n^{-1/2})$ by Lemma \[Lrate\] (or its corresponding version under Conditions \[ALipschitzidentity2\] and \[Aapproxidentity2\]). Therefore, $\| \theta_n^\dagger - \theta_0 \| = o_p(n^{-1/4})$ and hence $\| \pi_n(\theta_0) - \theta_n^\dagger \| \leq \| \pi_n(\theta_0) - \theta_0 \| + \| \theta_n^\dagger - \theta_0 \| =o_p(n^{-1/4})$.
We finally prove the efficiency of the data-adaptive series estimator with $K$ selected by CV.
By Lemma \[Lrate\] and Condition \[Aquadraticloss\], for the deterministic $K$ that is assumed to exist, $P_0 \{ \ell(\theta_K^*(\theta_n^0)) - \ell(\theta_0) \} \leq C \| \theta_K^*(\theta_n^0) - \theta_0 \|^2=o_p(n^{-1/2})$. By the oracle inequality for CV in [@Vanderlaan2003cv], $P_0 \{ \ell(\theta_n^\sharp) - \ell(\theta_0) \} = o_p(n^{-1/2})$. By Condition \[Aquadraticloss\], $\| \theta_n^\sharp - \theta_0 \|^2 \leq C P_0 \{ \ell(\theta_n^\sharp) - \ell(\theta_0) \} = o_p(n^{-1/2})$ and hence $\| \theta_n^\sharp - \theta_0 \|=o_p(n^{-1/4})$. So with probability tending to one, $$\begin{aligned}
\| \dot{\psi} \circ \theta_n^0 - \pi_{K^*,\theta_n^0} (\dot{\psi} \circ \theta_n^0) \| &= \| \dot{\psi} \circ \theta_n^0 - \Pi_{K^*,\theta_n^0}(\dot{\psi}) \circ \theta_n^0 \| \\
&\leq C \| \theta_n^0 - \Pi_{K^*,\theta_n^0}({\mathcal{I}}) \circ \theta_n^0 \| & \text{(Condition~\ref{Abadseries})} \\
&\leq C \| \theta_n^0 - \theta_n^\sharp \| & \text{(definition of the projection operator)} \\
&\leq C (\| \theta_n^0 - \theta_0 \| + \| \theta_n^\sharp - \theta_0 \|), & \text{(triangle inequality)}\end{aligned}$$ which is $o_p(n^{-1/4})$ by Condition \[Ainit\] and the bound $\| \theta_n^\sharp - \theta_0 \|=o_p(n^{-1/4})$ derived above. Hence, $$\begin{aligned}
\| \dot{\psi} \circ \theta_0 - \pi_{K^*,\theta_n^0}(\dot{\psi} \circ \theta_0) \| &\leq \| \dot{\psi} \circ \theta_0 - \pi_{K^*,\theta_n^0}(\dot{\psi} \circ \theta_n^0) \| \\
&\leq \| \dot{\psi} \circ \theta_0 - \dot{\psi} \circ \theta_n^0 \| + \| \dot{\psi} \circ \theta_n^0 - \pi_{K^*,\theta_n^0}(\dot{\psi} \circ \theta_n^0) \| \\
&\leq C \| \theta_n^0 - \theta_0 \| + o_p(n^{-1/4}), & \text{(Condition~\ref{ALipschitzw})}\end{aligned}$$ which is $o_p(n^{-1/4})$ by Condition \[Ainit\].
This bounds the approximation error $\| \dot{\psi} \circ \theta_0 - \pi_{K^*,\theta_n^0}(\dot{\psi} \circ \theta_0) \|$ for $\dot{\psi}$, a result that is similar to Lemma \[Lapproxw\] combined with Conditions \[Ainit\] and \[Aapproxw2sufficient\]. Along with other conditions, the assumptions in Corollary 2 in [@Shen1997] are satisfied and hence $\Psi(\theta_n^\sharp)$ is an asymptotically linear estimator of $\Psi(\theta_0)$. We prove the efficiency in Appendix \[appendix: regular proof\].
Efficiency {#appendix: regular proof}
----------
It is sufficient to show that the influence function of our proposed estimators is the canonical gradient under a nonparametric model. Let $H \in \mathscr{H}$ be fixed. In the rest of this proof, all small-$o$ and big-$O$ notation is with respect to $\delta \rightarrow 0$. The proof is similar to the proof of asymptotic linearity in [@Shen1997], except that the estimator of $\theta_0$ and the empirical distribution $P_n$ are replaced by $\theta_{H,\delta}$ and $P_{H,\delta}$, respectively.
Let $\delta'$ satisfy Condition \[Aempiricalprocesslike\]. We note that $$\begin{aligned}
P_{H,\delta} \ell(\theta_{H,\delta}) &= P_{H,\delta} \ell(\theta_0) + P_0 [\ell(\theta_{H,\delta}) - \ell(\theta_0)] + (P_{H,\delta} - P_0) [\ell(\theta_{H,\delta}) - \ell(\theta_0)] \\
&= P_{H,\delta} \ell(\theta_0) + P_0 [\ell(\theta_{H,\delta}) - \ell(\theta_0)] + (P_{H,\delta} - P_0) \ell_0'[\theta_{H,\delta} - \theta_0] + (P_{H,\delta} - P_0) r[\theta_{H,\delta} - \theta_0].
\end{aligned}$$ We also note that $(1-\delta')\theta_{H,\delta} + \delta' (\theta_0 \pm \dot{\Psi}) \in \Theta$ if $|\delta|$ is sufficiently small. Then, similarly, by replacing $\theta_{H,\delta}$ with $(1-\delta')\theta_{H,\delta} + \delta' (\theta_0 + \dot{\Psi})$ in the above equation, we have that $$\begin{aligned}
\begin{split}
\label{eq: regularity key2}
& P_{H,\delta} \ell((1-\delta')\theta_{H,\delta} + \delta' (\theta_0 + \dot{\Psi})) \\
&= P_{H,\delta} \ell(\theta_0) + P_0 [\ell((1-\delta')\theta_{H,\delta} + \delta' (\theta_0 + \dot{\Psi})) - \ell(\theta_0)] + (P_{H,\delta} - P_0) \ell_0'[(1-\delta')\theta_{H,\delta} + \delta' (\theta_0 + \dot{\Psi}) - \theta_0] \\
&\quad+ (P_{H,\delta} - P_0) r[(1-\delta')(\theta_{H,\delta} - \theta_0) + \delta' \dot{\Psi}].
\end{split}
\end{aligned}$$ Take the difference between the above two equations. By the linearity of $\ell_0'$, we have that $$\begin{aligned}
& P_{H,\delta} \ell((1-\delta')\theta_{H,\delta} + \delta' (\theta_0 + \dot{\Psi})) - P_{H,\delta} \ell(\theta_{H,\delta}) \\
&= P_0 [\ell((1-\delta')\theta_{H,\delta} + \delta' (\theta_0 + \dot{\Psi})) - \ell(\theta_0)] - P_0 [\ell(\theta_{H,\delta}) - \ell(\theta_0)] \\
&\quad+ \delta' (P_{H,\delta} - P_0) \ell_0'[\dot{\Psi} - \theta_{H,\delta} + \theta_0] + (P_{H,\delta} - P_0) \{ r[(1-\delta')(\theta_{H,\delta} - \theta_0) + \delta' \dot{\Psi}] - r[\theta_{H,\delta} - \theta_0] \} \\
&= \frac{\alpha_{0,\ell}}{2} \| (1-\delta')(\theta_{H,\delta} - \theta_0) + \delta' \dot{\Psi} \|^2 - \frac{\alpha_{0,\ell}}{2} \| \theta_{H,\delta} - \theta_0 \|^2 \\
&\quad+ o\left( \| \theta_{H,\delta} - \theta_0 \|^2 + \| (1-\delta')(\theta_{H,\delta} - \theta_0) + \delta' \dot{\Psi} \|^2 \right) & \text{(Condition~\ref{Aquadraticloss})} \\
&\quad+ \delta' (P_{H,\delta} - P_0) \ell_0'[\dot{\Psi}] - \delta' (P_{H,\delta} - P_0) \ell_0'[\theta_{H,\delta} - \theta_0] + \delta' o(\delta) & \text{(Condition~\ref{Aempiricalprocesslike})}\\
&= \delta' \alpha_{0,\ell} \langle \theta_{H,\delta} - \theta_0, \dot{\Psi} \rangle - \delta' \alpha_{0,\ell} \| \theta_{H,\delta} - \theta_0 \|^2 + \delta'^2 \frac{\alpha_{0,\ell}}{2} \| \theta_{H,\delta} - \theta_0 + \dot{\Psi} \|^2 \\
&\quad+ o\left( \| \theta_{H,\delta} - \theta_0 \|^2 + \| (1-\delta')(\theta_{H,\delta} - \theta_0) + \delta' \dot{\Psi} \|^2 \right) + \delta' (P_{H,\delta} - P_0) \ell_0'[\dot{\Psi}] + \delta' o(\delta) \\
&\leq \delta' \alpha_{0,\ell} \langle \theta_{H,\delta} - \theta_0, \dot{\Psi} \rangle + \delta'^2 \frac{\alpha_{0,\ell}}{2} \| \theta_{H,\delta} - \theta_0 + \dot{\Psi} \|^2 \\
&\quad+ o\left( \| \theta_{H,\delta} - \theta_0 \|^2 + \| (1-\delta')(\theta_{H,\delta} - \theta_0) + \delta' \dot{\Psi} \|^2 \right) + \delta' (P_{H,\delta} - P_0) \ell_0'[\dot{\Psi}] + \delta' o(\delta).
\end{aligned}$$ Since the left-hand side of the above display is nonnegative, by Condition \[Acloseminimizer\], we have that $$0 \leq \langle \theta_{H,\delta} - \theta_0, \dot{\Psi} \rangle + \alpha_{0,\ell}^{-1} (P_{H,\delta} - P_0) \ell_0'[\dot{\Psi}] + O(\delta') + o(\delta) = \langle \theta_{H,\delta} - \theta_0, \dot{\Psi} \rangle + \alpha_{0,\ell}^{-1} (P_{H,\delta} - P_0) \ell_0'[\dot{\Psi}] + o(\delta).$$
Similarly, by replacing $(1-\delta')\theta_{H,\delta} + \delta' (\theta_0 + \dot{\Psi})$ with $(1-\delta')\theta_{H,\delta} + \delta' (\theta_0 - \dot{\Psi})$ in Eq. (\[eq: regularity key2\]), we show that $0 \leq -\langle \theta_{H,\delta} - \theta_0, \dot{\Psi} \rangle - \alpha_{0,\ell}^{-1} (P_{H,\delta} - P_0) \ell_0'[\dot{\Psi}] + o(\delta)$. Therefore, $|\langle \theta_{H,\delta} - \theta_0, \dot{\Psi} \rangle + \alpha_{0,\ell}^{-1} (P_{H,\delta} - P_0) \ell_0'[\dot{\Psi}]| = o(\delta)$ and $$\begin{aligned}
\Psi(\theta_{H,\delta}) - \Psi(\theta_0) &= \langle \theta_{H,\delta} - \theta_0, \dot{\Psi} \rangle + O(\|\theta_{H,\delta} - \theta_0\|^2) \\
&= -\alpha_{0,\ell}^{-1} (P_{H,\delta} - P_0) \ell_0'[\dot{\Psi}] + o(\delta) + O(\|\theta_{H,\delta} - \theta_0\|^2) \\
&= -\alpha_{0,\ell}^{-1} (P_{H,\delta} - P_0) \ell_0'[\dot{\Psi}] + o(\delta). & \text{(Condition~\ref{Acloseminimizer})}
\end{aligned}$$ Consequently, $\lim_{\delta \rightarrow 0} [\Psi(\theta_{H,\delta}) - \Psi(\theta_0)]/\delta = P_0 \left\{ -\alpha_{0,\ell}^{-1} \ell_0'[\dot{\Psi}] \cdot H \right\}$ and hence $\alpha_{0,\ell}^{-1} \left\{-\ell_0'[\dot{\Psi}] + P_0 \ell_0'[\dot{\Psi}]\right\}$ is the canonical gradient of $\Psi$ under a nonparametric model. Since the influence functions of our asymptotically linear estimators are equal to this canonical gradient, our proposed estimators are efficient under a nonparametric model.
Simulation setting details {#appendix: simulation}
==========================
In all simulations, since $\theta_0(x)={\mathbb{E}}_{P_0}[Z|X=x]$ is the conditional mean function, the loss function was chosen to be the square loss $\ell(\theta): v \mapsto (z-\theta(x))^2$, where $z$ and $x$ denote the outcome and covariate components of the generic observation $v$.
HAL {#appendix: HAL setting}
---
In the simulation, we generate data from the distribution defined by $$X \sim \text{N}(0,1), \ \theta_0(x)=\exp\{-(-1+2x+2x^2)/2\}, Z|X=x \sim \text{Exponential}(\text{rate}=1/\theta_0(x)).$$ The sample sizes being considered are 500, 1000, 2000, 5000 and 10000. For each scenario we run 1000 replicates. We chose M.gcv+ to be 3.1 times M.cv.
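A minimal Python sketch of this data-generating process (the function names are ours, not from any accompanying code):

```python
import numpy as np

def theta0(x):
    # conditional mean E[Z | X = x]
    return np.exp(-(-1 + 2 * x + 2 * x ** 2) / 2)

def simulate(n, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=n)
    # Exponential with rate 1/theta0(x), i.e. mean theta0(x)
    z = rng.exponential(scale=theta0(x))
    return x, z
```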
Data-adaptive series {#data-adaptive-series}
--------------------
### Demonstration of Theorem \[Tcv\] {#appendix: data daptive series setting}
In the simulation, we generate data from the distribution defined by $X \sim \text{Unif}(-1,1), \ Z|X=x \sim \text{N}(\theta_0(x),0.25^2)$ where $$\begin{aligned}
\theta_0: x &\mapsto I(-1 \leq x < -3/4) + \pi I(-3/4 \leq x < -1/2) + 10 x^2 I(-1/4 \leq x < 1/4) + \sqrt{2} I(1/4 \leq x < 1/2) \\
&\quad + \exp(-1) I(1/2 \leq x < 3/4) + \sqrt[3]{3} I(3/4 \leq x \leq 1).\end{aligned}$$ When using the trigonometric series, we first shift and scale the initial function so that its range is contained in $[-1/2,1/2]$, and then use the basis for the interval $[-1,1]$ (i.e. $\sin(j \pi z), \cos(j \pi z)$) in sieve estimation to avoid the poor behavior of trigonometric series near the boundary. We consider sample sizes 500, 1000, 2000, 5000, 10000 and 20000. For each sample size, we run 1000 simulations.
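A minimal Python sketch of this data-generating process (names are ours; note that, as defined, $\theta_0$ vanishes on $[-1/2,-1/4)$, where none of the indicators applies):

```python
import numpy as np

def theta0(x):
    x = np.asarray(x, dtype=float)
    return (
        1.0 * ((-1 <= x) & (x < -0.75))
        + np.pi * ((-0.75 <= x) & (x < -0.5))
        + 10 * x ** 2 * ((-0.25 <= x) & (x < 0.25))
        + np.sqrt(2) * ((0.25 <= x) & (x < 0.5))
        + np.exp(-1) * ((0.5 <= x) & (x < 0.75))
        + 3 ** (1 / 3) * ((0.75 <= x) & (x <= 1))
    )

def simulate(n, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=n)
    z = rng.normal(theta0(x), 0.25)  # N(theta0(x), 0.25^2)
    return x, z
```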
### Violation of Condition \[Abadseries\] {#appendix: data daptive series rough setting}
In the simulation, we generate data from the distribution defined by $X \sim \text{Unif}(-1,1), \ Z|X=x \sim \text{N}(\theta_0(x),1)$ where $\theta_0: x \mapsto \cos(10 x)$. The estimand is $\Psi(\theta_0)=P_0 (f \circ \theta_0)$ where $$\begin{aligned}
f: z &\mapsto \left[ \frac{3}{10 \pi} \cos(5 \pi z) - \frac{3}{8} \right] I \left( z < -\frac{1}{2} \right) -\frac{3}{2} z^2 I \left( -\frac{1}{2} \leq z < 0 \right) \\
&\quad\ \, + 3 z^2 I \left( 0 \leq z < \frac{1}{2} \right) + \left[ -\frac{3}{2} \exp(2-4 z) - 3 z + \frac{15}{4} \right] I \left( z \geq \frac{1}{2} \right).\end{aligned}$$ We consider sample sizes 500, 1000, 2000, 5000, 10000 and 20000; for each sample size, we run 1000 simulations. Our goal is to explore the behavior of the plug-in estimator when $f$, instead of $\theta_0$, is rough, so we use kernel regression [@Nadaraya1964] to estimate $\theta_0$ for convenience.
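A minimal Python sketch of $f$ (the name is ours); one can check that $f$ is continuous at the breakpoints $-1/2$, $0$ and $1/2$, while its derivative is rough:

```python
import numpy as np

def f(z):
    z = np.asarray(z, dtype=float)
    return (
        (3 / (10 * np.pi) * np.cos(5 * np.pi * z) - 3 / 8) * (z < -0.5)
        - 1.5 * z ** 2 * ((-0.5 <= z) & (z < 0))
        + 3 * z ** 2 * ((0 <= z) & (z < 0.5))
        + (-1.5 * np.exp(2 - 4 * z) - 3 * z + 15 / 4) * (z >= 0.5)
    )
```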
Generalized data-adaptive series {#generalized-data-adaptive-series}
--------------------------------
### Demonstration of Theorem \[Tcv2\] {#appendix: general data daptive series setting}
In the simulation, we generate data from the distribution defined by $X \sim \text{Unif}(-1,1), \ A|X=x \sim \text{Bern}({\text{expit}}(-x)), \ Y|A=a,X=x \sim \text{N}(\mu_{0,a}(x),0.25^2)$ where $$\begin{aligned}
\mu_{00}: x &\mapsto I(-1 \leq x < -3/4) + \pi I(-3/4 \leq x < -1/2) + 10 x^2 I(-1/4 \leq x < 1/4) + \sqrt{2} I(1/4 \leq x < 1/2) \\
&\quad + \exp(-1) I(1/2 \leq x < 3/4) + \sqrt[3]{3} I(3/4 \leq x \leq 1), \\
\mu_{01}: x &\mapsto x^2 I(x < -1/3) + \exp(x) I(-1/3 \leq x < 1/3) + I(x > 1/3).\end{aligned}$$ The series is the tensor product [@Chen2007] of the univariate trigonometric series in Appendix \[appendix: data daptive series setting\]. The sample sizes are the same as in Appendix \[appendix: data daptive series setting\].
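A minimal Python sketch of this data-generating process (names are ours):

```python
import numpy as np

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

def mu00(x):
    x = np.asarray(x, dtype=float)
    return (
        1.0 * ((-1 <= x) & (x < -0.75))
        + np.pi * ((-0.75 <= x) & (x < -0.5))
        + 10 * x ** 2 * ((-0.25 <= x) & (x < 0.25))
        + np.sqrt(2) * ((0.25 <= x) & (x < 0.5))
        + np.exp(-1) * ((0.5 <= x) & (x < 0.75))
        + 3 ** (1 / 3) * ((0.75 <= x) & (x <= 1))
    )

def mu01(x):
    x = np.asarray(x, dtype=float)
    return (
        x ** 2 * (x < -1 / 3)
        + np.exp(x) * ((-1 / 3 <= x) & (x < 1 / 3))
        + 1.0 * (x > 1 / 3)
    )

def simulate(n, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=n)
    a = rng.binomial(1, expit(-x))
    y = rng.normal(np.where(a == 1, mu01(x), mu00(x)), 0.25)
    return x, a, y
```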
### Violation of Condition \[Abadseries2\] {#appendix: general data daptive series rough setting}
In the simulation, we generate data from the distribution defined by $X \sim \text{Unif}(-1,1), \ A|X=x \sim \text{Bern}(g_0(x)), \ Y|A=a,X=x \sim \text{N}(\mu_{0,a}(x),0.25^2)$ where $\mu_{0,a}: x \mapsto \exp(-x^2 + 0.8 a x + 0.5 a)$ ($a \in \{0,1\}$) and $$\begin{aligned}
g_0: x &\mapsto {\text{expit}}\Bigg\{ \left( -\frac{5}{3} x^3 - \frac{15}{4} x^2 - \frac{5}{3} x - \frac{25}{96} \right) I\left( x \leq -\frac{1}{2} \right) + \left( \frac{5}{6} x^4 + \frac{5}{3} x^3 \right) I\left( -\frac{1}{2} < x \leq 0 \right) \\
&\quad + \frac{5}{3} x^3 I\left( 0 < x \leq \frac{1}{2} \right) + \left( 5x^2 - \frac{15}{4} x + \frac{5}{6} \right) I\left( x > \frac{1}{2} \right) \Bigg\}.\end{aligned}$$ We consider sample sizes 500, 1000, 2000, 5000, 10000 and 20000; for each sample size, we run 1000 simulations. Our goal is to explore the behavior of the plug-in estimator when $\dot{\Psi}$, instead of $\theta_0$, is rough, so we use kernel regression [@Nadaraya1964] to estimate $\theta_0$ for convenience.
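A minimal Python sketch of the propensity score $g_0$ (the name is ours); the expression inside ${\text{expit}}$ is continuous at the breakpoints $-1/2$, $0$ and $1/2$:

```python
import numpy as np

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

def g0(x):
    x = np.asarray(x, dtype=float)
    inner = (
        (-5 / 3 * x ** 3 - 15 / 4 * x ** 2 - 5 / 3 * x - 25 / 96) * (x <= -0.5)
        + (5 / 6 * x ** 4 + 5 / 3 * x ** 3) * ((-0.5 < x) & (x <= 0))
        + 5 / 3 * x ** 3 * ((0 < x) & (x <= 0.5))
        + (5 * x ** 2 - 15 / 4 * x + 5 / 6) * (x > 0.5)
    )
    return expit(inner)
```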
Acknowledgements {#acknowledgements .unnumbered}
================
This work was partially supported by the National Institutes of Health under award number DP2-LM013340 and R01HL137808. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
---
abstract: 'In condensed matter physics many features can be understood in terms of their topological properties. Here we report evidence of a topological quantum transition driven by the charge-phonon coupling in the spinless Haldane model on a honeycomb lattice, a well-known prototypical model of Chern insulator. Starting from parameters describing the topological phase in the bare Haldane model, we show that the increasing of the strength of the charge lattice coupling drives the system towards a trivial insulator. The average number of fermions in the Dirac point, characterized by the lowest gap, exhibits a finite discontinuity at the transition point and can be used as direct indicator of the topological quantum transition. Numerical simulations show, also, that the renormalized phonon propagator exhibits a two peak structure across the quantum transition, whereas, in absence of the mass term in the bare Hadane model, there is indication of a complete softening of the effective vibrational mode signaling a charge density wave instability.'
author:
- 'L. M. Cangemi'
- 'A. S. Mishchenko'
- 'N. Nagaosa'
- 'V. Cataudella'
- 'G. De Filippis'
title: 'Topological quantum transition driven by charge-phonon coupling in the Haldane Chern insulator'
---
In the last decades, topological insulators have seen a tremendous growth of attention: starting from the early-days discovery of the integer Quantum Hall Effect (QHE) [@Von; @Th] and the fractional QHE [@Tsui; @Lau], which is now considered a prototype of a topologically ordered state [@Hasan], the field has achieved widespread popularity following the seminal works by Kane and Mele [@Kane] and by Bernevig et al. [@ber], which predicted the occurrence of time-reversal-symmetry-protected topological phases in 2D, known as the Quantum Spin Hall Effect [@hal1; @moore]. A great amount of work has been done to extend these theories to 3D [@Hasan; @Qi]. Several distinct groups of candidate materials have been proposed for the observation of these novel phases.
A wide class of theories describing topological insulators, i.e. symmetry-protected topological insulators, is based on free-fermion models with topologically non-trivial band structures [@ber1]. These phases have been classified according to a set of rules [@kit; @sch; @jef]. Chern insulators belong to the family of QHE (symmetry class $A$); they are based on free-fermion theories and allow the occurrence of chiral edge modes [@jac]. Due to these properties, the Chern insulating phase has been considered the simplest meaningful example of a topological phase [@rac]. The spinless Haldane model on a honeycomb lattice is a well-known prototypical model of a Chern insulator [@hal]. It is a paradigmatic example of a Hamiltonian featuring topologically distinct phases, where the quantum Hall effect appears as an intrinsic property of a band structure, rather than being caused by an external magnetic field. This model has recently regained attention, following its experimental realization using cold-atom platforms [@jot], as well as interacting superconducting circuits [@rou].
In general, most theoretical work so far has aimed at understanding the effect of Coulomb correlations on the topological properties [@1; @2; @3; @4; @5; @naoto; @prok]. On the other hand, the electron-phonon interaction is so inevitably present in any solid that, from first principles, one cannot even distinguish and separate the Coulomb and electron-phonon interactions, because they are unambiguously connected [@Tupi]. The issue of Coulomb correlations is so well developed that the focus of current studies is already the settling of the details of the already known phase diagrams using better and better methods, see e.g. [@prok]. On the contrary, there are only a few studies of the influence of electron-phonon coupling (EPC) [@mona; @hol], all considering models different from the Haldane Chern insulator. To fill this gap, we account for the lattice quantum dynamics by including on-site optical phonons coupled [*a la*]{} Holstein to spinless fermions described by the Haldane model. We perform a numerical study of the bulk properties of the interacting system, employing Cluster Perturbation Theory (CPT) [@sen], which, starting from the exact numerical computation of the Green function on a suitably chosen cluster, allows one to compute the interacting Green function of the whole lattice, a quantity experimentally accessible through angle-resolved photoemission spectroscopy measurements [@damascelli].
We find evidence of a topological phase transition driven by the EPC. Starting from the topological phase of the bare Haldane model, we show that increasing the strength of the EPC drives the system towards a trivial insulator. Across the phase transition, a strong hybridization of the quasiparticle bands of the bare Haldane model occurs. Numerical simulations also show that the renormalized phonon propagator exhibits a two-peak structure across the quantum transition, whereas, in the absence of the mass term, there is indication of a complete softening of the effective vibrational mode, signaling a charge density wave instability.
[*The model.*]{} The Haldane model [@hal] describes spinless fermions on a honeycomb lattice at half-filling. It includes an on-site mass term, $M$, and a complex next-nearest-neighbor hopping. $M$ breaks the inversion symmetry of the lattice, while the complex tunnelling breaks time-reversal symmetry, realizing a staggered magnetic field on the lattice without a net magnetic flux through the plaquette. The Hamiltonian reads $$\label{eq:Haldane}
H_{H}=-\sum_{i,j}t_{i,j} c_{i}^{\dagger}c_{j}+ M\sum_{i}\xi_{i}c_{i}^{\dagger}c_{i}$$ where $c_{i}^{\dagger}$ ($c_{i}$) are fermionic creation (annihilation) operators on site $i$, $t_{i,j}=t_{1}$ ($t_2 e^{i\xi_{i}\phi}$) is the nearest (next-nearest) neighbor electronic hopping, and $\xi_{i}$ takes the values $\pm 1$ on the two sublattices $(A,B)$. The model in Eq.(\[eq:Haldane\]) describes a topological Chern insulator, gapped at the Dirac points $\text{K}\mbox{, } \text{K}^{\prime}$ of the Brillouin zone. Rewriting Eq.(\[eq:Haldane\]) in quasi-momentum space, with $(a_i,b_i)$ the two fermionic operators on each sublattice site, one finds: $$H_{H}=\sum_k
\begin{pmatrix}a^{\dagger}_k & b^{\dagger}_k
\end{pmatrix}
\mathcal{H}(\boldsymbol{k})
\begin{pmatrix} a_k\\b_k
\end{pmatrix}$$ with $\mathcal{H}(\boldsymbol{k})=\epsilon(\boldsymbol{k})\, {\mathds{1}} + \boldsymbol{h(\boldsymbol{k})}\cdot{\boldsymbol{\sigma}}$. Here $\boldsymbol{\sigma}=\sigma_i\mbox{, } i=x,y,z$ denotes the Pauli matrices, the quasi-momentum $\boldsymbol{k}=(k_x\mbox{, }k_y)$ belongs to the first Brillouin zone, and $\boldsymbol{h}(\boldsymbol{k})=(h_x(\boldsymbol{k}),h_y(\boldsymbol{k}),h_z(\boldsymbol{k}))$ is a 3D vector [@Supplement]. Units are such that $\hbar=1$. To diagonalize this two-band model, we perform a unitary transformation, introducing a new set of wavefunctions: $|{k,+} \rangle$ and $|{k,-}\rangle$ [@Supplement]. The corresponding creation and annihilation operators, $\gamma^{\dagger}_{k,\pm}$ and $\gamma_{k,\pm}$, define the quasiparticles of the Haldane model: $H_{H}=\sum_k (E_{k,-} \gamma^{\dagger}_{k,-} \gamma_{k,-}+ E_{k,+} \gamma^{\dagger}_{k,+} \gamma_{k,+})$. Here $E_{k,\mp}$ denote the lower and upper energy bands with respect to the chemical potential $\mu$. The gap at the points $\text{K}\mbox{, } \text{K}^{\prime}$ is given, respectively, by: $$\label{eq:gap}
\Delta =2 \mid M\mp 3\sqrt{3}t_2\sin\phi\mid.$$ The Haldane model predicts distinct parameter regions in which the system behaves as an insulator, separated by a curve in parameter space on which the gap closes. These regions describe topologically distinct insulating phases, marked by the values of a topological invariant, the Chern number $C_h$. The topologically non-trivial insulating phase is characterized by $C_h=\pm 1$ and occurs for $ -M_c< M < M_c$, where $M_c \equiv 3\sqrt{3}t_2\sin\phi$. In all other cases $C_h=0$, and the system behaves as a trivial insulator.
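As a quick numerical check, the gap formula and the topological criterion above can be evaluated directly. The sketch below (function and variable names are ours, parameters taken from the text with $t_1=1$ as the energy unit) encodes Eq.(\[eq:gap\]) and the condition $-M_c< M < M_c$:

```python
import numpy as np

def gap(M, t2, phi, K_point=True):
    """Gap of the Haldane model: 2|M - 3*sqrt(3)*t2*sin(phi)| at K, '+' sign at K'."""
    s = -1.0 if K_point else +1.0
    return 2.0 * abs(M + s * 3.0 * np.sqrt(3.0) * t2 * np.sin(phi))

def is_topological(M, t2, phi):
    """Chern number C_h = +/-1 iff -M_c < M < M_c, with M_c = 3*sqrt(3)*t2*sin(phi)."""
    M_c = 3.0 * np.sqrt(3.0) * t2 * np.sin(phi)
    return abs(M) < M_c

# Parameters used in the text: t2/t1 = 0.3, phi = pi/2.
t2, phi = 0.3, np.pi / 2
M_c = 3.0 * np.sqrt(3.0) * t2 * np.sin(phi)
print(is_topological(0.94 * M_c, t2, phi))   # M_1 of the text: topological phase
print(is_topological(1.10 * M_c, t2, phi))   # beyond M_c: trivial insulator
```

At $M=M_c$ the gap at $\text{K}$ vanishes, while the gap at $\text{K}^{\prime}$ stays open, as expected from the $\mp$ sign in Eq.(\[eq:gap\]).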
Our aim is to investigate the effects on the topological properties of the system when the fermions couple to the lattice degrees of freedom. We add to the Hamiltonian of Eq.(\[eq:Haldane\]) an interaction term [*a la*]{} Holstein, in which charge fluctuations are linearly coupled to the displacement of local lattice vibrations: $$\label{eq:Interacting}
H=H_{H} + \omega_{0}\sum_i d^{\dagger}_i d_i + g\omega_{0}\sum_i(c^{\dagger}_i c_i-\frac{1}{2})(d^{\dagger}_i + d_i)$$ Here $d^{\dagger}_i$ ($d_i$) is shorthand for the two bosonic operators that create (annihilate) a phonon on the two $\left ( A,B \right)$ sublattice sites, $\omega_{0}$ is the optical mode frequency, and $g$ is the strength of the coupling with the lattice. We also introduce the dimensionless parameter $\lambda=g^2 \omega_0/4 t_1$. Here we restrict our attention to half-filling, i.e. $\sum_{i \in A} a^{\dagger}_i a_i + \sum_{i \in B} b^{\dagger}_i b_i =N_c=N/2$, where $N_c$ ($N$) is the number of unit cells (lattice sites).
[*Lang-Firsov approach (LFA).*]{} If the optical mode frequency is the highest energy scale (antiadiabatic regime), i.e. $\omega_0 \gg t_1, t_2, M$, the physics is well captured by the LFA [@mahan], based on the unitary transformation $\tilde{H}=e^{S} H e^{-S}$, where $ S = g \sum_i (c^{\dagger}_i c_i -\frac{1}{2}) (d^{\dagger}_i - d_i)$. In the new basis, the electronic hopping is assisted by phononic operators which, in the antiadiabatic regime, can be treated as a small perturbation. This approximation renormalizes $t_1$ and $t_2$ by the factor $e^{-\frac{4 \lambda t_1}{\omega_0}}$, while, as is straightforward to verify, $M$ is not affected by the unitary transformation. The net result is that $\tilde{H}$ can be replaced by an effective Haldane model whose parameters, and hence the topological-trivial insulator transition, are controlled by the strength of the EPC: by increasing $\lambda$, it is possible to induce a topological quantum transition. This approach, however, becomes exact only in the limit $t_1=t_2=0$. To investigate whether these effects survive for parameter values of physical interest, a more accurate treatment of the EPC is needed. To this aim we employ the CPT [@Supplement], which allows us to compute the electronic Green function of the interacting system, $G_{i,j}(\boldsymbol{q},z)$, from which detailed information on the renormalized band structure as well as the spectral functions can be derived. Here $i$ indicates the two sublattices $\left ( A,B \right )$, and $z=\omega+i\eta$ lies in the upper half of the complex plane. Starting from $G_{i,j}$, it is straightforward to derive the Green functions $G_{(+,+)}$ and $G_{(-,-)}$ corresponding to the quasiparticle operators of the bare Haldane model.
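Within the LFA the transition point follows in closed form: the gap closes when $M = 3\sqrt{3}\,t_2 e^{-4\lambda t_1/\omega_0}\sin\phi$, giving $\lambda_c^{LFA} = \frac{\omega_0}{4 t_1}\ln(M_c/M)$. The sketch below (our notation and derivation, built only from the renormalization rule stated above) evaluates this estimate for the parameters used later in the text:

```python
import numpy as np

def lfa_hoppings(t1, t2, lam, omega0):
    """Lang-Firsov renormalization: t -> t * exp(-4*lambda*t1/omega0); M is unchanged."""
    f = np.exp(-4.0 * lam * t1 / omega0)
    return t1 * f, t2 * f

def lfa_lambda_c(M, t1, t2, phi, omega0):
    """Coupling at which M equals the renormalized M_c, i.e. the LFA transition point."""
    M_c = 3.0 * np.sqrt(3.0) * t2 * np.sin(phi)
    return (omega0 / (4.0 * t1)) * np.log(M_c / M)

t1, t2, phi, omega0 = 1.0, 0.3, np.pi / 2, 3.0
M = 0.94 * 3.0 * np.sqrt(3.0) * t2 * np.sin(phi)   # M_1 of the text
lam_c = lfa_lambda_c(M, t1, t2, phi, omega0)
print(round(lam_c, 3))   # the LFA estimate, below the CPT value lambda_c ~ 0.08
```

The LFA estimate lies below the CPT value quoted in the text, consistent with the LFA being exact only for $t_1=t_2=0$.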
We focus on the following set of parameters: $t_2/t_1=0.3$, $\omega_0=3 t_1$, $\phi=\frac{\pi}{2}$, and two different values of $M$, namely $M_1=0.94 M_c$ and $M_2=0.42 M_c$. In the absence of EPC, these two values place the system in the topological insulating phase, respectively near to and far from the transition towards a trivial insulator. Furthermore, for these parameter values the lowest gap is located at the $\text{K}$ point, and $H_H$ exhibits particle-hole symmetry, so that $\mu=0$.
![\[fig:1\] (color online) (a) and (b): parameters of the effective Haldane model vs $\lambda$; (c) and (d): behavior of the gap and the energies of the peaks of $A_{(A,A)}$ and $A_{(B,B)}$, at $\text{K}$ and $\text{K}^{\prime}$, as function of $\lambda$. ](fig_1.pdf){width="0.99\columnwidth"}
[*The results.*]{} Within the hole sector, near the $\text{K}$ point, we followed the dispersion of the lowest-energy quasiparticle peak associated with one of the two spectral weight functions $A_{(\mp,\mp)}(\boldsymbol{q},\omega)=-\frac{\Im{G_{(\mp,\mp)}(\boldsymbol{q},z)}}{\pi}$. It turns out to be equivalent to that of an effective Haldane model. In Fig. \[fig:1\]a and Fig. \[fig:1\]b we plot, as a function of $\lambda$, the renormalized values of the electronic hoppings and of $M$, i.e. $t_{1r}$, $t_{2r}$ and $M_r$, and compare them with the LFA predictions. Within the CPT all the parameters, including $M$, are renormalized, but a topological quantum transition still occurs: around $\lambda_c \simeq 0.08$, the ratio $\frac{M_r}{3\sqrt{3}t_{2r}}$ becomes greater than 1, signaling the phase transition. Fig. \[fig:1\]c shows that, as $\lambda$ increases, the gap first decreases, vanishes at $\lambda_c$, and then increases. It is also worth noting that, within the bare Haldane model, the spectral weight functions of the two sublattices take the form $A_{(A,A)}(\boldsymbol{q},\omega)=\frac{(1+n_z)}{2}\delta(\omega-E_{q,+})+\frac{(1-n_z)}{2}\delta(\omega-E_{q,-})$ and $A_{(B,B)}(\boldsymbol{q},\omega)=\frac{(1-n_z)}{2}\delta(\omega-E_{q,+})+\frac{(1+n_z)}{2}\delta(\omega-E_{q,-})$, where $n_z=\frac{h_z}{\parallel\boldsymbol{h}\parallel}$ and $\delta(\omega)$ is the Dirac delta function. We emphasize that, at $\lambda=0$, if the system is in the topological phase $0<M<M_c$, $n_z$ evaluated at $\text{K}$ and $\text{K}^{\prime}$ assumes opposite values, $-1$ and $1$ respectively, whereas in the trivial insulating phase ($M>M_c$), $n_z=1$ at both $\text{K}$ and $\text{K}^{\prime}$.
At $\lambda=0$, it is then clear that $A_{(A,A)}(\text{K},\omega)$ ($A_{(B,B)}(\text{K},\omega)$) is peaked only at $E_{\text{K},-}$ ($E_{\text{K},+}$) in the topological phase and only at $E_{\text{K},+}$ ($E_{\text{K},-}$) in the trivial insulating phase. On the other hand, $A_{(A,A)}(\text{K}^{\prime},\omega)$ ($A_{(B,B)}(\text{K}^{\prime},\omega)$) has nonzero spectral weight only at $E_{\text{K}^{\prime},+}$ ($E_{\text{K}^{\prime},-}$), independently of the phase. We followed, as a function of $\lambda$, the peak positions of these two spectral functions at both $\text{K}$ and $\text{K}^{\prime}$. Fig. \[fig:1\]d shows that for $\lambda<\lambda_c$ ($\lambda>\lambda_c$) the behavior of the fermions on the two sublattices agrees with that predicted by the bare Haldane model in the topological (trivial) insulating phase, confirming that a quantum transition occurs at $\lambda_c$.
![\[fig:2\] (color online) (a) and (b): $A_{(-,-)}$ and $A_{(+,+)}$, at $\text{K}$, just below and above $\lambda_c$; (c) and (d): density of states with zoom (insets) around $\mu$ ($\omega=0$), at $\lambda=0$ and $\lambda>\lambda_c$. ](fig_2.pdf){width="0.99\columnwidth"}
Now we focus on the spectral weight functions corresponding to the operators describing the quasiparticles of the bare Haldane model, i.e. $A_{(-,-)}$ and $A_{(+,+)}$. These two functions, at the Dirac point $\text{K}$, are plotted in Fig. \[fig:2\]a and Fig. \[fig:2\]b for two values of $\lambda$, $\lambda=0.075$ and $\lambda=0.085$, respectively before and after the topological phase transition. Crossing $\lambda_c$, the energy gap closes and reopens and, at the same time, the character of the two bands changes, i.e. the peak of $A_{(-,-)}$ ($A_{(+,+)}$) moves above (below) the chemical potential. The plots (Fig. \[fig:2\]c and Fig. \[fig:2\]d) of the densities of states associated with the two bands, $DOS_{(-,-)}(\omega)=\frac{1}{N_c}\sum_q A_{(-,-)}(\boldsymbol{q},\omega)$ and $DOS_{(+,+)}(\omega)=\frac{1}{N_c} \sum_q A_{(+,+)}(\boldsymbol{q},\omega)$, further clarify this picture: at $\lambda>\lambda_c$, $DOS_{(-,-)}$ ($DOS_{(+,+)}$) exhibits a peak above (below) $\mu$ (see the two insets). This indicates a strong hybridization between the quasiparticles of the bare Haldane model across the topological quantum transition. In the densities of states, the Van Hove singularities and the satellite bands stemming from the EPC are clearly distinguishable.
![\[fig:3\] (color online) The average number of fermions $n_{(-,-)}$, at $\text{K}$ ((a) and (c)) and along $\text{K}^{\prime}-\Gamma-\text{K}$ ((b) and (d), in the inset $n_{(A,A)}$), for two different values of $M$. ](fig_3.pdf){width="0.99\columnwidth"}
In Fig. \[fig:3\]a we plot the average number of fermions $n_{(-,-)}(\boldsymbol{q})$ at $\boldsymbol{q}=\text{K}$ as a function of $\lambda$: $n_{(-,-)}(\boldsymbol{q})=\int_{-\infty}^{\infty} A_{(-,-)}(\boldsymbol{q},\omega) n_F(\omega) d\omega$, where $n_F(\omega)$ is the Fermi function. As the broadening factor $\eta$ is decreased, it becomes increasingly clear that $n_{(-,-)}(\text{K})$ exhibits a finite discontinuity at the transition point, so that it can be used as a direct indicator of the topological quantum transition. We also find (Fig. \[fig:3\]c) that a larger EPC is needed to destroy the topological phase when the initial parameters of the bare Haldane model place the spinless fermions well inside the topological phase. In this case the discontinuity at the transition point is reduced, indicating the presence of strong electron-electron correlations induced by the EPC. The plots in Fig. \[fig:3\]b and Fig. \[fig:3\]d, i.e. the behavior across the phase transition of $n_{(-,-)}(\boldsymbol{q})$ along the line $\text{K}^{\prime}-\Gamma-\text{K}$, show that the quantum transition affects only a small region of the Brillouin zone around the Dirac point $\text{K}$.
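The occupation defined above can be evaluated numerically once $A(\boldsymbol{q},\omega)$ is known on a frequency grid. As an illustration (our sketch, with a single Lorentzian of width $\eta$ standing in for the CPT spectral function, and $T=0$ so that $n_F(\omega)=\theta(-\omega)$ with $\mu=0$):

```python
import numpy as np

def occupation(omega, A, mu=0.0):
    """n = \int A(w) n_F(w) dw at T = 0: the Fermi function is a step at mu."""
    dw = omega[1] - omega[0]                 # uniform grid assumed
    return float(np.sum(A[omega < mu]) * dw)

def lorentzian(omega, E, eta):
    """Broadened quasiparticle peak of unit total weight (stand-in for A_(-,-))."""
    return (eta / np.pi) / ((omega - E) ** 2 + eta ** 2)

omega = np.linspace(-200.0, 200.0, 2_000_001)
E, eta = 1.0, 0.1                            # peak above mu: small occupation
n = occupation(omega, lorentzian(omega, E, eta))
n_exact = 0.5 - np.arctan(E / eta) / np.pi   # closed form for a Lorentzian
print(round(n, 3), round(n_exact, 3))
```

A peak sitting above (below) $\mu$ gives an occupation close to 0 (1), so a jump of the peak across $\mu$ at $\lambda_c$ translates directly into the finite discontinuity of $n_{(-,-)}(\text{K})$ discussed above.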
![\[fig:4\] (color online) Phonon spectral weight function for four different values of $\lambda$. ](fig_4.pdf){width="0.99\columnwidth"}
Finally, we investigate the effects of the quantum transition on the lattice. To this aim, we note that the exact integration of the phonon degrees of freedom, through the path-integral technique, leads to a retarded electron-electron interaction on the same sublattice. This coupling is controlled by the bare phonon propagator, $D^0(\boldsymbol{q},z)=\frac{1}{z-\omega_0}-\frac{1}{z+\omega_0}$, and the charge-phonon vertex $V^{0}_{i,i}(\boldsymbol{q},z)=\frac{g^2 \omega_0^2}{N_c} D^0(\boldsymbol{q},z)$ [@mahan; @fetter]. At the lowest order in the EPC there is no coupling between two electrons on different sublattices, i.e. $V^{0}_{(A,B)}(\boldsymbol{q},z)=0$. On the other hand, the effective interaction between two charge carriers obeys the Dyson equation [@mahan; @fetter]: $$\begin{aligned}
V^{eff}_{i,j}(\boldsymbol{q},z)=V^{0}_{i,j}(\boldsymbol{q},z)+V^{0}_{i,h}(\boldsymbol{q},z)\Pi^{*}_{h,k}(\boldsymbol{q},z)V^{eff}_{k,j}(\boldsymbol{q},z),
\nonumber\end{aligned}$$ which defines the proper polarization insertion $\Pi^{*}_{i,j}(\boldsymbol{q},z)$. Since $\Pi^{*}_{i,j}$ is, in general, a non-diagonal matrix, there is an effective phonon-mediated interaction even between two charge carriers located on different sublattices. At the lowest order, $\Pi^{*}_{i,j}(\boldsymbol{q},z)$ is the particle-hole bubble. The next step is to replace, in this lowest-order diagram, the unperturbed electron Green functions with the interacting Green functions calculated within the CPT. This procedure allows us to obtain the effective interaction between two electrons and, then, the renormalized phonon propagator $D_{i,j}$.
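In sublattice space the Dyson equation above is a $2\times 2$ matrix relation, solved by a single inversion, $V^{eff} = ({\mathds{1}} - V^0 \Pi^*)^{-1} V^0$. The toy sketch below (with an arbitrary, non-diagonal $\Pi^*$ at fixed $(\boldsymbol{q},z)$; the numbers are illustrative only) shows how off-diagonal elements of $V^{eff}$ are generated even though $V^{0}_{(A,B)}=0$:

```python
import numpy as np

# Bare interaction: diagonal in sublattice space, V0_{AB} = 0 at lowest order.
v0 = -0.4                                   # stand-in value of g^2 w0^2 D^0 / N_c
V0 = np.diag([v0, v0]).astype(complex)

# Toy proper polarization: non-diagonal (particle-hole bubble with CPT propagators).
Pi = np.array([[0.10, 0.03],
               [0.03, 0.12]], dtype=complex)

# Dyson equation V_eff = V0 + V0 Pi V_eff  =>  V_eff = (1 - V0 Pi)^{-1} V0.
V_eff = np.linalg.solve(np.eye(2) - V0 @ Pi, V0)
print(abs(V_eff[0, 1]) > 0)                 # phonon-mediated A-B coupling appears
```

The same inversion, performed on a $(\boldsymbol{q},\omega)$ grid with the CPT bubble, yields the renormalized propagator $D_{i,j}$ analyzed next.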
We focus on the spectral weight function $B_{(A,A)}(\boldsymbol{q},\omega)=-\frac{\Im{D_{(A,A)}}(\boldsymbol{q},z)}{\pi}$, an odd function which, in the absence of EPC, is peaked at $\omega=\omega_0$. At $\lambda \ne 0$, it exhibits a softening at $\boldsymbol{q_c}$, around $\frac{\boldsymbol{b_1}}{2}$ and $\frac{\boldsymbol{b_2}}{2}$, where $\boldsymbol{b_1}$ and $\boldsymbol{b_2}$ are the primitive vectors of the reciprocal lattice. In Fig. \[fig:4\] we plot $B_{(A,A)}(\boldsymbol{q_c},\omega)$ for four different values of the EPC. Near the topological phase transition the main peak splits. By increasing $\lambda$, the spectral weight of the lowest (highest) energy peak increases (decreases), and around $\lambda_c$ the two peaks have the same intensity. We emphasize that the energy of the lowest peak is strictly related to the energy difference between the two Dirac points, in both the hole and particle sectors (compare the energies at K and K' in Fig. \[fig:1\]d and its inset), i.e. $\text{K}$ and $\text{K}^{\prime}$ are connected by the EPC. On the other hand, the highest-energy peak is reminiscent of the bare phonon frequency. Finally, at $M=0$, by increasing the EPC the peak softening becomes more and more pronounced, signaling a charge density wave instability (see supplemental material [@Supplement]).
[*Conclusion.*]{} We have investigated the effects of the interaction between the vibrational modes of the lattice and the spinless charge carriers of the Haldane model on a honeycomb lattice, and found evidence of a topological quantum transition. Starting from the topological phase of the bare Haldane model, increasing the strength of the EPC, $\lambda$, drives the system towards a trivial insulator. By varying $\lambda$, the energy gap first decreases, closes at the transition point, and then increases. Across the transition point, a strong hybridization between the quasiparticles of the bare Haldane model occurs near the Dirac point characterized by the lowest gap. The average number of fermions exhibits a finite discontinuity at the transition at this point of the Brillouin zone and can be used as a direct indicator of the topological quantum transition. We have also shown that the renormalized phonon propagator exhibits a two-peak structure across the quantum transition, whereas, in the absence of the mass term, there is indication of a complete softening of the effective vibrational mode, signaling a charge density wave instability.
K. v. Klitzing, G. Dorda, and M. Pepper, Phys. Rev. Lett. **45**, 494 (1980).
D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. **49**, 405 (1982).
D. C. Tsui, H. L. Stormer, and A. C. Gossard, Phys. Rev. Lett. **48**, 1559 (1982).
R. B. Laughlin, Phys. Rev. Lett. **50**, 1395 (1983).
M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. **82**, 3045 (2010).
C. L. Kane and E. J. Mele, Phys. Rev. Lett. **95**, 226801 (2005).
B. A. Bernevig, T. L. Hughes, and S. C. Zhang, Science **314**, 1757 (2006).
D. N. Sheng, Z. Y. Weng, L. Sheng, and F. D. M. Haldane, Phys. Rev. Lett. **97**, 036808 (2006).
J. E. Moore and L. Balents Phys. Rev. B **75**, 121306(R) (2007).
X. L. Qi and S. C. Zhang, Rev. Mod. Phys. **83**, 1057 (2011).
B. A. Bernevig, [*Topological insulators and superconductors*]{} (Princeton University press, Princeton and Oxford, 2013).
A. Kitaev, AIP Conference Proceedings **1134**, 22 (2009), https://aip.scitation.org/doi/pdf/10.1063/1.3149495.
A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Phys. Rev. B **78**, 195125 (2008).
J. C. Y. Teo and C. L. Kane, Phys. Rev. B **82**, 115120 (2010).
R. Jackiw and C. Rebbi, Phys. Rev. D **13**, 3398 (1976).
S. Rachel, Reports on Progress in Physics **81**, 116501 (2018).
F. D. M. Haldane, Phys. Rev. Lett. **61**, 2015 (1988).
G. Jotzu, M. Messer, R. Desbuquois, M. Lebrat, T. Uehlinger, D. Greif, and T. Esslinger, Nature **515**, 237–240 (2014).
P. Roushan, C. Neill, Y. Chen, M. Kolodrubetz, C. Quintana, N. Leung, M. Fang, R. Barends, B. Campbell, Z. Chen, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, A. Megrant, J. Mutus, P. J. J. O’Malley, D. Sank, A. Vainsencher, J. Wenner, T. White, A. Polkovnikov, A. N. Cleland, and J. M. Martinis, Nature **515**, 241 (2014).
C. N. Varney, K. Sun, M. Rigol, and V. Galitski, Phys. Rev. B **82**, 115125 (2010).
S. Rachel and K. Le Hur, Phys. Rev. B **82**, 075106 (2010).
M. Hohenadler and F. F. Assaad, Journal of Physics:Condensed Matter **25**, 143201 (2013).
M. Daghofer and M. Hohenadler, Phys. Rev. B **89**, 035103 (2014).
S. Capponi and A. M. Läuchli, Phys. Rev. B **92**, 085146 (2015).
B. J. Yang, E. G. Moon, H. Isobe and N. Nagaosa, Nature Physics **10**, 774-778 (2014).
I. S. Tupitsyn and N. V. Prokof’ev, arXiv:1809.01258 \[cond-mat.str-el\]
I. S. Tupitsyn, A. S. Mishchenko, N. Nagaosa, and N. Prokof’ev, Phys. Rev. B. **94**, 155145 (2016)
M. M. Möller, G. A. Sawatzky, M. Franz, and M. Berciu, Nature Communications **8**, Article number: 2267 (2017).
C. Chen, X. Y. Xu, Zi Y. Meng, and M. Hohenadler, Phys. Rev. Lett. **122**, 077601 (2019).
D. Sénéchal, D. Perez, and D. Plouffe, Phys. Rev. B **66**, 075129 (2002).
A. Damascelli, Physica Scripta **T109**, 61–74, (2004).
Supplemental Material addresses in more detail the Haldane model, the CPT, and the case where the mass term, $M$, is zero. There, polaronic effects and an effective hybridization between the two quasiparticle bands of the bare Haldane model occur, and, by increasing $\lambda$, the system exhibits a charge density wave instability. On the other hand, at $M=0$ the topological-trivial insulator transition is not observed.
G. D. Mahan, [*Many-particle physics*]{}, New York, Plenum Press, 1981.
A. L. Fetter and J. D. Walecka, [*Quantum Theory of Many Particle System*]{}, McGraw-Hill Book Company, New York, 34, 1971.
---
abstract: 'As an approximate nearest neighbor search technique, hashing has been widely applied in large-scale image retrieval due to its excellent efficiency. Most supervised deep hashing methods share similar loss designs with embedding learning, while quantizing the continuous high-dimensional features into a compact binary space. We argue that existing deep hashing schemes are defective in two issues that seriously affect the performance, *i.e.*, bit independence and bit balance. The former requires that hash codes of different classes be independent of each other, while the latter means each bit should have a balanced distribution of $+1s$ and $-1s$. In this paper, we propose a novel supervised deep hashing method, termed Hadamard Codebook based Deep Hashing (HCDH), which solves the above two problems in a unified formulation. Specifically, we utilize an off-the-shelf algorithm to generate a binary Hadamard codebook that satisfies the requirements of bit independence and bit balance, and which subsequently serves as the desired output of the hash function learning. We also introduce a projection matrix to solve the inconsistency between the order of the Hadamard matrix and the number of classes. Besides, the proposed HCDH further exploits the supervised labels by constructing a classifier on top of the outputs of the hash functions. Extensive experiments demonstrate that HCDH yields discriminative and balanced binary codes, and outperforms many state-of-the-art methods on three widely-used benchmarks.'
author:
- |
Shen Chen^1^, Liujuan Cao^1^, Mingbao Lin^1^, Yan Wang^2^, Xiaoshuai Sun^1^,\
**Chenglin Wu^3^, Jingfei Qiu^4^, Rongrong Ji^1,4^[^1]**\
^1^Fujian Laboratory of Sensing and Computing for Smart City,\
Department of Cognitive Science, School of Informatics, Xiamen University, China,\
^2^Pinterest, San Francisco, USA, ^3^Fuzhi, Xiamen, China, ^4^Peng Cheng Laboratory, Shenzhen, China\
{chenshen,lmbxmu}@stu.xmu.edu.cn, {caoliujuan,xssun,rrji}@xmu.edu.cn,\
[email protected], [email protected], [email protected]
bibliography:
- 'mybibliography.bib'
title: Hadamard Codebook Based Deep Hashing
---
Introduction
============
With the rapid growth of image data on the Internet, approximate nearest neighbor (ANN) search has attracted extensive research attention. Among various ANN techniques, hashing has been a popular solution due to its low storage cost and fast retrieval speed [@Gionis1999SimilaritySI; @Weiss2008SpectralH; @Liu2011HashingWG; @Liu2012SupervisedHW; @Gong2013IterativeQA; @Xia2014SupervisedHF; @Yang2015SupervisedLO; @Li2015FeatureLB; @Li2017DeepSD; @cao2017hashnet; @Yang2018; @cakir2019hashing]. Hashing aims to transform high-dimensional continuous features into compact binary codes, while preserving the structure of the original data. With the recent advances in deep learning, the trend in hashing has shifted towards leveraging deep models to generate hash codes [@Xia2014SupervisedHF; @Lai2015SimultaneousFL; @Li2015FeatureLB; @cao2017hashnet; @cakir2019hashing], which have shown substantial improvements over traditional hashing methods like Locality Sensitive Hashing (LSH) [@Gionis1999SimilaritySI], Spectral Hashing (SH) [@Weiss2008SpectralH] and Iterative Quantization (ITQ) [@Gong2013IterativeQA].
One practical challenge in hashing is the binary constraint. To address it, most state-of-the-art deep hashing methods [@zhu2016deep; @cao2017hashnet; @cakir2019hashing] follow a design very similar to *embedding learning*, with tweaks on the binary constraint and the optimization techniques. For instance, HashNet [@cao2017hashnet] preserves the similarity information of pairwise images by weighted maximum likelihood, while MIHash [@cakir2019hashing] optimizes the Mutual Information [@cover2012elements] between neighbors and non-neighbors. There is thus an inherent resemblance between hashing and embedding learning, and it remains unclear whether this resemblance arises because the discrete-variable constraints do not change the problem structure, or whether it overlooks the intrinsic problem of hash function learning.
{width="100.00000%"}
To take a deeper look at the intrinsic problem, following the loss design tricks scattered in traditional hashing methods [@Weiss2008SpectralH; @liu2010large; @Liu2011HashingWG; @liu2014discrete], we argue that two important properties have long been undermined in existing deep hashing methods, *i.e.*, *bit independence* and *bit balance*. In terms of bit independence, hash codes of different classes should be independent of each other, which can be interpreted from the information theory perspective: under a fixed number of bits, independent hash codes (whether from random projection or by design) make better use of the hash bits, as also validated in [@Gong2013IterativeQA]. In terms of bit balance, the values of a bit should not be sparsely distributed, *i.e.*, mostly $1$ or $-1$, but should instead have a balanced distribution. As a direct validation, instead of directly using a sign function with a threshold to quantize a hash bit (either $1$ or $-1$), we simply replace $sgn(x)$ with $sgn(x-\text{mean})$ and achieve a 1.9% $m$AP gain for the HashNet model on CIFAR-10 [@krizhevsky2009learning]. Note that some works [@Yang2015SupervisedLO; @shen2017deep] proposed to achieve these two properties by introducing an *independence loss* and a *balance loss*, whose performance is however limited, as the numerical optimization often leads to a local minimum and inevitably introduces more hyper-parameters to be tuned. Although such losses encode reasonable expectations on the hash codes, the optimization may still be suboptimal for lack of analytical guidance from quantized statistics, such as codebooks, to guide the search.
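The bit-balance point can be checked in a few lines. The sketch below uses synthetic activations (not the CIFAR-10 experiment above) to show that $sgn(x)$ collapses a skewed distribution onto one value, while mean-centering before the sign restores a near 50/50 split:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=10_000)   # skewed: mostly positive activations

naive = np.sign(x)                  # sgn(x): almost every bit is +1
centered = np.sign(x - x.mean())    # sgn(x - mean): roughly balanced +1 / -1

print((naive == 1).mean())          # close to 1.0  -> imbalanced bit
print((centered == 1).mean())       # close to 0.5  -> balanced bit
```

A balanced bit carries the maximum of one bit of information, which is exactly the property the Hadamard codebook enforces by construction in the sequel.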
In this paper, inspired by previous works [@Lin2018SupervisedOH; @Lin2019HadamardMG] that primarily addressed online hashing, we propose Hadamard Codebook based Deep Hashing (HCDH), which tackles the above challenges in a unified framework. In principle, we resort to the power of the Hadamard matrix, which is exactly independent and balanced (at the tiny cost of one wasted bit), together with recent advances in deep feature learning, as illustrated in Fig.\[network\]. In the training stage, we generate the Hadamard matrix with $K$ bits via an off-the-shelf algorithm [@sylvester1867lx]. Then $C$ column vectors are randomly selected from the Hadamard matrix without repetition to serve as the anchor codebook, referred to as the *Hadamard codebook*, which guides the learning of the hash codes. In the supervised setting, we also introduce a projection matrix to solve the inconsistency between the order of the Hadamard matrix and the number of classes. Under such a circumstance, the learned hash codes have a good separation between classes, with maximized information gain for each bit. To further exploit the supervised labels, we incorporate a deep classifier into the binary code learning process, as illustrated in the Hash Learning part of Fig.\[network\], so that discriminative hash codes and image representations can be learned simultaneously in a scalable end-to-end fashion.
Our main contributions can be summarized as follows:
- We identify two key issues that are long overlooked in the existing deep hashing methods, *i.e.*, bit independence and bit balance.
- We propose a unified Hadamard codebook model together with deep feature learning to tackle the above two issues. The proposed HCDH method is robust, efficient and scalable for large-scale image retrieval.
- Extensive experiments demonstrate that the proposed HCDH outperforms existing state-of-the-art methods [@Gionis1999SimilaritySI; @Gong2013IterativeQA; @Liu2012SupervisedHW; @Shen2015SupervisedDH; @Yang2015SupervisedLO; @Li2015FeatureLB; @Li2017DeepSD; @cao2017hashnet; @cakir2019hashing] on three widely-used benchmark datasets, *i.e.*, CIFAR-10, NUS-WIDE, and ImageNet.
The Proposed Approach
=====================
Problem Definition
------------------
Let $\mathbf{X}=\left\{\mathbf{x}_i\right\}_{i=1}^{N}$ denote a set of $N$ training images labeled with $C$ classes. Each image belongs to one class (single-label case) or several classes (multi-label case). Without loss of generality, we consider a label matrix $\mathbf{Y}=\left\{\mathbf{y}_i\right\}_{i=1}^{N}$, where $\mathbf{y}_i\in\{0,1\}^{C}$ denotes the label encoding of $\mathbf{x}_i$, and $C$ is the number of classes. The $c$-th element of $\mathbf{y}_i$ being 1 indicates that $\mathbf{x}_i$ belongs to class $c$. The goal of hashing is to learn a mapping $\Omega:\mathbf{X} \rightarrow\{-1,1\}^{K \times N}$ that projects input points $\mathbf{X}$ into $K$-bit compact hash codes $\mathbf{B}=\Omega\left(\mathbf{X}\right)$.
The Framework
-------------
Unlike previous works that explicitly model bit independence and balance in the loss design [@Yang2015SupervisedLO; @shen2017deep], we aim to find a projection matrix $\mathbf{W}\in\mathbb{R}^{C \times K}$ that transforms the label matrix $\mathbf{Y}$ from the label space to the Hamming space, in which bit balance and bit independence are well preserved. $\mathbf{Y}$ in the Hamming space then serves as anchors to guide the learning of the hash codes $\mathbf{B}$. That is: $$\label{loss1}
\begin{aligned} \min_{\mathbf{W},\Omega} \ &\mathit{L} = \frac{1}{2}\|\mathbf{W}^{T} \mathbf{Y}-\mathbf{B}\|^{2} \\ &\mathit{s.t.} \ \mathbf{B}=\Omega\left(\mathbf{X}\right). \end{aligned}$$
The above optimization depends on both the matrix $\mathbf{W}$ and the mapping $\Omega$, which correspond to the Hadamard Codebook and the Hash Learning modules, as shown in Fig.\[network\]. In the following, we show that the optimal matrix $\mathbf{W}$ can be obtained directly rather than learned, and that the mapping $\Omega$ is learned via a deep neural network.
Hadamard Codebook
-----------------
As stated in [@Weiss2008SpectralH], the optimal hash codes $\mathbf{B}$ should satisfy: 1) Independence: hash codes of different classes are independent of each other; 2) Balance: each bit has a 50% chance of being $1$ or $-1$. We formalize bit independence and bit balance as follows: $$\label{independence}
\mathbf{W}^{T} \mathbf{W}=\mathbf{I},$$ $$\label{balance}
\mathbf{W}^{T} \mathbf{1}=\mathbf{0}.$$
We adopt Hadamard matrix [@hadamard1893resolution] as the backbone to construct the codebook of classes, which well conforms the properties of independence and balance. Specifically, the Hadamard matrix is an $n$-order orthogonal matrix, *i.e.*, both its row vectors and column vectors are pairwisely orthogonal, which by nature satisfies Eq.(\[independence\]). In other words: $$\mathbf{H H}^{T}=n \mathbf{I}_{n}, \text { or } \mathbf{H}^{T} \mathbf{H}=n \mathbf{I}_{n},$$ where $\mathbf{H}$ is a Hadamard matrix and $\mathbf{I}_{n}$ is an $n$-order identity matrix. Besides, rows or columns of $\mathbf{H}$ are half $+1s$ and half $-1s$ (except the first row or column), which by nature satisfies Eq.(\[balance\]).
Hence, by eliminating the first row or the first column, the Hadamard matrix can be used as an efficient codebook for learning hash codes (referred to as the *Hadamard codebook*).
Practically, a $2^k$-order Hadamard matrix can be constructed recursively by Sylvester's construction [@sylvester1867lx]. That is: $$\label{generate-hadamard}
\begin{aligned} \mathbf{H}_{2^{k}} & =\left[\begin{array}{cc}{\mathbf{H}_{2^{k-1}}} & {\mathbf{H}_{2^{k-1}}} \\ {\mathbf{H}_{2^{k-1}}} & {-\mathbf{H}_{2^{k-1}}}\end{array}\right], \\ \mathbf{H}_{2} & =\left[\begin{array}{rr}{1} & {1} \\ {1} & {-1}\end{array}\right].\end{aligned}$$
Furthermore, Hadamard matrices of orders 12 and 20 can be constructed by the Hadamard transformation [@hadamard1893resolution]. Since the code lengths frequently used in binary-hashing applications are powers of two, *i.e.*, $2^k$, we mainly adopt Eq.(\[generate-hadamard\]) to generate the target binary codes.
Therefore, given the bit number $K$ and the number of classes $C$, we generate the $K$-order Hadamard matrix by Eq.(\[generate-hadamard\]) and then randomly select $C$ column vectors as the Hadamard codebook; each column vector serves as a unique codeword for one class. However, this is only feasible when $C$ is no larger than the bit number $K$, which conflicts with real-world scenarios where the class number $C$ is often much larger than $K$. Namely, for the case of $C > K$, there are not enough column vectors in the Hadamard matrix to assign a unique codeword to each class, making it impossible to ensure that the generated codewords are orthogonal to each other. To solve this problem, we define the order of the Hadamard matrix $K^{*}$ as follows: $$\label{hadamard-order}
K^{*}=\min \{r|r=2^{k}, r \geq K, r \geq C, k=1,2, \dots\}.$$
To further resolve the inconsistency between $K^*$ and $K$, we utilize LSH [@Gionis1999SimilaritySI] to randomly generate a Gaussian matrix $\mathbf{T} \in \mathbb{R}^{K^* \times K}$, which transforms the Hadamard matrix $\mathbf{H}^*$ from $\mathbb{R}^{K^* \times K^*}$ to $\mathbb{R}^{K^* \times K}$. The sign function is then applied to obtain the desired binary codes: $$\label{hadamard-case2}
\mathbf{H}^{*}=sgn(\mathbf{H}^{*} \mathbf{T}).$$
Finally, $C$ column vectors of matrix $\mathbf{H}^*$ are randomly selected without repetition to form the Hadamard codebook $\mathbf{H}$, which serves as the anchor codebook that guides the learning of hash codes, as elaborated later.
So far, we have obtained a Hadamard codebook that satisfies the two properties defined by Eq.(\[independence\]) and Eq.(\[balance\]). We then reformulate Eq.(\[loss1\]) and define the *Hadamard loss* as: $$\label{hadamard-loss}
\begin{aligned} \min_{\Omega} \ &\mathit{L}_{H}=\frac{1}{2}\|\mathbf{H}^T\mathbf{Y}-\mathbf{B}\|^{2} \\ &\mathit{s.t.} \ \mathbf{B}=\Omega\left(\mathbf{X}\right). \end{aligned}$$
The generation of the desired Hadamard codebook is summarized in Alg. \[alg1\].

**Input:** the number of classes $C$ and the code length $K$. **Output:** the Hadamard codebook $\mathbf{H} \in \mathbb{R}^{C \times K}$.

1. Set the value of $K^*$ by Eq.(\[hadamard-order\]).
2. Generate the $K^*$-order Hadamard matrix $\mathbf{H}^*$ by Eq.(\[generate-hadamard\]).
3. Randomly generate $\mathbf{T} \in \mathbb{R}^{K^* \times K}$ from a Gaussian distribution.
4. Compute $\mathbf{H}^*$ by Eq.(\[hadamard-case2\]).
5. Randomly select $C$ column vectors from matrix $\mathbf{H}^*$ as the Hadamard codebook $\mathbf{H}$.
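Under these definitions, the codebook construction fits in a few lines of NumPy. The sketch below is ours, not the authors' implementation; we draw codewords so that each class gets exactly $K$ bits, which coincides with the square-matrix case $K^*=K$ up to transposition:

```python
import numpy as np

def hadamard(n):
    """Sylvester's recursive construction; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_codebook(C, K, seed=0):
    """Return a C x K matrix of +/-1 codewords, one per class."""
    rng = np.random.default_rng(seed)
    K_star = 1                      # smallest power of two >= max(K, C)
    while K_star < max(K, C):
        K_star *= 2
    H = hadamard(K_star)
    if K_star != K:
        # LSH-style Gaussian projection down to K bits, then binarize.
        T = rng.standard_normal((K_star, K))
        H = np.sign(H @ T)
        H[H == 0] = 1               # break (measure-zero) ties toward +1
    # Draw C distinct codewords without replacement.
    idx = rng.choice(H.shape[0], size=C, replace=False)
    return H[idx]
```

When $C \leq K$ the selected codewords come straight from a Hadamard matrix and are therefore exactly orthogonal; in the projected $C > K$ case orthogonality holds only approximately.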
Learning Hash Functions
-----------------------
We further construct the mapping $\Omega$ by adding a hash layer with $K$ units on top of the feature layer of network $\mathcal{F}$, as illustrated in Fig.\[network\]. Accordingly, the hash codes are obtained by taking the sign of the hash layer outputs as: $$\label{deep-hadamard-loss}
\mathbf{B}=sgn \Big(\mathcal{F}\big(\mathbf{X}; \Theta\big)\Big),$$ where $\Theta$ denotes the parameters of network $\mathcal{F}$ and $sgn(\cdot)$ is the sign function. Since the discrete sign function $sgn(\cdot)$ makes the problem NP-hard, the soft sign function $tanh(\cdot)$ is adopted as the activation function of the hash layer to approximate $sgn(\cdot)$, which yields the new formulation of Eq.(\[hadamard-loss\]): $$\label{hadamard-loss-tanh}
\begin{aligned} \min_{\Theta} \ &\mathit{L}_{H}=\frac{1}{2}\|\mathbf{H}^T\mathbf{Y}-\mathbf{B}\|^{2} \\ &\mathit{s.t.} \ \mathbf{B}=tanh\Big(\mathcal{F}\big(\mathbf{X}; \Theta\big)\Big). \end{aligned}$$
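For concreteness, the objective of Eq.(\[hadamard-loss-tanh\]) can be written down directly. The sketch below (our names, not the authors' code) assumes $\mathbf{H}$ is $C \times K$, $\mathbf{Y}$ is $C \times N$, and $\mathbf{B}$ is the $K \times N$ matrix of tanh outputs:

```python
import numpy as np

def hadamard_loss(B, H, Y):
    """L_H = 0.5 * ||H^T Y - B||_F^2.

    H : C x K codebook (one +/-1 codeword per class)
    Y : C x N label matrix (one-hot, or multi-hot for multi-label data)
    B : K x N relaxed hash outputs, e.g. tanh activations in (-1, 1)
    """
    target = H.T @ Y  # K x N: column i holds the codeword(s) of sample i
    return 0.5 * np.sum((target - B) ** 2)
```

The loss vanishes exactly when every relaxed code matches its class codeword, which is what pushes the tanh outputs toward $\pm 1$.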
The combination of the Hadamard codebook and the CNN further enables linking the hash codes to backend classification tasks. In particular, a classification layer is constructed on top of the hash layer. Unlike the previous work [@Yang2015SupervisedLO] that treats classification and code learning as separate streams, we merge both tasks into one stream and learn them simultaneously, *i.e.*, the outputs of the hash layer are directly guided by both the Hadamard codebook and the deep classifier.
To further improve the adaptability of our approach, we adopt different classification losses for different kinds of labels, *i.e.*, single-label case and multi-label case. For the single-label case, we adopt *Cross Entropy Loss* as the classification loss, defined as: $$\label{cross-entropy-loss}
\mathit{L}_{CE}=-\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{\widetilde{\Theta}_{\mathbf{y}_i}(\mathbf{b}_i)}}{\sum_{j=1}^{C} e^{\widetilde{\Theta}_{j}(\mathbf{b}_i)}},$$ where $\widetilde{\Theta}$ denotes the parameters of the classification layer, $\widetilde{\Theta}_{j}(\cdot)$ is its logit for the $j$-th class, and $\mathbf{b}_i \in \mathbb{R}^{K}$ denotes the hash-layer output of the $i$-th image, whose class is $\mathbf{y}_i$.
For the multi-label case, we adopt *Binary Cross Entropy Loss* as the classification loss, defined as: $$\label{BCE-loss}
\begin{aligned} \mathit{L}_{BCE}=&-\frac{1}{N C} \sum_{i=1}^{N} \sum_{j=1}^{C}\left(\mathbf{y}_{i j} \cdot \log \frac{e^{\widetilde{\Theta}_{j}(\mathbf{b}_i)}}{\sum_{k=1}^{C} e^{\widetilde{\Theta}_{k}(\mathbf{b}_i)}}\right.\\ &+\left.\left(1-\mathbf{y}_{i j}\right) \cdot \log \left(1-\frac{e^{\widetilde{\Theta}_{j}(\mathbf{b}_i)}}{\sum_{k=1}^{C} e^{\widetilde{\Theta}_{k}(\mathbf{b}_i)}}\right)\right). \end{aligned}$$
By integrating the Hadamard loss and the classification loss into a unified deep network, we formulate the final optimization problem of HCDH as: $$\label{loss_total}
\min_{\Theta,\widetilde{\Theta}} \ \mathit{L}_{H} + \lambda \mathit{L}_{CE}, \text{ or } \mathit{L}_{H} + \lambda \mathit{L}_{BCE},$$ where $\lambda$ is a hyper-parameter that balances the Hadamard loss and the classification loss. The network parameters, *i.e.*, $\Theta$ and $\widetilde{\Theta}$, are updated via back-propagation.
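A hedged NumPy stand-in for the combined single-label objective of Eq.(\[loss\_total\]) (in practice this would be a PyTorch loss; `softmax_cross_entropy` and `total_loss` are our illustrative names):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Mean cross entropy; logits: N x C, labels: length-N class ids."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def total_loss(B, H, Y, logits, labels, lam):
    """L = L_H + lambda * L_CE for the single-label case."""
    target = H.T @ Y                       # class codewords per sample
    l_h = 0.5 * np.sum((target - B) ** 2)  # Hadamard loss
    return l_h + lam * softmax_cross_entropy(logits, labels)
```

Setting `lam` to 0 recovers pure codebook guidance, while a very large `lam` reduces the model to a plain classifier, matching the parameter-sensitivity behavior reported later.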
--------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
                    CIFAR-10                                  NUS-WIDE                                  ImageNet
          16 bits   32 bits   64 bits   128 bits    16 bits   32 bits   64 bits   128 bits    16 bits   32 bits   64 bits   128 bits
LSH 0.130 0.146 0.166 0.176 0.475 0.535 0.559 0.629 0.053 0.114 0.174 0.277
ITQ 0.179 0.192 0.201 0.215 0.579 0.648 0.682 0.689 0.077 0.180 0.271 0.348
KSH 0.465 0.496 0.517 0.526 0.631 0.639 0.656 0.654 0.241 0.345 0.429 0.472
SDH 0.483 0.532 0.560 0.565 0.562 0.705 0.713 0.745 0.441 0.550 0.605 0.630
SSDH 0.573 0.612 0.685 0.699 0.710 0.763 0.769 0.770 0.527 0.619 0.652 0.686
DPSH 0.641 0.659 0.674 0.677 0.767 0.784 0.795 0.808 0.183 0.287 0.384 0.461
DSDH 0.605 0.623 0.636 0.651 0.778 0.803 0.819 0.828 0.156 0.216 0.282 0.341
HashNet 0.663 0.687 0.696 0.705 **0.783** 0.811 0.829 **0.840** 0.464 0.593 0.655 0.702
MIHash 0.760 0.792 0.817 0.829
HCDH      **0.769** **0.774** **0.785** **0.790** 0.779     **0.820** **0.830** 0.832     **0.636** **0.691** **0.719** **0.732**
--------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
Experiments
===========
Datasets and Evaluation setup
-----------------------------
We conduct extensive evaluations on three widely-used benchmark datasets, *i.e.*, CIFAR-10 [@krizhevsky2009learning], NUS-WIDE [@Chua2009NUSWIDEAR], and ImageNet [@Deng2009ImageNetAL].
- **CIFAR-10** is a dataset containing 60,000 images evenly divided into 10 categories. Following the protocol in [@Lai2015SimultaneousFL], we randomly select 100 images per class as the query set, 500 images per class as the training set, and the rest are used to form the database.
- **NUS-WIDE** is a dataset which contains 269,648 images in 81 ground-truth categories. Following the protocol in [@Lai2015SimultaneousFL], we consider a subset of 195,834 images associated with the 21 most frequent concepts, and randomly sample 100 images per class to form the query set and 500 images per class to form the training set. The remaining images form the database.
- **ImageNet** is a dataset containing over 1.2M images in the training set and 50K images in the validation set, where each image is single-labeled by one of the 1,000 categories. Following the protocol in [@cao2017hashnet], we randomly select 100 categories, use all the images of these categories in the training set to form the database, and use all the images in the validation set to form the query set. We also randomly select 100 images per category from the database to form the training set.
We evaluate the retrieval results with two widely-adopted metrics: mean Average Precision ($m$AP) and Precision-Recall curves (PR curves). Following the protocol in [@cao2017hashnet], we adopt $m$AP@5000 for NUS-WIDE and $m$AP@1000 for ImageNet due to their large scale.
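As a reference point, $m$AP@$k$ under Hamming-distance ranking can be computed as follows (a simplified single-label sketch of ours; for multi-label data such as NUS-WIDE, two images count as relevant if they share at least one concept):

```python
import numpy as np

def mean_average_precision(query_codes, db_codes, query_labels, db_labels, topk):
    """mAP@topk over Hamming-distance ranking; codes are +/-1 arrays."""
    aps = []
    for q, ql in zip(query_codes, query_labels):
        # For +/-1 codes, Hamming distance = (K - <x, q>) / 2.
        dist = 0.5 * (q.shape[0] - db_codes @ q)
        order = np.argsort(dist, kind="stable")[:topk]
        rel = (db_labels[order] == ql).astype(float)
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        prec = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((prec * rel).sum() / rel.sum())
    return float(np.mean(aps))
```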
We compare the retrieval performance of our method with several classic non-deep hashing methods, *i.e.*, LSH [@Gionis1999SimilaritySI], ITQ [@Gong2013IterativeQA], KSH [@Liu2012SupervisedHW] and SDH [@Shen2015SupervisedDH], and with state-of-the-art deep hashing methods including SSDH [@Yang2015SupervisedLO], DPSH [@Li2015FeatureLB], DSDH [@Li2017DeepSD], HashNet [@cao2017hashnet] and MIHash [@cakir2019hashing]. For the deep hashing methods, we directly use raw image pixels as inputs and adopt AlexNet [@Krizhevsky2012ImageNetCW] as the backbone network. For the non-deep hashing methods, we use deep features extracted from AlexNet pre-trained on ImageNet as inputs. To guarantee a fair comparison, the results of the baselines are obtained using the implementations kindly provided by their authors.
We implement HCDH in open-source PyTorch [@Paszke2017AutomaticDI]. The network parameters are initialized from AlexNet pre-trained on ImageNet. In the training phase, we employ stochastic gradient descent (SGD) with momentum 0.9 and weight decay 0.0005, and set the mini-batch size to 128. The learning rate starts at $10^{-4}$ and decreases by 50% every 50 epochs. The weight parameter $\lambda$ for the three datasets is empirically set to 1, 0.1 and 0.01, respectively.
Results and Discussions
-----------------------
Tab.\[mAP\] shows the $m$AP comparisons on CIFAR-10, NUS-WIDE and ImageNet with respect to different numbers of bits. We observe that: (1) HCDH substantially outperforms all comparison methods. For example, compared with the state-of-the-art MIHash, HCDH improves the average $m$AP on CIFAR-10 and ImageNet by 3.5% and 4.2%, respectively. Similar results can be observed on the multi-label dataset, NUS-WIDE. In addition, HCDH performs very well even at low bit lengths (*e.g.*, 16 bits), where other methods show a significant decrease. To explain, HCDH adopts the Hadamard codebook to guide the learning of hash codes. Since balance and independence are guaranteed by the Hadamard codebook, the information gain of each bit is maximized, which accounts for the excellent performance at low bit lengths. In comparison, the affinity-matching approaches (*e.g.*, MIHash, HashNet) require longer codes to achieve similar results. (2) Compared with SSDH, which includes balance and independence in the loss design, HCDH achieves a considerable improvement on all three datasets, which indicates that direct numerical optimization of these constraints struggles to find a global minimum, whereas the explicit guidance from the Hadamard codebook makes a substantial difference. (3) The pairwise-based DPSH and DSDH show poor performance on ImageNet. This is due to the data imbalance between similar and dissimilar pairs [@cao2017hashnet; @cao2018deep]. In contrast, HCDH is trained in a point-wise manner and is not affected by this imbalance; hence, our method is more robust and better suited for large-scale datasets. (4) In most cases, the deep hashing methods perform better than the traditional hashing methods, which indicates the effectiveness of incorporating deep neural networks in hashing.
Fig.\[precision-recall\] shows the retrieval performance in terms of Precision-Recall curves (PR curves) with respect to different numbers of bits. HCDH delivers higher precision than the state-of-the-art methods at the same recall rate on both CIFAR-10 and ImageNet, and competitive results can also be observed on NUS-WIDE. This further demonstrates that HCDH is favorable for precision-oriented retrieval systems.
Visualization of Hash Codes
---------------------------
We visualize the learned embeddings using t-SNE [@maaten2008visualizing]. As illustrated in Fig.\[t-SNE\], we plot the visualization of the 64-bit hash codes produced by HCDH and the top competing method, *i.e.*, MIHash, on CIFAR-10. On one hand, the hash codes generated by HCDH show discriminative structures among different classes. This is indeed predictable from the Hadamard codebook of HCDH, in which the codewords of different classes are orthogonal to each other. On the other hand, the hash codes generated by MIHash show higher overlap between classes. This is also consistent with the fact that MIHash, as one of the simpler affinity-matching approaches, does not specifically optimize a criterion related to class overlap.
![The confusion matrix on CIFAR-10.[]{data-label="confusion-matrix"}](figures/confusion_matrix){width="35.00000%"}
![The ratio of $+1s$ and $-1s$ with respect to different bits number in the hash codes learned by HCDH on CIFAR-10.[]{data-label="Bihistogram"}](figures/Bihistogram){width="40.00000%"}
-------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
                   CIFAR-10                                  NUS-WIDE                                  ImageNet
         16 bits   32 bits   64 bits   128 bits    16 bits   32 bits   64 bits   128 bits    16 bits   32 bits   64 bits   128 bits
HCDH **0.769** **0.774** **0.785** **0.790** **0.779** **0.820** **0.830** **0.832** **0.636** **0.691** **0.719** **0.732**
HCDH-H 0.749 0.759 0.756 0.740 0.772 0.812 0.823 0.827 0.609 0.681 0.708 0.711
HCDH-C 0.740 0.751 0.767 0.768 0.625 0.734 0.784 0.798 0.616 0.678 0.694 0.700
HCDH-2 0.733 0.744 0.755 0.764 0.771 0.813 0.821 0.825 0.600 0.665 0.690 0.704
-------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
![$m$AP with respect to different $\lambda$ on three datasets. The value $\lambda$ is selected from \[0, 0.01, 0.05, 0.1, 0.5, 1, 5, 10\] and the code length is 64.[]{data-label="params-sensitivity"}](figures/parameter_sensitivity){width="45.00000%"}
Analysis of Hash Properties
---------------------------
To further demonstrate the effectiveness of Hadamard codebook for hashing, we analyze how HCDH ensures three key properties of hash codes, *i.e.*, binarization, independence and balance.
### Binarization.
To illustrate the binarization induced by the Hadamard codebook, Fig.\[distribution\] presents the distribution of hash features (before $sgn$) produced by HCDH and MIHash on CIFAR-10, with the code length set to 64. Clearly, the distribution of HCDH concentrates around 1 and -1, while that of MIHash concentrates around 0. This is due to the fact that HCDH adopts the binary Hadamard codebook as guidance, which directly pushes the hash features toward 1 and -1 during training. In contrast, MIHash can be regarded as a form of embedding learning, which ignores essential properties of hashing, *e.g.*, binarization, and inevitably incurs *quantization error* [@Gong2013IterativeQA].
### Independence.
To illustrate the independence of the Hadamard codebook, Fig.\[confusion-matrix\] presents the confusion matrix produced by HCDH on CIFAR-10. Specifically, for each query point, we calculate the frequency of different classes among the retrieval results, assigning larger weights to top-ranked results, to obtain the desired confusion matrix. An entry with higher brightness indicates that the corresponding class is retrieved more correctly, and vice versa. It is obvious that the diagonal entries of the confusion matrix are the brightest, while the rest are mostly close to 0. This is mainly due to the orthogonality of the Hadamard codebook, which ensures the distinction between classes. Besides, the entries between semantically similar classes are slightly higher, *e.g.*, cat and dog, which indicates that the semantic relations between classes are also well preserved.
### Balance.
To illustrate the balance of the Hadamard codebook, we calculate the ratio of $+1s$ and $-1s$ at each bit position in the hash codes generated by HCDH on CIFAR-10. The results are shown in Fig.\[Bihistogram\]. It is clear that the numbers of $+1s$ and $-1s$ in the hash codes are essentially equal across all bits. This validates that HCDH learns balanced hash codes, which maximizes the information gain of each bit.
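The balance statistic plotted in Fig.\[Bihistogram\] is straightforward to compute from a code matrix (a small sketch; `bit_balance` is our name):

```python
import numpy as np

def bit_balance(B):
    """Per-bit fraction of +1s over N codes (B: K x N with +/-1 entries).

    A perfectly balanced code gives 0.5 for every bit.
    """
    return (B > 0).mean(axis=1)
```

On a Hadamard-derived codebook with the all-ones row removed, every bit is exactly balanced by construction.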
Parameter Sensitivity
---------------------
The $m$AP results of HCDH with respect to different values of the hyper-parameter $\lambda$ on the three datasets are shown in Fig.\[params-sensitivity\]. We tune the value of $\lambda$ in the range \[0, 0.01, 0.05, 0.1, 0.5, 1, 5, 10\] and set the code length to 64. With a large $\lambda$, *e.g.*, close to 10, HCDH gradually degenerates into a simple classification model; due to the lack of guidance from the Hadamard codebook, the $m$AP results on the three datasets decrease significantly. With a small $\lambda$, *e.g.*, close to 0, HCDH merely utilizes the Hadamard codebook to learn hash codes. As can be seen from the experimental results, the retrieval performance first ascends and then decreases as $\lambda$ grows. The best performance for CIFAR-10, NUS-WIDE and ImageNet is obtained when $\lambda$ is set to 1, 0.1, and 0.01, respectively.
Ablation Study
---------------
To evaluate the contributions of the Hadamard codebook and the co-trained deep classifier to the final performance, we investigate three variants of HCDH: (1) **HCDH-H**, a variant trained only with the Hadamard codebook; (2) **HCDH-C**, a variant trained only with the deep classifier; (3) **HCDH-2**, a variant adopting the two-stream architecture of [@Yang2015SupervisedLO] instead of our one-stream architecture. The $m$AP results with respect to different numbers of bits on the three benchmarks are reported in Tab.\[$m$AP-variants\].
By exploiting semantic information via the deep classifier, HCDH outperforms HCDH-H by 2.9%, 0.7% and 1.7% in average $m$AP on the three datasets, respectively. We attribute this to the random selection of the Hadamard codebook from the Hadamard matrix, which cannot guarantee the semantic similarity between classes. Similarly, HCDH-C suffers average $m$AP decreases of 2.3%, 8.0% and 2.3%, especially on NUS-WIDE, substantially underperforming HCDH. These results show that using only the classification model cannot ensure the discriminability of hash codes and is not suitable for multi-label datasets in practice. It is worth noting that, in most cases, HCDH-H outperforms HCDH-C, which demonstrates the superiority of the Hadamard codebook in hash learning.
Another key observation is that, by using the two-stream architecture, HCDH-2 incurs large average $m$AP decreases of 3.1%, 0.8% and 3.0% compared with HCDH. In the two-stream framework, the classification stream is only employed to learn the image representation and does not contribute directly to the learning of hash functions. In contrast, HCDH uses the CNN to learn the image representation and the hash functions simultaneously, with the hash codes directly guided by both the Hadamard codebook and the classification information.
Conclusion
==========
In this paper, we propose a novel deep supervised hashing method, called HCDH, for large-scale image retrieval. With the power of the Hadamard codebook, the issues of bit independence and bit balance in existing deep hashing methods can be effectively addressed. We also introduce a deep classifier to further exploit the supervised labels. Comprehensive experiments justify that HCDH generates balanced and discriminative binary codes that yield state-of-the-art performance on three standard benchmarks, *i.e.*, CIFAR-10, NUS-WIDE, and ImageNet.
[^1]: Corresponding Author.
---
address: ' $^{(1)}$ Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, China'
author:
- 'Jian-feng Wu$^{(1)}$'
title: 'Note on refined topological vertex, Jack symmetric functions and instanton counting (I)'
---
Abstract {#abstract .unnumbered}
========
In this article, we calculate the refined topological vertex for the one-parameter case using Jack symmetric functions. We also obtain the partition function for elliptic $N=2$ models; the results coincide with the Nekrasov instanton-counting partition functions for the $N=2^{\ast}$ theories.
Introduction
============
The study of the refined topological vertex sheds light on many other physical and mathematical problems in recent years. For physical interest, we are interested in $N=2$ gauge systems\[1-10\], which can be realized by geometric engineering as IIA string theory compactified on certain toric Calabi-Yau three-folds (CY 3-folds). The instanton part of the $N=2$ theory is captured by the topological string amplitude, which can be calculated by the refined topological vertex[@TopVertex] formulation. Alternatively, the $N=2$ systems also admit the standard NS5-D4 brane configurations and exhibit a great many interesting properties, such as S-duality, natural confinement, integrability, and so forth. Recently, Alday, Gaiotto and Tachikawa (AGT)[@AGT] showed that for an arbitrary $N=2$ $SU(2)$ superconformal gauge system, there is a dual 2d theory, a Liouville theory living on the moduli space of the 4d gauge theory\[13-31\]. The moduli space (Seiberg-Witten curve) of the $N=2$ theory is also the ramification of the Riemann surface the M5 branes wrap. Later on, Dijkgraaf and Vafa[@DV2009a] proved this 2d-4d relation using the intrinsic correspondences among topological string theory, matrix models and Liouville theory. Recently, Cheng, Dijkgraaf and Vafa[@DV2010b] extended this proof to more detailed cases; they showed that the instanton part of the Nekrasov partition function, which is dual to the conformal block of the 2d conformal field theory (CFT), is in fact a linear combination of non-perturbative string partition functions.
The NS5-D4 brane configuration of a given $N=2$ gauge theory has a rather simple translation to topological string theory. Roughly speaking, the brane configuration diagram can be seen as the singular version of the toric diagram of the related CY 3-fold. For example, consider N coincident D4 branes truncated by 2 separated NS5 branes, whose low-energy theory is a 4d $N=2$ $U(N)$ gauge theory with $N_f = 2N$ fundamental matters. Classically, there are two N-fold singularities located at the two intersection points of the branes. However, when lifted to M-theory, these singularities can be "blown up" into a sequence of $S^{2}$s due to quantum effects. The brane diagram then becomes an N-ramified Riemann sphere on which the M5 branes wrap. On the topological string side, to get the same gauge theory by standard geometric engineering, one can identify the "blow-up" process with the transition of conifold singularities to resolved ones. Fig.1 shows the simple case of the $N=2$ $U(2)$ gauge theory.
![[]{data-label="Fig1.eps"}](Fig1.eps)
The topological string partition function on a given toric CY3-fold is expected to correspond to instanton sums in the related gauge theory. The instanton part of the partition function of a certain $N=2$ gauge theory also has a brane expression, the D0-D4 configuration. In this configuration, the D0 branes dissolve into the D4 branes as the instanton background of the gauge theory. These D0 branes come from the M-theory compactification as the Kaluza-Klein modes. Apart from M5 branes, there are M2 branes, which are the magnetic duals of the M5 branes. If there are M2 branes intersecting the M5 branes, then after the M-theory compactification they become D2-D0 bound states. If these D2 branes wrap some nontrivial Lagrangian 2-cycles in the CY3-fold, they behave just like D0 branes for the observer living on the D4 branes, and thus they also contribute to the instanton counting of the $N=2$ gauge theories. So the instantons of the $N=2$ theory are expected to relate to the D2-D0 bound states. Besides, as noted in [@RTV], the instanton calculation carries more refined information. This information comes from the Nekrasov $\Omega$ deformation of $N=2$ gauge theories. From the M-theory point of view, the CY3 compactification gives a 5d gauge theory living on $\mathbb{C}^{2}\times S^1$, with $S^{1}$ the M-theory cycle. The BPS spectrum of the theory corresponds to little-group representations of the motion group of $\mathbb{C}^{2}\times S^1$, which is $SO(4)=SU(2)_L \times SU(2)_R \subset SO(5)$. The $\Omega$ deformation is a $T^2$ action on $\mathbb{C}^2$: $$T^2 : (z_1, z_2)\longmapsto\,\,(e^{i\epsilon_1}z_1,e^{i\epsilon_2}z_2).$$ This deformation has a direct impact on the definition of the topological amplitude, which can be easily calculated by the topological vertex formalism. Now the fundamental vertices change to the refined ones, which are the two-parameter generalization of the original topological vertices. This change is due to the fact that the topological string amplitude counts holomorphic maps from the string worldsheet to Lagrangian submanifolds of the toric CY3-fold. On the other hand, the maps also correspond to BPS bound states[@Rajesh-Vafa1; @Rajesh-Vafa2] of M2 branes, which are representations of $SO(4)=SU(2)_L \times SU(2)_R$, now twisted by the $\Omega$ deformation. Thus the refined topological vertex is a fundamental building block for the $N=2$ theories.
It is crucial that this observation also implies the AGT relation, and reveals the essential web of dualities among topological strings, matrix models, $N=2$ 4d gauge theories and 2d conformal field theories. However, for generic $N=2$ theories it is hard to verify these dualities, since there are many ambiguities in all of these theories. The simpler cases are the so-called "$N=2^{\ast}$ theory", which only involves an adjoint matter, and the "necklace" quiver $N=2$ theories, which only have bifundamental matters. [^1]In this article, we will concentrate on these theories. Their brane configurations are just the elliptic models, which have the punctured torus $\mathcal{T}_{M,1}$ as Seiberg-Witten curve. The related integrable system is the two-dimensional elliptic Calogero-Sutherland (eCS) model, from which one can easily read off the Liouville/Toda theories living on the torus. From either the topological string or the eCS model, one obtains a rather simple description of the one-parameter refined topological vertex, and further of the instanton counting, by invoking the Jack symmetric functions. We find that the refined topological vertex has a simple description in terms of Jack polynomials, and that the instanton counting of $N=2$ quiver theories can be computed with the same Jack polynomials. The main result of this article is the following closed formulae for $N=2$ $U(N)$ $M$-node ($M\geq2$) necklace quiver gauge theories (see Fig.\[Fig2.eps\])
![[]{data-label="Fig2.eps"}](Fig2.eps)
$$\begin{aligned}
&&{\bf Z}_{bifund}^{4D\,\,inst}(\vec{a}_{\ell},\vec{\lambda}_{\ell},\vec{a}_{\ell+1},\vec{\lambda}_{\ell+1}; m_{\ell}) = \prod_{m,n=1}^{N}\langle E^{m^{(\ell,\ell+1)}_{m,n}}(E^{\ast})^{\beta - m^{(\ell,\ell+1)}_{m,n}-1} J_{\lambda_{\ell,m}}, J_{\lambda_{\ell+1,n}}\rangle_{\beta}, \\
&&{\bf Z}_{adj}^{4D\,\,inst}(\vec{a}_{\ell},\vec{\lambda}_{\ell}; m_{\ell})={\bf Z}_{bifund}^{4D\,\,inst}(\vec{a}_{\ell},\vec{\lambda}_{\ell},\vec{a}_{\ell},\vec{\lambda}_{\ell}; m_{\ell}),\nonumber\\
&&{\bf Z}_{vec}^{4D\,\,inst}(\vec{a},\vec{\lambda}) = 1/{\bf Z}_{adj}^{4D\,\,inst}(\vec{a},\vec{\lambda};0),\nonumber\\
\label{M-Necklace}
&&{\bf Z}^{U(N)\,\,inst}_{M-necklace}=\sum_{\vec{\lambda}_1,\cdots,\vec{\lambda}_M}\prod_{i,\ell=1}^{M} \tilde{Q}_{i}^{|\vec{\lambda}_i|}\,{\bf Z}_{vec}(\vec{a}_i,\vec{\lambda}_i)\,{\bf Z}_{bifund}(\vec{a}_{\ell},\vec{\lambda}_{\ell},\vec{a}_{\ell+1},\vec{\lambda}_{\ell+1};m_{\ell})\nonumber\\
&&=\sum_{\vec{\lambda}_1,\cdots,\vec{\lambda}_M}\prod_{i,\ell=1}^{M}\tilde{Q}_{i}^{|\vec{\lambda}_i|}\prod_{j,k=1}^{N}\Big[\big(\langle E^{a^{(i)}_{j,k}}(E^{\ast})^{\beta - a^{(i)}_{j,k}-1} J_{\lambda_{i,j}}, J_{\lambda_{i,k}}\rangle_{\beta}\big|_{j\neq k}\big)\,\langle J_{\lambda_{i,j}}, J_{\lambda_{i,j}}\rangle_{\beta}\Big]^{-1}\nonumber\\
&&\quad\times \prod_{m,n=1}^{N}\langle E^{m^{(\ell,\ell+1)}_{m,n}}(E^{\ast})^{\beta - m^{(\ell,\ell+1)}_{m,n}-1} J_{\lambda_{\ell,m}}, J_{\lambda_{\ell+1,n}}\rangle_{\beta}.\end{aligned}$$
Here $\vec{a}_{\ell}=\{a_{\ell,1},\cdots,a_{\ell,N}\}$ and $\vec{\lambda}_{\ell}=\{\lambda_{\ell,1},\cdots,\lambda_{\ell,N}\}$ denote the Coulomb parameter vector and the instanton-partition Young tableau vector of the $\ell$-th $U(N)$ gauge group, respectively, and $m_{\ell}=\frac{\tilde{m}}{\epsilon_2}$ denotes the mass of the $\ell$-th bifundamental matter. $$\tilde{Q}_i = \text{exp}(2\pi i \tau_{UV}^i),\,\,\,\,\, \tau_{UV} = \frac{4\pi i}{g_{UV}^2} + \frac{\theta_{UV}}{2\pi}$$ are the sewing parameters, and $a^{(i)}_{j,k}=a_{i,j} - a_{i,k}$, $a_{i,j}=\tilde{a}_{i,j}/\epsilon_2$, $m^{(\ell,\ell+1)}_{m,n} = a_{\ell+1,n}-a_{\ell,m}-m_{\ell}$. $$E=1+e_{[1]}+e_{[2]}+\cdots=\text{exp}\Big(\sum_{n>0}\frac{(-1)^n}{n}{p_n}\Big)$$ is related to Dijkgraaf-Vafa's topological B-brane background, which will be shown explicitly in the second part of this note[@WuJF10b]; $e_{[m]}$ and $p_n$ are the elementary and power-sum symmetric polynomials, respectively. $E^{\ast}$ is the adjoint of $E$ with respect to the inner product of Jack polynomials [@Okounkov], $$\langle E J_{\lambda}, J_{\mu}\rangle_{\beta}=\langle J_{\lambda}, E^{\ast} J_{\mu}\rangle_{\beta},\,\,\,\beta=-\frac{\epsilon_1}{\epsilon_2}.$$ The inner product is defined and evaluated in [@Okounkov] as follows: $$\begin{aligned}
\label{Okounkov}
\langle E^m(E^{\ast})^{\beta-m-1}J_{\lambda},J_{\mu}\rangle_{\beta}&=&(-1)^{|\lambda|}\beta^{-|\lambda|-|\mu|}\prod_{s\in \lambda}(m+a_{\lambda}(s)+1+\beta l_{\mu}(s))\\
\nonumber&\times&\prod_{t\in \mu}(m-a_{\mu}(t)-\beta(l_{\lambda}(t)+1)),\end{aligned}$$ where $$a_{\lambda}(s)=\lambda_i - j,\,\,\,\,\,\,\, l_{\lambda}(s) = \lambda^{t}_{j}-i$$ are the hook arm-length and leg-length of the box $s=(i,j)$ of the Young tableau, respectively.
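These combinatorial quantities are easy to tabulate for a concrete partition; the small illustrative script below uses the same 1-indexed conventions as above (boxes $s=(i,j)$, conjugate partition $\lambda^t$):

```python
def conjugate(part):
    """Conjugate (transposed) partition, e.g. (3, 2) -> (2, 2, 1)."""
    return [sum(1 for p in part if p >= j) for j in range(1, max(part) + 1)]

def arm(part, i, j):
    """Arm length a_lambda(s) = lambda_i - j for box s = (i, j), 1-indexed."""
    return part[i - 1] - j

def leg(part, i, j):
    """Leg length l_lambda(s) = lambda^t_j - i for box s = (i, j), 1-indexed."""
    return conjugate(part)[j - 1] - i
```

For $\lambda=(3,2)$, the corner box $(1,1)$ has arm length 2 and leg length 1, and the boxes at the ends of each row have arm length 0, as the definitions require.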
The structure of this article is as follows. In section 2, we review the refined topological vertex formulation in the A-model setup and its applications to instanton counting problems of $N=2^{\ast}$ theories. The eCS model and its spectrum, which is captured by Jack symmetric functions, are described in section 3. In section 4, we show that the Jack symmetric functions exactly reproduce the Nekrasov instanton partition function, as expected. This computation confirms the relation between the topological string theory that geometrically engineers the $N=2^{\ast}$ theory and the 2d eCS theory, which relates to the 4d theory by the AGT relation[@Donagi; @NekSha]. Section 5 is left for conclusions and further interests.
Refined Topological Vertex and instanton counting in $N=2^{\ast}$ theories
==========================================================================
The refined topological vertex(RTV) is a two-parameter generalization of the ordinary topological vertex. In the topological vertex formulation, one can easily get the partition function of an A-model which generates an $N=2$ gauge theory by geometric engineering. On the other hand, the same $N=2$ theory can be obtained by the NS5-D4 brane setup of IIA string theory. The bridge between these two apparently different configurations is the large $n$ transition. On the field theory side, the nonperturbative part of the partition function is captured by the Nekrasov instanton counting, which involves the so-called $\Omega$ deformation of $\mathbb R^4$. On the topological A-model side, the $\Omega$ deformation relates to the two-parameter generalization of the topological vertex, which is the RTV. The refined partition function of topological string is equivalent to the Nekrasov partition function of $N=2$ theories[@RTV; @Aganagic; @Taki; @Awata05; @Awata09].
Since we will frequently use the relation between these two procedures, it is necessary to review the refined topological vertex and its connection with Nekrasov’s partition function.
Brane setup and toric diagram
-----------------------------
The brane setup of $N=2$ theories can be translated into toric diagrams of the topological A-model as follows. One draws the brane intersection diagram of the desired $N=2$ theory as in Fig. 3a, then blows up every 4-vertex into two 3-vertices and adjusts the toric diagram to match the geometric engineering procedure[@GE][^2], as shown in Fig. 3b.
![[]{data-label="Fig3.eps"}](Fig3.eps)
From the NS5-D4 intersecting brane configuration, one can immediately read off that its low-energy effective theory is just the $N=2$ gauge theory. The pure gauge part of the theory comes from the coincident D4 branes, while the matter content is due to the truncation by the two NS5 branes [^3]. The topological string realization of the $N=2$ theory is, however, totally different. The pure gauge part comes from the blowup of the singularities of the ALE space in the Calabi-Yau. The matter corresponds to D-branes wrapping Lagrangian submanifolds in the Calabi-Yau.
The detailed relation between these two realizations of $N=2$ gauge theories was considered in Dijkgraaf and Vafa's article [@DV2009a], which we now briefly review. Instead of the A-model, they considered the mirror B-model realization. The Coulomb parameters of the gauge theory, which are the positions of the D4 branes in their transverse directions, are related to the large $N$ limit of the condensation of D2 branes, or equivalently, the condensation of the screening charges in the 2d CFT language of the B-model. The matter fields are related to insertions of stacks of D2 branes, which can be written as vertex operators in the 2d CFT; their masses correspond to the numbers of branes. The Nekrasov $\Omega$ deformation is translated into a phase change of the complex coordinate of the spectral curve. We will come back to these points in the second part of this note.
The refined topological vertex
------------------------------
The refined topological vertex is defined as [@RTV] $$\begin{aligned}
C_{\lambda\mu\nu}(t,q) &=&
\left(\frac{q}{t}\right)^{\frac{\parallel\mu\parallel^2+\parallel\nu\parallel^2}{2}}t^{\frac{\kappa(\mu)}{2}}P_{\nu^t}(t^{-\rho};q,t)\\\nonumber&\times&
\sum_{\eta}\left(\frac{q}{t}\right)^{\frac{|\eta|+|\lambda|-|\mu|}{2}}s_{\lambda^t/\eta}(t^{-\rho}q^{-\nu})s_{\mu/\eta}(t^{-\nu^t}q^{-\rho})\\\nonumber
P_{\nu^t}(t^{-\rho};q,t) &=&
t^{\frac{\parallel\nu\parallel^2}{2}}\tilde{Z}_{\nu}(t,q) =
\prod_{s\in\nu}\left(1-t^{l_{\nu}(s)+1}q^{a_{\nu}(s)}\right)^{-1}\\\nonumber
t=e^{\beta\epsilon_1},\,\,\,\,q=e^{-\beta\epsilon_2},&&\parallel\mu\parallel^2
= \sum_i \mu_i^2,\,\,\,\,\,\rho =
\{-\frac{1}{2},-\frac{3}{2},-\frac{5}{2},\cdots\}\end{aligned}$$ where $\lambda,\mu,\nu$ denote Young diagrams of partitions of instantons. $s_{\lambda}$ and $s_{\lambda/\eta}$ are the Schur and skew Schur functions, which are briefly reviewed in Appendix A. $P_{\nu^t}(t^{-\rho};q,t)$ is the Macdonald function.
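The factor $\tilde{Z}_{\nu}(t,q)$ appearing in this specialization is a finite product over the boxes of $\nu$. As a small numerical sketch (our illustration, with arbitrarily chosen values of $t$ and $q$):

```python
# tilde_Z_nu(t, q) = prod over boxes s of nu of 1 / (1 - t^{l(s)+1} q^{a(s)}),
# so that P_{nu^t}(t^{-rho}; q, t) = t^{||nu||^2 / 2} * tilde_Z_nu(t, q).

def conjugate(part):
    if not part:
        return []
    return [sum(1 for row in part if row >= j) for j in range(1, part[0] + 1)]

def tilde_Z(nu, t, q):
    nut = conjugate(nu)
    val = 1.0
    for i in range(1, len(nu) + 1):
        for j in range(1, nu[i - 1] + 1):
            a = nu[i - 1] - j      # arm of box (i, j)
            l = nut[j - 1] - i     # leg of box (i, j)
            val /= 1.0 - t ** (l + 1) * q ** a
    return val
```

For a single box one finds $\tilde{Z}_{(1)} = 1/(1-t)$, and for $\nu=(2)$ the two boxes give $1/((1-t)(1-tq))$.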
![[]{data-label="Fig4.eps"}](Fig4.eps)
For a toric diagram describing a chosen CY 3-fold, the refined partition function can be calculated by gluing all the topological vertices.[^4] For ${\cal O}(-1)\oplus{\cal
O}(-1)\mapsto\mathbb{P}^1$ as in Fig.\[Fig4.eps\]a, the refined partition function can be written as $$\begin{aligned}
Z(t,q,Q)&=&\sum_{\nu}Q^{|\nu|}(-1)^{|\nu|}C_{{\o}{\o}\nu}(t,q)C_{{\o}{\o}\nu^t}(q,t)\\\nonumber
&=&
\sum_{\nu}Q^{|\nu|}(-1)^{|\nu|}q^{\frac{\parallel\nu\parallel^2}{2}}t^{\frac{\parallel\nu^t\parallel^2}{2}}\tilde{Z}_{\nu}(t,q)\tilde{Z}_{\nu^t}(q,t)\\\nonumber
&=&
\sum_{\nu}\frac{Q^{|\nu|}(-1)^{|\nu|}q^{\frac{\parallel\nu\parallel^2}{2}}t^{\frac{\parallel\nu^t\parallel^2}{2}}}
{\prod_{s\in\nu}(1-t^{l_{\nu}(s)+1}q^{a_{\nu}(s)})(1-t^{l_{\nu}(s)}q^{a_{\nu}(s)+1})}.\end{aligned}$$
For more complicated toric diagrams, the calculation principle is the same.
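To make the gluing rule concrete, the partition sum above can be checked numerically against the standard closed product form of the refined resolved conifold, $Z=\prod_{i,j\ge 1}\big(1-Q\,q^{i-1/2}t^{j-1/2}\big)$ [@RTV]. The sketch below is our own illustration; the truncation orders and parameter values are arbitrary:

```python
# Numerical check: sum over partitions nu of
#   (-Q)^{|nu|} q^{||nu||^2/2} t^{||nu^t||^2/2} tilde_Z_nu(t,q) tilde_Z_{nu^t}(q,t)
# against the closed form prod_{i,j >= 1} (1 - Q q^{i-1/2} t^{j-1/2}).

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing lists."""
    if n == 0:
        yield []
        return
    if max_part is None or max_part > n:
        max_part = n
    for k in range(max_part, 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def conjugate(part):
    if not part:
        return []
    return [sum(1 for row in part if row >= j) for j in range(1, part[0] + 1)]

def tilde_Z(nu, t, q):
    nut = conjugate(nu)
    val = 1.0
    for i in range(1, len(nu) + 1):
        for j in range(1, nu[i - 1] + 1):
            val /= 1.0 - t ** (nut[j - 1] - i + 1) * q ** (nu[i - 1] - j)
    return val

def Z_sum(Q, t, q, max_size=8):
    """Partition sum truncated at |nu| <= max_size."""
    total = 0.0
    for n in range(max_size + 1):
        for nu in partitions(n):
            nut = conjugate(nu)
            total += ((-Q) ** n
                      * q ** (sum(r * r for r in nu) / 2)
                      * t ** (sum(c * c for c in nut) / 2)
                      * tilde_Z(nu, t, q) * tilde_Z(nut, q, t))
    return total

def Z_product(Q, t, q, M=60):
    """Closed product form, truncated at i, j <= M."""
    val = 1.0
    for i in range(1, M + 1):
        for j in range(1, M + 1):
            val *= 1.0 - Q * q ** (i - 0.5) * t ** (j - 0.5)
    return val
```

For small $Q$ the two truncated expressions agree to high accuracy, confirming the gluing computation order by order in $Q$.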
Refined partition functions for 5D $N=2^{\ast}$ theories
--------------------------------------------------------
### U(1) theory
The simplest 5D $N=2^{\ast}$ theory is the $U(1)$ gauge theory with a single adjoint hypermultiplet[@Iqbal; @Aganagic]. The toric diagram looks the same as that of ${\cal O}(-1)\oplus{\cal O}(-1)\mapsto\mathbb{P}^1$, but with the two external legs partially compactified, as shown in Fig. 4b. The refined partition function now reads $$\begin{aligned}
\label{5DU(1)}
Z^{5D}_{\nu,\nu^t}(Q, Q_m, t, q) &=&
\sum_{\mu,\nu}(-Q)^{|\nu|}(-Q_m)^{|\mu|}C_{{\o}\mu\nu}(t,q)C_{\o\mu^t\nu^t}(q,t)\\\nonumber
&=&
\sum_{\nu,\mu}(-Q)^{|\nu|}(-Q_m)^{|\mu|}\left(\frac{q}{t}\right)^{\frac{\parallel\mu\parallel^2+\parallel\nu\parallel^2}{2}}
\left(\frac{t}{q}\right)^{\frac{\parallel\mu^t\parallel^2+\parallel\nu^t\parallel^2}{2}}t^{\frac{\kappa(\mu)}{2}}q^{\frac{\kappa(\mu^t)}{2}}
\\\nonumber
&\times&t^{\frac{\parallel\nu\parallel^2}{2}}q^{\frac{\parallel\nu^t\parallel^2}{2}}\tilde{Z}_{\nu}(t,q)\tilde{Z}_{\nu^t}(q,t)
s_{\mu}(t^{-\nu^t}q^{-\rho})s_{\mu^t}(t^{-\rho}q^{-\nu})\\\nonumber
&=&
\sum_{\nu}(-Q)^{|\nu|}t^{\frac{\parallel\nu^t\parallel^2}{2}}q^{\frac{\parallel\nu\parallel^2}{2}}\frac{\prod_{i,j=1}^{\infty}(1-Q_m t^{-\nu^t_i-\rho_j}q^{-\nu_j-\rho_i})}
{\prod_{s\in\nu}(1-t^{l_{\nu}(s)+1}q^{a_{\nu}(s)})(1-t^{l_{\nu}(s)}q^{a_{\nu}(s)+1})}.\end{aligned}$$ This 5D refined partition function contains a perturbative part, which is just the zero-instanton part $Z^{5D}_{\o,\o}(Q,Q_m,t,q)$; the pure instanton part is thus given by $$\begin{aligned}
\label{U(1)inst}
Z^{5D}_{inst}(Q_m, t,q) &=& \frac{Z^{5D}_{\nu,\nu^t}(Q, Q_m, t,
q)}{Z^{5D}_{\o,\o}(Q,Q_m,t,q)}=\sum_{\nu}(-Q)^{|\nu|}\left(\frac{q}{t}\right)^{\frac{|\nu|}{2}}\\\nonumber
&\times&\prod_{(i,j)\in\nu}\frac{(1-Q_m
t^{-\nu^t_i-\rho_j}q^{-\nu_j-\rho_i})(1-Q_m
t^{\nu^t_i+\rho_j}q^{\nu_j+\rho_i})}{(1-t^{-l_{\nu}(s)-1}q^{-a_{\nu}(s)})(1-t^{l_{\nu}(s)}q^{a_{\nu}(s)+1})}\end{aligned}$$
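The step from the doubly infinite product in Eq. (\[5DU(1)\]) to the finite product over the boxes of $\nu$ in Eq. (\[U(1)inst\]) rests on a standard normalization lemma for such products. The sketch below (our illustration; the truncation order and parameter values are arbitrary) checks it numerically:

```python
# Check that prod_{i,j} (1 - Qm t^{-nu^t_i - rho_j} q^{-nu_j - rho_i}),
# normalized by its empty-partition value, collapses to the finite product
#   prod_{(i,j) in nu} (1 - Qm t^{-nu^t_i - rho_j} q^{-nu_j - rho_i})
#                    * (1 - Qm t^{ nu^t_i + rho_j} q^{ nu_j + rho_i}),
# where rho_j = -(2j - 1)/2.

def conjugate(part):
    if not part:
        return []
    return [sum(1 for row in part if row >= j) for j in range(1, part[0] + 1)]

def entry(part, i):
    """i-th part of a partition, zero beyond its length (1-indexed)."""
    return part[i - 1] if i <= len(part) else 0

def normalized_infinite(nu, Qm, t, q, M=200):
    nut = conjugate(nu)
    r = 1.0
    for i in range(1, M + 1):
        for j in range(1, M + 1):
            num = 1 - Qm * t ** (-entry(nut, i) + (2 * j - 1) / 2) \
                       * q ** (-entry(nu, j) + (2 * i - 1) / 2)
            den = 1 - Qm * t ** ((2 * j - 1) / 2) * q ** ((2 * i - 1) / 2)
            r *= num / den
    return r

def finite_hook_product(nu, Qm, t, q):
    nut = conjugate(nu)
    p = 1.0
    for i in range(1, len(nu) + 1):
        for j in range(1, nu[i - 1] + 1):
            et = -entry(nut, i) + (2 * j - 1) / 2
            eq = -entry(nu, j) + (2 * i - 1) / 2
            p *= (1 - Qm * t ** et * q ** eq) * (1 - Qm * t ** (-et) * q ** (-eq))
    return p
```

Away from the boxes of $\nu$ the factors telescope against the empty-partition ones, which is why only a finite product survives.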
### U(2) theory
We now consider the $U(2)$ theory in the same RTV formulation. The toric diagram is shown in Fig. 5, and the 5D refined partition function is given by
![[]{data-label="Fig5.eps"}](Fig5.eps)
$$\begin{aligned}
Z^{5D}_{\nu_1,\nu_1^t; \nu_2, \nu_2^t}(U(2)) &=& \sum_{{\nu_i},
{\mu_i},
\lambda}\prod_{i=1}^2(-Q_{i})^{|\nu_i|}(-Q_{m_i})^{|\mu_i|}(-Q)^{|\lambda|}\\\nonumber
&\times&C_{\o\mu_1\nu_1}(t,q)C_{\lambda\mu_1^t\nu_1^t}(q,t)C_{\lambda^t\mu_2\nu_2}(t,q)C_{\o\mu_2^t\nu_2^t}(q,t)\\\nonumber
&=&\sum_{{\nu_i}, {\mu_i},{\eta_i}
\lambda}\prod_{i=1}^2(-Q_{i})^{|\nu_i|}(-Q_{m_i})^{|\mu_i|}(-Q)^{|\lambda|}\left(\frac{q}{t}\right)^{\frac{|\eta_1|-|\eta_2|}{2}}
t^{\frac{\parallel\nu_1^t\parallel^2+\parallel\nu_2^t\parallel^2}{2}}q^{\frac{\parallel\nu_1\parallel^2+\parallel\nu_2\parallel^2}{2}}\\\nonumber
&\times&\tilde{Z}_{\nu_1}(t,q)\tilde{Z}_{\nu_1^t}(q,t)\tilde{Z}_{\nu_2}(t,q)\tilde{Z}_{\nu_2^t}(q,t)s_{\mu_1^t/\eta_1}(t^{-\rho}q^{-\nu_1})s_{\mu_1}(q^{-\rho}t^{-\nu_1^t})\\\nonumber
&\times&
s_{\mu_2/\eta_2}(t^{-\nu_2^t}q^{-\rho})s_{\mu_2^t}(q^{-\nu_2}t^{-\rho})s_{\lambda^t/\eta_1}(t^{-\nu_1^t}q^{-\rho})s_{\lambda/\eta_2}(t^{-\rho}q^{-\nu_2})\\\nonumber
&=&\sum_{\nu_1,\nu_2}t^{\frac{\parallel\nu_1^t\parallel^2+\parallel\nu_2^t\parallel^2}{2}}q^{\frac{\parallel\nu_1\parallel^2+\parallel\nu_2\parallel^2}{2}}
\tilde{Z}_{\nu_1}(t,q)\tilde{Z}_{\nu_1^t}(q,t)\tilde{Z}_{\nu_2}(t,q)\tilde{Z}_{\nu_2^t}(q,t)\\\nonumber
&\times& \prod_{i,j=1}^{\infty}\frac{(1-Q
t^{\rho_i-\nu_{1,j}^t}q^{-\rho_j-\nu_{2,i}})(1-QQ_{m_1}Q_{m_2}
t^{-\rho_i - \nu_{1,j}^t}q^{-\rho_j-\nu_{2,i}})}{(1-QQ_{m_1}
t^{i-1-\nu_{1,j}^t}q^{j-\nu_{2,i}})}\\\nonumber &\times&
\frac{(1-Q_{m_1} t^{-\rho_i-\nu_{1,j}^t}q^{-\rho_j -
\nu_{1,i}})(1-Q_{m_2} t^{-\rho_i-\nu_{2,j}^t}q^{-\rho_j -
\nu_{2,i}})}{(1-QQ_{m_2} t^{i-\nu_{1,j}^t} q^{j-1-\nu_{2,i}})}.\end{aligned}$$
The instanton part of the refined partition reads $$\begin{aligned}
\label{U(2)inst}
Z_{inst}^{5D}(U(2)) &=& \frac{Z^{5D}_{\nu_1,\nu_1^t; \nu_2,
\nu_2^t}(U(2))}{Z^{5D}_{\o,\o;
\o,\o}(U(2))}=\sum_{\nu_1,\nu_2}\prod_{i=1}^2(-\sqrt{\frac{q}{t}}Q_{i})^{|\nu_i|}\\\nonumber
&\times& \prod_{(j,k)\in\nu_i}(1-Q_{m_i}
t^{-\rho_j-\nu_{i,k}^t}q^{-\rho_k - \nu_{i,j}})(1-Q_{m_i}
t^{\rho_j+\nu_{i,k}^t}q^{\rho_k + \nu_{i,j}})\\\nonumber &\times&
\prod_{(j,k)\in\nu_1}(1-Q_i'
t^{\rho_j+\nu_{2,k}^t}q^{\rho_k+\nu_{1,j}})\prod_{(j,k)\in\nu_2}(1-Q_i'
t^{-\rho_j-\nu_{1,k}^t}q^{-\rho_k-\nu_{2,j}})\\\nonumber &\times&
\left[\prod_{(j,k)\in\nu_1}(1-QQ_{m_1}
t^{-j+\nu_{2,k}^t}q^{\nu_{1,j}-k+1})\prod_{(j,k)\in\nu_2}(1-QQ_{m_1}
t^{j-\nu_{1,k}^t-1}q^{-\nu_{2,j}+k})\right]^{-1}\\\nonumber &\times&
\left[\prod_{(j,k)\in\nu_1}(1-QQ_{m_2}
t^{-j+\nu_{2,k}^t+1}q^{\nu_{1,j}-k})\prod_{(j,k)\in\nu_2}(1-QQ_{m_2}
t^{j-\nu_{1,k}^t}q^{-\nu_{2,j}+k-1})\right]^{-1}\\\nonumber &\times&
\left[\prod_{s\in\nu_i}(1-t^{-l_{\nu_i}(s)-1}q^{-a_{\nu_i}(s)})(1-t^{l_{\nu_i}(s)}q^{a_{\nu_i}(s)+1})\right]^{-1},\end{aligned}$$ here we define $Q_1' = Q,\,\,\,\,\, Q_2' = QQ_{m_1}Q_{m_2}$.
### $U(2)\times U(2)$ theory
The toric diagram for the $N=2^{\ast}$ $U(2)\times U(2)$ theory is given in Fig. 6a. Using the gluing rule[@Taki; @Aganagic], one can cut the toric diagram into two separate pieces, denoted by $T_1$ and $T_2$ (as shown in Fig. 6b). The 5D refined partition functions for $T_1$ and $T_2$ are
![[]{data-label="Fig6.eps"}](Fig6.eps)
$$\begin{aligned}
Z_{\nu_1,\nu_3^t; \nu_2, \nu_4^t}^{T_1, 5D} &=&
Z^{5D}_{\nu_1,\nu_3^t; \nu_2, \nu_4^t}(U(2))
\\\nonumber &=&
\sum_{\mu_1,\mu_4,\lambda_1}(-Q_{m_1})^{|\mu_1|}(-Q_{m_4})^{|\mu_4|}(-\hat{Q}_1)^{|\lambda_1|}(-Q_{1})^{|\nu_1|}(-Q_{2})^{|\nu_2|}\\\nonumber
&\times&
C_{\o\mu_1\nu_1}(t,q)C_{\lambda_1\mu_1^t\nu_3^t}(q,t)C_{\lambda_1^t\mu_4\nu_2}(t,q)C_{\o\mu_4^t\nu_4^t}(q,t),\\
Z_{\nu_3,\nu_1^t; \nu_4, \nu_2^t}^{T_2, 5D} &=&
Z^{5D}_{\nu_3,\nu_1^t; \nu_4, \nu_2^t}(U(2))
\\\nonumber &=&
\sum_{\mu_2,\mu_3,\lambda_2}(-Q_{m_2})^{|\mu_2|}(-Q_{m_3})^{|\mu_3|}(-\hat{Q}_2)^{|\lambda_2|}(-Q_{3})^{|\nu_3|}(-Q_{4})^{|\nu_4|}\\\nonumber
&\times&
C_{\o\mu_3\nu_3}(t,q)C_{\lambda_2\mu_3^t\nu_1^t}(q,t)C_{\lambda_2^t\mu_2\nu_4}(t,q)C_{\o\mu_2^t\nu_2^t}(q,t),\end{aligned}$$
respectively. The instanton part is given by $$\begin{aligned}
Z^{5D}_{inst}(U(2)\times U(2)) = \frac{Z_{\nu_1,\nu_3^t; \nu_2,
\nu_4^t}^{T_1, 5D}Z_{\nu_3,\nu_1^t; \nu_4, \nu_2^t}^{T_2,
5D}}{Z_{\o,\o; \o, \o}^{T_1, 5D}Z_{\o,\o; \o, \o}^{T_2, 5D}}.\end{aligned}$$ After an elementary calculation, we get: $$\begin{aligned}
\label{5DU(2)2}
Z_{inst}^{5D} &=&
\sum_{\{\nu_i\}}\prod_{i=1}^4\left(-\sqrt{\frac{q}{t}}Q_{i}\right)^{|\nu_i|}\\\nonumber
&\times& \prod_{\{r,s\}}\prod_{(j,k)\in\nu_r}(1-Q_{m_s}
t^{-\rho_k-\nu_{s,j}^t}q^{-\rho_j-\nu_{r,k}})(1-Q_{m_r}
t^{\rho_k+\nu_{s,j}^t}q^{\rho_j+\nu_{r,k}})\\\nonumber
&\times&\prod_{m=1}^2\prod_{(j,k)\in\nu_2}(1-\hat{Q}_{1,m}t^{-\rho_j-\nu_{3,k}^t}q^{-\rho_k-\nu_{2,j}})
\prod_{(j,k)\in \nu_3}(1-\hat{Q}_{1,m}t^{\rho_j+\nu_{2,k}^t}
q^{\rho_k +\nu_{3,j}})\\\nonumber
&\times&\prod_{n=1}^2\prod_{(j,k)\in\nu_4}(1-\hat{Q}_{2,n}t^{-\rho_j-\nu_{1,k}^t}q^{-\rho_k-\nu_{4,j}})
\prod_{(j,k)\in \nu_1}(1-\hat{Q}_{2,n}t^{\rho_j+\nu_{4,k}^t}
q^{\rho_k +\nu_{1,j}})\\\nonumber
&\times&\left[\prod_{s\in\nu_i}(1-t^{-l_{\nu_i}(s)-1}q^{-a_{\nu_i}(s)})(1-t^{l_{\nu_i}(s)}q^{a_{\nu_i}(s)+1})\right]^{-1}
\\\nonumber
&\times&\left[\prod_{\{p,q\}}\prod_{(j,k)\in\nu_p}(1-\tilde{Q}_p
t^{j-\nu_{q,k}^t}q^{-\nu_{p,j}+k-1})\prod_{(j,k)\in\nu_q}(1-\tilde{Q}_p
t^{\nu_{p,k}^t -j+1} q^{\nu_{q,j}-k})\right]^{-1},\end{aligned}$$ where $$\begin{aligned}
\nonumber\{r,s\}\in\{1,3\} \,\,\text{or} \,\,\{2,4\},\,\, r\neq
s,\,\,\,\,\{p,q\}\in\{1,2\} \,\,\text{or} \,\,\{3,4\},\,\, p\neq q\\\nonumber
\hat{Q}_{1,1} = Q_{m_1},\,\,\,\hat{Q}_{1,2} =
\hat{Q}_2Q_{m_2}Q_{m_3},\,\,\,\hat{Q}_{2,1} =
Q_{m_2},\,\,\,\hat{Q}_{2,2} = \hat{Q}_1Q_{m_1}Q_{m_4}\\\nonumber
\tilde{Q}_1 = \hat{Q}_1Q_{m_1},\,\,\tilde{Q}_2 =
\hat{Q}_2Q_{m_2}\frac{q}{t},\,\,\tilde{Q}_3 =
\hat{Q}_2Q_{m_3},\,\,\tilde{Q}_4 = \hat{Q}_2Q_{m_4}\frac{q}{t}.\end{aligned}$$ If one defines the following building blocks $$\begin{aligned}
\nonumber
Z_{\nu^{(\ell)}_a,\nu^{(\ell+1)}_b}^{bifund,\,\,5D}(Q^{(\ell,\ell+1)}_{ab},t,q)&=&\prod_{(i,j)\in\nu_a^{(\ell)}}
(1-Q^{(\ell,\ell+1)}_{ab}t^{\nu_{b,j}^{(\ell+1)t}-i}q^{\nu_{a,i}^{(\ell)}-j+1})\\\nonumber&\times&\prod_{(i,j)\in\nu_b^{(\ell+1)}}
(1-Q^{(\ell,\ell+1)}_{ab}t^{-\nu_{a,j}^{(\ell)t} +i
-1}q^{-\nu_{b,i}^{(\ell+1)}+j})\\\nonumber Z^{vec,
\,\,5D}_{\nu^{(\ell)}_a,\nu^{(\ell)}_b}(Q^{\ell}_{ab},t,q) &=&
\left[\prod_{(i,j)\in\nu_a}(1-Q^{(\ell)}_{ab}t^{\nu_{b,j}^t-i}q^{\nu_{a,i}-j+1})\prod_{(i,j)\in\nu_b}(1-Q^{(\ell)}_{ab}t^{-\nu_{a,j}^t
+i -1}q^{-\nu_{b,i}+j})\right]^{-1}\\
Z^{vec,
\,\,5D}_{\nu^{(\ell)}_a,\nu^{(\ell)}_a}(t,q)
&=&\left[\prod_{(i,j)\in\nu_a}(1-t^{-l_{\nu_a}(s)-1}q^{-a_{\nu_a}(s)})(1-t^{l_{\nu_a}(s)}q^{a_{\nu_a}(s)+1})\right]^{-1},\end{aligned}$$ the above refined 5D instanton partition can be written as $$\begin{aligned}
Z_{inst}^{5D} &=&
\sum_{\{\nu_i\}}\prod_{i=1}^4\left(-\sqrt{\frac{q}{t}}Q_{m_i}\right)^{|\nu_i|}
\prod_{a,b
=1}^2Z_{\nu^{(\ell)}_a,\nu^{(\ell+1)}_b}^{bifund,\,\,5D}(Q^{(\ell,\ell+1)}_{ab},t,q)\\\nonumber
&\times&Z^{vec,
\,\,5D}_{\nu^{(\ell)}_a,\nu^{(\ell)}_b}(Q^{(\ell)}_{ab},t,q)Z^{vec,
\,\,5D}_{\nu^{(\ell)}_a,\nu^{(\ell)}_a}(t,q).\end{aligned}$$ This formula coincides with Nekrasov’s instanton partition function for the 5D $N=2^{\ast}$ $U(2)\times U(2)$ gauge theory.
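The reduction of the vector contribution to pure arm/leg data uses $l(s)=\nu^t_j-i$ and $a(s)=\nu_i-j$ for a box $s=(i,j)\in\nu$. A small sketch (our illustration) checks that setting $a=b$ and $Q^{(\ell)}_{ab}=1$ in the two-partition vector factor reproduces the hook form of the last line:

```python
# Two-partition vector factor versus its hook-type reduction at a = b, Q = 1:
# for a box s = (i, j) of nu, l(s) = nu^t_j - i and a(s) = nu_i - j, so the
# factors of the two expressions match one by one.

def conjugate(part):
    if not part:
        return []
    return [sum(1 for row in part if row >= j) for j in range(1, part[0] + 1)]

def entry(part, i):
    return part[i - 1] if i <= len(part) else 0

def vec_two_partitions(nu_a, nu_b, Q, t, q):
    """1 / [ prod_{(i,j) in nu_a} (1 - Q t^{nu_b^t_j - i} q^{nu_a_i - j + 1})
             prod_{(i,j) in nu_b} (1 - Q t^{-nu_a^t_j + i - 1} q^{-nu_b_i + j}) ]"""
    nat, nbt = conjugate(nu_a), conjugate(nu_b)
    p = 1.0
    for i in range(1, len(nu_a) + 1):
        for j in range(1, nu_a[i - 1] + 1):
            p *= 1 - Q * t ** (entry(nbt, j) - i) * q ** (nu_a[i - 1] - j + 1)
    for i in range(1, len(nu_b) + 1):
        for j in range(1, nu_b[i - 1] + 1):
            p *= 1 - Q * t ** (-entry(nat, j) + i - 1) * q ** (-nu_b[i - 1] + j)
    return 1.0 / p

def vec_hook(nu, t, q):
    """1 / prod_s (1 - t^{-l(s)-1} q^{-a(s)}) (1 - t^{l(s)} q^{a(s)+1})"""
    nut = conjugate(nu)
    p = 1.0
    for i in range(1, len(nu) + 1):
        for j in range(1, nu[i - 1] + 1):
            l, a = nut[j - 1] - i, nu[i - 1] - j
            p *= (1 - t ** (-l - 1) * q ** (-a)) * (1 - t ** l * q ** (a + 1))
    return 1.0 / p
```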
### $U(N)$ M-node necklace quiver theory
The generalization to the $U(N)$ M-node necklace quiver theory (Fig. 7) is straightforward, and the result has the following expression
![[]{data-label="Fig7.eps"}](Fig7.eps)
$$\begin{aligned}
Z^{U(N)\,\,\,inst}_{M-necklace} &=&
\sum_{\{\nu^{(\ell)}_i\}}\prod_{i=1,\ell=1}^{N,M}(-\sqrt{\frac{q}{t}}Q^{(\ell)}_{i})^{|\nu^{(\ell)}_i|}\prod_{a,b
=1}^N
Z_{\nu^{(\ell)}_a,\nu^{(\ell+1)}_b}^{bifund,\,\,5D}(Q^{(\ell,\ell+1)}_{ab},t,q)\\\nonumber
&\times&Z^{vec,
\,\,5D}_{\nu^{(\ell)}_a,\nu^{(\ell)}_b}(Q^{(\ell)}_{ab},t,q)Z^{vec,
\,\,5D}_{\nu^{(\ell)}_a,\nu^{(\ell)}_a}(t,q).\end{aligned}$$
This can be easily proved using the RTV formulation and mathematical induction.
4D field theory limit
---------------------
To compare the refined partition functions with the actual 4D theories, one shrinks the circumference of the circular fifth dimension to zero, that is, $\beta\rightarrow 0$.
For $U(1)$ theory, the parameters are set as $$Q_m = \sqrt{\frac{t}{q}}e^{\beta (-\tilde{m})} = e^{\beta({\epsilon_+}/2-\tilde{m})},\,\,\, Q = \sqrt{\frac{q}{t}}e^{\beta(-\tilde{a}+\tilde{m})} =
e^{\beta(-\tilde{a}+\tilde{m}-\epsilon_+/2)},$$ where $$m=\frac{\tilde{m}}{\epsilon_2},\,\,a=\frac{\tilde{a}}{\epsilon_2}.$$ Then the U(1) instanton partition function reads $$\begin{aligned}
{\bf Z}^{4D, U(1)}_{inst}&=&\text{lim}_{\beta\rightarrow
0}\,Z^{5D}_{inst}(Q_m, t,q)\\\nonumber &=&
\sum_{\nu}(\sqrt{\frac{q}{t}}Q)^{|\nu|}\prod_{s\in\nu}\frac{(-m+\beta
l(s)+a(s)+1)(-m-\beta (l(s)+1)-a(s))}{(\beta(l(s)+1)+a(s))(\beta
l(s)+a(s)+1)}\end{aligned}$$
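As a small numerical sanity check of this limit (our own sketch, with arbitrarily chosen values of $\epsilon_{1,2}$ and $\tilde m$), the one-box ($|\nu|=1$) summand of Eq. (\[U(1)inst\]) approaches $-Q(\epsilon_1-\tilde m)(\epsilon_2-\tilde m)/(\epsilon_1\epsilon_2)$ as $\beta\to 0$:

```python
import math

# beta -> 0 limit of the one-box (|nu| = 1) summand of the 5D U(1) N=2*
# instanton sum, with t = e^{beta*eps1}, q = e^{-beta*eps2} and
# Q_m = e^{beta(eps_+/2 - m)}, eps_+ = eps1 + eps2, as set above.  The
# limiting value -Q (eps1 - m)(eps2 - m)/(eps1 eps2) is our own hand
# computation, used here only as a consistency check.

def one_box_5d(Q, eps1, eps2, m, beta):
    t = math.exp(beta * eps1)
    q = math.exp(-beta * eps2)
    Qm = math.exp(beta * ((eps1 + eps2) / 2 - m))
    num = (1 - Qm * t ** -0.5 * q ** -0.5) * (1 - Qm * t ** 0.5 * q ** 0.5)
    den = (1 - t ** -1) * (1 - q)
    return -Q * math.sqrt(q / t) * num / den

def one_box_4d(Q, eps1, eps2, m):
    return -Q * (eps1 - m) * (eps2 - m) / (eps1 * eps2)
```

Each factor $1-t^xq^y$ degenerates to $-\beta(x\epsilon_1-y\epsilon_2)$, so the powers of $\beta$ cancel between numerator and denominator, as they must for the limit to exist.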
For $U(2)$ theory, the parameters are set as $$Q_{m_1}=Q_{m_2}=\sqrt{\frac{t}{q}}e^{\beta (-\tilde{m})},\,\,\,\, QQ_{m_1}=QQ_{m_2}=e^{\beta(-\tilde{a})}.$$ The instanton partition function for this theory is $$\begin{aligned}
\nonumber{\bf Z}^{4D,U(2)}_{inst} &=& \text{lim}_{\beta\rightarrow
0}Z_{inst}^{5D}(U(2))\\\nonumber &=&
\sum_{\nu_1,\nu_2}\prod_{i=1,j=1}^2(\sqrt{\frac{q}{t}}Q_i)^{|\nu_i|}\\\nonumber&\times&\prod_{s\in\nu_i}(-m_{i,j}+a_{\nu_i}(s)+1+\beta
l_{\nu_j}(s))\prod_{t\in\nu_j}(-m_{i,j}-a_{\nu_j}(t)-\beta
(l_{\nu_i}(t)+1))\\\nonumber &\times& \left[\prod_{i\neq
j}\prod_{s\in\nu_i}(a_{i,j}+a_{\nu_i}(s)+1+\beta
l_{\nu_j}(s))\prod_{t\in\nu_j}(a_{i,j}-a_{\nu_j}(t)-\beta
(l_{\nu_i}(t)+1))\right]^{-1}\\\nonumber&\times&\left[\prod_{s\in\nu_i}(\beta(l_{\nu_i}(s)+1)+a_{\nu_i}(s))(\beta
l_{\nu_i}(s)+1+a_{\nu_i}(s))\right]^{-1},\end{aligned}$$ where $$m_{i,j} = a_i-a_j-m, \,\,\,a_{i,j}=a_i-a_j,
\,\,\,a_{1,2}=-a_{2,1}=a.$$ Now one can immediately read off the expression (\[M-Necklace\]) proposed in the introduction, via the substitution $$\tilde{Q}_{i} =
\sqrt{\frac{q}{t}}Q_i, \,\,\,\, \lambda_{\ell,i} = \nu^{(\ell)}_i.$$
The $U(2)\times U(2)$ instanton partition function is just a product of two $U(2)$ ones, $$\begin{aligned}
\nonumber{\bf Z}^{4D,U(2)\times U(2)}_{inst} &=&
\text{lim}_{\beta\rightarrow 0}Z_{inst}^{5D}(U(2)\times
U(2))\\\nonumber
&=&\sum_{\vec{\nu}_1,\vec{\nu}_2}\prod_{i,\ell=1}^{2}(
\tilde{Q}_{i})^{|\vec{\nu}_i|}\prod_{j,k=1}^{2}{\Large\mbox
{$[$}}(\langle E^{a^{(i)}_{j,k}}(E^{\ast})^{\beta - a^{(i)}_{j,k}-1}
J_{\nu_{i,j}}, J_{\nu_{i,k}}\rangle_{\beta}|_{j\neq k})\langle
J_{\nu_{i,j}}, J_{\nu_{i,j}}\rangle_{\beta}{\Large\mbox
{$]$}}^{-1}\nonumber\\&&\times \prod_{m,n=1}^{2}\langle
E^{m^{(\ell,\ell+1)}_{m,n}}(E^{\ast})^{\beta -
m^{(\ell,\ell+1)}_{m,n}-1} J_{\nu_{\ell,m}},
J_{\nu_{\ell+1,n}}\rangle_{\beta}.\end{aligned}$$ It is easy to generalize to the M-node quiver $U(N)$ theory. The result is given in Eq.(\[M-Necklace\]).
Jack symmetric functions and eCS models
=======================================
Since the $N=2^{\ast}$ theories are all superconformal field theories, according to the 4d-2d relation proposed by Alday, Gaiotto and Tachikawa [@AGT], there are 2D Liouville/Toda integrable systems corresponding to these gauge theories; the $N=2^{\ast}$ theories are related to the elliptic Calogero-Sutherland (eCS) models. The CS model[^5] plays an important role in many subjects in physics and mathematics, such as conformal field theory (CFT), unitary matrix models, and the fractional quantum Hall effect (FQHE). Its spectrum can be read off entirely from the so-called Jack polynomials, so in principle the whole system can be solved using the properties of Jack polynomials. From the CFT point of view, Jack polynomials have a natural interpretation as characters of the symmetry that governs the model. For instance, the Jack polynomials associated to certain Young tableaux are believed to correspond to the singular vectors of the $W$-algebra; this algebra reflects the hidden $W_{1+\infty}$ symmetry of the CS model.
The instanton counting of the $N=2^{\ast}$ theories should be related to the counting of BPS spectra in the 4D gauge theories. From the 2D point of view, this can be seen as the counting of admissible representations of the eCS model, that is, the counting of singular vectors in the model. As pointed out in the works of Awata et al. and Sakamoto[@Awata95; @Sakamoto] on singular vectors in the CS model, Jack polynomials and skew Jack polynomials define the singular vector space under the $W$-algebra[^6].
Jack polynomials and Calogero-Sutherland model
----------------------------------------------
The Hamiltonian of the Calogero-Sutherland model is given by $$H=\sum_i p_i^2+\sum_{i<j}\beta(\beta-1) \sin^{-2} \big(\tfrac{1}{2}(x_i-x_j)\big) \\
=\sum_i(-i\partial_i-iA_i)(-i\partial_i+iA_i) \\
=\sum_i\big(-\partial_i^2+A_i^2- \partial_i A_i\big),$$ where $\partial_i=\partial_{x_i}$ and $A_i=\beta\sum_{j\neq i} \cot x_{ij}$ with $x_{ij}=x_i-x_j$. The ground state is captured by the equation of motion $$(-i\partial_i+iA_i)\psi_0 =0,$$ whose solution is $$\psi_0=\prod_{i<j}\sin^{\beta}(x_i-x_j).$$ Defining the excitation states as $\psi_\lambda=J_{\lambda}\psi_0$, the $J_{\lambda}$ must satisfy $$\Big[-\sum_i\partial_i^2+\beta \sum_{i<j} \cot x_{ij}\,
(\partial_i -
\partial_j)\Big] J_{\lambda} = c_{\lambda}J_{\lambda}.$$ It is not hard to obtain the operator formalism for this $\hat{J}_{\lambda}$. However, there is a very simple vertex operator map from the Calogero-Sutherland model to CFT. Denote $$\begin{gathered}
\psi_0= \langle k_f|V_k(z_1)\cdots V_k(z_n)|k_i\rangle \\\nonumber
z_i=e^{ix_i}\\\nonumber V_k (z_i)= e^{ik\phi(z_i)}\\\nonumber\end{gathered}$$ and choose the vacuum momentum $k_i$ such that $$\psi_0 \sim \prod_{i<j}^N (z_i-z_j)^{k^2}\prod_{i=1}^N z_i^{k_i
\cdot k}\nonumber,$$ then the excitation states take the form $$\begin{aligned}
\label{CSjack}
\psi_{\lambda}&=& \langle k_f|\hat{J_{\lambda}}V_k(z_1)\cdots
V_k(z_n)|k_i\rangle\\\nonumber \hat{J}_{\lambda}&=&\sum_n
d_{\lambda}^{[n]} \hat{P}_{[n]}=\sum_n d_{\lambda}^{[n]} \frac
{\hat{a}_{[n]}}{(\sqrt{\beta}) ^{l(\lambda)}}\\\nonumber
\hat{a}_{[n]}&=&\hat{a}_{n_1} \cdots \hat{a}_{n_l},\nonumber\end{aligned}$$ here $\hat{P}_{[n]}$ is the operator form of the Newton (power-sum) polynomial, $\ell(\lambda)$ is the total number of rows in $\lambda$, and $d_{\lambda}^{[n]}$ is the normalization factor fixed by the norm of $J_{\lambda}(z^i)$ (for the partition $\lambda=\{j^{k_j}\}$) $$\begin{aligned}
\langle J_{\lambda}, J_{\mu}\rangle_{\theta} &=&
\delta_{\lambda\mu}d_{\lambda}^{[n]}d_{\mu}^{[n]}\langle
k_f|\frac{\hat{a}_{\lambda}\hat{a}_{-\mu}}{\beta^{\frac{1}{2}\ell(\lambda)}\beta^{\frac{1}{2}\ell(\mu)}}|k_i+Nk\rangle\\\nonumber
&=&z_{\vec{\lambda}_j}\beta^{-\ell(\lambda)}d_{\lambda}^{[n]}d_{\mu}^{[n]}
= \delta_{\lambda\mu}j_{\lambda},\\\nonumber
j_{\lambda}&=&\prod_{s\in\lambda}(a_{\lambda}(s)+\beta(l_{\lambda}(s)+1))(\beta
l_{\lambda}(s) +a_{\lambda}(s)+1),
\\\nonumber z_{\vec{\lambda}_j}&=& \prod_{j=1}^{\infty}j^{k_j}k_j!\,\,\,\,,\end{aligned}$$ where $k_i \longrightarrow k_i +Nk$ reflects the action of the zero modes of the vertex operators. Using the mode expansion of the free boson field $\phi(z)$, one has $$\begin{gathered}
\nonumber\phi(z)=\hat{q}+\hat{p}\ln z +\sum_{n\in
\mathbb{Z},n\neq0}\frac {\hat{a}_{-n}}{n} z^n\\\nonumber V_k(z)=e^{k
\cdot \phi(z)}.\end{gathered}$$ Substituting these into Eq.(\[CSjack\]), it is easy to show that $$\begin{aligned}
\nonumber\psi_{\lambda}&=& J_{\lambda} \psi_0 \\\nonumber &=&\langle
k_f| d_{\lambda}^{[n]} \frac {\hat{a}_{[n]}}{(\sqrt{\beta})
^{l(\lambda)}} \prod_{i<j}^N (z_i-z_j)^{k^2}\prod_{i=1}^N z_i^{k_i
\cdot k} e^{\sum_{m \in \mathbb{Z}^{+}}k \frac {\hat{a}_{-m}}{m}
z_i^m}|k_i+Nk\rangle
\\\nonumber&=&\langle k_f| d_{\lambda}^{[n]} \frac {\sum_i k
z_i^{n_1}}{\sqrt{\beta}} \frac {\sum_i k z_i^{n_2}}{\sqrt{\beta}}
\cdots \frac {\sum_i k z_i^{n_l}}{\sqrt{\beta}} \psi_0({z_i})
|k_i+Nk\rangle\end{aligned}$$ Here $e^{\sum_{m \in \mathbb{Z}^{+}}k \frac
{\hat{a}_{-m}}{m} z_i^m}$ is what remains of $\prod_i^N
V_k^{+}$ after normal ordering, and we have used the relation $\hat{a}
e^{\hat{a}^{+} \alpha} |0\rangle= \alpha e^{\hat{a}^{+} \alpha }
|0\rangle$ in passing from the second to the third line of the above expression. If $k=\sqrt{\beta}$, we see that the Jack polynomials $J_{\lambda}(z)$ can indeed be identified with the excitation states of the CS model.
Screening charges and singular vectors
--------------------------------------
It is shown in Dijkgraaf and Vafa’s article[@DV2009a] that the screening charges are related to instanton insertions. The screening charges of the CS model are defined as in [@Awata95; @WXYjack] $$\alpha_+ = k, \,\,\,\alpha_- = -\frac{1}{k},$$ then by Felder’s cohomology[@Felder] and using Thorn’s method[@Awata95; @Thorn; @WXYjack], one can easily prove that the singular vector ${|\chi_{-r,-s}^+\rangle}$ associated with the rectangular Young tableau $\lambda = \{s^r\}$ can be written as $$\begin{aligned}
{|\chi_{-r,-s}^+\rangle}
&\!\!=\!\!&
\oint\prod_{j=1}^r\frac{dz_j}{2\pi i}\cdot
\prod_{i=1}^r:e^{\alpha_+\phi(z_i)}:
{|\alpha_{r,-s}\rangle} \\\nonumber
&\!\!=\!\!&
\oint\prod_{j=1}^r\frac{dz_j}{2\pi iz_j}\cdot
\prod_{i,j=1 \atop i<j}^r(z_i-z_j)^{2\beta}\cdot
\prod_{i=1}^rz_i^{(1-r)\beta-s}\cdot
\prod_{j=1}^re^{\alpha_+\phi_-(z_j)}
{|\alpha_{-r,-s}\rangle}\\\nonumber
&\!\!\!\!&
{|\alpha\rangle}=e^{\alpha\hat{q}}{|0\rangle},\,\,\,\alpha_{r,s}=\frac{(1-r)\alpha_+}{2}+\frac{(1-s)\alpha_-}{2}\end{aligned}$$ The integration contours have been chosen to be Felder’s contours, as in Fig. 8.
[**Figure 8:**]{} Felder’s integration contours

The Jack polynomial can be identified with the following expression $$\begin{aligned}
{\cal N}_{r,s}^+ \,{\cal N}_{(s^r)}^+ J_{(s^r)}(x)
&\!\!=\!\!&
{\langle\alpha_{r,s}|}C_{k}{|\chi_{r,s}^+\rangle} \\\nonumber
&\!\!=\!\!&
\oint\prod_{j=1}^r\frac{dz_j}{2\pi iz_j}\cdot
\prod_{i,j=1 \atop i<j}^r(z_i-z_j)^{2\beta}\cdot
\prod_{i=1}^rz_i^{(1-r)\beta-s}\cdot
\prod_i\prod_{j=1}^r(1-w_iz_j)^{-\beta}\\\nonumber
&&C_{k}=e^{k\sum_{n>0}\frac{1}{n}a_n p_n} =
\prod_{i}V^-_{k}(w_i),\,\,\,V^-_{k}(w_i)=e^{-k\phi_-(w_i)}
\label{J+rs}\end{aligned}$$ where the normalization constants ${\cal N}_\lambda^+$ [@Stanley] and ${\cal N}_{r,s}^+$ [@Awata95] are given by $$\begin{aligned}
{\cal N}_{\lambda}^+
=
\prod_{s\in\lambda}
\frac{(\ell_{\lambda}(s)+1)\beta+a_{\lambda}(s)}
{\ell_{\lambda}(s)\beta+a_{\lambda}(s)+1}, \qquad
{\cal N}_{r,s}^+
=
\frac{1}{r!}
\prod_{j=1}^r\frac{\sin\pi j\beta}{\sin\pi\beta}\cdot
\frac{\Gamma(r\beta+1)}{\Gamma(\beta+1)^r}.
\label{Nrs}\end{aligned}$$ Similarly, as proved in [@Awata95], the Jack polynomials associated with non-rectangular Young tableaux are related to the singular vectors of the $W_N$-algebra: $$\begin{aligned}
{|\chi_{\vec{r},\vec{s}}^-\rangle}
=
\oint\prod_{a=1}^{N-1}\prod_{j=1}^{r^a}\frac{dz^a_j}{2\pi i}\cdot
\prod_{a=1}^{N-1}\prod_{j=1}^{s^a}:e^{\alpha_+\phi^a(z^a_j)}:
{|\vec{\lambda}_{\vec{r},\vec{s}}^-
-\alpha_+\sum_{a=1}^{N-1}r^a\vec{\alpha}^a\rangle}\end{aligned}$$ with $s^1>\cdots>s^{N-1}$. The corresponding Young tableau is shown as follows. The operator formalism of a generic Jack polynomial can be identified with the insertion between the left and right vacua, denoted by ${\langle\lambda_{\vec{r},\vec{s}}|}$ and ${|\vec{\lambda}_{\vec{r},\vec{s}}^-
-\alpha_+\sum_{a=1}^{N-1}r^a\vec{\alpha}^a\rangle}$. It follows $$\begin{aligned}
\hat{J}_{\lambda}\sim\prod_{i}V^-_{k}(w_i)\oint\prod_{a=1}^{N-1}\prod_{j=1}^{r^a}\frac{dz^a_j}{2\pi
i}\cdot
\prod_{a=1}^{N-1}\prod_{j=1}^{s^a}:e^{\alpha_+\phi^a(z^a_j)}:\end{aligned}$$
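The normalization constant ${\cal N}_{\lambda}^+$ of Eq. (\[Nrs\]) is again a finite product over boxes. A small sketch (our illustration):

```python
# N^+_lambda = prod over boxes s of ((l(s)+1) beta + a(s)) / (l(s) beta + a(s) + 1).
# At beta = 1 both numerator and denominator equal the hook length, so
# N^+_lambda = 1 for every partition at the Schur point.

def conjugate(part):
    if not part:
        return []
    return [sum(1 for row in part if row >= j) for j in range(1, part[0] + 1)]

def N_plus(lam, beta):
    lamt = conjugate(lam)
    val = 1.0
    for i in range(1, len(lam) + 1):
        for j in range(1, lam[i - 1] + 1):
            a = lam[i - 1] - j
            l = lamt[j - 1] - i
            val *= ((l + 1) * beta + a) / (l * beta + a + 1)
    return val
```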
Jack polynomial and Nekrasov’s instanton partition function
===========================================================
The existence of singular vectors implies that correlation functions in the CS model can be split into conformal blocks. These conformal blocks, due to the AGT relation, should be exactly the instanton partition functions of the related $N=2^{\ast}$ theory. For the M-node necklace quiver gauge theory, however, the related 2D correlation function is still hard to calculate. Nevertheless, the result obtained in the present paper shows that there is a much simpler description of this correlation function: it is simply a product of two-point functions with insertions of Jack polynomials! This is a factorized form rather than a summation, and the combinatorial properties of the conformal blocks are completely determined by the Jack polynomials. Here we extract this information at the level of the result; we will explain the hidden physics using Dijkgraaf-Vafa's mirror B-model picture in the second part of this note.
The deformation parameters of the eCS model can be written as $$\epsilon_1=ig_sk,\,\,\,\,\epsilon_2=\frac{ig_s}{k},\,\,\,\,Q=k+\frac{1}{k}=\frac{\epsilon_1+\epsilon_2}{ig_s}.$$ The bifundamental part is the building block of the instanton partition function. The expression reads $${\bf Z}_{bifund}^{4D\,inst}(\vec{a}_{\ell},\vec{\nu}_{\ell},\vec{a}_{\ell+1},\vec{\nu}_{\ell+1}; m_{\ell}) = \prod_{m,n=1}^{N}\langle E^{m^{(\ell,\ell+1)}_{m,n}}(E^{\ast})^{\beta- m^{(\ell,\ell+1)}_{m,n}-1} J_{\nu_{\ell,m}}, J_{\nu_{\ell+1,n}}\rangle_{\beta}.$$ The insertion of $E^{m^{(\ell,\ell+1)}_{m,n}},\,\,m^{(\ell,\ell+1)}_{m,n} =
a_{\ell+1,n}-a_{\ell,m}-m_{\ell}$ can be rewritten as follows $$\begin{aligned}
{\langle0|}C_k\text{exp}{(\frac{\tilde{m}}{-\epsilon_2}\frac{(-1)^np_n}{n})}
&=&{\langle0|}C_k\text{exp}{(\frac{im}{g_s}\frac{(-1)^na_n}{n})}\\\nonumber
&=&{\langle0|}C_k\prod_{\tilde{m}}\Gamma^-(-1) \\\nonumber
&=&{\langle0|}s_{\mu}(-1,-1,-1,\cdots)C_k
\\\nonumber
&=&\sum_{\mu}{\langle\mu|}(-1)^{|\mu|}C_k.\end{aligned}$$ The conjugate state induced by the insertion of $(E^{\ast})^{\beta-m-1}$ has the same expression, except that the conjugate charge is given by $\epsilon_+ - m$, as expected. The whole expression is now given by $$\begin{aligned}
\sum_{\mu}{\langle\mu|}(-1)^{|\mu|}\hat{J_{\lambda}}\hat{J_{\nu}}\sum_{\mu'}(-1)^{|\mu'|}{|\mu'\rangle}\\\nonumber\end{aligned}$$ When one expands $s_{\mu}$ in monomial symmetric functions $m_{\mu}$, and writes the Jack polynomial in terms of the complete homogeneous symmetric functions $h_{\lambda}$[^7], one immediately obtains the correct expression for the inner product. The insertions of $E$ and $E^{\ast}$ admit an interpretation in terms of Wilson loops stretched between the associated branes; this will also be shown in the second part of this note.
Conclusions and discussions
===========================
We calculated in this note the instanton partition function of the elliptic $N=2$ M-node quiver gauge theory using the refined topological vertex formulation. The result exactly coincides with Nekrasov's instanton partition function. We find that the instanton counting of $N=2^{\ast}$ theories has a neat expression in terms of Jack polynomials, as expected[@NekSha], and we give an explanation of this expression at the level of the result. This implies that the AGT duality between 4D $N=2$ supersymmetric gauge theories and 2D conformal field theories has more refined structures, such as a physical explanation of the factorization of conformal blocks.
Acknowledgement {#acknowledgement .unnumbered}
===============
The author thanks Yingying Xu and Song He for useful discussions on the topological vertex and instanton counting, and Prof. Ming Yu for a great deal of support on the operator formalism of Jack polynomials.
[99]{}
R. Donagi, [*“Seiberg-Witten Integrable Systems”*]{} “Surveys in Differential Geometry”, arxiv: alg-geom/9705010
N. Nekrasov [*“Seiberg-Witten prepotential from instanton counting”*]{} “Proceedings of the ICM, Beijing 2002”, vol. 3, 477–496 arxiv: hep-th/0206061
N. Nekrasov, S. Shatashvili [*“Quantum integrability and supersymmetric vacua”*]{} arXiv:0901.4748 \[hep-th\]
N. Seiberg, E. Witten, [*“Electric-magnetic duality, monopole condensation, and confinement in N=2 supersymmetric Yang-Mills theory”*]{}, Nuclear Phys. B [**426**]{} (1): 19–52. N. Seiberg, E. Witten, [*“Monopoles, duality and chiral symmetry breaking in N=2 supersymmetric QCD”*]{}, Nuclear Phys. B [**431**]{} (3): 484–550. W. Lerche, [*“Introduction to Seiberg-Witten Theory and its Stringy Origin”*]{}, Nucl.Phys.Proc.Suppl. [**55B**]{} (1997) 83-117; Fortsch.Phys. [**45**]{} (1997) 293-340; arxiv: hep-th/9611190
R. Dijkgraaf, C. Vafa, [*“Toda Theories, Matrix Models, Topological Strings, and N=2 Gauge Systems”*]{} ,arXiv:0909.2453v1 \[hep-th\]
M. Cheng, R. Dijkgraaf, C. Vafa, [*“Non-Perturbative Topological Strings And Conformal Blocks”*]{}, arXiv:1010.4573v1 \[hep-th\]
E. Witten, [*“Solutions Of Four-Dimensional Field Theories Via M Theory”*]{}, Nucl.Phys.B [**500**]{}:3-42,1997
R. Donagi, E. Witten, [*“Supersymmetric Yang-Mills Systems And Integrable Systems”*]{}, Nucl.Phys.B [**460**]{}:299-334,1996
P. Argyres, N. Seiberg, [*“S-duality in N=2 supersymmetric gauge theories”*]{}, JHEP [**0712**]{}:088,2007
D. Gaiotto, [*“N = 2 Dualities”*]{}, arXiv:0904.2715v1 \[hep-th\]
M. Aganagic, A. Klemm, M. Marino, C. Vafa, [*“The Topological Vertex”*]{}, Commun. Math. Phys. [**254**]{}, 425-478(2005)
A. Iqbal, C. Kozcaz, C. Vafa[*“The Refined Topological Vertex”*]{}, JHEP [**0910**]{}:069,2009
L.F. Alday, D. Gaiotto, Y. Tachikawa, [*“Liouville Correlation Functions from Four-dimensional Gauge Theories”*]{} Lett. Math. Phys. [**91**]{} (2010) 167-197
R. Szabo, [*“Instantons, Topological Strings and Enumerative Geometry ”*]{}, arXiv:0912.1509
N. Drukker, D. Morrison, T. Okuda, [*“Loop operators and S-duality from curves on Riemann surfaces ”*]{}, arXiv:0907.2593; N. Drukker, J. Gomis, T. Okuda, J. Teschner[*“Gauge Theory Loop Operators and Liouville Theory”*]{}, arXiv:0909.1105
N. Drukker, D. Gaiotto, J. Gomis[*“The Virtue of Defects in 4D Gauge Theories and 2D CFTs”*]{}, arXiv:1003.1112
T. Eguchi, K. Maruyoshi, [*“Penner Type Matrix Model and Seiberg-Witten Theory”*]{}, arXiv:0911.4797
T. Eguchi, K. Maruyoshi, [*“Seiberg-Witten theory, matrix model and AGT relation”*]{}, arXiv:1006.0828
N. Nekrasov, E. Witten, [*“The Omega Deformation, Branes, Integrability, and Liouville Theory”*]{}, arXiv:1002.0888
Y. Nakayama[*“Refined Cigar and Omega-deformed Conifold”*]{}, arXiv:1004.2986
K. Maruyoshi, M. Taki, [*“Deformed Prepotential, Quantum Integrable System and Liouville Field Theory”*]{}, arXiv:1006.1214
H. Liu, [*“Notes On U(1) Instanton Counting On $A_{l-1}$ ALE Spaces ”*]{}, arXiv:009.3324
L. Alday, D. Gaiotto, S. Gukov, Y. Tachikawa, H. Verlinde [*“Loop and surface operators in N=2 gauge theory and Liouville modular geometry ”*]{}, arXiv:0909.0945
D. Nanopoulos, D. Xie, [*“Hitchin Equation, Singularity, and N=2 Superconformal Field Theories”*]{}, arXiv:0911.1990
A. Marshakov, A. Mironov, A. Morozov, “[*On Combinatorial Expansions of Conformal Blocks*]{},” [[ arXiv:0907.3946 \[hep-th\]]{}]{}. A. Mironov, S. Mironov, A. Morozov, A. Morozov, “[*CFT exercises for the needs of AGT*]{},” [[ arXiv:0908.2064 \[hep-th\]]{}]{}. A. Mironov, A. Morozov, “[*The Power of Nekrasov Functions*]{},” [[ arXiv:0908.2190 \[hep-th\]]{}]{}. A. Mironov, A. Morozov, “[*On AGT relation in the case of U(3)*]{},” [[arXiv:0908.2569 \[hep-th\]]{}]{}. A. Marshakov, A. Mironov, A. Morozov, “[*On non-conformal limit of the AGT relations*]{},” [[ arXiv:0909.2052 \[hep-th\]]{}]{}. A. Marshakov, A. Mironov, A. Morozov, “[*Zamolodchikov asymptotic formula and instanton expansion in N=2 SUSY $N_f=2N_c$ QCD*]{},” [[ arXiv:0909.3338 \[hep-th\]]{}]{}. A. Mironov and A. Morozov, “[*Proving AGT relations in the large-c limit*]{},” [[ arXiv:0909.3531 \[hep-th\]]{}]{}. A. Mironov and A. Morozov, “[*Nekrasov Functions and Exact Bohr-Zommerfeld Integrals*]{},” [[ arXiv:0910.5670 \[hep-th\]]{}]{}. V. Alba and A. Morozov, “[*Non-conformal limit of AGT relation from the 1-point torus conformal block*]{},” [[ arXiv:0911.0363 \[hep-th\]]{}]{}. K. Maruyoshi, M. Taki, S. Terashima, F. Yagi, [*“New Seiberg Dualities from N=2 Dualities”*]{}, [[*JHEP*]{} [**0909**]{}:031,2009]{} [[ arXiv:0907.2625 \[hep-th\]]{}]{}. D. Gaiotto, [*“Asymptotically free N=2 theories and irregular conformal blocks”*]{}, [[arXiv:0908.0307 \[hep-th\]]{}]{}. S. M. Iguri, C. A. Nunez, “[*Coulomb integrals and conformal blocks in the AdS3-WZNW model*]{},” [[ arXiv:0908.3460 \[hep-th\]]{}]{}.
R. Poghossian, “[*Recursion relations in CFT and N=2 SYM theory*]{},” [[arXiv:0909.3412 \[hep-th\]]{}]{}. G. Bonelli, A. Tanzini, “[*Hitchin systems, N=2 gauge theories and W-gravity*]{},” [[ arXiv:0909.4031 \[hep-th\]]{}]{}. S. Giombi, V. Pestun, “[*The 1/2 BPS ’t Hooft loops in N=4 SYM as instantons in 2d Yang-Mills*]{},” [[ arXiv:0909.4272 \[hep-th\]]{}]{}.
H. Ooguri, C. Vafa, [*“Knots Invariants and Topological Strings”*]{}, Nucl.Phys.B [**577**]{}:419-438,2000
S. Katz, A. Klemm, C. Vafa, [*“Geometric Engineering of Quantum Field Theories”*]{}, Nucl.Phys. B[**497**]{} (1997) 173-195
E. Carlsson, A. Okounkov, [*“Exts and Vertex Operators”*]{}, arXiv:0801.2565
R. Gopakumar, C. Vafa, [*“M theory and Topological Strings I”*]{}, arXiv:hep-th/9809187
R. Gopakumar, C. Vafa, [*“M theory and Topological Strings II”*]{}, arXiv:hep-th/9812127
M. Taki, [*“Surface Operator, Bubbling Calabi-Yau and AGT Relation”*]{}, arXiv:1007.2524
T. Dimofte, S. Gukov, L. Hollands, [*“Vortex Counting and Lagrangian 3-manifolds”*]{}, arXiv:1006.0977
A. Iqbal, C. Kozcaz, T. Sohail, [*“Periodic Schur Process, Cylindric Partitions and N=2\* Theory”*]{}, arXiv:0903.0961
H. Awata, Y. Matsuo, S. Odake, [*“Excited States of Calogero-Sutherland Model and Singular Vectors of the $W_N$ Algebra”*]{}, Nucl.Phys. B[**449**]{} (1995) 347-374
H. Awata, H. Kanno, [*“Instanton counting, Macdonald function and the moduli space of D-branes”*]{}, JHEP [**0505**]{} (2005) 039
H. Awata, Y. Yamada, [*“Five-dimensional AGT Conjecture and the Deformed Virasoro Algebra”*]{}, JHEP [**1001**]{}:125,2010
R. Sakamoto, J. Shiraishi, D. Arnaudon, L. Frappat, E. Ragoucy, [*“Correspondence between conformal field theory and Calogero-Sutherland model”*]{}, Nucl.Phys. B[**704**]{} (2005) 490-509
R. Brower, C. Thorn, [*“Eliminating Spurious States from the Dual Resonance Model”*]{}, Nucl.Phys. B[**31**]{}, 163-182 (1971)
M. Yu, J. F. Wu, Y. Y. Xu, [*“Singular Vectors in Calogero-Sutherland Models and a New Approach to Skew Jack Polynomials”*]{} (in preparation)
J. F. Wu, [*“Note on Refined Topological Vertex, Jack Polynomials and Instanton Counting(II)”*]{}, (to appear)
G. Felder, [*“BRST approach to minimal models”*]{}, Nucl.Phys. B [**317**]{} (1989) 215-236
R. Stanley, [*“Some combinatorial properties of Jack symmetric functions”*]{}, Advances in Mathematics [**77**]{}, 76-115 (1989)
I. Macdonald, [*“Symmetric functions and Hall polynomials”*]{}, 2nd Edition, Cambridge Univ. Press (1995)
[^1]: For convenience, we call all the elliptic $N=2$ models $N=2^{\ast}$ theories.
[^2]: For a toric CY-3fold related to a gauge theory, there should exist a preferred direction in which all gluing legs of the toric diagram are parallel.
[^3]: Our main considerations in the present article do not involve fundamental matter. In the brane setup, fundamental matter arises not only from semi-infinite D4 branes ending on the left or the right of the NS5 branes, but can alternatively be realized by the addition of D6 branes.
[^4]: If there are framing differences in the gluing process, one should also introduce the framing factors. In this article, we will assume all the edges in toric diagram are in the standard framing.
[^5]: The eCS model can be seen as the analytic continuation of the original CS model.
[^6]: Actually, the Jack polynomials associated with rectangular Young tableaux are singular vectors of Virasoro algebra. The non-rectangular ones are related to W-algebra.
[^7]: This can be done as that in Stanley’s article[@Stanley] and Macdonald’s textbook[@Macdonald].
---
abstract: 'The magnetic ground state phase diagram of the ferromagnetic Kondo-lattice model is constructed by calculating internal energies of all possible bipartite magnetic configurations of the simple cubic lattice explicitly. This is done in one dimension (1D), 2D and 3D for a local moment of $S=\frac{3}{2}$. By assuming saturation in the local moment system we are able to treat all appearing higher local correlation functions within an equation of motion approach exactly. A simple explanation for the obtained phase diagram in terms of bandwidth reduction is given. Regions of phase separation are determined from the internal energy curves by an explicit Maxwell construction.'
author:
- 'S. Henning'
- 'W. Nolting'
title: 'The ground state magnetic phase diagram of the ferromagnetic Kondo-lattice model'
---
Introduction
============
The ferromagnetic Kondo lattice model (FKLM), also referred to as the $s$-$d$ model or double exchange model, is the basic model for understanding magnetic phenomena in systems where local magnetic moments couple ferromagnetically to itinerant carriers. This holds for a wide variety of materials.
In the context of transition metal compounds Zener proposed the double exchange mechanism to explain the ferromagnetic (FM) metallic phase in the manganites [@Zener51_1; @Zener51_2]. In these materials the Mn $3d$ shells are split by the crystal field into three degenerate $t_{2g}$ orbitals, which are localized and form a total spin $S=\frac{3}{2}$ according to atomic selection rules, and two $e_g$ orbitals providing the itinerant electrons. These electrons couple ferromagnetically to the localized spins via Hund's exchange coupling. The FKLM is therefore a basic ingredient for describing the rather complex physics of the manganites [@Dagotto03; @Stier07; @Stier08].
Another nearly ideal field of application of the FKLM is the description of the rare earth materials Gd and EuX (X=O,S,Se,Te). These materials have in common a half-filled, strongly localized $4f$ shell, whose electrons couple to a total spin of $S=\frac{7}{2}$. The FKLM was used successfully to explain the famous redshift of the absorption edge of the optical $4f$-$5d$ transition in the ferromagnetic semiconductor EuO[@Busch64; @Rys67]. In \[\] a many-body analysis of the FKLM in combination with a band structure calculation was used to obtain a realistic value for the Curie temperature of the ferromagnetic metal Gd, in good agreement with experiment.
Although it is necessary to extend the FKLM in order to obtain a realistic description of the examples mentioned above, knowledge of the properties of the pure (single-band) FKLM is crucial for understanding these materials.
To reveal the ground state magnetic phases one has to solve the many-body problem of the FKLM. This has already been done in previous works using different techniques. Dynamical mean field theory (DMFT) was used by several authors \[\] to obtain information about the different magnetic domains. In \[\] a continuum field theory approach was used to derive the 2D phase diagram at $T=0$. Classical Monte Carlo simulations were performed in \[\]. For 1D systems numerically exact density-matrix renormalization group calculations were done in \[\]. In \[\] the authors used a Green function method to test the validity of treating the quantum localized spins as classical objects. Extended FKLMs including more material-specific effects were investigated, for instance, in \[\].
In this work we compare all bipartite magnetic configurations of the simple cubic lattice by calculating their respective internal energies. To this end the electronic Green function has to be determined. This is done by an equation of motion approach and, assuming that the local moment system is saturated, we are able to show that all appearing local higher correlation functions can be treated exactly. From the calculated internal energies the phase diagram is constructed and regions of phase separation are determined.
The paper is organized as follows. In Sec. \[sec:model&theory\] the model Hamiltonian and details of the calculation are presented. In Sec. \[sec:results&discussion\] we discuss the phase-diagrams and give an explanation for the sequence of phases obtained by looking at the quasi-particle density of states. In Sec. \[sec:summary&outlook\] we summarize the results and give an outlook on possible directions for further research.
Model and Theory {#sec:model&theory}
================
Model Hamiltonian
-----------------
For a proper description of different (anti-)ferromagnetic alignments of localized magnetic moments it is useful to divide the full lattice into two or more sub-lattices (primitive cells), each ordering ferromagnetically.\
In this work we only consider simple cubic bipartite lattices, i.e. anti-ferromagnetic configurations that can be obtained by dividing the simple cubic lattice into two sub-lattices. In Fig.(\[fig:latticetypes\]) all possible decompositions in two and three dimensions are shown. In the case of 1D only the ferromagnetic and g-type anti-ferromagnetic phases remain.
![\[fig:latticetypes\] Magnetic phases considered in this work (1D omitted).](lattice){width="8.0cm"}
The Hamiltonian of the FKLM in second quantization reads as follows: $$\begin{aligned}
\label{eq:hamiltonian}
\lefteqn{H=H_{s}+H_{sf}=\sum_{ij\sigma}\sum_{\alpha \beta}T^{\alpha \beta}_{ij}
c^+_{i\alpha\sigma}c_{j\beta\sigma}}\nonumber\\
& & -\frac{J}{2}\sum_{i\sigma}\sum_{\alpha}\left( z_\sigma S^z_{i\alpha}
c^+_{i\alpha\sigma}c_{i\alpha\sigma}+S^\sigma_{i\alpha}c^+_{i\alpha-\sigma}
c_{i\alpha\sigma}\right).\end{aligned}$$ The first term describes the hopping of Bloch electrons with spin $\sigma$ between different sites. The lattice sites $\mathbf{R}_{i\alpha}$ are denoted by a Latin index $i$ for the unit cell and a Greek index $\alpha \in \{A,B\}$ for the corresponding sub-lattice, i.e. $\mathbf{R}_{i\alpha}=\mathbf{R}_i+\mathbf{r}_{\alpha}$. The second term describes a local Heisenberg-like exchange interaction between the itinerant electrons and local magnetic moments $\mathbf{S}_{i\alpha}$, where $J>0$ is the strength of this interaction, $z_{\uparrow\downarrow}=\pm 1$ accounts for the two possible spin projections of the electrons and ($S^{\sigma}_{i\alpha}=
S^{x}_{i\alpha} + z_{\sigma}iS^{y}_{i\alpha}$) denotes the spin raising/lowering operator.\
internal energy
---------------
The internal energy of the FKLM at $T=0$ is given by the ground-state expectation value of the Hamiltonian: $$U=\langle H \rangle = \frac{1}{2}\sum_{\alpha\sigma}\int_{-\infty}^{\infty}
f_{-}(E)ES_{\alpha\sigma}(E)dE
\label{eq:internalE}$$ where $S_{\alpha\sigma}(E)=-\frac{1}{\pi}\mathrm{Im}G_{\alpha\sigma}(E)$ is the local spectral density, $f_{-}(E)$ denotes the Fermi function and $G_{\alpha\sigma}(E)$ denotes the local electronic Green function (GF). Note that this formula is obtained by a straightforward calculation of the ground-state expectation value of the Hamiltonian (\[eq:hamiltonian\]) using the spectral theorem and is therefore exact.
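At $T=0$ the Fermi function in (\[eq:internalE\]) reduces to a step at the Fermi energy $E_F$, which is fixed by the band filling $n$. The evaluation can be sketched numerically as follows (our own illustration with a toy flat-band spectral density; the function names are ours, not part of the original calculation):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule on a (possibly non-uniform) grid."""
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

def fermi_level(E, S_total, n):
    """E_F such that the integrated total spectral density equals the filling n."""
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5*(S_total[1:] + S_total[:-1])*np.diff(E))))
    return float(np.interp(n, cum, E))

def internal_energy(E, S_list, n):
    """U = 1/2 sum_{alpha,sigma} int^{E_F} E S_{alpha,sigma}(E) dE at T = 0."""
    Ef = fermi_level(E, sum(S_list), n)
    occ = E <= Ef
    U = 0.5*sum(trap(E[occ]*S[occ], E[occ]) for S in S_list)
    return U, Ef

# toy check: a single flat band S(E) = 1 on [-W/2, W/2] with W = 1 eV;
# at half filling of this band one finds E_F = 0 and U = -W/16
E = np.linspace(-0.5, 0.5, 2001)
U, Ef = internal_energy(E, [np.ones_like(E)], 0.5)
```

For the half-filled flat band of width $W=1$ eV this gives $E_F=0$ and $U=-W/16$, which is easily verified analytically.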
Our starting point is the equation of motion (EQM) for the electronic GF: $$\sum_{l\gamma}\left( E\delta^{\alpha\gamma}_{il}-T^{\alpha\gamma}_{il}\right)
G^{\gamma\beta}_{lj\sigma} = \delta^{\alpha\beta}_{ij}
-\frac{J}{2}\left( I^{\alpha\alpha\beta}_{iij\sigma}
+ F^{\alpha\alpha\beta}_{iij\sigma}
\right)
\label{eq:elecGF_EQM}$$ with Ising-GF: $I^{\alpha\gamma\beta}_{ikj\sigma}=z_{\sigma}{\ensuremath{\langle\!\langle S^z_{i\alpha}c_{k\gamma\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}$ and spin-flip-GF: $F^{\alpha\gamma\beta}_{ikj\sigma}={\ensuremath{\langle\!\langle S^{-\sigma}_{i\alpha}c_{k\gamma-\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}$. Our basic assumption for the ground state is perfect saturation of the local moment system [^1]. With this assumption the Ising-GF can be decoupled exactly: $$I^{\alpha\gamma\beta}_{ikj\sigma}(E)\rightarrow z_{\sigma}z_{\alpha}SG^{\gamma\beta}_{kj}(E)
\label{eq:ising_decoup}$$ where $z_{\alpha}=\pm 1$ denotes the direction of sub-lattice magnetization. In a first attempt to solve Eq. (\[eq:elecGF\_EQM\]) we have neglected spin-flip processes completely ($F^{\alpha\gamma\beta}_{ikj\sigma}\approx0$). With (\[eq:ising\_decoup\]) we then get a closed system of equations which can be solved for the electronic GF by Fourier transformation: $$\begin{aligned}
G^{(\mathrm{MF})}_{\alpha\sigma}(E)
&=& \frac{1}{N}\sum_{\mathbf{q}}
G^{\alpha\alpha(\mathrm{MF})}_{\mathbf{q}\sigma}(E)\\
&=& \frac{1}{N}\sum_{\mathbf{q}}\frac{1}
{E+z_{\sigma}z_{\alpha}\frac{J}{2}S-\epsilon^{\alpha\alpha}_{\mathbf{q}}
-\frac{\epsilon^{\alpha\bar{\alpha}}_{\mathbf{q}}\epsilon^{\bar{\alpha}\alpha}_{\mathbf{q}}}
{E+z_{\sigma}z_{\bar{\alpha}}\frac{J}{2}S
-\epsilon^{\bar{\alpha}\bar{\alpha}}_{\mathbf{q}}}\nonumber
}
\label{eq:green_MF}\end{aligned}$$ where $\epsilon^{\alpha\beta}_{\mathbf{q}}$ is the Fourier transform of the hopping integral and $\bar{\alpha}=-\alpha$ denotes the complementary sub-lattice. We will call this solution the “mean-field” (MF) solution. Note that the ferromagnetic phase is contained in the above formula by setting $\epsilon^{\alpha\bar{\alpha}}_{\mathbf{q}}$ to zero.
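As an illustration, Eq. (\[eq:green\_MF\]) can be evaluated by a Brillouin-zone sum with a small imaginary broadening. The following sketch (our own, not the authors' code; the sign convention of the hopping and the toy parameters are assumptions) treats the g-afm case, where $\epsilon^{\alpha\alpha}_{\mathbf{q}}=0$ and all nearest-neighbor hopping is inter-sublattice:

```python
import numpy as np

# Toy parameters: S = 3/2, free bandwidth W = 12T = 1 eV, broadening eta
S_loc, J, T, eta = 1.5, 0.5, 1.0/12.0, 0.02

# q-grid over the sc Brillouin zone; for the g-afm phase
# eps^{aa}_q = 0 and eps^{ab}_q eps^{ba}_q = eps_q^2
Nq = 16
q = np.linspace(-np.pi, np.pi, Nq, endpoint=False)
qx, qy, qz = np.meshgrid(q, q, q, indexing="ij")
eps = (-2.0*T*(np.cos(qx) + np.cos(qy) + np.cos(qz))).ravel()

def qdos(E_grid, z_sigma, z_alpha):
    """S_{alpha,sigma}(E) = -(1/pi) Im G^{MF}_{alpha,sigma}(E) from Eq. (green_MF)."""
    a = z_sigma*z_alpha*0.5*J*S_loc           # z_sigma z_alpha (J/2) S
    rho = np.empty_like(E_grid)
    for i, e in enumerate(E_grid):
        Ep = e + 1j*eta                       # retarded energy argument
        G = np.mean(1.0/(Ep + a - eps**2/(Ep - a)))
        rho[i] = -G.imag/np.pi
    return rho

E = np.linspace(-1.5, 1.5, 601)
rho_up_A = qdos(E, +1, +1)
# the QDOS is non-negative and integrates to one state per site and spin
norm = float(np.sum(0.5*(rho_up_A[1:] + rho_up_A[:-1])*np.diff(E)))
```

The two quasi-particle bands lie at $E=\pm\sqrt{(JS/2)^2+\epsilon_{\mathbf{q}}^2}$, and the integrated spectral weight per sub-lattice and spin is one, which serves as a consistency check of the implementation.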
To go beyond the MF treatment it is necessary to find a better approximation for the spin-flip-GF. To this end we write down the EQM for the spin-flip-GF: $$\begin{aligned}
\label{eq:sf_EQM}
\lefteqn{
\sum_{l\mu}\left( E\delta^{\gamma\mu}_{kl} - T^{\gamma\mu}_{kl}\right)
F^{\alpha\mu\beta}_{ilj\sigma} =}\\
& {\ensuremath{\langle\!\langle \left[S^{-\sigma}_{i\alpha},H_{sf}\right]_{-}
c_{k\gamma-\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}+
{\ensuremath{\langle\!\langle S^{-\sigma}_{i\alpha}\left[c_{k\gamma-\sigma},H_{sf}\right]_{-};c^+_{j\beta\sigma}\rangle\!\rangle}}\nonumber\end{aligned}$$ Our strategy to get an approximate solution for the spin-flip-GF is to treat the non-local correlations on a mean-field level whereas the local terms will be treated more carefully. This is similar to the idea of the dynamical mean field theory (DMFT) developed for strongly correlated electron systems.[@Georges96] Let us start with the non-local ($i\ne k$ or $i=k$ but $\alpha \ne \gamma$) GFs first. It can be shown [@Nolting97] that the higher GFs resulting from the commutator of $S^{-\sigma}_{i\alpha}$ with $H_{sf}$ are approximately given by the product of the spin-flip-GF times spin-wave energies of the local moment system. Therefore it is justified to neglect the resulting GFs since the spin-wave energies are typically 3-4 orders of magnitude smaller than the local coupling $J$ [@Nolting97; @Santos02].\
The second term on the rhs of (\[eq:sf\_EQM\]) gives two higher GFs which we decouple on a mean-field level: $$\begin{aligned}
\lefteqn{
{\ensuremath{\langle\!\langle S^{-\sigma}_{i\alpha}\left[c_{k\gamma-\sigma},H_{sf}\right]_{-};c^+_{j\beta\sigma}\rangle\!\rangle}}
\approx -\frac{J}{2}}\nonumber\\
&\left({\ensuremath{\langle S^{-\sigma}_{i\alpha}S^{\sigma}_{k\gamma} \rangle}}
{\ensuremath{\langle\!\langle c_{k\gamma\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}
-z_{\sigma}{\ensuremath{\langle S^{z}_{k\gamma} \rangle}}{\ensuremath{\langle\!\langle S^{-\sigma}_{i\alpha}c_{k\gamma-\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}
\right)\nonumber\\
&\rightarrow z_{\sigma}z_{\gamma}S\frac{J}{2} F^{\alpha\gamma\beta}_{ikj\sigma}.
\label{eq:sf_approx_nonloc}\end{aligned}$$ where in the last step the saturated sub-lattice magnetization is exploited.\
We now come to the local terms ($i=k$, $\alpha=\gamma$). The two higher GFs resulting from the second commutator on the rhs of (\[eq:sf\_EQM\]) reduce to: $$\begin{aligned}
{\ensuremath{\langle\!\langle S^{-\sigma}_{i\alpha}S^{\sigma}_{i\alpha}c_{i\alpha\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}
&\rightarrow& S(1-z_{\sigma}z_{\alpha})G^{\alpha\beta}_{ij\sigma}\\
{\ensuremath{\langle\!\langle S^{-\sigma}_{i\alpha}S^{z}_{i\alpha}c_{i\alpha-\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}
&\rightarrow& (z_{\alpha}S + z_{\sigma}\delta_{-\sigma\alpha})F^{\alpha\alpha\beta}_{iij\sigma}.\nonumber
\label{eq:loccorr_1}\end{aligned}$$ Additionally we get a higher order Ising-GF and spin-flip-GF from the first commutator. The higher order spin-flip-GF can be treated [*exactly*]{} by using the EQM of the (known) Ising-GF given in the appendix (\[eq:ising\_EQM\]). This leads to: $$\begin{aligned}
\lefteqn{
{\ensuremath{\langle\!\langle S^{-\sigma}_{i\alpha}n_{i\alpha\sigma}c_{i\alpha-\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}
\rightarrow }\nonumber\\
&z_{\sigma}z_{\alpha}\frac{2}{J}S\left(\delta^{\alpha\beta}_{ij}-\sum_{l\mu}
\left( (E+z_{\sigma}z_{\alpha}\frac{J}{2}S)\delta^{\alpha\mu}_{il}-T^{\alpha\mu}_{il}\right)
G^{\mu\beta}_{lj\sigma}\right)\nonumber\\
& -\left(z_{\sigma}z_{\alpha}S-\delta_{\sigma\alpha}\right)F^{\alpha\alpha\beta}_{iij\sigma}.
\label{eq:higher_sf}\end{aligned}$$ The higher order Ising-GF can be traced back to the higher order spin-flip-GF by writing down its EQM and make use of saturation in the local-moment system (see appendix \[app:higherIsing\] for details): $$\begin{aligned}
\lefteqn{
{\ensuremath{\langle\!\langle S^{z}_{i\alpha}n_{i\alpha-\sigma}c_{i\alpha\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}
\rightarrow z_{\alpha}S\left(G^{\alpha\beta(\mathrm{MF})}_{ij\sigma}
{\ensuremath{\langle n_{j\beta-\sigma} \rangle}}\right.}\nonumber\\
&\left. -\frac{J}{2}\sum_{l\gamma}G^{\alpha\gamma(\mathrm{MF})}_{il\sigma}
{\ensuremath{\langle\!\langle S^{-\sigma}_{l\gamma}n_{l\gamma\sigma}c_{l\gamma-\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}\right).
\label{eq:higher_ising}\end{aligned}$$ It is a major result of this work that it is possible to incorporate all local correlations without approximation, i.e. to treat all local higher order GFs exactly. Combining the results for the appearing higher GFs found in (\[eq:sf\_approx\_nonloc\]), (\[eq:loccorr\_1\]), (\[eq:higher\_sf\]) and (\[eq:higher\_ising\]) we can now solve (\[eq:sf\_EQM\]) for the spin-flip-GF:
$$F^{\alpha\alpha\beta}_{iij\sigma}=-\frac{JSG^{(\mathrm{MF})}_{\alpha-\sigma}}
{1+z_{\sigma}z_{\alpha}\frac{J}{2}G^{(\mathrm{MF})}_{\alpha-\sigma}}
\left(z_{\sigma}z_{\alpha}G^{\alpha\beta(\mathrm{MF})}_{ij\sigma}\left({\ensuremath{\langle n^{\beta}_{j-\sigma} \rangle}}-\delta_{\sigma\beta}\right)
+\sum_{l\gamma}\left(\delta^{\alpha\gamma}_{il}\delta_{\sigma-\alpha}+
G^{\alpha\gamma(\mathrm{MF})}_{il\sigma}\delta_{\sigma\gamma}\sum_{t\eta}
\left( G^{\mu\nu(\mathrm{MF})}_{jk\sigma} \right)^{-1\:\gamma\eta}_{lt}\right)
G^{\eta\beta}_{tj\sigma}
\right).$$
Inserting this result into (\[eq:elecGF\_EQM\]) and performing a Fourier transformation we finally get: $$\label{eq:electGF_sf}
\sum_{\gamma}\left(\left(G^{\mu\nu(\mathrm{MF})}_{\mathbf{q}\sigma}\right)^{-1}_{\alpha\gamma}
-A^\alpha_\sigma\left(\delta_{\sigma-\alpha}\delta_{\alpha\gamma}+
G^{\alpha\sigma(\mathrm{MF})}_{\mathbf{q}\sigma}
\left(G^{\mu\nu(\mathrm{MF})}_{\mathbf{q}\sigma}\right)^{-1}_{\sigma\gamma}\right)\right)
G^{\gamma\beta}_{\mathbf{q}\sigma}(E) = \delta_{\alpha\beta} +
z_{\sigma}z_{\alpha}A^\alpha_\sigma G^{\alpha\beta(\mathrm{MF})}_{\mathbf{q}\sigma}
\left({\ensuremath{\langle n^{\beta}_{-\sigma} \rangle}}-\delta_{\sigma\beta}\right)$$
with $$A^\alpha_\sigma(E) = \frac{J^2 S G^{(\mathrm{MF})}_{\alpha-\sigma}(E)}
{2+z_{\sigma}z_{\alpha}JG^{(\mathrm{MF})}_{\alpha-\sigma}(E)}.$$ This equation allows for a self-consistent calculation of the electronic GF and we will call this the spin-flip (SF) solution.\
One important test of the above result is to compare it with exactly known limiting cases. We found that (\[eq:electGF\_sf\]) reproduces the solution for the ferromagnetically saturated semiconductor [@Shastry81; @Allan82] in the limit of zero band occupation. Additionally, the 4-peak structure of the spectrum known from the “zero-bandwidth” limit [@Nolting84] is retained, whereas the peaks are broadened into bands with their centers of gravity at the original peak positions.
phase separation
----------------
To determine the regions of phase separation in the phase diagram we have used an explicit Maxwell construction as shown in Fig.\[fig:maxwell\].
![\[fig:maxwell\] Explicit Maxwell construction for determining the boundaries of phase-separated regions.](maxwell){width="6.0cm" height="4.0cm"}
The condition for the boundaries of the phase separated region is: $$\left.\frac{dU_1}{dn}\right|_{n=n_1}=\frac{U_2(n_2)-U_1(n_1)}{n_2-n_1}=
\left.\frac{dU_2}{dn}\right|_{n=n_2}.
\label{eq:maxwell}$$
Results and Discussion {#sec:results&discussion}
======================
The internal energy of the FKLM at $T=0$ is given as an integral (\[eq:internalE\]) over the product of the (sub-lattice) quasi-particle density of states (QDOS) times energy up to the Fermi energy. For understanding the resulting phase diagrams it is therefore useful to have a closer look at the QDOS first. In Fig.\[fig:dos\_MF\] the sub-lattice MF-QDOS is shown for the different magnetic phases investigated (in 3D). The underlying full lattice is of simple cubic type with nearest-neighbor hopping $T$ chosen such that the bandwidth $W$ is equal to $W=1$ eV in the case of free electrons ($J=0$ eV). The local magnetic moment is equal to $S=\frac{3}{2}$.
![\[fig:dos\_MF\]Sub-lattice quasi particle density of states (QDOS) of up and down electrons obtained from the MF-GF (\[eq:green\_MF\]) for two values of local coupling $J$ shown for different magnetic configurations. Parameters: $S=\frac{3}{2}$ and free electron bandwidth: $W=1.0$ eV.](dos_MF){width="8.0cm"}
We have plotted the up- and down-electron spectra separately for two different values of $J=0.1/1.0$ eV. The exchange splitting $\Delta_{ex}=JS$ between the up- and down-band is clearly visible. The decisive difference between the phases for nonzero values of $J$ is the bandwidth reduction going from the ferromagnetic via the a- and c-type to the g-afm phase. The reason for this behavior becomes clear by looking at the magnetic lattices shown in Fig.\[fig:latticetypes\]. In the ferromagnetic case an (up-)electron can move freely in all 3 directions of space without paying any additional potential energy. In the a-type anti-ferromagnetic phase the electron can still move freely within a plane, but when moving in the direction perpendicular to the plane it needs to overcome an energy barrier $\Delta_{ex}$. Hence the QDOS for large values of $J$ resembles the form of the 2D tight-binding dispersion. The bandwidth is reduced due to the confinement of the electrons. In the c-afm phase the electron can only move freely along one direction and the QDOS becomes effectively one-dimensional. Finally, in the g-type phase the electron in the large-$J$ limit is quasi-localized and the bandwidth becomes very small. We will see below that this bandwidth effect is mainly responsible for the structure of the phase diagram. Before we come to this point we want to discuss the influence of spin-flip processes as incorporated in (\[eq:electGF\_sf\]).
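The bandwidth argument above can be made quantitative by diagonalizing the $2\times2$ MF Bloch matrix and measuring the width of the lower up-spin quasi-particle band in each phase. In the following sketch (our own; the hopping sign convention and parameter values are toy assumptions) the expected ordering $W_{\mathrm{fm}}>W_{\mathrm{a}}>W_{\mathrm{c}}>W_{\mathrm{g}}$ emerges:

```python
import numpy as np

T, J, S_loc = 1.0/12.0, 1.0, 1.5        # free bandwidth 12T = 1 eV, large-ish J
Nq = 20
q = np.linspace(-np.pi, np.pi, Nq, endpoint=False)
qx, qy, qz = np.meshgrid(q, q, q, indexing="ij")
cosq = [np.cos(qx), np.cos(qy), np.cos(qz)]

# axes along which nearest-neighbor hopping stays inside one sub-lattice:
phases = {"fm": [0, 1, 2], "a-afm": [0, 1], "c-afm": [0], "g-afm": []}

def lower_band_width(intra_axes):
    """Width of the lower up-spin quasi-particle band of the 2x2 MF problem."""
    inter_axes = [i for i in range(3) if i not in intra_axes]
    eps_intra = -2.0*T*sum(cosq[i] for i in intra_axes) if intra_axes else 0.0
    eps_inter = -2.0*T*sum(cosq[i] for i in inter_axes) if inter_axes else 0.0
    hA = eps_intra - 0.5*J*S_loc        # up electron on the "up" sub-lattice
    hB = eps_intra + 0.5*J*S_loc        # up electron on the "down" sub-lattice
    # for the fm phase eps_inter = 0 and the lower band is the free band - JS/2
    e_lo = 0.5*(hA + hB) - np.sqrt((0.5*(hA - hB))**2
                                   + np.asarray(eps_inter)**2)
    return float(np.max(e_lo) - np.min(e_lo))

widths = {name: lower_band_width(axes) for name, axes in phases.items()}
```

The widths decrease monotonically from the fm to the g-afm phase, mirroring the progressive confinement of the electron motion described above.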
![\[fig:dos\_MCDA\] Sub-lattice QDOS of up and down electrons obtained from the SF-GF (\[eq:electGF\_sf\]) for three different band-fillings $n$ shown for the ferromagnetic and a-afm phase. The local coupling $J=0.5$ eV is fixed. Dotted line: corresponding MF result. Horizontal lines: respective Fermi levels. Other parameters as in Fig.\[fig:dos\_MF\].](dos_le){width="8.0cm"}
In Fig.\[fig:dos\_MCDA\] the QDOS for $J=0.5$ eV is shown for three different band fillings $n$. The corresponding Fermi energies are marked by horizontal lines. The apparent new feature is the appearance of scattering states in the down spectrum for band fillings below half filling. The spectral weight of these scattering states is reduced more and more with increasing Fermi level. A second effect is that the sharp features in the MF-QDOS of the anti-ferromagnetic phases are smeared out. Compared to the MF results, the overall change of the QDOS below the Fermi energy due to the inclusion of spin-flip processes is small and will not affect the form of the phase diagram drastically. However, non-negligible changes can be expected. Note that the model shows perfect particle-hole symmetry. Therefore the results for the internal energy will be the same for $n=x$ and $n=2-x$ ($x=0\dots1$, $n=1$: half filling).\
We now come to the discussion of the phase diagrams, which we obtained by comparing the internal energies of the different phases explicitly.
![\[fig:phase\_os\] Magnetic ground state phase diagrams of the pure phases (without phase separation) in 1D, 2D and 3D; left column: MF calculation, right column: SF calculation.](phase_os){width="16.0cm"}
The pure phase diagrams (without phase separation) are shown in Fig.\[fig:phase\_os\], where the different phases are marked by a color code. In the first column the results of the MF calculation are shown for the 1-, 2- and 3-dimensional case. The second column shows the effects of the inclusion of spin-flip processes. We will concentrate here mainly on the 3D case, since most of the given arguments hold equally for the 1D and 2D cases. For $J=0$ the system is paramagnetic (black bar at the bottom). For larger $J$ ($J>0$) a typical sequence appears: for low band fillings $n$ the system is always ferromagnetic and, with increasing $n$, it becomes a-type, then c-type and finally g-type anti-ferromagnetic. This behavior is easily understood by looking at the formula for the internal energy (\[eq:internalE\]) and the MF-QDOS in Fig.\[fig:dos\_MF\]. Because of the bandwidth effect discussed already, the band edge of the ferromagnetic state is always lowest in energy and will therefore give the lowest internal energy for small band occupation. But since the QDOS of the anti-ferromagnetic phases increase much more rapidly than the ferromagnetic one, these give more weight to low energies in the integral (\[eq:internalE\]) and will eventually become lowest in energy for larger band fillings. Therefore the bandwidth effect is the main effect explaining the order of phases with increasing $n$. A very interesting feature can be found in the region $J=0.2\dots0.3$. In this region the ferromagnetic phase is directly followed by the c-afm phase for increasing $n$, although the a-afm phase has a larger bandwidth than the c-afm phase. This can be explained by the two-peak structure of the c-afm QDOS. Due to the first peak at low energies these energies receive much more weight than in the a-afm case, and the c-afm phase becomes lower in energy than the a-afm phase.
Since the reduction of the bandwidth of the anti-ferromagnetic phases compared to the ferromagnetic phase is more pronounced for larger values of $J$, the ferromagnetic region grows in this direction.\
The paramagnetic phase (black bar at $J=0$) disappears for any finite $J$ since, due to the down-shift of the up-spectrum of the ferromagnetic phase, its internal energy will always be lower.
![\[fig:phase\_ms\] Phase diagrams including regions of phase separation (colored stripes; the two colors denote the participating pure phases).](phase_ms){width="16.0cm"}
When comparing the MF and the SF phase diagrams they appear to be very similar at first glance. However, two interesting differences can be found, namely an increased $J$ region without the a-afm phase and the vanishing of the c-phase above $J \approx 0.8$ eV.
Fig.\[fig:phase\_ms\] shows the phase diagrams where regions of phase separation, which we have determined by an explicit Maxwell construction (\[eq:maxwell\]), are marked by colored stripes. The two colors denote the pure phases involved. As one can see, large regions become phase-separated, whereas the two participating phases are mostly determined by the adjacent pure phases. There is one interesting exception to this: above a certain $J$ only the fm/g-afm phase separation survives and suppresses all other phases in this area. The inclusion of spin-flip processes, as shown in the right column of Fig.\[fig:phase\_ms\], pushes this $J$ up to higher values. Generally, spin-flip processes seem to reduce phase separation, as can be seen in the g-afm phase and, e.g., at the border between the fm and c-afm phases.
Our results are in good qualitative agreement with numerical and DMFT results reported by others[@Dagotto98; @Chattopadhyay01; @Lin05]. Common to all these works is that for small coupling strength $J$ there is only a small ferromagnetic region at low band occupation $n$, followed by more complicated (anti-ferromagnetic, spiral, canted) spin states/phase separation. With increasing $J$ the ferromagnetic region extends to larger $n$ values. Near half filling ($n=1$) one always finds anti-ferromagnetism/phase separation. A phase diagram very similar to our 2D result shown in Fig.\[fig:phase\_ms\] was obtained by Pekker et al. \[\]. The positions of the A and G phases are in nearly perfect agreement. However, the authors seem not to have taken into account phase separation between the A and G phases, and their finding of FM/A phase separation near half filling at larger $J$ is not in accordance with our results.
Summary and Outlook {#sec:summary&outlook}
===================
We have constructed phase diagrams of the FKLM in 1D, 2D and 3D by comparing the internal energies of all possible bipartite magnetic configurations of the simple cubic lattice. To this end the electronic GF is calculated by an EQM approach. We can show that it is possible to treat all appearing higher local correlation functions exactly, and we derive an explicit formula for the electronic GF (\[eq:electGF\_sf\]). The obtained sequence of phases with increasing band occupation $n$ and Hund's coupling $J$ is explained by the reduction of the QDOS bandwidth due to electron confinement. Regions of phase separation are then determined from the internal energy curves by an explicit Maxwell construction.
In the phase diagrams obtained, only phases appear that have been explicitly considered by us. Therefore an important extension of this work could be the inclusion of more complicated spin structures like canted/spiral spin states as reported by others \[\]. However, the bandwidth criterion obtained here can certainly be applied to such more complicated states as well.
EQM of the Ising-GF
===================
$$\begin{aligned}
\lefteqn{\sum_{l\mu}\left( E\delta^{\gamma\mu}_{kl} - T^{\gamma\mu}_{kl}\right)
I^{\alpha\mu\beta}_{ilj\sigma} =
z_{\sigma}\delta^{\gamma\beta}_{kj}\langle S^z_{i\alpha} \rangle }\nonumber\\
& &
-\frac{J}{2}\left({\ensuremath{\langle\!\langle S^z_{i\alpha}S^z_{k\gamma}c_{k\gamma\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}
+z_{\sigma}
{\ensuremath{\langle\!\langle S^z_{i\alpha}S^{-\sigma}_{k\gamma}c_{k\gamma-\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}\right. \nonumber\\
& & \left. +z_{\sigma}\sum_{\sigma'}z_{\sigma'}
{\ensuremath{\langle\!\langle S^{\sigma'}_{i\alpha}c^+_{i\alpha-\sigma'}c_{i\alpha\sigma'}c_{k\gamma\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}\right),
\label{eq:ising_EQM}\end{aligned}$$
higher order Ising-GF {#app:higherIsing}
=====================
The higher order Ising-GF can be decomposed into: $${\ensuremath{\langle\!\langle S^{z}_{i\sigma}n_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}\rightarrow
z_{\alpha}S{\ensuremath{\langle\!\langle n_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}$$ when a saturated sub-lattice magnetization is assumed. The EQM of the remaining GF turns out to be: $$\begin{aligned}
\label{eq:EQM_higherIsing}
\lefteqn{(E+z_{\sigma}z_{\alpha}\frac{J}{2}S){\ensuremath{\langle\!\langle n_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}} \nonumber\\
=& \sum_{l\gamma}T^{\alpha\gamma}_{il}{\ensuremath{\langle\!\langle c^+_{l\gamma-\sigma}c_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} &(\mathrm{I})\nonumber\\
+& \sum_{l\gamma}T^{\alpha\gamma}_{il}{\ensuremath{\langle\!\langle c^+_{i\alpha-\sigma}c_{l\gamma-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} &(\mathrm{II})\nonumber\\
+& \sum_{l\gamma}T^{\alpha\gamma}_{il}{\ensuremath{\langle\!\langle c^+_{i\alpha-\sigma}c_{i\alpha-\sigma}c_{l\gamma\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} &(\mathrm{III})\nonumber\\
-& 2\sum_{l\gamma}T^{\alpha\gamma}_{il}{\ensuremath{\langle\!\langle c^+_{l\gamma-\sigma}c_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} &\nonumber\\
+& \delta^{\alpha\beta}_{ij}{\ensuremath{\langle n_{i\alpha-\sigma} \rangle}}
-\frac{J}{2}{\ensuremath{\langle\!\langle S^{-\sigma}_{i\alpha}n_{i\alpha\sigma}c_{i\alpha-\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}.&\end{aligned}$$ Subtracting the term denoted by (I) from this equation one gets: $$\begin{aligned}
\lefteqn{\sum_{l\gamma}(E\delta^{\alpha\gamma}_{il}-T^{\alpha\gamma}_{il}+z_{\sigma}z_{\alpha}\delta^{\alpha\gamma}_{il}
\frac{J}{2}S)\times}\nonumber\\
& & {\ensuremath{\langle\!\langle c^+_{l\gamma-\sigma}c_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}\nonumber\\
&=& \sum_{l\gamma}T^{\alpha\gamma}_{il}{\ensuremath{\langle\!\langle c^+_{i\alpha-\sigma}c_{l\gamma-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&+& \sum_{l\gamma}T^{\alpha\gamma}_{il}{\ensuremath{\langle\!\langle c^+_{i\alpha-\sigma}c_{i\alpha-\sigma}c_{l\gamma\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&-& 2\sum_{l\gamma}T^{\alpha\gamma}_{il}{\ensuremath{\langle\!\langle c^+_{l\gamma-\sigma}c_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&+& \delta^{\alpha\beta}_{ij}{\ensuremath{\langle n_{i\alpha-\sigma} \rangle}}
-\frac{J}{2}{\ensuremath{\langle\!\langle S^{-\sigma}_{i\alpha}n_{i\alpha\sigma}c_{i\alpha-\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}.\end{aligned}$$ This can be solved for ${\ensuremath{\langle\!\langle n_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}$ by left-multiplying with the MF-GF matrix: $$\begin{aligned}
\label{eq:1_sol}
\lefteqn{{\ensuremath{\langle\!\langle n_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}=} \nonumber\\
& & \sum_{kl\eta\gamma}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}T^{\eta\gamma}_{kl}
{\ensuremath{\langle\!\langle c^+_{i\alpha-\sigma}c_{l\gamma-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&+& \sum_{kl\eta\gamma}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}T^{\eta\gamma}_{kl}
{\ensuremath{\langle\!\langle c^+_{i\alpha-\sigma}c_{i\alpha-\sigma}c_{l\gamma\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&-& 2\sum_{kl\eta\gamma}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}T^{\eta\gamma}_{kl}
{\ensuremath{\langle\!\langle c^+_{l\gamma-\sigma}c_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&+& G^{(\mathrm{MF})\alpha\beta}_{ij\sigma}{\ensuremath{\langle n_{j\beta-\sigma} \rangle}} \nonumber \\
&-& \frac{J}{2}\sum_{k\eta}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}
{\ensuremath{\langle\!\langle S^{-\sigma}_{k\eta}n_{k\eta\sigma}c_{k\eta-\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}.\end{aligned}$$ Two other equations are obtained from (\[eq:EQM\_higherIsing\]) by subtracting term (II) or (III) and performing the same steps as before. This yields: $$\begin{aligned}
\label{eq:2_sol}
\lefteqn{{\ensuremath{\langle\!\langle n_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}=} \nonumber\\
& & \sum_{kl\eta\gamma}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}T^{\eta\gamma}_{kl}
{\ensuremath{\langle\!\langle c^+_{l\gamma-\sigma}c_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&+& \sum_{kl\eta\gamma}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}T^{\eta\gamma}_{kl}
{\ensuremath{\langle\!\langle c^+_{i\alpha-\sigma}c_{i\alpha-\sigma}c_{l\gamma\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&-& 2\sum_{kl\eta\gamma}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}T^{\eta\gamma}_{kl}
{\ensuremath{\langle\!\langle c^+_{l\gamma-\sigma}c_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&+& G^{(\mathrm{MF})\alpha\beta}_{ij\sigma}{\ensuremath{\langle n_{j\beta-\sigma} \rangle}} \nonumber\\
&-& \frac{J}{2}\sum_{k\eta}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}
{\ensuremath{\langle\!\langle S^{-\sigma}_{k\eta}n_{k\eta\sigma}c_{k\eta-\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}\end{aligned}$$ and $$\begin{aligned}
\label{eq:3_sol}
\lefteqn{{\ensuremath{\langle\!\langle n_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}=} \nonumber\\
& & \sum_{kl\eta\gamma}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}T^{\eta\gamma}_{kl}
{\ensuremath{\langle\!\langle c^+_{l\gamma-\sigma}c_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&+& \sum_{kl\eta\gamma}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}T^{\eta\gamma}_{kl}
{\ensuremath{\langle\!\langle c^+_{i\alpha-\sigma}c_{l\gamma-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&-& 2\sum_{kl\eta\gamma}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}T^{\eta\gamma}_{kl}
{\ensuremath{\langle\!\langle c^+_{l\gamma-\sigma}c_{i\alpha-\sigma}c_{i\alpha\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}} \nonumber\\
&+& G^{(\mathrm{MF})\alpha\beta}_{ij\sigma}{\ensuremath{\langle n_{j\beta-\sigma} \rangle}} \nonumber\\
&-& \frac{J}{2}\sum_{k\eta}G^{(\mathrm{MF})\alpha\eta}_{ik\sigma}
{\ensuremath{\langle\!\langle S^{-\sigma}_{k\eta}n_{k\eta\sigma}c_{k\eta-\sigma};c^{+}_{j\beta\sigma}\rangle\!\rangle}}.\end{aligned}$$ Adding (\[eq:2\_sol\]) and (\[eq:3\_sol\]) and subtracting (\[eq:1\_sol\]), one finally gets: $$\begin{aligned}
\lefteqn{
{\ensuremath{\langle\!\langle S^{z}_{i\alpha}n_{i\alpha-\sigma}c_{i\alpha\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}
= z_{\alpha}S\left(G^{\alpha\beta(\mathrm{MF})}_{ij\sigma}
{\ensuremath{\langle n_{j\beta-\sigma} \rangle}}\right.}\nonumber\\
&\left. -\frac{J}{2}\sum_{l\gamma}G^{\alpha\gamma(\mathrm{MF})}_{il\sigma}
{\ensuremath{\langle\!\langle S^{-\sigma}_{l\gamma}n_{l\gamma\sigma}c_{l\gamma-\sigma};c^+_{j\beta\sigma}\rangle\!\rangle}}\right).\end{aligned}$$
[^1]: Although it is known that the Néel state is not the ground state of a Heisenberg antiferromagnet, deviations from saturation are small for a local magnetic moment $S > \frac{1}{2}$ (see e.g. Ref. \[\]).
---
abstract: 'Humans are able to comprehend information from multiple domains, e.g. speech, text and vision. With the advancement of deep learning technology there has been significant improvement in speech recognition. Recognizing emotion from speech is an important aspect of this, and with deep learning technology emotion recognition has improved in accuracy and latency, yet many challenges to improving accuracy remain. In this work, we explore different neural networks to improve the accuracy of emotion recognition. Among the architectures explored, we find a (CNN+RNN) + 3DCNN multi-model architecture which processes audio spectrograms and corresponding video frames, giving emotion prediction accuracies of 54.0% among 4 emotions and 71.75% among 3 emotions on the IEMOCAP[@IEMOCAP] dataset.'
author:
- |
Mandeep Singh\
SCPD, Stanford University\
Stanford, CA\
[[[email protected]](https://www.linkedin.com/in/smandeep/)]{}
- |
Yuan Fang\
ICME, Stanford University\
Stanford, CA\
[<[email protected]>]{}
bibliography:
- 'egbib.bib'
title: Emotion Recognition in Audio and Video Using Deep Neural Networks
---
Introduction
============
Emotion recognition is an important ability for good interpersonal relations and plays an important role in effective interpersonal communication. Recognizing emotions, however, can be hard; even for human beings, the ability to recognize emotions varies from person to person.
The aim of this work is to recognize emotions in audio, and in audio+video, using deep neural networks. We attempt to understand bottlenecks in existing architectures and input data, and explore novel extensions of existing architectures to increase emotion recognition accuracy.
The dataset we used is IEMOCAP[@IEMOCAP], which contains 12 hours of audiovisual data of 10 actors (5 female, 5 male) speaking in anger, happiness, excitement, sadness, frustration, fear, surprise, other and neutral states.
Our work consists of two main stages. First, we build neural networks to recognize emotions in audio by replicating and expanding upon the work of [@inproceedings]. The input to these models is the audio spectrogram converted from the recording of an actor speaking a sentence, and the single output is the emotion the actor expresses when saying that sentence. The models predict one of four emotions, namely happiness, anger, sadness, or neutral state, chosen for comparison with [@inproceedings]. The deep learning architectures we explored were CNN, CNN + RNN, and CNN + LSTM.
After achieving accuracy on audio comparable to [@inproceedings], we build models which predict emotions using both the audio spectrogram and the video frames, since we believe video frames contain additional emotion-related information that can help achieve better emotion prediction performance. The inputs of these models are the audio spectrogram and video frames converted and extracted from a video recording of an actor speaking one sentence. The output is still one of the four emotions mentioned above. Inspired by the work of [@DBLP:journals/corr/TorfiIND17], we explore a model made of two sub-networks: the first is a 3D CNN which takes in the video frames, and the second is a CNN+RNN which takes in the audio spectrogram; the last layers of the two sub-networks are concatenated and followed by a fully connected layer that outputs the prediction.
The metric we use for evaluation is the overall accuracy for both the audio and audio+video models.
Related Work
============
Emotion recognition is an important research area that many researchers have worked on in recent years using various methods. Speech signals [@kwon2003emotion], facial expressions [@gouta2000emotion], and physiological changes [@kim2008emotion] are some of the common modalities used to approach the emotion recognition problem. In this work, we use audio spectrograms and video frames for emotion recognition.
It has been shown that emotion recognition accuracy can be improved by statistical learning of low-level features (frequency and signal power intensity) in the different layers of a deep network. Mel-scale spectrograms were demonstrated to be useful for speech recognition in [@deng_2014], and state-of-the-art speech recognition methods use linearly spaced audio spectrograms, as described in [@AmodeiABCCCCCCD15] [@HannunCCCDEPSSCN14]. Our work on emotion recognition using audio spectrograms follows the approach described in [@inproceedings]. An audio spectrogram is an image of an audio signal with three main components: 1. time on the x-axis; 2. frequency on the y-axis; 3. power intensity on a colorbar scale, which can be in decibels (dB), as shown in Fig. 1. [@sahu] covers machine learning methods to extract temporal features from audio signals. Such classical machine learning models have good training and prediction latency but low prediction accuracy; CNN models that use audio spectrograms to detect emotion have better prediction accuracy.
Comparing the CNN networks used in [@inproceedings] and [@DBLP:journals/corr/TorfiIND17] for training on audio spectrograms, [@inproceedings] uses a wider kernel window size with zero padding while [@DBLP:journals/corr/TorfiIND17] uses a smaller window size and no zero padding. With a wider kernel window size, each unit sees a larger portion of the input, which allows for more expressive power, and zero padding becomes important to avoid losing border features. The zero padding decreases as the number of CNN layers increases in the architecture used in [@inproceedings]. [@DBLP:journals/corr/TorfiIND17] avoids zero padding in order not to introduce virtual zero-energy coefficients, which are not useful for extracting local features. One drawback we see in [@DBLP:journals/corr/TorfiIND17] is that it does not compare performance between the audio model and the audio+video model. One strength of [@DBLP:journals/corr/TorfiIND17] is that it does not require noise removal from the input audio data, while [@inproceedings] applies noise removal to the audio spectrograms before training.
To achieve better prediction accuracy, a natural progression from emotion recognition using audio spectrograms is to include facial features extracted from video frames. [@DBLP:journals/corr/abs-1902-01019] and [@article_facial_video] implement facial emotion recognition using images and video frames respectively, but without audio. [@DBLP:journals/corr/TorfiIND17] and [@DBLP:journals/corr/abs-1807-00230] implement neural network architectures which process audio spectrograms and video frames to recognize emotion. Both implement a self-supervised model for cooperative learning of audio and video models on different datasets; [@DBLP:journals/corr/abs-1807-00230] further performs supervised learning on the pre-trained model for classification. The models proposed by [@DBLP:journals/corr/TorfiIND17] and [@DBLP:journals/corr/abs-1807-00230] are very similar: both are two-stream models with one stream for audio data and one for video data. The only differences are the kernel sizes, number of layers, and input data dimensions. These hyperparameters are set differently because the input data differ: [@DBLP:journals/corr/TorfiIND17] tends to use smaller input and kernel sizes because its input images only capture the mouth, which carries less information than the images capturing the full movement of a person used in [@DBLP:journals/corr/abs-1807-00230].
Dataset & Features
==================
Dataset
-------
The dataset we use is the IEMOCAP [@IEMOCAP] corpus, as it is the best-known comprehensively labeled public corpus of acted emotional speech. [@lee2015] used the IEMOCAP dataset to generate results that were state of the art at the time. IEMOCAP contains 12 hours of audio and visual data of conversations between two persons (1 female and 1 male per conversation, with 5 females and 5 males in total), where each sentence in a conversation is labelled with one emotion: anger, happiness, excitement, sadness, frustration, fear, surprise, other, or neutral state.
Data pre-processing
-------------------
### Audio Data Pre-processing
The IEMOCAP corpus contains audio wav files of various time lengths, with the actual emotion label marked for each corresponding time segment. The audio wav files in IEMOCAP are recorded at a sample rate of 22 KHz. The audio spectrogram is extracted from the wav file using the librosa[^1] python package with a sample rate of 44 KHz. A 44 KHz sample rate was used because, as per the Nyquist-Shannon sampling theorem[^2], the sampling frequency should be at least twice the signal frequency to fully recover a signal; since audio signal frequencies range from 20 Hz to 20 KHz, 44 KHz is a commonly used sampling rate. The spectrograms were generated in 2 variants: 1. the original time length of the utterance of a sentence or emotion; 2. each utterance clipped into 3-second clips. A further segmentation was done with and without noise cleanup. We name these segmentations DS I, DS II, DS III and DS IV, as summarized in Table 1. Model training is done on these data segments separately.
Dataset segmentation Type Noise Cleanup Name
----------------------------------------- --------------- --------
Original time length of utterance No DS I
Clip each utterance into 3 second clips No DS II
Original time length of utterance Yes DS III
Clip each utterance into 3 second clips Yes DS IV
: Segmentation of input data generation.
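A minimal sketch of the spectrogram extraction described above (this sketch uses `scipy.signal.spectrogram` rather than the librosa call used in the paper, and the STFT window parameters are illustrative assumptions):

```python
import numpy as np
from scipy import signal

def audio_to_db_spectrogram(wav, fs=44_100, clip_db=60.0):
    """STFT power spectrogram in decibels, clipped to a fixed +/-60 dB
    intensity scale so all emotions share the same colour range."""
    f, t, sxx = signal.spectrogram(wav, fs=fs, nperseg=1024, noverlap=512)
    db = 10.0 * np.log10(sxx + 1e-12)      # power -> dB, avoid log(0)
    return f, t, np.clip(db, -clip_db, clip_db)
```

The fixed clipping plays the role of the uniform colorbar scale: every spectrogram image maps the same dB range to the same colours.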
In order to get rid of the background noise we applied a bandpass filter[^3] [^4] between 1 Hz and 30 KHz. Denoising (noise cleanup) of the input audio signal for data augmentation is also followed by [@AmodeiABCCCCCCD15]. Sentence utterances shorter than 3 seconds are padded with noise to maintain uniformity of the noise frequency and amplitude with respect to the noise in other parts of the signal. Initially, zero padding to the 3-second time scale was also tried, followed by adding noise with a signal-to-noise ratio (SNR) of 1 throughout the signal, but this distorted the original audio signal. The resulting signal is then denoised. Denoising makes the frequency, time-scale, and amplitude features of the input audio signal more visible, in the hope of better prediction accuracy per emotion. All audio spectrograms are generated with the same colorbar intensity scale (+/- 60 dB) to maintain uniformity across emotions; this is similar to normalization of the data. As seen in Fig. 2, after denoising only the signal that contains actual information remains at high power intensity, while other regions of the spectrogram remain at low power intensity relative to the signal of interest. Compare Fig. 1, where some signal intensity, which is actually noise, is observed throughout the time scale. The generated spectrogram images are of size 200x300 pixels.
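A sketch of such a filter with a Butterworth design follows. The band edges here are assumptions: the quoted 30 KHz upper edge exceeds the Nyquist frequency (fs/2 = 22.05 KHz) at a 44.1 KHz sample rate, so this sketch keeps the audible 20 Hz to 20 KHz band instead.

```python
import numpy as np
from scipy import signal

def bandpass(wav, fs=44_100, lo=20.0, hi=20_000.0, order=2):
    """Zero-phase Butterworth band-pass for background-noise suppression.

    lo/hi are illustrative: the paper quotes 1 Hz-30 KHz, but 30 KHz is
    above Nyquist for fs = 44.1 KHz, so the audible band is used here.
    """
    nyq = fs / 2.0
    sos = signal.butter(order, [lo / nyq, hi / nyq],
                        btype="bandpass", output="sos")
    return signal.sosfiltfilt(sos, wav)  # filtfilt: no phase/time shift
```

Forward-backward filtering (`sosfiltfilt`) avoids shifting the signal in time, which matters when spectrogram frames are later aligned with video frames.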
The total count of 3-second audio spectrograms among the 4 emotions is summarized in Table 2. The happy emotion count is significantly low, so we duplicated the happy data to reach a total count of 1600; the anger count was likewise duplicated, while the sad and neutral counts were reduced to 1600 data points each. A total of 6400 images is used for training the model. Data balance is crucial for the model to train well. 400 images from each emotion are used for model validation; the images used for validation are never part of the training set.
At first, we started with audio spectrograms that contained x/y axes and a colorbar scale, but we removed them after learning that including the axes and scale could contribute negatively to prediction accuracy.
To improve class accuracy, the input audio spectrograms were augmented by cropping and rotation. Each image was cropped by 10 pixels from the top and resized back to 200x300 pixels; this cropping simulates a small frequency change in the emotion. Similarly, each image was rotated by +/- 10 degrees; rotation also simulates a frequency change but additionally shifts the time scale, and since augmentation that changes the time scale is not preferred, the rotation was kept to a small 10 degrees. With cropping and rotation, the total count of training data becomes 19200. Model training was done separately with the original images and with the augmented images for comparison. Horizontal flips were avoided, as flipping the time scale enacts a person speaking in reverse, which would lower model prediction accuracy.
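The two augmentations can be sketched with scipy as follows (the interpolation/resampling method is an illustrative choice, not taken from the paper):

```python
import numpy as np
from scipy import ndimage

def augment_spectrogram(spec, crop_px=10, angle_deg=10.0):
    """Return the two augmented variants: a top-crop (small frequency
    shift) and a small rotation, both resized back to the input size."""
    h, w = spec.shape
    cropped = ndimage.zoom(spec[crop_px:, :],           # drop top rows
                           (h / (h - crop_px), 1.0), order=1)
    rotated = ndimage.rotate(spec, angle_deg, reshape=False, order=1)
    return cropped, rotated
```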
Model training on audio spectrograms containing the full time length, rather than 3 seconds, was done separately: each 3-second audio spectrogram was replaced with the corresponding full-time-length spectrogram, maintaining the data count for balancing.
Visual analysis of around 100 audio spectrograms was done. The maximum frequency observed among these spectrograms is around 8 KHz, which means around 60% of each spectrogram image is blue and carries no information from the emotion perspective. All input audio spectrograms were therefore cropped from the top by 60% and resized back to 200x300 pixels. An ideal method would be to generate spectrograms with a fixed frequency scale if the frequency range is known in advance.
Emotion Count of data points
--------- ----------------------
Happy 786
Sad 1752
Anger 1458
Neutral 2118
: Data count of each emotion.
### Video Data Pre-processing
Since our work also includes a video model to explore room for improvement in emotion recognition accuracy, we also pre-processed the video data. We first clipped each video file into sentences, matching how we processed the audio files; this ensures that we query the part of the video file that corresponds to a given audio spectrogram. We then extracted 20 images per 3 seconds from each avi file corresponding to a 3-second audio spectrogram. The video frames contain both actors, so each frame was cropped from the left or right to capture only the actor whose emotion is being recorded, and then cropped further to cover the actor's face/head. The final resolution of the video frames is 60x100. One limitation of the dataset is that the actors do not speak facing the camera, so the full facial expression corresponding to a given emotion is not visible.
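The frame cropping can be sketched as follows. The exact crop boxes are not specified in the text, so the halves-then-upper-half scheme here is an illustrative assumption (shown for grayscale frames):

```python
import numpy as np
from scipy import ndimage

def crop_actor(frame, actor_on_left=True, out_hw=(60, 100)):
    """Crop one actor out of a two-person frame, then the head region,
    and resize to the 60x100 resolution used for the video model."""
    h, w = frame.shape
    half = frame[:, : w // 2] if actor_on_left else frame[:, w // 2 :]
    head = half[: h // 2, :]       # keep the upper half for the face/head
    zoom = (out_hw[0] / head.shape[0], out_hw[1] / head.shape[1])
    return ndimage.zoom(head, zoom, order=1)
```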
While extracting the audio spectrograms and video frames, we observed memory usage above 12 GB on the machine, which led to crashes. Therefore, each audio and video file was processed individually in batch: a python script[^5] was launched for each file through a unix shell script.
Methods & Model Architecture
============================
In this section, we describe the models we built for emotion recognition in audio ('Audio Models' subsection) and in audio+video ('Audio+Video Models' subsection).
Audio Models
------------
By replicating and expanding upon the network architecture used in [@inproceedings], we formulate three different models. The first is a CNN model, which consists of three 2D convolutional layers with maxpooling, followed by two fully connected layers, as shown in Fig.\[fig:audiom\]. In the second architecture, which we call CNN+LSTM in this work, we add an LSTM layer after the convolutional layers of the CNN model. In the third model, named CNN+RNN, we replace the LSTM layer with a vanilla RNN layer; its architecture is shown in Fig.\[fig:audiom\]. The loss we use for training the model is the cross entropy loss. $$\begin{aligned}
L_\text{cross entropy}=\frac{1}{N}\sum_{n=1}^N-\log\left(\frac{\exp(x_c^n)}{\sum_j \exp(x_j^n)}\right)\end{aligned}$$ where $N$ is the number of data points in the dataset, $x_c^n$ is the true class score of the $n$-th data point, and $x_j^n$ is the $j$-th class score of the $n$-th data point. Minimizing the cross entropy loss forces our model to learn emotion-related features from the audio spectrogram, because the loss is minimized only when, for each data point, the score of the true class is substantially larger than the scores of all other classes.
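This loss can be written as a short numpy sketch (illustrative, with the standard log-sum-exp stabilization):

```python
import numpy as np

def cross_entropy(scores, labels):
    """Mean softmax cross-entropy: scores is (N, C), labels is (N,) ints."""
    shifted = scores - scores.max(axis=1, keepdims=True)   # stability
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # pick out the log-probability of each true class, average over batch
    return -log_softmax[np.arange(len(labels)), labels].mean()
```

With uniform scores over C classes the loss is log(C); it approaches zero only when the true class dominates the softmax, which is exactly the behaviour the paragraph above describes.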
Audio+Video Models
------------------
Inspired by the work of [@DBLP:journals/corr/TorfiIND17], our audio+video model is a two-stream network consisting of two sub-networks, as shown in Fig.\[fig:videom\](a). The first sub-network is the audio model, for which we use the best-performing audio model we have built, CNN+RNN, as shown in Fig.\[fig:audiom\]. Its architecture is the same as the audio model except that the original output layer is dropped in order to obtain high-level features of the audio spectrograms, as shown in Fig.\[fig:videom\](CNN+RNN). The second sub-network is the video model, made of four 3D convolutional layers and three 3D maxpooling layers, followed by two fully connected layers, as shown in Fig.\[fig:videom\](3D CNN). Finally, the last layers of the two sub-networks are concatenated together, followed by one output layer, as shown in Fig.\[fig:videom\](a).
We train this audio+video model using two different methods: semi-supervised training and supervised training. For the semi-supervised method, we first pre-train our model using video frames and audio spectrograms from the same video and from different videos, as shown in Fig.\[fig:videom\](b). This forces the model to learn the correlation between the visual and auditive elements of a video. The input of the pre-training process has three distinct types: positive (the audio spectrogram and video frames are from the same video); hard negative (from different videos with different emotions); super hard negative (from different videos with the same emotion). The loss function we use for pre-training is the contrastive loss. $$L_{\text{contrastive loss}}=\frac{1}{N}\sum_{n=1}^N L_1^n+L_2^n$$ where $$L_1^n=(y^n)\left\|f_v(v^n)-f_a(a^n)\right\|_2^2$$ $$L_2^n=(1-y^n)\text{max}(\eta-\left\|f_v(v^n)-f_a(a^n)\right\|_2,0)^2$$ $N$ is the number of datapoints in the dataset, $v^n, a^n$ are the video frames and audio spectrogram of the $n$-th datapoint, $f_v, f_a$ are the video and audio sub-networks, and $y^n$ is one if the video frames and audio spectrogram are from the same video, and zero otherwise. $\eta$ is the margin hyperparameter. $\left\|f_v(v^n)-f_a(a^n)\right\|_2$ should be small when the video frames and audio spectrogram are from the same video, and large when they come from different videos. Therefore, by minimizing the contrastive loss, the audio and video models are forced to output similar values when their inputs are from the same video, and very distinct values when they are not. This allows the model to learn the connection between audio and visual elements from the same video.
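The contrastive loss above reduces to a few numpy lines (illustrative sketch; `fv` and `fa` stand in for the outputs of the two sub-networks):

```python
import numpy as np

def contrastive_loss(fv, fa, y, eta=1.0):
    """Contrastive loss over a batch of embedding pairs.

    fv, fa: (N, D) video/audio embeddings; y: (N,) 1 for matching pairs,
    0 otherwise; eta is the margin hyperparameter.
    """
    d = np.linalg.norm(fv - fa, axis=1)            # pairwise L2 distance
    l1 = y * d ** 2                                # pull matching pairs together
    l2 = (1 - y) * np.maximum(eta - d, 0.0) ** 2   # push others past the margin
    return (l1 + l2).mean()
```

Note that a non-matching pair contributes nothing once its distance exceeds the margin, so the model is not rewarded for pushing already-separated pairs further apart.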
After pre-training is done, we do supervised learning on the pre-trained model where the input is the audio spectrogram and video frames of a video and output is the emotion predicted, as shown in Fig.\[fig:videom\](a). The loss of our model is the cross entropy, and the formula is the same as in Equation.1.
In the second training method we perform supervised training directly on the model, without the pre-training process.
Experiments & Results
=====================
For model evaluation, prediction accuracy is the key metric, compared against the accuracy reported in [@inproceedings]. Since we balanced the data counts, the overall accuracy and the class accuracy as reported in [@inproceedings] are mathematically equal in our work. Our work aimed to achieve a prediction accuracy of around 60% over 4 emotions.
We trained the model on all 4 dataset segmentations and observed that the data with the original time scale and without noise cleanup gives the best accuracy; the results reported are based on this dataset. Spectrograms with noise removed sound promising in theory, but this did not work, for 2 possible reasons. First, the algorithm used to remove noise reduces the signal amplitude, which may suppress some features; an algorithm that amplifies the signal back needs to be explored. Some techniques, e.g. subtracting the noise from the signal and multiplying the final signal by a constant, were explored but all resulted in signal distortion. Second, having noise in the spectrogram simulates the real scenario, and during model training the noise could indirectly act as a regularizer. [@DBLP:journals/corr/TorfiIND17] also does not remove noise from the input audio spectrograms.
Hyperparameters
---------------
We started with prediction on 4 emotions, and most of the work, results and analysis is based on these 4 emotions. Our validation accuracy did not go beyond 54.00%, and we saw overfitting during model training beyond this point. This led us to experiment with various hyperparameters in the optimizer and in the network layers, e.g. kernel size, input and output size of each layer, dropout, batchnorm, data augmentation, and l1 & l2 regularization.
The Adam optimizer was used with a learning rate of 1e-4, as this gave the best accuracy; with 1e-3 and 1e-5 the model did not train well. A weight decay (the parameter controlling l2 regularization) of 0.01 in the Adam optimizer improved the accuracy by 1%; values of 0.005 and 0.02 were also tried but did not help. All other optimizer parameters were kept at their defaults.
Enabling l1 regularization, data augmentation by rotation and cropping, or batchnorm resulted in no improvement in accuracy. This is possibly because the model had already learned all the features it could from the available data given the model architecture.
Tuning the dropout probability gave optimal values of 0.2 for the last fully connected layer and 0.1 for the dropout in the RNN layer.
The input and output dimensions in the audio network layers were doubled and quadrupled, which improved accuracy by 1-2%, though increasing these dimensions also increased memory usage during training. We attempted to extend this to the video network, but with only 12 GB of memory on the machine we were unable to carry out this experiment; we therefore strongly believe there is room for improvement given more experimentation on a machine with larger memory. The accuracy improvement is also evident from the progression of model architectures we used, from CNN to CNN+RNN to CNN+RNN+3DCNN, which have progressively more parameters with which to learn features.
80% of the data points were used for training and the rest for validation. Batches of 64 data points per iteration were used to train the model; larger batches resulted in long iteration times and high memory usage, so 64 was picked.
Using normalization in the image transformation, with mean \[0.485, 0.456, 0.406\] and standard deviation \[0.229, 0.224, 0.225\], on all images improved accuracy by 0.37%.
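This per-channel standardization is a one-liner in numpy (the mean/std values are the ones quoted above, applied to HxWx3 images scaled to [0, 1]):

```python
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406])   # per-channel mean
STD = np.array([0.229, 0.224, 0.225])    # per-channel standard deviation

def normalize(img):
    """Standardize an HxWx3 image with values in [0, 1], channel-wise."""
    return (img - MEAN) / STD
```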
Validation Set Accuracy
-----------------------
Table \[fig:tab\] summarizes the validation set accuracy obtained among different architectures.
Architecture Accuracy(%) Data Aug. Emotion
--------------- ------------- ----------- ---------
CNN 52.23 No H,S,A,N
CNN 51.90 Yes H,S,A,N
CNN+LSTM 39.77 No H,S,A,N
CNN+LSTM 39.65 Yes H,S,A,N
CNN+RNN 54.00 No H,S,A,N
CNN+RNN 70.25 No S,A,N
CNN+RNN+3DCNN 51.94 No H,S,A,N
CNN+RNN+3DCNN 71.75 No S,A,N
: Validation set accuracy over CNN, CNN+LSTM, CNN+RNN & CNN+RNN+3DCNN among 4 and 3 different emotions. []{data-label="fig:tab"}
Loss & Classification Accuracy History on CNN+RNN+3DCNN
-------------------------------------------------------
Fig. \[fig:long\_loss4\] shows the contrastive loss curve obtained during self-supervised pre-training. We ran the self-supervised model for 5 epochs and for 10 epochs separately, and fed the learned weights from each experiment into the CNN+RNN+3DCNN model for classification training. The self-supervised model run for 5 epochs gave classification accuracy 0.5% better than the 10-epoch run, which could be attributed to overfitting of the weights learned during the longer pre-training.
Fig. \[fig:long\_loss\] shows the softmax/cross-entropy loss curve obtained with the best model, CNN+RNN+3DCNN. Since the loss is reported per iteration, it appears noisy, but per epoch it decreases on a logarithmic scale.
Fig. \[fig:long\_acc\] shows the classification accuracy history of the best model, CNN+RNN+3DCNN. We obtained a best validation accuracy of 71.75% considering 3 emotions (sad, anger, neutral).
Confusion Matrix on CNN+RNN & CNN+RNN+3DCNN
-------------------------------------------
Fig. \[fig:long\_conf\] is the confusion matrix obtained with CNN+RNN. From this confusion matrix we see that only the happy emotion is predicted poorly compared to the other emotions. This led us to explore the CNN+RNN & CNN+RNN+3DCNN architectures on only 3 emotions (instead of 4), to understand whether we see a performance improvement when switching from audio-only inputs to audio+video inputs. Fig. \[fig:long\_conf\_3\_emo\] is the confusion matrix obtained with the best model, CNN+RNN+3DCNN.
Results Analysis
----------------
From Table \[fig:tab\], considering 4 emotions, we can see that CNN+RNN is the best-performing architecture and that data augmentation doesn’t improve the accuracy. CNN does not work as well as CNN+RNN because CNN has the same architecture as the first few layers of CNN+RNN and is comparatively simple; CNN+RNN therefore learns higher-level features and performs better. CNN+LSTM does have a more complex architecture; however, while tuning the hyperparameters we found that accuracy improved slightly when increasing the dropout probability, indicating that CNN+LSTM could be overly complex for our dataset and training purpose. Also, added model complexity requires more careful hyperparameter tuning, and since CNN+RNN gives a relatively good performance compared with [@inproceedings], we decided not to pursue CNN+LSTM further.
From Table \[fig:tab\], it is also evident that the CNN+RNN+3DCNN architecture, which uses video frames along with audio spectrograms, is the best when considering 3 emotions, but the accuracy did not improve significantly over CNN+RNN. This is because the cropping window used to focus on the face/head for facial emotion recognition was large, as the actors are not facing the camera and moved during their speech. Auto-detecting the face/head with a detection model and then cropping based on the bounding box would be ideal, and accuracy would be expected to increase significantly. Considering 4 emotions, CNN+RNN+3DCNN performed worse than CNN+RNN because the model’s prediction accuracy for the happy emotion is already poor due to its low data count; adding video frames, which capture facial expression only from the side, just confuses the model further.
Data augmentation does not increase the validation accuracy and even makes the model perform slightly worse. This could be because the images generated by cropping and rotation lose some emotion-related features, since these operations alter the frequency and time scales of the spectrogram. This is similar to altering the pitch of the audio or reversing the audio of a sentence, and could confuse the model.
From the confusion matrix, we observed that happiness prediction accuracy is low compared to the other emotions. One possible reason is that the happiness data count is very low compared to the other emotions, and over-sampling the happiness data by repetition is not enough. More happiness data is expected to improve happiness prediction accuracy.
Comparing our results with [@inproceedings], we lag their class accuracy by 5.4%, but comparing the overall accuracy considering 3 emotions, our work achieved an accuracy of 71.75%, which is better by 2.95%.
Conclusion/Future Work
======================
Our work demonstrated emotion recognition using audio spectrograms through various deep neural networks like CNN, CNN+RNN & CNN+LSTM on the IEMOCAP[@IEMOCAP] dataset. We then explored combining audio with video to achieve better performance through CNN+RNN+3DCNN. We demonstrated that CNN+RNN+3DCNN performs better as it learns emotion features from the audio signal (CNN+RNN) and also learns emotion features from facial expressions in video frames (3DCNN), the two complementing each other.
To further improve the accuracy of our model we plan to explore several directions. We want to try more noise-removal algorithms and generate audio spectrograms free of noise; this will help us analyze whether removing noise actually helps or whether the noise acts as a regularizer that need not be removed. We also want to explore how the model predicts emotion when multiple people are speaking. Next, we want to explore auto-cropping around the face/head in video frames; we strongly believe this will significantly improve prediction accuracy. As far as data augmentation is concerned, even though none of the direct data augmentation methods proved useful, adding a signal with very low amplitude and varying frequency onto the speech signal and generating the audio spectrogram from the resulting signal would create unique data points and help reduce model overfitting. Given machines/GPUs with more memory, we would like to experiment with increasing the input and output dimensions of each layer in the network to find the optimal point; there is definitely room for better accuracy with this method. We also want to measure prediction latency across the different models and their architecture sizes, and to further fine-tune the CNN+LSTM network to see the best accuracy it can achieve. We tried transfer learning using ResNet18 but did not achieve good results; more experimentation is needed on how to transfer-learn from existing models. Lastly, we want to try the model on all 12 emotions in the dataset, understand the bottlenecks, and come up with neural network solutions that can predict with high accuracy.
Link to github code
===================
<https://github.com/julieeF/CS231N-Project>
Contributions & Acknowledgements
================================
[Mandeep Singh](https://www.linkedin.com/in/smandeep/): Mandeep is a student at Stanford under SCPD. He worked at Intel as a Design Automation Engineer for 8 years. Prior to joining Intel, he completed a master’s in electrical engineering specializing in analog & mixed-signal design at SJSU.
Yuan Fang: Yuan is a master’s student at Stanford in the ICME department. Her interests lie in machine learning & deep learning.
We would like to thank the CS231N Teaching Staff for guiding us through the project. We also want to thank Google Cloud Platform and Google Colaboratory for providing us free resources to carry out experimentation involved in this work.
[^1]: https://librosa.github.io/librosa/index.html
[^2]: https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon\_sampling\_theorem
---
abstract: 'We report on the temporal behavior of the high-energy power law continuum component of gamma-ray burst spectra with data obtained by the Burst and Transient Source Experiment. We have selected 126 high fluence and high flux bursts from the beginning of the mission up until the present. Much of the data were obtained with the Large Area Detectors, which have nearly all-sky coverage, excellent sensitivity over two decades of energy and moderate energy resolution, ideal for continuum spectra studies of a large sample of bursts at high time resolution. At least 8 spectra from each burst were fitted with a spectral form that consisted of a low-energy power law, a spectral break at middle energies and a high-energy continuum. In most bursts (122), the high-energy continuum was consistent with a power law. The evolution of the fitted high-energy power-law index over the selected spectra for each burst is inconsistent with a constant for 34% of the total sample. The sample distribution of the average value for the index from each burst is fairly narrow, centered on $-2.12$. A linear trend in time is ruled out for only 20% of the bursts, with hard-to-soft evolution dominating the sample (100 events). The distribution for the total change in the power-law index over the duration of a burst peaks at the value $-0.37$, and is characterized by a median absolute deviation of 0.39, arguing that a single physical process is involved. We present analyses of the correlation of the power-law index with time, burst intensity and low-energy time evolution. In general, we confirm the general hard-to-soft spectral evolution observed in the low-energy component of the continuum, while presenting evidence that this evolution is different in nature from that of the rest of the continuum.'
author:
- 'Robert D. Preece, Geoffrey N. Pendleton, Michael S. Briggs, Robert S. Mallozzi, and William S. Paciesas'
- 'David L. Band and James L. Matteson'
- 'C. A. Meegan'
title: |
BATSE Observations of Gamma-Ray Burst Spectra. IV.\
Time-Resolved High-Energy Spectroscopy
---
Introduction
============
In the first six years of operation, the Burst and Transient Source Experiment (BATSE), on board the [*Compton Gamma Ray Observatory*]{} (CGRO), has accumulated a vast amount of spectral data on gamma-ray bursts. Although the BATSE Large Area Detectors (LADs) have only moderate energy resolution compared with the Spectroscopy Detectors (SDs), they have unprecedented effective area over their entire energy range (28 keV – 1.8 MeV). By studying spectroscopy data from the LADs for bright events such as those reported on by [@ford95], who used SD data, we can track the evolution of fitted spectral parameters with finer time resolution, and we can extend the analysis to fainter events. In this paper, we analyze 126 bursts at high time resolution, with more than 8 spectra per event, concentrating on the higher-energy behavior, where it was difficult for the SDs to obtain good statistics.
As with much of the field of GRB studies, theoretical modeling of continuum spectral emission naturally breaks into two periods: before and after the publication of the first BATSE results ([@meegan92]). The paired observation of burst isotropy on the sky along with an inhomogeneous distribution of events with brightness, and presumably distance, has established the conclusion that GRBs occur much farther away, and are consequently much brighter, than previously expected. Instead of comprising a nearby Galactic disk population, burst sources either reside in a very large Galactic halo or else they are truly cosmological (we will not consider here another possible scenario: that bursts may arise in a local heliospheric halo, such as the Oort cloud \[e.g.: [@bickert94]; but also see: [@cbt94]\]). Such an uncertainty in distance has had dire theoretical consequences; no single model has surfaced that can accommodate both distance scales, since such a model would have to account for luminosities that differ by $\sim 10$ orders of magnitude. The early theoretical work was dominated by the physics of strong-magnetic field Galactic-disk neutron stars (see [@harding91], for a review), which has as its basis the efficient mechanism of quantum synchrotron emission. Of course, energization of these systems was a crucial problem, in that the emission timescales are on the order of $10^{-17}$ s, for a typical field strength of $10^{12}$ G required to produce a cyclotron absorption line fundamental at $\sim 20$ keV, as observed in X-ray pulsars ([@voges82]). Nevertheless, continuum modeling of then-current spectral data enjoyed a moderate success (many references in [@ho92]).
All this began to break down with the placement of burst sources no closer than a large Galactic halo, as most of the strong-field models have restrictive luminosity constraints. Cosmological burst emission scenarios proposed to date are less predictive, but have had little time yet to mature. For the most part, interest has been focused on merging neutron stars, since the total energy budget is about right for very distant events. What happens after the merger is what distinguishes the models from each other. A simple fireball was proposed by many workers ([@cavallo78], [@goodman86], [@piran94]). Non-thermal emission, such as is observed in GRB spectra, is very difficult to produce in an optically-thick source, although as a fireball expands and becomes optically thin, a high-energy power-law component becomes possible. However, it was soon realized that in the environment of two colliding compact objects, baryon contamination of the fireball would pose a problem, diverting energy from the direct production of fireball radiation into the acceleration of material (see discussion in [@fish95]). In order to address this problem, several workers proposed that the observed gamma-ray emission originates not in the original event but is a by-product of the kinetic energy gained at the expense of the fireball. Maximal acceleration of the explosion products leads to a relativistic blast front, which can cause shocks when colliding with interstellar material, either by encountering dense knots or eventually by sweeping up matter in the path of the shock front ([@rees92], [@meszaros93]). Shocks can also arise internal to the outgoing relativistic wind, in the case where the central engine is variable ([@reesmesz94], [@pacxu94]). It is important to note that the energy distribution of the shock-accelerated particles that gives rise to the observed emission is not predicted in any of these models; however, the distributions can be inferred from observation. 
The most efficient radiation mechanism is synchrotron, which produces a characteristic low-energy power-law behavior ([@katz94], [@tavani96]). The high-energy spectral shape for this model comes from the distribution of Lorentz factors for the baryons arising in the shock, which is typically a broken power law. Dispersion of blast-front velocities will give rise to observable hard-to-soft spectral evolution, both in individual pulses, as well as over the course of the entire event. Some of this behavior has been noted by [@ford95]; however, the opposite behavior is also seen, as well as a mixture of both.
Apart from the details of individual theoretical models, what new can be learned from analysis of spectral data? First, we have the well-known observation that GRB spectra are non-thermal. There is good evidence that some time-averaged GRB spectra are composed of power-law emission to several 10s of MeV in energy ([@matz85], [@hanlon95]). Burst emission indeed reaches very high energies, as evidenced by the single 18 GeV photon observed by [*EGRET*]{} ([@hurley94]), albeit at a considerable delay from the initial outburst. This alone can say much about the distribution of particles doing the emitting, as well as the possible optical depth. Other than a multi-temperature blackbody, which can mimic a power-law spectrum over a limited energy range, non-thermal emission arises from non-thermal particles. The evolution of the particle distribution, by cooling, for example, bears a simple relationship to the evolution of the emission for many radiation mechanisms. In the fireball model, optical depths are much greater than unity during the phase in which the matter gets accelerated. Thermal emission from the fireball is not observed in the gamma-ray band (although it may be visible in X-rays below $\sim 20$ keV, [@preece96]). In any cosmological model, it is very difficult to avoid conditions that will rapidly lead to large optical depths via the photon-photon pair production process. This occurs in the collision of two photons where the product of their energies is greater than $2 m_{\rm e}^2/(1 - {\rm cos}\psi)$, $\psi$ being the angle between the photons’ directions and $m_{\rm e}=511$ keV is the rest mass of the electron. Many bursts have substantial emission at 500 keV and greater, so if the high-energy emission is not to be quenched by a runaway pair-fireball, the emission must be highly beamed. 
The high energy power-law index and its time evolution should constrain the mechanism through which particles are giving up their energy in emission, as well as reflecting the behavior of the injection mechanism. Cases where the high-energy component comes and goes within a burst or is absent altogether may represent quenching by a mechanism that rapidly increases the optical depth, such as photon-photon pair production. In this case, it is expected that the intensity should drop during periods of quenched emission, or in other words, there should be a hardness-intensity correlation.
In this paper, we will present a study of the time-evolution of burst spectra, concentrating on the high-energy power-law component. In §2 we discuss the burst sample selection and the details of the spectral fitting analysis. The results are covered in §3 and their implications are discussed in §4. In the Appendices, we summarize the general characteristics of BATSE and then discuss in detail the energy calibration procedure for the LADs, which has made the current work possible.
Analysis Methodology
====================
In order to have a sample of bursts with at least 8 spectra with count rates high enough to obtain well-determined spectral parameters, we selected a subset of bright bursts based upon either the total fluence or peak flux, as determined from the LAD 4-channel discriminator data ([@meegan96]). We required a fluence ($> 20$ keV) greater than $4 \times 10^{-5}$ erg cm$^{-2}$. However, the set of bursts for which the fluence can be calculated is limited by several considerations, such as data availability, telemetry gaps in the data coverage and possible contamination of portions of some bursts with other active sources (in particular, with solar flares in the first year of the mission). Thus, we made an additional selection of those bursts which had a peak flux from 50 – 300 keV on the 256 ms timescale in the 3B Catalog (and later) above 10 photon s$^{-1}$ cm$^{-2}$. Each burst was then binned in time, so that each spectrum to be analyzed had a signal to noise ratio (SNR) of at least 45 in the typically 28 to 1800 keV energy range of the HERB data (High Energy Resolution Burst data: for a description of the instrument and spectroscopy datatypes, please see Appendix A). Bursts with less than 8 spectra after binning were dropped from the sample. Most spectra in bright bursts are well in excess of this SNR, which guarantees $> 2 \sigma$ of signal per energy resolution element, assuming a flat count spectrum. Roughly 20 resolution elements ($= \delta E$, the FWHM of the detector energy resolution) are required to cover the typical LAD energy range, thus the 128-channel HERB spectra are over-resolved in energy. LAD data types other than HERB are under-resolved, which is why HERB is preferred for spectroscopy. Some bursts did not have complete coverage in the HERB data (especially before a flight software revision that allowed longer accumulations during quiescent portions of a burst), in which case we used other available data, as discussed below. 
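The SNR-driven time binning described above can be sketched as a greedy merge of consecutive spectra. This is a sketch only: the exact SNR definition (here net signal over the Poisson error of the total counts) and the helper names are our assumptions, not the actual BATSE pipeline.

```python
import math

def bin_to_snr(counts, background, snr_min=45.0):
    """Greedily merge consecutive spectra until each time bin reaches
    the minimum signal-to-noise ratio, taken here as
    SNR = S / sqrt(S + B) with S the net counts and B the background.
    Trailing spectra that never reach the threshold are dropped
    in this sketch."""
    bins, s, b, start = [], 0.0, 0.0, 0
    for i, (c, bg) in enumerate(zip(counts, background)):
        s += c - bg          # accumulate net (background-subtracted) counts
        b += bg              # accumulate background counts
        if s > 0 and s / math.sqrt(s + b) >= snr_min:
            bins.append((start, i))
            s, b, start = 0.0, 0.0, i + 1
    return bins

# ten spectra of 600 total counts each over a background of 100 each:
# five consecutive spectra must be merged to reach SNR >= 45
bins = bin_to_snr([600.0] * 10, [100.0] * 10)
```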
There are 126 BATSE bursts in our sample matching these criteria.
Background was determined independently for each channel, typically using spectra from within $\pm 1000$ s of the burst trigger (giving at least three background HER spectra before and after the burst). The form of the background model was a fourth-order polynomial in each energy channel, where the fitted rates are time-averaged over each spectral accumulation, rather than determined at the centers. This was done to avoid underestimating the background rate at a peak or overestimating it at a valley. The SNR was determined by comparison with the chosen background model, interpolated to the time of the accumulated spectrum.
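A background model of this kind can be sketched per energy channel with a least-squares polynomial fit. The helper names and synthetic rates below are illustrative; the actual analysis time-averages the fitted model over each spectral accumulation rather than evaluating it at bin centers.

```python
import numpy as np

def fit_background(t, rate, deg=4):
    """Least-squares fit of a fourth-order polynomial to the
    background rate in a single energy channel."""
    return np.polyfit(t, rate, deg)

def interpolate_background(coeffs, t_spec):
    """Evaluate the background model at the time of a burst spectrum."""
    return np.polyval(coeffs, t_spec)

# synthetic slowly varying background over +/-1000 s around the trigger
# (times rescaled to kiloseconds for numerical stability of the fit)
t = np.linspace(-1000.0, 1000.0, 50) / 1000.0
rate = 100.0 + 10.0 * t + 5.0 * t**2
coeffs = fit_background(t, rate)
bkg_at_trigger = interpolate_background(coeffs, 0.0)
```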
Spectra were fitted by one of several spectral forms, depending upon the best fit obtained to the average spectrum over the entire burst. The primary spectral form we used is the function of [@band92] (GRB, in table \[table1\]), which consists of two smoothly-joined power-laws: $$\begin{aligned}
f(E) & = & A (E/100)^{\alpha} \exp{(-E(2+\alpha)/E_{\rm peak})}\nonumber\\
{\rm if} \quad E & < & (\alpha-\beta)E_{\rm peak}/(2+\alpha){\rm ,}\\
{\rm and} \quad f(E) & = & A \{(\alpha-\beta)
E_{\rm peak}/[100(2+\alpha)]\}^{(\alpha-\beta)} \exp{(\beta-\alpha)}
(E/100)^{\beta}\nonumber\\
{\rm if} \quad E & \geq & (\alpha-\beta)E_{\rm peak}/(2+\alpha){\rm .} \nonumber\end{aligned}$$ The two power-law indices, $\alpha$ and $\beta$, are constrained such that the resulting model is always concave downwards ($\alpha > \beta$; our definition includes a possible minus sign for each index). If, in addition, the high-energy power-law index ($\beta$) is less than $-2$, the model peaks in $\nu
\cal{F}_{\nu}$ (that is, $E^2$ times the photon spectrum) within the BATSE energy range. The model is parameterized by the energy of the peak in $\nu
\cal{F}_{\nu}$ ($E_{\rm{peak}}$), rather than the energy of the break between the power laws ($E_0 = E_{\rm{peak}} (2+\alpha)/(\alpha-\beta)$). If the fitted value of $\beta$ is very negative, roughly less than $-5$, the spectral form approaches that of unsaturated inverse-Compton thermal emission ([@randl79]), a low-energy power-law with an exponential cut-off (COMP, in table \[table1\]). This can be viewed as a generalization of the spectral form of optically-thin thermal bremsstrahlung (neglecting any Gaunt factor), which has a low-energy power law index of $-1$. The GRB spectral form is a continuous function that does not allow a sharp spectral break, so that in cases where $E_{\rm{peak}}$ ($< E_0$) is close to the high end of the energy range for the data, $\beta$ may not be well-determined. For such cases, we used instead a simple broken power law (BPL, in table \[table1\]), in order to force $E_{\rm{peak}} = E_0$, usually resulting in acceptable fits to the high-energy component.
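For reference, the GRB (Band) function above transcribes directly into a short evaluation routine. This is a sketch for computing the model, not the fitting code used in the analysis; units follow the text ($E$ and $E_{\rm peak}$ in keV).

```python
import numpy as np

def band_grb(E, A, alpha, beta, E_peak):
    """Band GRB photon spectrum as written in the text.
    The two power laws join smoothly at
    E_break = (alpha - beta) * E_peak / (2 + alpha)."""
    E = np.asarray(E, dtype=float)
    E_break = (alpha - beta) * E_peak / (2.0 + alpha)
    low = A * (E / 100.0)**alpha * np.exp(-E * (2.0 + alpha) / E_peak)
    high = (A * ((alpha - beta) * E_peak / (100.0 * (2.0 + alpha)))**(alpha - beta)
            * np.exp(beta - alpha) * (E / 100.0)**beta)
    return np.where(E < E_break, low, high)
```

By construction the two branches agree at $E_{\rm break}$, so the model is continuous there.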
Since we are concerned in this paper with the high-energy power-law behavior, we made a number of tests to be sure that our choices of spectral models do not affect the end result. To do this, we fit several trial bursts with several different models and compared the resulting fits. The simplest test of the robustness of our fitting procedure was to fit a single power law to each spectrum above a fixed cut-off energy that was determined by the maximum over the entire set of fitted spectra in the burst of the value of the break energy $E_0$ between the two power-law components. This eliminated any effect the fit to the low-energy data might have on the fitted value of $\beta$. That is, curvature in the global model fit may tend to pull the local fit of the high-energy power-law index to a larger or smaller value, depending on how well the actual data tolerate the curvature. For example, the data may break more sharply than the model, which leads to a fitted value for $\beta$ that is steeper than it should be. Conversely, in the broken power-law model, with no curvature built in, the high energy power law index may be pulled to a shallower value than the data require. Generally, the resulting time-histories of the fitted parameters are consistent to within one-sigma errors. However, some differences were apparent when we compared the time-histories of the fitted high-energy power-law component between these two models, when both were applied to the same burst, as can be seen in figure \[beta\_compare\_fig\]. The average values of the fitted power-law indices (weighted by the errors) over the entire burst were slightly different ($\beta_{\rm ave} = -2.25$, for the GRB model fit; $= -2.16$, for the BPL), while the underlying pattern of the time-history of the parameters was similar.
So while the time evolution of the high energy portion of the spectrum could be reliably traced by the fitted parameter for each model, there remains some ambiguity in the average high-energy slope. This effect should be worse for larger average values of $E_{\rm{peak}}$: the curvature inherent in the GRB model tends to restrict the range of energies available for determining $\beta$. The broken power-law model is plagued by a different problem: with an energy resolution (FWHM) of approximately 20% at 511 keV, we usually cannot determine the exact position of the break energy using LAD count spectral data.
With the fitted values of the break energy and $\beta$ possibly closely correlated, the reported 1$\sigma$ error on each parameter is only part of the story. That is, the errors are most accurately determined from a multi-dimensional $\chi^2$ contour plot for the correlated parameters, as seen in figure \[2dcontour\]. The contours represent $\Delta \chi^2$ values appropriate for one parameter of interest, so that the 1$\sigma$ contour is at $\Delta \chi^2 = 1$ (for this figure only; usually, one would be interested in both parameters jointly, resulting in larger contours). The 1D 1$\sigma$ error limits are formed by the maximum and minimum of the error ellipse projected onto the axis of the parameter being considered. The actual 1$\sigma$ errors reported here are obtained from the diagonal elements of the covariance matrix for each fit; this is equivalent to $\Delta \chi^2 = 1$, with the additional assumption that the fitted parameter value lies in the center of the error interval. By taking into account the joint error between the parameters, $\Delta \chi^2$ is increased to 2.3, so that the fitted values of the high-energy power law index can be reconciled to within one or two sigma between the two different spectral forms.
The fact that we obtain acceptable fits with different spectral models reflects on the ambiguity of the forward-folding process. Given the detector response, a count recorded in a given data bin could have come from a photon of any number of different energies, all greater than or approximately equal to the nominal energy range of the data bin. The dominant component of the response at low energies is the resolution-broadened photo-peak, centered on the photon energy. On top of this are counts derived from incomplete absorption of higher-energy photons in the detector, the off-diagonal component of the response. Consistent with the constraints imposed by the detector model, including especially the energy resolution, a given photon model folded through the detector response matrix will redistribute the predicted counts to best agree with the observed data. Thus, the solution to the forward-folding spectral fitting problem is not unique.
Table \[table1\] summarizes global aspects of the fits performed for each burst. We use the 3B catalog name ([@meegan96]) and BATSE trigger number to identify each burst, followed by the number of the detector with the smallest zenith angle with respect to the source, the spectral model used for fitting, the number of fitted spectra, the time interval selected for fitting, the average of the fitted values for $E_{\rm{peak}}$ and the fluence, summed over the fitted spectra. In cases where there are two or more detectors reported in the third column, a summed 16 energy channel data type (MER) was used, usually for lengthy events which ran out of HERB memory before the end of the burst. For a small number of cases where other data types were absent, we use SD 256 energy channel data (SHERB); these are indicated in column three with an ‘S’ appended before the detector number. The three models used in our analyses are indicated by their respective mnemonics (introduced above) in column four. The COMP spectral form has one less parameter than the others: there is no fitted high-energy power-law index. However, each of the models shares three corresponding parameters: amplitude, low-energy power-law index and $E_{\rm{peak}}$ (or spectral break energy for the broken power-law model). In the last two columns we indicate the average value for $E_{\rm{peak}}$ in keV and the total fluence for the fitted interval in erg cm$^{-2}$. Notice that three of the four bursts that required the COMP model did so because the high energy power law was completely unconstrained; indeed, for these bursts $E_{\rm{peak}}$ was also unconstrained, as the average value is far greater than the energy of the highest channel available in the data (typically 1800 keV). In the following analyses, we shall exclude these four bursts, since no trend in the high-energy power law index can be determined with our data.
Observations
============
We should like to know several things concerning the behavior of the high-energy power-law as a function of time. First of all, is it constant? If not, does the index change smoothly with time, as with the hard-to-soft spectral evolution observed in the $E_{\rm{peak}}$ parameter by [@ford95]? If the behavior is not smooth, is it correlated with other observable features in the burst time history, such as the instantaneous flux or the evolution of the low-energy spectral parameters? To investigate these questions, we subjected the fitted values of the high-energy power-law index to several statistical tests, and evaluated the probability the outcome of each could have occurred randomly. The results of our analyses, shown in table \[table2\], are described below. Each row of the table is indexed in the first column by the trigger name from table \[table1\]. For each burst, this is followed by the weighted average of $\beta$, the probability that a constant $\beta$ describes the data, the probability that a linear trend in $\beta$ describes the data, the slope from a linear fit to the time series of $\beta$, and the probabilities that the fitted values of $\beta$ are correlated with time, the burst time history or with the time series of $E_{\rm{peak}}$.
To start with, we would like to test the hypothesis that $\beta$ is a constant over the entire burst. In order to do this, we first computed a weighted average of the fitted values of the high-energy power-law index (which we will denote as $\beta$, regardless of which model we used for the fit) over the time interval selected for each burst. The weight applied to each term in the average is the inverse square of the 1-sigma error, $\sigma_i$, of the fit: $$\beta_{\rm{ave}} = \sum_{i} \biggl(\frac{\beta_i}{{\sigma_i}^2}\biggr) \bigg/
\sum_{i} \biggl(\frac{1}{{\sigma_i}^2}\biggr).$$ In cases where the fit resulted in an undetermined value for $\beta$ for an individual spectrum, the value was thrown out of the weighted average. It should be noted that, with weighting of the individual values, as well as the elimination of undetermined values, the result is different from the value of $\beta$ obtained from a fit to the integrated spectrum. The third column in table \[table2\] gives the probability for $\chi^2$ obtained by subtracting the weighted average from the actual fitted values in each burst. The $\chi^2$-values are calculated assuming the model, and thus a small value (such as $< 10^{-4}$) indicates a problem with the assumption and thus the likelihood that the model is false. A histogram of the logarithm of these probabilities in figure \[ave\_pl\_prob\] ([*dotted line*]{}), shows that for some bursts, at least, a constant $\beta$ is consistent. What is not shown are the 30 bursts for which the probability is essentially zero. Including the bursts for which the log. of the probability is less than $-4$, we have 42 out of 122, or 34% of the total sample, that are not consistent with a simple, constant model in $\beta$. It is extremely unlikely that this distribution occurs randomly.
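The weighted average, its error, and the $\chi^2$ test against a constant $\beta$ can be sketched as follows (the helper name is ours; undetermined values are dropped as described in the text):

```python
import numpy as np

def weighted_beta(beta, sigma):
    """Error-weighted average of the fitted beta values, the 1-sigma
    error on that average, and chi^2 against the constant-beta model.
    Spectra with undetermined beta (nan/inf errors) are dropped."""
    beta, sigma = np.asarray(beta, float), np.asarray(sigma, float)
    ok = np.isfinite(beta) & np.isfinite(sigma) & (sigma > 0)
    w = 1.0 / sigma[ok]**2                      # weights are 1/sigma_i^2
    ave = np.sum(w * beta[ok]) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    chi2 = np.sum(w * (beta[ok] - ave)**2)
    return ave, err, chi2
```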
The distribution of $\beta_{\rm{ave}}$, shown as a histogram in figure \[beta\_dist\], improves on earlier work by [@band93], with a larger sample and better statistics per burst. However, the resulting values from these two studies cannot be compared directly, since here we have weighted each fitted value of $\beta$ by the parameter error, while in the previous study the fits were made to average spectra, which are implicitly weighted by intensity. Finally, the sample sets are different: the selection of events in [@band93] was based upon peak counts, not fluence or peak flux, since these were unknown at the time. The median value for the sample is $-2.12$, with an absolute deviation width of $w_{\rm ADev} \equiv {1 \over N}\,{\sum_{j\,=1}^{N} \mid x_{j} - x_{\rm med} \mid}$ = 0.23 (where $x_{\rm med}$ represents the median, which minimizes the absolute deviation), compared with the standard deviation of 0.30. The distribution has an extended negative tail that gives it a skew value of $-0.73$ (the skew is defined as the dimensionless third moment of the distribution, and is 0 for a Gaussian), large compared with the expected standard deviation of the skew of $\sqrt{15/N}=0.35$ for a purely Gaussian distribution. Given the large variation of other spectral parameters, such as $E_{\rm{peak}}$ which has a distribution at least as wide as the range of possible values, it is surprising that the high-energy behavior is so restricted. Plotted over the total distribution in figure \[beta\_dist\] is a histogram of those bursts for which $\beta$ is consistent with being constant (log. probability $> -4$ from figure \[ave\_pl\_prob\]).
Obviously, a constant value of $\beta$ is not acceptable for many bursts. A clear example of this is presented in figure \[beta\_1085\], which shows the time history of $\beta$ during 3B911118 and is an example of general hard-to-soft spectral evolution in $\beta$. The Spearman rank-order correlation of $\beta$ with time is given in column 6 of table \[table2\]. The correlation coefficient [*r*]{} is distributed between $-1$ and 1, and can be converted through the combination $$t = r \sqrt{\frac{N - 2}{1 - r^2}}$$ to a Student’s [*t*]{}-distribution for $N - 2$ degrees of freedom. Unlike the $\chi^2$ probabilities, correlation coefficients that are not consistent with roughly a normal distribution around 0 reject the null hypothesis that no correlation exists; therefore, small probabilities indicate significant correlation. The probabilities associated with [*r*]{}, calculated using equation 3 along with the number of spectra fitted ($N$) from column 5 of table \[table1\], reveal that a trend in the data exists for at least 21 of the events at the $10^{-3}$ significance level or smaller. This is a robust estimator for correlation; it indicates when a correlation is almost certainly present. However, the Spearman test does not take into account the errors for each point, so if there are a large number of outliers with large errors in the sample, the test will come up with poor results. Figure \[corr\_dist\] presents the distribution of the time correlation coefficients ([*solid line*]{}). The bulk of the distribution consists of negative correlations, indicating an anti-correlation of the power-law index with time, or hard-to-soft spectral evolution.
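The rank-order statistic and the conversion of equation 3 can be sketched as follows (no tie handling; function names are ours):

```python
import math

def spearman_r(x, y):
    """Spearman rank-order correlation coefficient (assumes no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx = sum(rx) / n
    my = sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx)**2 for a in rx)
    vy = sum((b - my)**2 for b in ry)
    return cov / math.sqrt(vx * vy)

def spearman_t(r, n):
    """Student t value with n - 2 degrees of freedom, from equation 3."""
    return r * math.sqrt((n - 2) / (1.0 - r**2))
```

A strongly negative $r$, converted through `spearman_t`, yields the small two-sided probabilities that flag hard-to-soft evolution.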
A linear fit to the time history of $\beta$ also indicates whether there is a monotonic trend in the data, while accurately treating the errors in the fitted power-law indices. The fifth column of table \[table2\] gives the linear coefficient, or slope, of such a fit, having the units of change in $\beta$ per unit time, or s$^{-1}$. The sign is such that hard-to-soft spectral evolution ($\beta$ grows more negative in time) results in a negative slope. The $\chi^2$ probability for this fit is given in the fourth column of the table and the distribution is also plotted on figure \[ave\_pl\_prob\] ([*solid line*]{}). In 24 cases out of the total sample, the log. probability was less than $-4$, indicating that the linear trend was a poor model of the data for those events. Comparing this result to that for the model of constant $\beta$, however, more bursts had acceptable fits to a linear trend at the same significance level (98 compared with 80 out of 122). There are far more cases of hard-to-soft spectral evolution (100) than there are for soft-to-hard evolution, which was already evident in figure \[corr\_dist\]. The first spectrum in many bursts is the hardest (see figure \[beta\_1085\]), while at the same time being one of the weakest. Since each burst has a different duration, the slopes in physical units may not be directly comparable. However, the fitted slope in $\beta$ times the duration of the fitted time interval, from column 6 of table \[table1\], is a dimensionless parameter ($\Delta \beta$) that represents the total change in $\beta$, assuming that the evolution in $\beta$ is linear (as it is for the majority of the sample). Figure \[slope\_dur\] shows that the distribution of $\Delta \beta$ has a single, roughly symmetric peak centered on $-0.374$, with one outlier (not shown in the figure). The median absolute deviation width of the distribution is $w_{\rm ADev} = 0.392$, compared with a standard deviation of $\sigma = 0.516$. 
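The weighted straight-line fit and the dimensionless total change $\Delta \beta$ follow the standard $\chi^2$ formulas (cf. [@press92]); a sketch with invented numbers:

```python
def weighted_line_fit(t, y, sigma):
    """Chi^2 straight-line fit y = a + b*t with one-sigma errors on y;
    returns (intercept, slope)."""
    w = [1.0 / s**2 for s in sigma]
    S = sum(w)
    Sx = sum(wi * ti for wi, ti in zip(w, t))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * ti * ti for wi, ti in zip(w, t))
    Sxy = sum(wi * ti * yi for wi, ti, yi in zip(w, t, y))
    delta = S * Sxx - Sx**2
    return (Sxx * Sy - Sx * Sxy) / delta, (S * Sxy - Sx * Sy) / delta

# Toy hard-to-soft evolution over an 8 s fitted interval:
t = [0.0, 2.0, 4.0, 6.0, 8.0]
beta = [-1.9, -2.0, -2.1, -2.2, -2.3]
a, b = weighted_line_fit(t, beta, [0.1] * 5)
delta_beta = b * (t[-1] - t[0])   # slope times duration, here -0.4
```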
This argues that a single physical process characterizes the majority of the sample, and again points out that hard-to-soft spectral evolution is typical behavior for the high-energy power-law component. Physical mechanisms for burst energetics should account for this, possibly via depletion of a reservoir of energy that is available for the burst. Otherwise, it may be that when the high-energy portion of the emission changes beyond this point, the total emission is quenched.
The linear fit to the power-law indices does not characterize the distribution well for many bursts (24 out of 122), indicating that other types of behavior may be present. Figure \[beta\_1085\] serves as an example of a burst that has strong hard-to-soft spectral evolution but where the linear fit is unacceptably poor. The residuals to the fit have considerable scatter that is correlated in successive time bins in several places on the figure. It is these residual patterns that we are interested in. Two possibilities are easily tested: there may be a correlation between the high-energy behavior and intensity within a burst (clearly not the case for 3B 911118 in figure \[beta\_1085\]), or the high-energy spectrum may be correlated with the evolution of the low-energy spectrum. Burst 3B 911118 is an example of this behavior, as can be seen in figure \[epeak\_beta\], where the fitted values of $E_{\rm peak}$ (representing the low-energy behavior) and $\beta$ have been plotted against each other.
For the case of correlation between hardness (as measured by the high-energy power-law index) and intensity (measured as total count rate in the fitted energy interval: $\sim 28$ – 1800 keV), we applied two statistical tests to the data and multiplied their probabilities in order to screen for candidates. The tests (described below) are likely to be correlated; however, each measures the hardness–intensity correlation differently, so that their product combines the best of each. We set the threshold for significance at $10^{-6}$ for the product, so as to avoid false positives as much as possible. In both tests, we removed the first-order trend in the data by dividing by the linear fit to the power-law indices (which is described above). We do this, despite the fact that many bursts do not show a linear trend in the high-energy power-law index, since there are a considerable number of bursts that do have a significant correlation between $\beta$ and time, while the burst intensity manifestly does not: a typical burst will have overlapping regions of both positive (rising portions) and negative (falling) correlation with time, so that the whole ensemble of $\beta$ values has no correlation. The overall linear trend may be larger than the amplitude of the residuals of the fitted linear model (this is the case in figure \[beta\_1085\]), in which case there is no significant hardness–intensity correlation as determined by $\beta$ alone. After detrending, the residuals may or may not be correlated with intensity. The Spearman rank-order test is relatively unequivocal: that is, if the resulting probability is low enough, then the desired correlation definitely exists. However, the converse is not true: the test can fail badly since it ignores the one-sigma errors in the fitted power-law indices. 
For this reason, we also have calculated the linear correlation coefficient between the detrended values of $\beta$ and intensity, where the inverses of the variances on the detrended power-law indices are used to weight their contribution ([@press92]). For this case, individual, poorly-determined indices that are only a few sigma away from being consistent with correlation contribute the same as well-determined ones closer to the center of the distribution. In practice, while this kind of test is a poor indicator of whether an observed correlation is statistically significant, it is a rough indication of the strength of a correlation under the assumption that a correlation definitely exists, so the two statistical tests we’ve chosen complement each other, to a certain extent. Their product selects those bursts that have low probabilities (indicating strong correlation) from both tests (assuming that by detrending no significant correlation was introduced that was not present in the original data). We have indicated the combined probabilities from both tests in the seventh column of table \[table2\] and also indicate the sign of the linear correlation coefficient. Since the power-law indices were detrended, a positive sign indicates a negative actual correlation; that is, the high-energy behavior is opposite that of the burst time history. An example of positive correlation in the detrended values of $\beta$ for 4B 960924 is shown in figure \[detrended\_beta\_5614\]. A small number of bursts (9), have significances less than $10^{-6}$. Of these, 6 are examples of positive correlation. A larger number (24) are significant at the $10^{-4}$ level.
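The second screen, the error-weighted linear correlation between the detrended indices and intensity, can be sketched as follows (the detrending divides out the linear fit; function names are ours):

```python
import math

def detrend(t, beta, a, b):
    """Divide each index by the value of the linear fit at its time."""
    return [bi / (a + b * ti) for bi, ti in zip(beta, t)]

def weighted_corr(x, y, w):
    """Linear correlation coefficient with inverse-variance weights w."""
    W = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / W
    my = sum(wi * yi for wi, yi in zip(w, y)) / W
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    vx = sum(wi * (xi - mx)**2 for wi, xi in zip(w, x))
    vy = sum(wi * (yi - my)**2 for wi, yi in zip(w, y))
    return cov / math.sqrt(vx * vy)
```

The product of the probabilities from this coefficient and from the Spearman test is what is compared against the $10^{-6}$ threshold.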
Another possible type of behavior in $\beta$ that is testable with our data is a correlation with the low-energy spectral evolution. The most obvious such behavior is the hard-to-soft spectral evolution of $E_{\rm{peak}}$, discussed by [@ford95]. $E_{\rm{peak}}$ is a good measure for overall spectral evolution since it marks the peak in the power output of the spectrum per log. decade. Of course, $E_{\rm{peak}}$ is not defined for those portions of a burst where $\beta > -2$; in that case, we substitute the break energy of the spectrum instead. In addition, we wish to check for higher moments of correlation than is possible with a linear trend of $\beta$ in time, which was discussed above, such as the evolution of $\beta$ within individual peaks of a burst. In table \[table2\], column 8, we calculate the Spearman rank-order probability that the distribution of $\beta$ for a given burst is correlated with the distribution of $E_{\rm{peak}}$, which stands in here for the low-energy behavior. The best example of correlation with $E_{\rm{peak}}$ is shown in figure \[epeak\_beta\], which is a plot of the two fitted parameters against one another for 3B 911118. Out of 122 bursts, 15 bursts have probabilities less than $10^{-3}$, indicating correlation, and out of these only 5 have significant hard-to-soft spectral evolution, as measured by how many sigma the slope in $\beta$ in table \[table2\], column 5, deviates from 0. The important point is that, whereas hard-to-soft behavior can be demonstrated for large numbers of bursts in the evolution of both $E_{\rm{peak}}$ and $\beta$, this behavior is generally not correlated between the two. Indeed, hard-to-soft evolution of $E_{\rm{peak}}$ within individual peaks of a burst is not typically observed with $\beta$; otherwise, far more instances of correlation between the two would have been observed.
Discussion
==========
In this series of BATSE spectral analysis papers, we have demonstrated several times the universal suitability of the ‘GRB’ spectral form for fitting burst spectra, whether it is applied to the total spectrum averaged over the burst ([@band93]), to time-resolved spectroscopy of bright bursts in the SD data in [@ford95], to joint fits of time-averaged spectra of bright bursts with the low-energy discriminator data ([@preece96]; although we see the model break down with low-energy excesses observed in 15% of GRBs) and now to time-resolved spectroscopy of bursts observed mostly with the BATSE LADs. In figure \[beta\_dist\], we now see that there is evidence of an average high-energy power law index that is $\sim -2$ in a large number of GRBs. In addition, the variance of this index over the sample is similar to that obtained by [@pend94a], using BATSE LAD discriminator data.
Table \[table2\] presents evidence that $\beta$ is not constant for 42 out of 122 bursts in our sample. The typical change in $\beta$ over an entire burst, $\sim 0.4$ (figure \[slope\_dur\]), is small compared with the average value of $\beta \approx -2.1$. We should consider which of the many emission models proposed for GRBs are consistent with these observations. A $-2$ power law slope is evidence for single-particle cooling, from either synchrotron losses or Compton scattering ([@blumgould70]). Typically, one would integrate the energy loss rate over the particle distribution; however, particles that are relatively cool with respect to their large, possibly relativistic bulk motion can be treated as monoenergetic in interactions with static external particles or fields. Bremsstrahlung losses are another matter. Such scenarios have been proposed for bursts of cosmological origin for external shocks ([@rees92]; [@meszaros93]) as well as for synchrotron shocks ([@katz94]; [@tavani96]). It should be noted that the cooling timescale for most expected processes, especially those like synchrotron that involve magnetic fields, is far shorter than observed burst lifetimes by many orders of magnitude. In fact, this is a common problem with GRB models: an unspecified energy storage mechanism usually must be invoked in order to extend the emission. Relativistic bulk motion, which is necessary to ensure that bursts do not degenerate into a pair fireball, can multiply the lab-frame lifetime by the Lorentz factor, usually considered to be on the order of 1000. This is not nearly long enough for processes such as synchrotron emission, whose characteristic timescale may be on the order of $10^{-17}$ s. Clearly, in bursts, there is a reservoir of energy, possibly the protons that carry the bulk of the kinetic energy in the blast wave.
It appears that hard-to-soft spectral evolution predominates over soft-to-hard, as observed already in [@norris86], [@ford95] and [@band97]. In our study, the high-energy behavior follows this trend at greater than the $3\sigma$ level in 50 out of 122 cases, while the opposite is true for only 5 bursts at the same significance. This is independent of the low-energy behavior; indeed, we have a significant correlation with the low-energy behavior in only 15 cases, and of these, 5 have significant hard-to-soft spectral evolution. Taken together, we have evidence that the high-energy behavior is very much independent of the rest of the spectral evolution of a burst; in 35% of the cases, there is hard-to-soft spectral evolution, and no evolution in most of the rest, with only 10% of all bursts failing the linear fit $\chi^2$ test.
As seen in figure \[beta\_dist\], there is a small group of ‘super-soft’ bursts characterized by $\beta_{\rm ave} \lesssim -3.0$. Along with 4B 970111, which was an extremely bright burst with no apparent high-energy power law component (it was fitted with the COMP model), we have three such events. Some of these have no detectable emission above $\sim 600$ keV. This behavior is similar to the ‘no high-energy’ bursts of [@pendleton97], which were shown to be homogeneous in space. Since most of the homogeneous bursts were relatively weak, compared with the entire sample, here we must be observing the brightest few of that set, rather than 20%, as reported in [@pendleton97]. There may actually be a continuum of burst properties, with these bursts representing the furthest extreme. Bursts in this extreme (as well as some portions of other bursts that have very steep high-energy power laws) may be an indicator that some emission-limiting phenomenon such as pair-plasma attenuation may be at work. Indeed, in many cases, spectra in these bursts can be fitted by a spectral form that does not require a high-energy power-law (such as the COMP model). This also fits in with the observation that such events are typically weaker than average. In the context of shock models of GRBs, several parameters of the particle energy distribution determine the resulting spectrum. These may be factors such as the shape of the distribution, whether it is a power law, the maximum energy or the bulk Lorentz factor. It is very likely that the maximum energy of the accelerated particle distribution resulting from the shock could be drawn from an enormous range (out to several GeV, at least), depending on the conditions at the shock. Thus, the super-soft bursts may be representative of particle distributions that arise from weak shocks, affecting the shape or maximum energy in such a way to limit the high-energy emission.
Summary
=======
In this study, we have looked in detail at the temporal behavior of the high-energy power-law portion of GRB spectra from a sample of 126 bursts selected by either high flux or fluence. The average over all fitted spectra of all bursts in the sample for the high energy power law index ($\beta$) is $\sim
-2.12$, although fitting a constant, average index to the time history of $\beta$ in each burst resulted in unacceptable $\chi^2$ values for 34% of the bursts. In addition, of those bursts in which $\beta$ is not constant, a large number (100) show hard-to-soft spectral evolution, compared with those that have an overall, significant soft-to-hard trend. The total change of $\beta$ over the time interval chosen for fitting has a single-peaked distribution, centered on $-0.37$, indicating that theoretical modeling will have to explain why most bursts favor this value. In several bursts, the hard-to-soft spectral evolution is correlated with similar behavior at lower energies. We also find that some bursts have a significant correlation between $\beta$ and the burst time history, or equivalently, the instantaneous flux. Some bursts in the sample were too soft to be characterized by a high-energy tail, and there are intervals in many bursts that show similar behavior, as has been reported by [@pendleton97]. Taken together, these results show that the high-energy spectral component has a rich life, independent to a large extent of the behavior of much of the rest of the spectrum.
Many thanks to Surasak Phengchamnan and Peter Woods for generating a list of post-3B catalog fluences and peak fluxes. We also thank the anonymous referee for comments that led to improvements in the paper. This work would not have been possible without our spectral analysis software (WINGSPAN), which is publicly available from the BATSE webserver: http://www.batse.msfc.nasa.gov/. BATSE work at UCSD is supported under NASA contract NAS 8-36081.
BATSE Large Area Detectors
==========================
The BATSE LADs are a set of eight identical NaI detectors, which are mounted on the corners of the [*CGRO*]{} and oriented to ensure maximum all-sky exposure. Perhaps the most important feature of the BATSE instrument is its ability to localize a transient cosmic source by the comparison of counting rates in the four detectors that directly see it ([@pendleton96]). This is an invaluable aid to spectroscopy, since the detector response is a strong function of the source-to-detector axis angle, with differing responses at different energies ([@pend95]). Thus, without location information, the detector response cannot be fully modeled, and spectral model fitting cannot be done accurately.
Spectral data from the LADs are compressed to either 128-channel high energy resolution background and burst data (HER and HERB datatypes, respectively) or 16-channel continuous background or medium energy resolution burst data (CONT and MER datatypes). The HER background data are typically accumulated over 300 s, while the CONT data are always accumulated every 2.048 s. The HERB burst trigger data are accumulated in a time-to-spill mode: one spectrum is generated in the time it takes to record 64 k counts (in units of 64 ms), currently with a fraction of the last available background rate subtracted, to ensure that longer accumulations are taken over periods when the burst has returned to background levels. This fraction was zero for roughly the first half of the mission, so bright, highly-variable bursts commonly ran out of available memory. For the four detectors recording the highest count rates at the time of the trigger, there are 128 spectral accumulations, each 128 ms in duration or greater. The lowest seven channels of the 128 are at or below the analog lower-level discriminator (LLD), and are unusable; the highest few channels suffer from saturation in the pulse amplifier and thus are also thrown out. The remaining channels are spaced quasi-logarithmically in energy, falling between approximately 28 keV and 2 MeV, with the exact energy coverage of each channel in each detector determined by a channel-to-energy conversion algorithm. It is important to note that these energy ranges are quite stable through the mission, due to automatic gain control of the PMT voltages. The energy resolution of the LADs was measured on the ground to be $\sim 20$% at 511 keV ([@horack91]), and has been quite stable in orbit.
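The time-to-spill rule can be illustrated with a toy calculation (a simplification of the flight algorithm; parameter names and numbers are ours):

```python
import math

def spill_time(rate, bg_rate, frac, counts=65536, tick=0.064):
    """Toy time-to-spill: an integer number of 64-ms ticks sufficient to
    accumulate `counts` counts at `rate` counts/s, after a fraction
    `frac` of the background rate has been subtracted."""
    net = rate - frac * bg_rate
    return math.ceil(counts / (net * tick)) * tick

# At 10 kcount/s over a 2 kcount/s background with half subtracted,
# one spectrum accumulates for 114 ticks (about 7.3 s).
```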
Energy Calibration Methodology
==============================
In order for spectroscopy to be possible with the LAD HERB data, we have had to apply a correction to the channel-to-energy conversion algorithm that was developed before the launch of the spacecraft. Measurements of several calibration sources at known energies resulted in an empirical relationship between channel number and channel energy threshold ([@lestrade91]). The function fitted was essentially linear, with a small non-linear term (significant only at low energies), proportional to the square root of the channel number; thus there are three fitted parameters. After several bright bursts were observed in orbit, it became clear that each detector had a systematic pattern of residuals, localized to the low end of the count spectrum. With the assumption (tested below) that these features are intrinsic to each detector, and not a function of detector-to-source angle or source intensity, we developed a method of calibration using in-orbit data.
In order to properly calibrate the detectors, we must choose bright objects with well-known spectral properties, seen by each detector. Solar flares are generally not usable, since they are rarely seen by half the detectors, due to the pointing constraints of the spacecraft's solar panels, and their spectra are typically too soft. Earth occultation data from the Crab nebula were used by [@pend94b] to calibrate the 16-channel CONT spectra. However, this was not feasible for HER spectra because of telemetry constraints. We are left with bursts themselves. Averaged over their entire time history, at least some bursts can be expected to have a fairly smooth spectrum ([@band93]). Spectral features, such as lines, will tend to average out over time and in the LAD data will not contribute much overall, due to the moderate energy resolution of the detectors. For bright bursts, we can precisely determine the average spectrum from the well-calibrated SD spectral data ([@band92]) to use as a constraint in a joint fit with the LAD spectral data. The single time-averaged spectrum from the calibration burst is no longer available for spectroscopy; however, individual spectra from the burst are still usable for our analyses, for two reasons. First, continuum spectral fits are robust, in that they sample broad features in the spectrum, rather than the behavior of individual channels. Second, the calibration affects only the lowest channels of the spectra, and therefore does not affect spectral fitting of the high-energy power-law index, as long as it is determined by counts above $\sim$ 150 keV. In the present paper, we needed to obtain a global fit to each spectrum, so it was important to calibrate the lower channels as well as possible.
The general process is iterative: we jointly fit the LAD and SD spectral data for an entire outburst interval in a bright burst, using the standard calculation for the LAD data energy thresholds. The residuals of the fit to the LAD data are used to determine by how much to adjust the energy of each data channel edge in order to bring the count rate closer to the model rate. With this new set of edges, a new detector response matrix (DRM) is generated to account for the shift in the position of the photopeak with the change in output edges and the accompanying change in total response. The photon model is recalculated with the new DRM and count rate residuals are again determined. Since the pre-flight calibration produces acceptable agreement above $\sim 150$ keV, we limited the re-calibration to energies below 150 keV. We also enforced a fixed lower energy for HERB data channel 7 ($\sim 25$ keV), to limit the corrections to apply only to channels above the energy of the LLD, as this is currently not modeled in the DRM. The freedom of lower-energy edges to wander is highly constrained in the joint fit with the SD data, which overlap the LAD energy range and can extend the continuum fit to lower energies by up to 10 keV. Each of the edges within the two limits is recalculated in each new cycle until the value of $\chi^2$ for the fit stops decreasing. For each of the eight detectors, one calibration burst yields a set of offsets of new edges relative to the original edges, which can then be applied to all bursts observed by that detector throughout the mission. We have extensively tested the hypothesis that the non-linearities are intrinsic to the detector by examining the residuals to spectral fits of several very bright bursts in each detector with the new calibration. We have found excellent agreement of the calibration results between bursts, regardless of the angle, intensity or hardness of any given event.
Band, D. L. 1997, , 486, in press
Band, D. L., et al. 1992, Exp. Astron., 2, 307
Band, D. L., et al. 1993, , 413, 281
Bickert, K. F., & Greiner, J. 1994, Compton Gamma-Ray Observatory Symposium, M. Friedlander, N. Gehrels & D. J. Macomb, New York: AIP, 1059
Blumenthal, G. B., & Gould, R. J. 1970, Rev. Mod. Phys., 42(2), 237
Cavallo, G., & Rees, M. J. 1978, , 183, 359
Clarke, T. E., Blaes, O., & Tremaine, S. 1994, , 107, 1873
Fishman, G. J., & Meegan, C. A. 1995, ARA&A, 33, 415
Ford, L. A., et al. 1995, , 439, 307
Goodman, J. 1986, , 308, L47
Hanlon, L. O., Bennet, K., Williams, O. R., Winkler, C., & Preece, R. D. 1995, Ap&SS, 231(1), 157
Harding, A. K. 1991, Phys. Rep., 206, 327
Ho, C., Epstein, R. I., & Fenimore, E. E. 1992, Gamma-Ray Bursts, Cambridge: Cambridge
Horack, J. M. 1991, Development of the Burst and Transient Source Experiment (BATSE), NASA Reference Publication 1268
Hurley, K., et al. 1994, Nature, 372, 652
Katz, J. I. 1994, , 432, L107
Lestrade, J. P. 1991, NASA internal memo
Matz, S. M., et al. 1985, , 288, L37
Meegan, C. A., et al. 1992, Nature, 355, 143
Meegan, C. A., et al. 1996, , 106, 65 (BATSE 3B Catalog)
Mészáros, P., & Rees, M. J. 1993, , 405, 278
Norris, J. P., et al. 1986, , 301, 213
Paczyński, B., & Xu, G. 1994, , 427, 708
Pendleton, G. N., et al. 1994, , 431, 416
Pendleton, G. N., et al. 1994, The Second Compton Symposium, C. E. Fichtel, N. Gehrels & J. P. Norris, New York: AIP, 749
Pendleton, G. N., et al. 1995, NIMSA, 364, 567
Pendleton, G. N., et al. 1997, , submitted
Pendleton, G. N., Briggs, M. S., & Meegan, C. A. 1996, Gamma-Ray Bursts, 3rd Huntsville Symposium, C. Kouveliotou, M. S. Briggs, & G. J. Fishman, New York: AIP, 877
Piran, T. 1994, Gamma-Ray Bursts, 2nd Huntsville Workshop, G. J. Fishman, J. J. Brainerd & K. Hurley, New York: AIP, 495
Preece, R. D., et al. 1996, , 473, 310
Press, W., Teukolsky, S., Vetterling, W., & Flannery, B. 1992, Numerical Recipes in FORTRAN, 2nd ed., New York: Cambridge University Press, 658
Rees, M. J., & Mészáros, P. 1992, , 258, 41
Rees, M. J., & Mészáros, P. 1994, , 430, L93
Rybicki, G. B., & Lightman, A. P. 1979, Radiative Processes in Astrophysics, New York: Wiley, 221
Tavani, M. 1996, , 466, 768
Voges et al. 1982, , 263, 803
\[table1\] \[table2\]
CERN–TH/2001–366 IFUP–TH/2001-41 IFIC/01-69 FTUV-01-1217 RM3-TH/2001-17
**Old and new physics interpretations**
**of the NuTeV anomaly**
**S. Davidson$^{a,b}$, S. Forte$^c$[^1], P. Gambino$^d$, N. Rius$^a$, A. Strumia$^{d\,2}$**
*(a) Depto. de Fisica Teórica and IFIC, Universidad de Valencia-CSIC, Valencia, Spain*
*(b) IPPP, University of Durham, Durham DH1 3LE,UK*
*(c) INFN, Sezione di Roma III, Via della Vasca Navale, I–00146, Roma, Italy*
*(d) Theoretical Physics Division, CERN, CH-1211 Genève 23, Suisse*
**Abstract**
> We discuss whether the NuTeV anomaly can be explained, compatibly with all other data, by QCD effects (maybe, if the strange sea is asymmetric, or there is a tiny violation of isospin), new physics in propagators or couplings of the vector bosons (not really), loops of supersymmetric particles (no), dimension six operators (yes, for one specific $\SU(2)_L$-invariant operator), leptoquarks (not in a minimal way), extra U(1) gauge bosons (maybe: an unmixed $Z'$ coupled to $B-3L_\mu$ also increases the muon $g-2$ by about $10^{-9}$ and gives a ‘burst’ to cosmic rays above the GZK cutoff).
Introduction
============
The NuTeV collaboration [@NuTeV] has recently reported a $\sim 3\sigma$ anomaly in the NC/CC ratio of deep-inelastic $\nu_\mu$-nucleon scattering. The effective $\nu_\mu$ coupling to left-handed quarks is found to be about $1\%$ lower than the best fit SM prediction.
As in the case of other apparent anomalies (e.g. $\epsilon'/\epsilon$ [@eps'exp; @trieste], the muon $g-2$ [@g-2; @lbl; @had], atomic parity violation [@Wood:1997zq; @russiAPV], and another puzzling NuTeV result concerning dimuon events [@dimuon], to cite only the most recent cases) one should first worry about theoretical uncertainties, mainly due to QCD, before speculating on possible new physics. After reviewing in section 2 the SM prediction for the NuTeV observables, in section 3 we look for SM effects and/or uncertainties which could alleviate the anomaly. In particular, we investigate the possible effect of next-to-leading order QCD corrections and consider the uncertainties related to parton distribution functions (PDFs). We notice that a small asymmetry between strange and antistrange in the quark sea of the nucleon, suggested by $\nu {\cal N}$ deep inelastic data [@BPZ], could be responsible for a significant fraction of the observed anomaly. We also study the effect a very small violation of isospin symmetry can have on the NuTeV result.
Having looked at the possible SM explanations, and keeping in mind that large statistical fluctuations cannot be excluded, we then speculate on the sort of physics beyond the SM that could be responsible for the NuTeV anomaly. We make a broad review of the main mechanisms through which new physics may affect the quantities measured at NuTeV and test them quantitatively, taking into account all the constraints coming from other data.
We take the point of view that interesting models should be able to explain a significant fraction of the anomaly. According to this criterion, we consider new physics that only affects the propagators (section \[oblique\]) or gauge interactions (section \[couplings\]) of the SM vector bosons, looking at the constraints imposed on them by a global fit to the electroweak precision observables. Many models can generate a small fraction of the observed discrepancy (see e.g. [@Roy]), but it is more difficult to explain a significant fraction of the anomaly. In section \[susy\] we consider the case of the minimal supersymmetric SM (MSSM) and look at possible MSSM quantum effects. In section \[NRO\] we turn to lepton-lepton-quark-quark effective vertices, focusing on the most generic set of dimension 6 operators. We find that very few of them can fit the NuTeV anomaly (in particular, only one $\SU(2)_L$-invariant operator). In section \[LQ\] and \[Z’\] we study how these dimension six operators could be generated by exchange of leptoquarks or of extra U(1) gauge bosons. Finally, we summarize our findings in section 10.
$$\begin{array}{|lccc|}\hline
\hbox{SM fermion}&{\rm U}(1)_Y&\SU(2)_L&\SU(3)_{\rm c}\cr \hline
U^c = u_R^c \phantom{*{3\over 5}} & -{2 \over 3} & 1 & \bar{3} \cr
D^c = d_R^c \phantom{*{3\over 5}} & \phantom{-}{1 \over 3}& 1 &\bar{3} \cr
E^c = e_R^c \phantom{*{3\over 5}} &\phantom{-}1 & 1 &1 \cr
L=(\nu_L, e_L) & -{1 \over 2} & 2 &1\cr
Q=(u_L, d_L) &\phantom{-} {1\over 6} & 2 & 3\cr \hline
\end{array}\qquad
\renewcommand{\arraystretch}{1.45}
\begin{array}{|c|cc|}\hline
Z\hbox{ couplings}
& g_L & g_R \\ \hline
\phantom{I}^{\phantom{I}^{\phantom{I}}}
\nu_e,\nu_\mu,\nu_\tau \phantom{I}^{\phantom{I}^{\phantom{I}}}
& \frac{1}{2} & 0 \\
\phantom{I}^{\phantom{I}^{\phantom{I}}}
e,\mu,\tau \phantom{I}^{\phantom{I}^{\phantom{I}}}
&-\frac{1}{2}+{s_{\rm W}}^2 & {s_{\rm W}}^2 \\
\phantom{I}^{\phantom{I}^{\phantom{I}}}
u,c,t \phantom{I}^{\phantom{I}^{\phantom{I}}}
& \phantom{-}\frac{1}{2}-\frac{2}{3}{s_{\rm W}}^2 & -\frac{2}{3}{s_{\rm W}}^2 \\
\phantom{I}^{\phantom{I}^{\phantom{I}}}
d,s,b \phantom{I}^{\phantom{I}^{\phantom{I}}}
& -\frac{1}{2}+\frac{1}{3}{s_{\rm W}}^2 & \frac{1}{3}{s_{\rm W}}^2 \\
\hline
\end{array}$$
The SM prediction
=================
### Tree level {#tree-level .unnumbered}
In order to establish the notation and to present the physics in a simple approximation, it is useful to recall the tree-level SM prediction for neutrino–nucleon deep inelastic scattering. The $\nu_\mu$-quark effective Lagrangian predicted by the SM at tree level is $${\mathscr{L}\,}_{\rm eff} = -
2\sqrt{2}G_F ([\bar{\nu}_\mu \gamma_\alpha \mu_L ][\bar{d}_L \gamma^\alpha u_L ] +\hbox{h.c.}) -
2\sqrt{2}G_F \sum_{A,q} g_{Aq}
[\bar{\nu}_\mu\gamma_\alpha \nu_\mu][\bar{q}_A \gamma^\alpha q_A]$$ where $A=\{L,R\}$, $q=\{u,d,s,\ldots\}$ and the $Z$ couplings $g_{Aq}$ are given in table \[tab:gAi\] in terms of the weak mixing angle ${s_{\rm W}}\equiv\sin \theta_{\rm W}$.
It is convenient to define the ratios of neutral–current (NC) to charged–current (CC) deep-inelastic neutrino–nucleon scattering total cross–sections $R_\nu$, $R_{\bar \nu}$. Including only first generation quarks, for an isoscalar target, and to leading order, these are given by $$\begin{aligned}
R_\nu &\equiv& \frac{\sigma(\nu {\cal N}\to \nu X)}{\sigma(\nu {\cal N}\to \mu X)} =
\frac{(3 g_L^2 + g_R^2)q + (3 g_R^2 + g_L^2)\bar q}{3 q +\bar q} = g_L^2 + r g_R^2\\
R_{\bar{\nu}} &\equiv& \frac{\sigma(\bar\nu {\cal N}\to \bar\nu X)}{\sigma(\bar\nu {\cal N}\to \bar\mu X)} =
\frac{(3 g_R^2 + g_L^2)q +(3g_L^2 + g_R^2)\bar q}{q +3\bar q} = g_L^2 + \frac{1}{r} g_R^2,\label{rdef}\end{aligned}$$ where $q$ and $\bar q$ denote the second moments of quark or antiquark distributions and correspond to the fraction of the nucleon momentum carried by quarks and antiquarks, respectively. For an isoscalar target, $q=(u+d)/2$, and we have defined $$r \equiv \frac{\sigma(\bar{\nu}{\cal N}\to \bar\mu
X)}{\sigma({\nu}{\cal N}\to \mu X)}
=\frac{3 \bar{q} +q}{3q+\bar{q}}$$ and $$g_L^2 \equiv g_{Lu}^2 + g_{Ld}^2 = \frac{1}{2}-\sin^2\theta_{\rm W}+\frac{5}{9}\sin^4\theta_{\rm W},\qquad
g_R^2\equiv g_{Ru}^2 + g_{Rd}^2 = \frac{5}{9}\sin^4\theta_{\rm W}.$$
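As an illustrative numerical cross-check (ours, not part of the NuTeV analysis), the tree-level expressions above can be evaluated for a representative value of the weak mixing angle; the input $\sin^2\theta_{\rm W}=0.2226$ is the global-fit value quoted later in the text.

```python
# Tree-level effective couplings g_L^2 and g_R^2 from sin^2(theta_W),
# using the Z couplings of table [tab:gAi]. Illustrative sketch only.
sw2 = 0.2226  # on-shell sin^2(theta_W), illustrative input

gLu = 0.5 - 2.0 / 3.0 * sw2
gLd = -0.5 + 1.0 / 3.0 * sw2
gRu = -2.0 / 3.0 * sw2
gRd = 1.0 / 3.0 * sw2

gL2 = gLu**2 + gLd**2  # should equal 1/2 - sw2 + 5/9 sw2^2
gR2 = gRu**2 + gRd**2  # should equal 5/9 sw2^2

# the closed forms in the text agree with the summed squares
assert abs(gL2 - (0.5 - sw2 + 5.0 / 9.0 * sw2**2)) < 1e-12
assert abs(gR2 - 5.0 / 9.0 * sw2**2) < 1e-12
print(round(gL2, 4), round(gR2, 4))  # → 0.3049 0.0275
```

These are tree-level numbers; they differ at the per-cent level from the radiatively corrected values $0.3042$ and $0.0301$ discussed below.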
The observables $R_\nu^{\rm exp}$ and $R_{\bar{\nu}}^{\rm exp}$ measured at NuTeV differ from the expressions given in eq. (\[rdef\]). On the theoretical side this is due to contributions from second–generation quarks, and because of QCD and electroweak corrections. On the experimental side, this is because total cross–sections can only be determined up to experimental cuts and uncertainties, such as those related to the spectrum of the neutrino beam, the contamination of the $\nu_\mu$ beam by electron neutrinos, and the efficiency of NC/CC discrimination. Once all these effects are taken into account, the NuTeV data can be viewed as a measurement of the ratios between the CC and the NC squared neutrino effective couplings. The values quoted in [@NuTeV] are $$\label{NuTeVgLgR}
g_L^2 = 0.3005\pm 0.0014\qquad\hbox{and}\qquad
g_R^2=0.0310\pm0.0011,$$ where errors include both statistical and systematic uncertainties.
The difference of the effective couplings $g^2_L-g^2_R$ (‘Paschos–Wolfenstein ratio’ [@PW]) is subject to smaller theoretical and systematic uncertainties than the individual couplings. Indeed, using eq. (\[rdef\]) we get $$\label{eq:PW}
R_{\rm PW} \equiv\frac{R_\nu - r R_{\bar{\nu}}}{1-r} =
\frac{\sigma(\nu {\cal N}\to \nu X)-\sigma(\bar\nu {\cal N}\to
\bar\nu X)}{\sigma(\nu {\cal N}\to \ell X) - \sigma(\bar{\nu}{\cal N}\to \bar{\ell}X)}=
g_L^2- g_R^2 = \frac{1}{2}-\sin^2 \theta_{\rm W},$$ which is seen to be independent of $q$ and $\bar{q}$, and therefore of the information on the partonic structure of the nucleon. Also, $R_{\rm PW}$ is expected to be less sensitive to the various corrections discussed above.
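The cancellation of the parton-distribution dependence in $R_{\rm PW}$ can be verified numerically; the following sketch (ours) builds $R_\nu$, $R_{\bar\nu}$ and $r$ from arbitrary values of $q$ and $\bar q$ and checks that the Paschos–Wolfenstein combination always returns $g_L^2-g_R^2$.

```python
# Check that (R_nu - r*R_nubar)/(1 - r) is independent of q and qbar,
# as eq. (eq:PW) states. Illustrative values only.
def pw_ratio(gL2, gR2, q, qbar):
    Rnu = ((3 * gL2 + gR2) * q + (3 * gR2 + gL2) * qbar) / (3 * q + qbar)
    Rnubar = ((3 * gR2 + gL2) * q + (3 * gL2 + gR2) * qbar) / (q + 3 * qbar)
    r = (3 * qbar + q) / (3 * q + qbar)
    return (Rnu - r * Rnubar) / (1 - r)

gL2, gR2 = 0.3042, 0.0301
for q, qbar in [(0.30, 0.05), (0.25, 0.10), (0.40, 0.02)]:
    # same answer for every choice of quark/antiquark momentum fractions
    assert abs(pw_ratio(gL2, gR2, q, qbar) - (gL2 - gR2)) < 1e-12
```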
### Electroweak corrections and the SM fit {#electroweak-corrections-and-the-sm-fit .unnumbered}
The tree level SM predictions for $g_L$ and $g_R$ get modified by electroweak radiative corrections. These corrections depend on the precise definition of the weak mixing angle, and we therefore adopt the [*on-shell*]{} definition [@sirlin1980] and define ${s_{\rm W}}^2\equiv 1-M_W^2/M_Z^2$. One then obtains the following expressions for $g_{L,R}^2$ [@marciano80] $$g_L^2= \rho^2 (\frac12- {s_{\rm W}}^2 k + \frac59 {s_{\rm W}}^4 k^2),\qquad
g_R^2= \frac59 \rho^2 {s_{\rm W}}^4 k^2,
\label{ewcorr}$$ where, also including the most important QCD and electroweak higher order effects [@paolo] $$\begin{aligned}
\rho&\approx& 1.0086 + 0.0001 (M_t/{\,{\rm GeV}}-175) -0.0006 \ln (m_h/100{\,{\rm GeV}}),\\
k&\approx& 1.0349 + 0.0004 (M_t/{\,{\rm GeV}}-175)-0.0029 \ln (m_h/100{\,{\rm GeV}})\end{aligned}$$ for values of the top mass, $M_t$, and of the Higgs mass, $m_h$, not far from $175{\,{\rm GeV}}$ and $100{\,{\rm GeV}}$. Additional very small non-factorizable terms are induced by electroweak box diagrams.[^2]
We stress that the SM value of ${s_{\rm W}}$ depends on $M_t$ and $m_h$, unless only the direct measurements of $M_W$ and $M_Z$ are used to compute it. In particular, for fixed values of $M_t$ and $m_h$, the $W$ mass can be very precisely determined from $G_F$, $\alpha(M_Z)$, and $M_Z$. To very good approximation one has (see e.g. [@degrassi]) $$M_W= 80.387 -0.058 \ln (m_h/100{\,{\rm GeV}}) -0.008 \ln^2 (m_h/100{\,{\rm GeV}})
+ 0.0062 (M_t/{\,{\rm GeV}}-175).\nonumber$$ Without including the NuTeV results, the latest SM global fit of precision observables gives $M_t=176.1\pm 4.3{\,{\rm GeV}}$, $m_h=87^{+51}_{-34} {\,{\rm GeV}}$, from which one obtains $M_W=80.400\pm0.019{\,{\rm GeV}}$ [@grunewald], and therefore ${s_{\rm W}}^2 = 0.2226\pm0.0004$. The values of $g_L^2$ and $g_R^2$ corresponding to the best fit are $0.3042$ and $0.0301$, respectively. The small red ellipses in fig. \[plot1\] show the SM predictions for $g_L^2$ and $g_R^2$ at $68\%$ and $99\%$ CL, while the bigger yellow ellipses are the NuTeV data at $68\%$, $90\%$ and $99\%$ CL. While $g_R^2$ is in agreement with the SM, $g_L^2$ shows a discrepancy of about $2.5\sigma$.
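The best-fit numbers quoted above can be reproduced from the approximate formulas of this section alone; the following sketch (ours) assumes those approximations are accurate enough for a four-digit comparison.

```python
import math

# Reproduce the quoted best-fit M_W, sin^2(theta_W) and the corrected
# g_L^2, g_R^2 from the approximate formulas in the text. Sketch only.
Mt, mh = 176.1, 87.0   # best-fit top and Higgs masses in GeV
MZ = 91.1875           # pole Z mass in GeV

lh = math.log(mh / 100.0)
MW = 80.387 - 0.058 * lh - 0.008 * lh**2 + 0.0062 * (Mt - 175.0)
sw2 = 1.0 - MW**2 / MZ**2                    # on-shell definition

rho = 1.0086 + 0.0001 * (Mt - 175.0) - 0.0006 * lh
k = 1.0349 + 0.0004 * (Mt - 175.0) - 0.0029 * lh

gL2 = rho**2 * (0.5 - sw2 * k + 5.0 / 9.0 * sw2**2 * k**2)
gR2 = rho**2 * 5.0 / 9.0 * sw2**2 * k**2
# gL2 ≈ 0.3042 and gR2 ≈ 0.0301, matching the best-fit values in the text
```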
Here we have adopted the on-shell definition of ${s_{\rm W}}$ because it is well known that with this choice the electroweak radiative corrections to $g_{L}^2$ cancel to a large extent. In fact, at first order in $\delta\rho\equiv \rho-1$ and $\delta k\equiv k-1$, $g_L^2$ gets shifted by $\delta g_L^2\approx (2 \,\delta \rho- 0.551
\,\delta k) g_L^2$ and the leading quadratic dependence on $M_t$ is the same for $ \delta \rho$ and $ \delta k\,{s_{\rm W}}^2/c_{\rm W}^2 $. Therefore, the top mass sensitivity of $g_L^2$ is very limited when this effective coupling is expressed in terms of ${s_{\rm W}}^2=1-M_W^2/M_Z^2$. As leading higher order electroweak corrections are usually related to the high value of the top mass [@paolo], higher order corrections cannot have any relevant impact on the discrepancy between the SM and NuTeV.
Within the SM, one can extract a value of ${s_{\rm W}}^2$ from the NuTeV data. This is performed by the NuTeV collaboration using a fit to ${s_{\rm W}}^2$ and the effective charm mass that is used to describe the charm threshold. This fit is different from the one that gives $g_{L,R}^2$. The result is $m_c^{\rm eff}=1.32\pm 0.11$ GeV and $$\begin{aligned}
\label{eq:stwres}
{s_{\rm W}}^2&=&0.2276\pm0.0013~{\rm (stat.)}\pm0.0006~{\rm
(syst.)}\pm0.0006~{\rm (th.)}\\
&& - 0.00003 (M_t/{\,{\rm GeV}}-175) + 0.00032 \ln (m_h/100{\,{\rm GeV}}) .
\nonumber\end{aligned}$$ Here the systematic uncertainty includes all the sources of experimental systematics, such as those mentioned above, and it is estimated by means of a Monte Carlo simulation of the experiment. The theoretical uncertainty is almost entirely given by QCD corrections, to be discussed in section 3. The total uncertainty above is about 3/4 of that in the preliminary result from NuTeV ${s_{\rm W}}^2=0.2255 \pm 0.0021$ [@nutold] and about 1/2 of that in the CCFR result ${s_{\rm W}}^2=0.2236 \pm 0.0035$ [@CCFR].
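The comparison of error sizes made above is a simple quadrature sum; the following arithmetic (ours, not code from the analysis) checks the quoted ratios.

```python
# Combine the quoted NuTeV uncertainties in quadrature and compare with
# the errors of the earlier determinations. Illustrative arithmetic only.
err_stat, err_syst, err_th = 0.0013, 0.0006, 0.0006
err_tot = (err_stat**2 + err_syst**2 + err_th**2) ** 0.5   # ≈ 0.0016

ratio_prelim = err_tot / 0.0021   # vs the preliminary NuTeV error: ≈ 3/4
ratio_ccfr = err_tot / 0.0035     # vs the CCFR error: ≈ 1/2
```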
Alternatively, by equating eq. (\[ewcorr\]) to the NuTeV results for $g_{L,R}^2$, we find $$\begin{aligned}
\label{eq:sWnuTeV}
{s_{\rm W}}^2\hbox{(NuTeV})=
0.2272 \pm 0.0017 \pm 0.0001 \hbox{ (top)} \pm 0.0002
\hbox{ (Higgs)}.\end{aligned}$$ The central value and the errors have been computed using the best global fit for $M_t$ and $m_h$. Eq. (\[eq:sWnuTeV\]) has a slightly larger error but is very close to eq. (\[eq:stwres\]). An additional difference between the two determinations is that eq. (\[eq:sWnuTeV\]) is based on an up-to-date treatment of higher order effects. Notice that the NuTeV error is much larger than in the global fit given above, from which eq. (\[eq:stwres\]) differs by about 3$\sigma$ and eq. (\[eq:sWnuTeV\]) by about $2.6\sigma$. The NuTeV result for ${s_{\rm W}}^2$ can also be re-expressed in terms of $M_W$. If we then compare with $M_W=80.451\pm 0.033{\,{\rm GeV}}$ from direct measurements at LEP and the Tevatron, the discrepancy is even larger: more than $3\sigma$ in both cases. The inclusion of the NuTeV data in a global fit shifts the preferred $m_h$ value very slightly, but worsens the fit significantly ($\chi^2=30$ for 14 degrees of freedom).
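The quoted significances follow from a back-of-the-envelope Gaussian combination; the sketch below (ours) treats all errors as Gaussian and adds them in quadrature, which is an approximation.

```python
# Significance of the NuTeV sin^2(theta_W) results relative to the
# global SM fit without NuTeV. Illustrative Gaussian arithmetic only.
sw2_fit, err_fit = 0.2226, 0.0004             # global fit without NuTeV

sw2_a = 0.2276                                 # eq. (eq:stwres)
err_a = (0.0013**2 + 0.0006**2 + 0.0006**2) ** 0.5
sig_a = (sw2_a - sw2_fit) / (err_a**2 + err_fit**2) ** 0.5   # ≈ 3.1

sw2_b, err_b = 0.2272, 0.0017                  # eq. (eq:sWnuTeV), main error
sig_b = (sw2_b - sw2_fit) / (err_b**2 + err_fit**2) ** 0.5   # ≈ 2.6
```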
Even without including NuTeV data, the global SM fit has a somewhat low goodness-of-fit, 8% if naïvely estimated with a global Pearson $\chi^2$ test. The quality of the fit becomes considerably worse if only the most precise data are retained [@altarelli]. Indeed, among the most precise observables, the leptonic asymmetries measured at LEP and SLD and $M_W$ point to an extremely light Higgs, well below the direct exclusion bound $m_h> 115 {\,{\rm GeV}}$, while the forward-backward hadronic asymmetries measured at LEP prefer a very heavy Higgs (for a detailed discussion, see [@altarelli; @chanowitz]). The effective leptonic couplings measured by the hadronic asymmetries differ by more than 3$\sigma$ from those measured by purely leptonic asymmetries. Therefore, the discrepancy between NuTeV and the other data also depends on how this other discrepancy is treated. For instance, a fit which excludes the hadronic asymmetries has a satisfactory goodness-of-fit, but $m_h=40{\,{\rm GeV}}$ as best-fit value. In this case, the SM central values for $g_{L,R}^2$ are 0.3046 and 0.0299, and differ even more from the NuTeV measurements. On the other hand, even a very heavy Higgs would not resolve the anomaly: to explain the NuTeV result completely, $m_h$ should be as heavy as 3 TeV, deep in the non-perturbative regime. The preference of the NuTeV result for a heavy Higgs is illustrated in fig. \[plot1\], where we display the point corresponding to the SM predictions with $m_h=500{\,{\rm GeV}}$ and $m_t=175{\,{\rm GeV}}$. This suggests that, as will be seen more clearly in the following, the NuTeV central value cannot be explained by radiative corrections.
QCD corrections
===============
Most of the quoted theoretical error on the NuTeV determination of ${s_{\rm W}}^2$ is due to QCD effects. Yet, this uncertainty does not include some of the assumptions on which the Paschos–Wolfenstein relation, eq. (\[eq:PW\]), is based. Hence, one may ask: first, whether some source of violation of the Paschos–Wolfenstein relation which has not been included in the experimental analysis can explain the observed discrepancy, and second, whether some of the theoretical uncertainties might actually be larger than estimated in [@NuTeV].
A full next–to–leading order (NLO) treatment of neutrino deep–inelastic scattering is possible, since all the relevant coefficient functions have long been known [@furpet]. If no assumption on the parton content of the target is made, including NLO corrections, the Paschos–Wolfenstein ratio eq. (\[eq:PW\]) becomes $$\begin{aligned}
R_{\rm PW}&=&g_L^2- g_R^2 \nonumber +{(u^--d^-)+(c^--s^-)\over{\cal Q}^-}
\Bigg\{\left[\frac{3}{2}(g_{Lu}^2-g_{Ru}^2) +
\frac{1}{2}(g_{Ld}^2-g_{Rd}^2)\right]+\\
&&+\frac{\alpha_s}{2\pi}(g_L^2-
g_R^2)(\frac{1}{4}\delta C^1-\delta C^3) \Bigg\}
+O({\cal Q}^-)^{-2}\label{eq:PWNLO}\end{aligned}$$ The various quantities which enter eq. (\[eq:PWNLO\]) are defined as follows: $\alpha_s$ is the strong coupling; $\delta C^1\equiv C^1-C^2$, $\delta C^3\equiv C^3-C^2$; $C^i$ is the second moment of the next–to–leading contributions to the quark coefficient functions for structure function $F^i$; $ q^-\equiv q-\bar q$; ${\cal Q}^-\equiv (u^-+d^-)/{2}$; $u$, $d$, and so on are second moments of the corresponding quark and antiquark distributions. We have expanded the result in powers of ${1/
{\cal Q}^-}$, since we are interested in the case of targets where the dominant parton is the isoscalar ${\cal Q}^-=(u^-+d^-)/2$. Equation (\[eq:PWNLO\]) shows the well-known fact that the Paschos-Wolfenstein relation is corrected if either the target has an isotriplet component (i.e. $u\not=d$) or sea quark contributions have a $C$-odd component (i.e. $s^-\not=0$ or $c^-\not=0$). Furthermore, NLO corrections only affect these isotriplet or $C$-odd terms.
Let us now consider these corrections in turn. Momentum fractions are scale dependent; in the energy range of the NuTeV experiment ${\cal Q}^-\approx 0.18$ [@MRST], with better than 10% accuracy, so that $\left[\frac{3}{2}(g_{Lu}^2-g_{Ru}^2) +
\frac{1}{2}(g_{Ld}^2-g_{Rd}^2)\right]/{\cal Q}^-\approx 1.3$. Hence, a value $(u^--d^-)+(c^--s^-)\approx-0.0038$ is required to shift the value of ${s_{\rm W}}^2$ by an amount equal to the difference between the NuTeV central value, eq. (\[eq:stwres\]), and the global SM fit.
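The coefficient $\approx 1.3$ and the required asymmetry can be checked with a few lines of arithmetic; the sketch below (ours) uses ${\cal Q}^-=0.18$ and $\sin^2\theta_{\rm W}=0.2226$ as quoted in the text, and also evaluates the shift induced by the model value $s^-\approx 0.002$ discussed next.

```python
# Sensitivity of the extracted sin^2(theta_W) to isotriplet / C-odd
# second moments, following eq. (eq:PWNLO) at leading order. Sketch only.
sw2 = 0.2226
gLu2 = (0.5 - 2.0 / 3.0 * sw2) ** 2
gRu2 = (2.0 / 3.0 * sw2) ** 2
gLd2 = (-0.5 + 1.0 / 3.0 * sw2) ** 2
gRd2 = (1.0 / 3.0 * sw2) ** 2

Qminus = 0.18  # isoscalar valence momentum fraction at the NuTeV scale
coeff = (1.5 * (gLu2 - gRu2) + 0.5 * (gLd2 - gRd2)) / Qminus   # ≈ 1.3

# R_PW = 1/2 - sw2, so the asymmetry needed to raise sw2 by the full
# NuTeV - SM difference (+0.0050) is:
delta_needed = -0.0050 / coeff     # ≈ -0.0038
# while s- ≈ 0.002 alone lowers the extracted sw2 by
d_sw2 = -coeff * 0.002             # ≈ -0.0027
```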
The NuTeV experiment uses an iron target, which has an excess of neutrons over protons of about 6%. This violation of isoscalarity is, however, known to good accuracy; it is included in the NuTeV analysis and gives a negligible contribution to the overall error. A further violation of isoscalarity could be due to the fact that isospin symmetry is violated by the parton distributions of the nucleon, i.e. $u^p\not=d^n$ and $u^n\not=d^p$. This effect is considered by NuTeV [@NuTeV], but not included in their analysis. Indeed, isospin in QCD is only violated by terms of order $(m_u-m_d)/\Lambda$, and thus isospin violation is expected to be smaller than 1% or so at the NuTeV scale (where the scale dependence is rather weak) [@ivln]. However, if one were to conservatively estimate the associated uncertainty by assuming isospin violation of the valence distribution to be at most 1% (i.e. $(u^--d^-)/{\cal Q}^-\leq0.01$), this would lead to a theoretical uncertainty on ${s_{\rm W}}^2$ of order $\Delta {{s_{\rm W}}^2}=0.002$. This more than threefold increase in the theoretical uncertainty would considerably reduce the significance of the NuTeV anomaly.
A $C$-odd second moment of heavier flavours, $s^-\not=0$ or $c^-\not=0$ is not forbidden by any symmetry of QCD, which only imposes that the first moments of all heavy flavours must vanish. Neither of these effects has been considered by NuTeV. A nonzero value of $c^-$ appears very unlikely since the perturbatively generated charm component has $c^-=0$ identically for all moments, and even assuming that there is an ‘intrinsic’ charm component (i.e. $c\not=0$ below charm threshold due to nonperturbative effects) it is expected to have vanishing $c^-$ [@intc] for all moments. On the contrary, because the relevant threshold is in the nonperturbative region, the strange component is determined by infrared dynamics and there is no reason why $s^-=0$. In fact, explicit model calculations [@ints] suggest $s^-\approx 0.002$. Whereas such a $C$-odd strange component was at first ruled out by CCFR dimuon data [@ccfrs][^3], a subsequent global fit to all available neutrino data found evidence in favor of a strange component of this magnitude and sign [@BPZ], and showed that it does not necessarily contradict the direct CCFR measurement. More recent measurements [@Goncharov] confirm the CCFR results in a wider kinematic region; however, the quantitative impact of these data on a global fit is unclear. Even though it is not included in current parton sets, a small asymmetry $s^-\approx 0.002$ seems compatible with all the present experimental information [@RGRp]. Assuming $s^-\approx 0.002$ as suggested by [@BPZ], the value of ${s_{\rm W}}^2$ measured by NuTeV is lowered by about $\delta {s_{\rm W}}^2 =0.0026$. The corresponding shift of the PW line is displayed in fig. \[plot1\]. This reduces the discrepancy between NuTeV and the SM to the level of about one and a half standard deviations (taking the NuTeV error at face value), thus eliminating the anomaly.
Since NLO corrections in eq. (\[eq:PWNLO\]) only affect the $C$-odd or isospin-odd terms, they are in practice a sub–subleading effect. Numerically, $\delta C^1- 4\,\delta C^3=16/9$, so NLO effects will merely correct a possible isotriplet or $C$-odd contribution by making it larger by a few percent. Therefore, a purely leading-order analysis of $R_{\rm PW}$ is entirely adequate, and neglect of NLO corrections should not contribute significantly either to the central value of ${s_{\rm W}}^2$ extracted from $R_{\rm PW}$ or to the error on it. It is important to realize, however, that this is not the case when considering the individual ratios $R_\nu$ and $R_{\bar\nu}$. Indeed, NLO corrections affect the leading–order expressions by terms proportional to the dominant quark component ${\cal Q}^-$, and also by terms proportional to the gluon distribution, which carries about 50% of the nucleon’s momentum. Therefore, one expects NLO corrections to $R_\nu$ and $R_{\bar
\nu}$ to be of the same size as NLO corrections to typical observables at this scale, i.e. around 10%. The impact of this on the values of $g^2_L$ and $g^2_R$, however, is difficult to assess: the NuTeV analysis makes use of a parton set which has been self–consistently determined fitting leading–order expressions to neutrino data, so part of the NLO correction is in effect included in the parton distributions. A reliable determination of $g^2_L$ and $g^2_R$ could only be obtained if the whole NuTeV analysis were consistently upgraded to NLO. As things stand, one should be aware that the NuTeV determination of $g^2_L$ and $g^2_R$, eq. (\[NuTeVgLgR\]), is affected by a theoretical uncertainty related to NLO which has not been taken into account and which may well be non-negligible. This uncertainty is however correlated between $g^2_L$ and $g^2_R$, and it cancels when evaluating the difference $g^2_L-g^2_R$.
On top of explicit violations of the Paschos–Wolfenstein relation, other sources of uncertainty are due to the fact that the experiment of course does not quite measure total cross–sections. Therefore, some of the dependence on the structure of the nucleon which cancels in ideal observables such as $R_\nu$ or $R_{\rm PW}$ remains in actual experimental observables. In order to estimate these uncertainties, we have developed a simple Monte Carlo which simulates the NuTeV experimental set-up. The Monte Carlo calculates integrated cross sections with cuts typical of a $\nu{\cal N}$ experiment, by using leading–order expressions. Because the Monte Carlo is not fitted self–consistently to the experimental raw data, it is unlikely to give an accurate description of actual data. However, it can be used to assess the uncertainties involved in various aspects of the analysis.
We have therefore studied the variation of the result for $R_{\rm PW}$ as several theoretical assumptions are varied, none of which affects the ideal observable $R_{\rm PW}$ but all of which affect the experimental results. First, we have considered the dependence on parton distributions. Although the error on parton distributions cannot really be assessed at present, it is unlikely to be much larger than the difference between leading–order and NLO parton sets. We can study this variation by comparing the CTEQL and CTEQM parton sets [@cteq5]. We also compare results to those of the MRST99 set [@MRST]. We find extremely small variation for $R_{\rm PW}$ and small variations even for the extraction of $g_{L,R}^2$. Specific uncertainties which may significantly affect neutrino cross sections are the relative size of the up and down distributions at large $x$ [@udrat; @Kuhlmann:1999sf] and the size of the strange and charm components [@chsiz]. Both have been explored by MRST [@MRST], who provide parton sets in which each of these features is varied. Using these parton sets, we find no significant variation of the predicted $R_\nu$, $R_{\bar \nu}$, and of the extracted $g_{L,R}^2$. If, on the contrary, we relax the assumption $s=\bar s$, which is implicit in all these parton sets, we find a shift of $R_{\rm PW}$ in good agreement with eq. (\[eq:PWNLO\]). This conclusion appears to be robust, and only weakly affected by the choice of parton distributions and by the specific $x$-dependence of the $s-\bar{s}$ difference, provided the second moment of $s-\bar{s}$ is kept fixed. The lower cut ($20 {\,{\rm GeV}}$) imposed by NuTeV on the energy deposited in the calorimeter tends to decrease the sensitivity to the asymmetry $s^-$, as it mostly eliminates high-$x$ events. However, this effect is relevant only for lower energy neutrinos, below about 100 GeV, and should be small in the case of NuTeV.
The dependence on the choice of parton distributions is shown in fig. \[plot1\] where blue $\times$ (red $+$) crosses correspond to MRST99 (CTEQ) points. We cannot show a NuTeV value, because we could not access the parton set used by NuTeV. The results are seen to spread along the expected PW line. The intercept of this line turns out to be determined by the input value of $g_L^2-g_R^2$, and to be completely insensitive to details of parton distributions. However, it should be kept in mind that inclusion of NLO corrections might alter significantly these results, by increasing the spread especially in the direction along the PW line, for the reasons discussed above.
Finally, we have tried to vary the charm mass, and to switch on some higher twist effects (specifically those related to the nucleon mass). In both cases the contributions to the uncertainty which we find are in agreement with those of NuTeV.
Oblique corrections {#oblique}
===================
After our review of the SM analysis, let us proceed with a discussion of possible effects of physics beyond the SM. We first concentrate on new physics which is characterized by a high mass scale and couples only or predominantly to the vector bosons. In this case its contributions can be parameterized in a model independent way by three ([*oblique*]{}) parameters. Among the several equivalent parameterizations [@others], we adopt $\epsilon_1,\epsilon_2,\epsilon_3$ [@eps]. Many models of physics beyond the SM can be studied at least approximately in this simple way.
Generic contributions to $\epsilon_1,\epsilon_2,\epsilon_3$ shift $g_{L,R}^2$ according to the approximate expressions $$\frac{\delta g_L^2}{ g_L^2}= 2.8 \,\delta \epsilon_1 -1.1 \,\delta
\epsilon_3;
\ \ \ \ \ \
\frac{\delta g_R^2}{ g_R^2}= -0.9 \,\delta \epsilon_1 +3.7 \,\delta
\epsilon_3.
\label{eps}$$ Of course, the $\epsilon_i$ parameters are strongly constrained by electroweak precision tests. In order to see if this generic class of new physics can give rise to the NuTeV anomaly, we extract the $\epsilon_i$ parameters directly from a fit to the electroweak data, without using the SM predictions for them. We use the most recent set of electroweak observables, summarized in table \[tab:data\], properly taking into account the uncertainties on $\alpha_{\rm
em}(M_Z)$ and $\alpha_{\rm s}(M_Z)$. The result is a fit to the $\epsilon_i$ very close to the one reported in [@altarelli], which we use in eq. (\[eps\]) after normalizing to the SM prediction at a reference value of $m_h$. The ellipses corresponding to the $68\%$ and $99\%$ CL are displayed in fig. \[fig:plot2\] (green, almost horizontal ellipses). They are centred roughly around the SM best-fit point, because the SM predictions for $\epsilon_1$ and $\epsilon_3$ for $m_h\approx 100$ GeV are in reasonable agreement with the data (see also section 2). The difference between the best-fit point and the light-Higgs SM prediction for $(g_L^2,g_R^2)$ is much smaller than the NuTeV accuracy. Notice that, as mentioned in section 2, excluding the hadronic asymmetries from the fit would make an oblique explanation even harder.
Our conclusion is that oblique corrections cannot account for the NuTeV anomaly, as they can absorb only about $1\sigma$ of the $\sim3\sigma$ discrepancy.
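This conclusion can be illustrated with the approximate shifts of eq. (\[eps\]); in the sketch below (ours), the bound $|\delta\epsilon_i|\lesssim 0.001$ is our stand-in for the typical $1\sigma$ range allowed by the global fit, so the numbers are indicative only.

```python
# How much of the NuTeV deficit in g_L^2 could oblique corrections absorb?
# Assumed bound |delta eps_i| < 0.001 is illustrative, not from the fit.
gL2_sm, gL2_nutev = 0.3042, 0.3005
needed = (gL2_nutev - gL2_sm) / gL2_sm     # ≈ -0.012, i.e. a -1.2% shift

eps_max = 0.001
# most favorable signs in eq. (eps): delta eps_1 < 0 and delta eps_3 > 0
best_shift = -(2.8 + 1.1) * eps_max        # ≈ -0.004, roughly 1/3 of needed
```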
$$\begin{array}{rclrl}
G_{\rm F} &=& 1.16637~10^{-5}/{\,{\rm GeV}}^2& & \hbox{Fermi constant for $\mu$ decay}\\
M_Z &=& 91.1875 {\,{\rm GeV}}& &\hbox{pole $Z$ mass} \\
m_t &=& (174.3\pm5.1){\,{\rm GeV}}& &\hbox{pole top mass}\\
\alpha_{\rm s}(M_Z) &=& 0.119\pm0.003 & &\hbox{strong coupling}\\
\alpha_{\rm em}^{-1}(M_Z) &=& 128.936\pm0.046 & & \hbox{electromagnetic coupling}\\
M_W &=& (80.451 \pm 0.033){\,{\rm GeV}}& 1.8\hbox{-}\sigma & \hbox{pole $W$ mass} \\
\Gamma_Z &=& (2.4952 \pm 0.0023){\,{\rm GeV}}& -0.4\hbox{-}\sigma & \hbox{total $Z$ width} \\
\sigma_h &=&(41.540 \pm 0.037)\hbox{nb}& 1.6\hbox{-}\sigma & \hbox{$e\bar{e}$ hadronic cross section at $Z$ peak}\\
R_\ell &=& 20.767 \pm 0.025 & 1.1\hbox{-}\sigma & \hbox{$\Gamma(Z\to \hbox{hadrons})/\Gamma(Z\to\mu^+\mu^-)$}\\
R_b &=& 0.21646 \pm 0.00065 & 1.1\hbox{-}\sigma & \hbox{$\Gamma(Z\to b \bar b)/\Gamma(Z\to \hbox{hadrons})$}\\
R_c &=& 0.1719 \pm 0.0031 & -0.1\hbox{-}\sigma & \hbox{$\Gamma(Z\to c \bar c)/\Gamma(Z\to \hbox{hadrons})$}\\
A_{LR}^e &=& 0.1513 \pm 0.0021 & 1.6\hbox{-}\sigma & \hbox{Left/Right asymmetry in $e\bar{e}$}\\
A_{LR}^b &=& 0.922 \pm 0.02 & -0.6\hbox{-}\sigma &
\hbox{LR Forward/Backward asymmetry in $e\bar{e}\to b\bar{b}$}\\
A_{LR}^c &=& 0.670 \pm 0.026 & 0.1\hbox{-}\sigma & \hbox{LR
FB asymmetry in $e\bar{e}\to c\bar{c}$}\\
A_{FB}^\ell &=& 0.01714 \pm 0.00095 & 0.8\hbox{-}\sigma & \hbox{Forward/Backward asymmetry in $e\bar{e}\to \ell\bar{\ell}$}\\
A_{FB}^b &=& 0.099 \pm 0.0017 & -2.8\hbox{-}\sigma & \hbox{Forward/Backward asymmetry in $e\bar{e}\to b\bar{b}$}\\
A_{FB}^c &=& 0.0685 \pm 0.0034 & -1.7\hbox{-}\sigma & \hbox{Forward/Backward asymmetry in $e\bar{e}\to c\bar{c}$}\\
Q_W &=& -72.5 \pm 0.7 & 0.6\hbox{-}\sigma & \hbox{atomic parity violation in Cs}\\
\end{array}$$
Corrections to gauge boson interactions {#couplings}
=======================================
We now discuss whether the NuTeV anomaly could be explained by modifying the couplings of the vector bosons. This possibility could work if new physics only affects the $\bar{\nu}Z\nu$ couplings, reducing the squared $\bar{\nu}_\mu Z \nu_\mu$ coupling by $(1.16\pm 0.42)\%$ [@NuTeV]. This shift is consistent with precision LEP data, which could not measure the $\bar{\nu}Z \nu$ couplings as accurately as other couplings (no knowledge of the LEP luminosity is needed to test charged lepton and quark couplings), and which found a $Z\to \nu\bar{\nu}$ rate $(0.53\pm 0.28)\%$ lower than the best-fit SM prediction. We could not construct a model that naturally realizes this intriguing possibility, because precision data test the $\bar{\mu} Z \mu$ and $\bar{\mu} W \nu_\mu$ couplings with per-mille accuracy. This generic problem is best understood by considering explicit examples. We first show that models where neutrinos mix with some extra fermion (thereby shifting not only the $\bar{\nu} Z \nu$ coupling, but also the $\bar{\ell} W \nu$ coupling) do not explain the NuTeV anomaly. Next, we discuss why a model where the $Z$ mixes with some extra vector boson (thereby shifting not only the $\bar{\nu} Z \nu$ coupling, but also the $Z$ couplings of other fermions) does not explain the NuTeV anomaly.
### Models that only affect the neutrino couplings {#models-that-only-affect-the-neutrino-couplings .unnumbered}
This happens e.g. in models where the SM neutrinos mix with right-handed neutrinos (a $1\%$ mixing could be naturally obtained in extra dimensional models or in appropriate 4-dimensional models [@NuMixing]). By integrating out the right-handed neutrinos, at tree level one obtains the low energy effective lagrangian $$\label{eq:nud}
{\mathscr{L}\,}_{\rm eff} = {\mathscr{L}\,}_{\rm SM} +\epsilon_{ij}~
2\sqrt{2}G_F (H^\dagger \bar{L}_i)i{\partial\!\!\!\raisebox{2pt}[0pt][0pt]{$\scriptstyle/$}}(HL_j),$$ where $i,j=\{e,\mu,\tau\}$, $L_i$ are the lepton left-handed doublets, $H$ is the Higgs doublet, and $\epsilon_{ij}=\epsilon_{ji}^*$ are dimensionless couplings. This peculiar dimension 6 operator only affects neutrinos. After electroweak symmetry breaking, it affects the kinetic term of the neutrinos, that can be recast in the standard form with a redefinition of the neutrino field. In this way, the $\bar{\nu}_i Z \nu_i$ and the $\bar{\nu}_i W \ell_i$ couplings become respectively $1-\epsilon_{ii}$ and $1-\epsilon_{ii}/2$ lower than in the SM ($\epsilon_{ii}$ is positive: gauge couplings of neutrinos get reduced if neutrinos mix with neutral singlets). The NuTeV anomaly would require $\epsilon_{\mu\mu}=0.0116\pm 0.0042$. However, a reduction of the $\bar{e} W \nu_e$ and $\bar{\mu} W\nu_\mu$ couplings increases the muon lifetime, that agrees at about the per-mille level with the SM prediction obtained from precision measurements of the electromagnetic coupling and of the $W$ and $Z$ masses. Assuming that no other new physics beyond the extra operator in eq. (\[eq:nud\]) is present, from a fit of the data in table \[tab:data\] we find that a flavour-universal $\epsilon_{ii}$ is constrained to be $\epsilon_{ii} = (0\pm 0.4)10^{-3}$. This bound cannot be evaded with flavour non universal corrections, that are too strongly constrained by lepton universality tests in $\tau$ and $\pi$ decays [@Pich]. In conclusion, $\epsilon_{\mu\mu}$ can possibly generate only a small fraction of the NuTeV anomaly.
In principle, the strong bound from muon decay could be circumvented by mixing the neutrinos with extra fermions that have the same $W$ coupling of neutrinos but a different $Z$ coupling. In practice, it is not easy to build such models.
### Models that only affect the $Z$ couplings {#models-that-only-affect-the-z-couplings .unnumbered}
Only the $Z$ couplings are modified, e.g., in models with an extra U(1) $Z'$ gauge boson that mixes with the $Z$ boson. The $Z'$ effects can be described by the $ZZ'$ mixing angle, $\theta$, by the $Z'$ boson mass, $M_{Z'}$, and by the $Z'$ gauge current $J_{Z'}$. At leading order in small $\theta$ and $M_Z/M_{Z'}$, the tree-level low energy lagrangian gets modified in three ways.
1. the SM prediction for the $Z$ mass gets replaced by $M_Z^2 =M_Z^{2\rm SM} - \theta^2 M_{Z'}^2$;
2. the $Z$ current becomes $J_Z = J_Z^{\rm SM} - \theta J_{Z'}$;
3. at low energy, there are the four fermion operators generated by $Z'$ exchange, beyond the ones generated by the $W_\pm$ and $Z$ bosons: $${\mathscr{L}\,}_{\rm eff}(E \ll M_Z,M_{Z'}) = -\frac{J_{W_+} J_{W_-}}{M_W^2} -\frac{1}{2} \bigg[ \frac{J_Z^2}{M_Z^2} +
\frac{J_{Z'}^2}{M_{Z'}^2}\bigg] + \cdots.$$
As discussed in section \[oblique\], (1) cannot explain the $\sim 1\%$ NuTeV anomaly. Here we show that the same happens also for (2): the $Z$ couplings are constrained by LEP and SLD at the per-mille level, and less accurately by atomic parity violation data, as summarized in table \[tab:data\]. However, the less accurate of these data have $\sim 1\%$ errors, and present some anomalies. The $Z\to \nu\bar{\nu}$ rate and the Forward/Backward asymmetries of the $b$ and $c$ quarks show a few-$\sigma$ discrepancy with the best-fit SM prediction. But the $Z\to b\bar{b}$ and $Z\to c\bar{c}$ branching ratios agree with the SM. The best SM fit, including also the NuTeV data [@NuTeV], has $\chi^2 \approx 30$ with 14 d.o.f. In this situation, it is interesting to study whether these anomalies could have a common solution with $Z$ couplings about $1\%$ different from the SM predictions. We therefore extract the $Z$ couplings directly from the data, without imposing the SM predictions for them. This kind of analysis has a general interest. Since we are here concerned with the NuTeV anomaly, we apply our results to compute the range of $(g_L^2, g_R^2)$ consistent with the electroweak data. We recall that both neutrino and quark couplings enter the determination of $g_L^2$ and $g_R^2$.
We assume that the $Z$ couplings are generation universal and $\SU(2)_L$ invariant as in the SM: we therefore extract from the data the 5 parameters $g_Q$, $g_{U}$, $g_D$, $g_L$ and $g_E$ that describe the $Z$ couplings to the five kinds of SM fermions listed in table \[tab:gAi\]. In the context of $Z'$ models, this amounts to assuming that the $Z'$ has generation-universal couplings that respect $\SU(2)_L$. The assumption of $\SU(2)_L$ invariance is theoretically well justified, although one could possibly invent some non-minimal model where it does not hold. On the contrary, the universality assumption has only a pragmatic motivation: we cannot make a fit with more parameters than data.
We obtain the result shown by the large red ellipse on the right side of fig. \[fig:plot2\]. This generic class of models gives a best-fit value close to the SM prediction. Although the error is much larger than in a pure SM fit, it does not allow a clean explanation of the NuTeV anomaly. We find that the global $\chi^2$ can be decreased by about 4 with respect to a SM fit: taking into account that we have five more parameters, this is not a statistically significant reduction,[^4] in agreement with similar older analyses [@e.g.].
One could generalize this analysis in several directions. For example, new physics could shift the on-shell $Z$ couplings tested at LEP and SLD differently from the low-energy $Z$ couplings relevant for NuTeV. Alternatively, there could be flavour-dependent shifts of the $Z$ couplings. This happens e.g. in the model considered in [@Roy], where it is suggested that the NuTeV anomaly could be reproduced by a mixing between the $Z$ boson and a $Z'$ boson coupled to the lepton flavour numbers $L_\mu - L_\tau$. However, this mixing also shifts the couplings of the charged $\tau$ and $\mu$ leptons, which are too precisely tested by LEP and SLD to allow for a significant fraction of the NuTeV anomaly.
Loop effects in the MSSM {#susy}
========================
It is well known that supersymmetric contributions to the electroweak precision observables decouple rapidly. Under the present experimental constraints it is very difficult to find regions of parameter space where radiative corrections can exceed a few per-mille. Explaining the NuTeV anomaly (a 1.2% discrepancy with the SM prediction for $g_L^2$) with low-energy supersymmetry looks hopeless from the start. Moreover, the dominant contributions to $\epsilon_1$ in the MSSM are always positive [@drees]. It then follows from eqs. (\[eps\]) that, in order to explain at least partially the measured value of $g_L^2$, the supersymmetric contributions to $\epsilon_3$ should be positive and of ${\cal O}(1\%)$.
An interesting scenario which can be easily investigated is the one recently proposed in [@altarelli] to improve the global fit to the electroweak data. As the main contributions of squark loops would be a positive shift in $\epsilon_1$, all squarks can be assumed heavy, with masses of the order of one TeV. Relatively large supersymmetric contributions are then provided by light gauginos and sleptons and can be parameterized in terms of only four supersymmetric parameters ($\tan\beta$, the Higgsino mass $\mu$, the weak gaugino mass $M_2$, and a supersymmetry-breaking mass of left-handed sleptons). The oblique approximation used in section \[oblique\] is not appropriate for light superpartners (sneutrinos can be as light as $50{\,{\rm GeV}}$). We therefore consider the complete supersymmetric one-loop corrections in this scenario (see [@altarelli] and refs. therein). Taking into account the various experimental bounds on the chargino and slepton masses, we find the potential shifts to $g_{L,R}^2$ shown in fig. \[fig:plot4\]. They are small and have the wrong sign. Low-energy supersymmetric loops cannot generate the NuTeV anomaly.
Non renormalizable operators {#NRO}
============================
Non renormalizable operators parameterize the effects of any new physics too heavy to be directly produced. As discussed in sections \[oblique\] and \[couplings\], new physics that affects the $Z,W_\pm$ propagators or couplings cannot fit the NuTeV anomaly without some conflict with other electroweak data. We now consider dimension six lepton-lepton-quark-quark operators that conserve baryon and lepton number.
We start from a phenomenological perspective, with $\SU(3) \otimes
{\rm U}(1)_{\rm em}$ invariant four-fermion vertices, and determine which vertices could explain the NuTeV anomaly without conflicting with other data. We then consider which $\SU(2)_L$ invariant operators generate the desired four-fermion vertices. In the following sections, we discuss new particles whose exchange could generate these operators.
Taking into account Fierz identities, the most generic Lagrangian that we have to consider can be written as $$\begin{aligned}
\nonumber
{\mathscr{L}\,}_{\rm eff} &=& {\mathscr{L}\,}_{\rm SM}-2 \sqrt{2} G_F\bigg[
\epsilon^{AB}_{ \bar{\ell}_i \ell_j \bar{q}_r q_s}
( \bar{\ell}_i \gamma^{\mu}P_A \ell_j)
(\bar{q}_r \gamma_{\mu}P_B q_s)+\\
&&+
\delta^{AB}_{\bar{\ell}_i \ell_j \bar{q}_r q_s}(\bar\ell_i P_A \ell_j)(\bar q_r P_B q_s)+
t^{AA}_{\bar{\ell}_i \ell_j \bar{q}_r q_s}
(\bar\ell_i \gamma^{\mu\nu} P_A \ell_j)(\bar q_r \gamma_{\mu\nu} P_A q_s)
\bigg].
\label{epsilon}\end{aligned}$$ where $\gamma_{\mu\nu}=\frac{i}{2}[\gamma_\mu , \gamma_\nu]$, $P_{R,L}\equiv (1\pm \gamma_5)/2$ are the right- and left-handed projectors, $q$ and $\ell$ are any quark or lepton, $A,B=\{L,R\}$ and $\epsilon$, $\delta$ and $t$ are dimensionless coefficients. In order to explain the NuTeV anomaly, new physics should give a negative contribution to $g_L^2$. This can be accomplished by
1. reducing the NC $\nu_\mu$-nucleon cross section. The new-physics operators must then contain left-handed first generation quarks.
2. increasing the CC $\nu_\mu$-nucleon cross section. The quarks do not need to be left-handed, and the quark in the final state need not be of the first generation.
We now show that ‘scalar’ operators (the ones with coefficients $\delta$) cannot explain NuTeV, left-handed ‘vector’ operators (with coefficient $\epsilon^{LL}$) can realize the first possibility and ‘tensor’ operators (with coefficient $t$) perhaps the second one.
The [**scalar operators**]{} with coefficient $\delta$ contribute to the charged current. In order to accommodate the NuTeV anomaly, these operators should appear with a relatively large coefficient $\delta \circa{>} 0.1$, since their contribution to CC scattering has only a negligible interference with the dominant SM amplitude. The interference is suppressed by fermion masses (for first generation quarks) or by CKM mixings (if a quark of higher generation is involved). For first generation quarks, this value of $\delta_{ \bar{\mu} \nu_\mu \bar{u} d }$ is inconsistent with $R_\pi$. When new physics-SM interference is included in this ratio, it becomes [@Shanker:1982nd] $$R_\pi\equiv \frac{\hbox{BR} (\pi\to e \bar{\nu}_e)}{\hbox{BR}(\pi\to \mu \bar{\nu}_\mu)} = R_\pi^{\rm SM}
\left[ 1 - 2 (\epsilon^{LL}_{ \bar{\mu} \nu_\mu \bar{u} d }- \epsilon^{LL}_{ \bar{e} \nu_e \bar{u} d }) -
\frac{2m_\pi^2}{ m_\mu(m_u +m_d)}
\delta^{LP}_{ \bar{\mu} \nu_\mu \bar{u} d } \right].
\label{pi}$$ The measured value, $R_\pi = (1.230
\pm 0.004)
\times 10^{-4}$ [@PDG], agrees with the SM prediction [@Marciano:1993sh; @Finkemeier:1996gi] $$R_\pi^{\rm SM}
= \frac{m_e^2 ( m_\pi^2 - m_e^2)^2}{m_\mu^2 ( m_\pi^2
- m_\mu^2)^2}(1-16.2 \frac{\alpha_{\rm em}}{\pi})$$ which implies $\delta_{ \bar{\mu} \nu_\mu \bar{u} d }\lsim 10^{-4}$. Furthermore, scalar operators which produce a $s,c$ or $b$ quark in the final state also cannot explain the NuTeV anomaly. The values of $\delta$ required would be in conflict with upper bounds on FCNC meson decays such as $K^+ \rightarrow \pi^+ \mu \bar{\mu}$, $D^+ \rightarrow \pi^+ \mu \bar{\mu}$, and $B^0 \rightarrow \mu \bar{\mu}$.
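As a quick numerical cross-check (a sketch of ours, not from the original analysis; the masses and $\alpha_{\rm em}$ are PDG inputs that we supply), the SM formula above indeed reproduces the measured $R_\pi$ at the few-per-mille level:

```python
from math import pi

# Numerical check of R_pi^SM = BR(pi -> e nu) / BR(pi -> mu nu) in the SM.
# Masses in GeV and alpha_em are external PDG inputs, not from the text.
m_e, m_mu, m_pi = 0.000511, 0.10566, 0.13957
alpha_em = 1 / 137.036

# Helicity-suppressed tree-level ratio times the O(alpha) radiative factor.
tree = (m_e**2 * (m_pi**2 - m_e**2)**2) / (m_mu**2 * (m_pi**2 - m_mu**2)**2)
R_pi_SM = tree * (1 - 16.2 * alpha_em / pi)

print(R_pi_SM)  # ~1.235e-4, vs the measured (1.230 +- 0.004)e-4
```

The strong helicity suppression, together with this level of agreement, is what makes $R_\pi$ such a sensitive probe of the scalar coefficients $\delta$.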
[**Vector operators**]{} can possibly generate the NuTeV anomaly if they are of $LL$ type. Assuming first generation quarks, the operators in eq. (\[epsilon\]) shift $g_L^2$ as $$\delta g_L^2 = 2\bigg[g_L^u\, \epsilon^{LL}_{\bar{\nu}_\mu \nu_\mu \bar{u} u} + g_L^d\, \epsilon^{LL}_{\bar{\nu}_\mu \nu_\mu \bar{d} d} - g_L^2\, \epsilon^{LL}_{\bar{\nu}_\mu \mu \bar{d} u}\bigg].
\label{DgL2}$$ The CC term, $\epsilon^{LL}_{ \bar{\nu}_\mu \mu \bar{d} u}$, cannot alone fit the NuTeV anomaly without overcontributing to the $\pi\to \mu\bar{\nu}_\mu$ decay. In principle, one could allow for cancellations between different contributions to $R_\pi$ in eq. (\[pi\]). However LEP [@LEP2combined] (and bounds from atomic parity violation [@LEPEWWG]) exclude the simplest possibility, $\epsilon^{LL}_{ \bar{\mu} \nu_\mu \bar{u} d }=\epsilon^{LL}_{ \bar{e} \nu_e \bar{u} d}$.
We now assume that these vector operators are generated by new physics heavier than the maximal energy of present colliders (about a few hundred GeV), and study the bounds from collider data. Operators involving second generation leptons are constrained by the Tevatron; LEP and HERA are not sensitive to them[^5]. In the case of vector operators, the Tevatron sets a limit [@TevatronContact] $|\epsilon^{LL}_{\bar{\mu} \mu \bar{q} q}|\circa{<}0.03$ ($q=u$ is slightly more constrained than $q=d$, because protons contain more $u$ than $d$ quarks). Assuming SU(2)$_L$ invariance (which relates the $\epsilon^{LL}_{\bar{\ell}\ell \bar{q} q}$ with $\ell = \{\mu,\nu_\mu\}$ and $q=\{u,d\}$, see below), the Tevatron bound is close to but consistent with the value suggested by NuTeV, $| \epsilon^{LL}_{\bar{\nu}_\mu\nu_\mu \bar{q} q} | \sim 0.01$.
[**Tensor operators**]{} could possibly produce the NuTeV anomaly via mechanism 2, because $\pi$-decays give no bound on $t_{ \bar{\mu} \nu_\mu \bar{u} d }$ (using only the $\pi$ momentum it is not possible to write any antisymmetric tensor). Tensor operators have not been studied in [@TevatronContact], but if they are generated by physics at a scale $\gg M_Z$, the value of $t_{ \bar{\mu} \nu_\mu \bar{u} d }\sim 0.1$ necessary to fit the NuTeV anomaly is within (and probably above) the sensitivity of present Tevatron data. Furthermore we do not know how new physics (e.g. exchange of new scalar or vector particles [@Pich:1995vj]) could generate only tensor operators, without also generating the scalar operators that overcontribute to $R_\pi$. We will therefore focus on vector operators.
We now consider $\SU(2)_L$-invariant operators. We have shown that the NuTeV anomaly could be explained by the four-fermion vertex $ ( \bar{\nu}_\mu \gamma^{\mu}P_L \nu_\mu)
(\bar{q}_1 \gamma_{\mu}P_L q_1)$. Only two $\SU(2)_L$ invariant operators can generate this vertex: $${\cal O}_{LQ} =[\bar{L} \gamma_\mu L][\bar{Q} \gamma_\mu Q], \qquad
{\cal O}_{LQ}' = [\bar{L} \gamma_\mu \tau^a L][\bar{Q} \gamma_\mu \tau^a Q].$$ We left implicit the $\SU(2)_L$ indices, on which the Pauli matrices $\tau^a$ act. Other possible 4 fermion operators, with different contractions of the $\SU(2)_L$ indices, can be rewritten as linear combinations of these two operators.
The NuTeV anomaly can be fit by ${\cal O}_{LQ}$ if it is present in ${\mathscr{L}\,}_{\rm eff}$ as $ (-0.024\pm 0.009)\, 2\sqrt{2} G_{\rm F}{\cal O}_{LQ}$, as discussed above. The operator $${\cal O}'_{LQ}= [\bar{\nu}_\mu \gamma_\mu \nu_\mu
-\bar{\mu}_L\gamma_\mu \mu_L][\bar{u}_L \gamma_\mu u_L -
\bar{d}_L
\gamma_\mu d_L] + 2[\bar{\mu}_L \gamma_\mu\nu_\mu][\bar{u}_L \gamma_\mu
d_L]+2[\bar{\nu}_\mu\gamma_\mu \mu_L][\bar{d}_L\gamma_\mu u_L]$$ can also fit the NuTeV anomaly. However, its CC part overcontributes to $\pi\to \mu\bar{\nu}_\mu$, giving a contribution to $\epsilon^{LL}_{ \bar{\mu} \nu_\mu \bar{u} d }$ about 10 times larger than allowed by $R_\pi$, see eq. (\[pi\]).
These operators could be induced e.g. by leptoquark or $Z'$ boson exchange, which we study in the following two sections. A critical difference between these possibilities is that leptoquarks must be heavier than about $200{\,{\rm GeV}}$ [@Abbott:2000ka; @Abe:1998it; @oggi], whereas a neutral $Z'$ boson could also be lighter than about $10 {\,{\rm GeV}}$ (see section \[Z’\]). Leptoquarks are charged and coloured particles that would be pair-produced at colliders, if kinematically possible. If the NuTeV anomaly is due to leptoquarks, their effects should be seen at run II of the Tevatron or at the LHC. If instead the NuTeV anomaly were due to a weakly coupled light $Z'$, it will not show up at Tevatron or LHC.
$$\begin{array}{clcccccc}
\hbox{LQ} &
\multicolumn{1}{c}{{\mathscr{L}\,}_{\rm eff}}& \delta g_L^2
& \epsilon_{\bar{\nu}_\mu\nu_\mu\bar{d}{d}}^{LL}
&\epsilon^{LL}_{\bar{\mu}\mu\bar{u}{u}}
& \epsilon_{\bar{\nu}_\mu\nu_\mu\bar{u}{u}}^{LL}
&\epsilon^{LL}_{\bar{\mu}\mu\bar{d}{d}}
& \epsilon_{\bar{\nu}_\mu \mu\bar{d}{u}}^{LL},\epsilon^{LL}_{\bar{\mu}\nu_\mu\bar{u}{d}}
\cr
\hline\hline S_0 &
\phantom{+}\frac{|\lambda|^2}{4m^2}({\cal O}_{LQ} - {\cal O}'_{LQ}) &
0.12 \alpha&
- \alpha/2 &- \alpha/2 & 0 & 0 & \alpha/2 \\ \hline
& & & 0 & 0 & - \alpha & 0 & 0 \\
S_1 &
\phantom{+}\frac{|\lambda|^2}{4m^2}({\cal O}'_{LQ} +3 {\cal
O}_{LQ}) &
0.03 \alpha& - \alpha/2 & - \alpha/2 & 0 & 0 & - \alpha/2\\
& & & 0 & 0 & 0 & - \alpha & 0 \\ \hline
V_0 &
-\frac{|\lambda|^2}{2m^2}({\cal O}'_{LQ} + {\cal O}_{LQ}) &
0.09 \alpha &
0 & 0 & \alpha & \alpha & \alpha \\ \hline
& & & 0 & 2 \alpha & 0 & 0 & 0 \\
V_1 &
\phantom{+} \frac{|\lambda|^2}{2m^2}({\cal O}'_{LQ} - 3{\cal O}_{LQ} )
&-0.40\alpha
& 0 & 0 & \alpha & \alpha & - \alpha \\
& & & 2 \alpha & 0 & 0 & 0 & 0
\end{array}$$
Leptoquarks {#LQ}
===========
Leptoquarks are scalar or vector bosons with a coupling to leptons and quarks. In this section, we consider leptoquarks which induce baryon and lepton number conserving four-fermion vertices.
The symmetries of the SM allow different types of leptoquarks, which are listed in [@LQ]. There are four leptoquarks that couple to $Q L$, so these are candidates to explain the NuTeV anomaly. They are the scalar $\SU(2)_L$ singlet ($S_0$) and triplet ($S_1^a$), and the vector $\SU(2)_L$ singlet ($V_{0\mu}$) and triplet ($V_{1\mu}^a$), with interaction Lagrangian $$\lambda_{S_0}\, [QL]\, S_0 + \lambda_{S_1}\, [Q \tau^a L]\, S_1^a + \lambda_{V_0}\, [\bar{Q} \gamma_\mu L]\, V_{0\mu} + \lambda_{V_1}\, [\bar{Q} \gamma_\mu \tau^a L]\, V_{1\mu}^a + \hbox{h.c.}$$ We do not speculate on how the above leptoquarks could arise in specific models.
Consider first the scalar $S_0$. The lower bound on leptoquark masses from the Tevatron is $200{\,{\rm GeV}}$ [@Abbott:2000ka; @Abe:1998it], therefore at NuTeV leptoquarks are equivalent to effective operators. Tree level exchange of $S_0$, with mass $m$ and coupling $\lambda [Q_1 L_2]S_0$ (1 and 2 are generation indices), induces the four-fermion operator $${\mathscr{L}\,}_{\rm eff} =\frac{|\lambda|^2}
{m^2} (Q_1L_2)(Q_1{L}_2)^{\dagger} = \frac{|\lambda|^2}
{4m^2}({\cal O}_{LQ} - {\cal
O}'_{LQ}).$$ The sign of the operator is fixed, because the coupling constant is squared. We see that $S_0$ cannot explain the NuTeV anomaly: it generates ${\cal O}_{LQ}$ with the wrong sign (it gives a positive contribution to $g_L^2$), and it also generates the unwanted operator ${\cal O}'_{LQ}$.
In the context of supersymmetric models without $R$-parity, $S_0$ can be identified with a $\tilde{D}^c_g$ squark of generation $g$ and superpotential interaction $\lambda'_{2g1} L_2 D^c_g Q_1$. It is interesting to explore further the possible contributions of $R$-parity violating squarks at NuTeV. In supersymmetric models, $\tilde{D}^c$ is accompanied by a scalar $\SU(2)_L$ doublet squark (leptoquark), $\tilde{Q}$. The exchange of $\tilde{Q}$ only modifies the right-handed coupling $g_R$, so that it cannot explain NuTeV by itself. Mixing of right- and left-handed squarks generates dimension seven operators. This mixing is usually, but not always, negligibly small (e.g. one can consider large $\tan\beta$, or non-minimal models). The relevant $\Delta L=2$ four-fermion operators are $$\lambda'_{ijk} \lambda'_{mnj}\, [\bar{d}_k P_L \nu_i][\bar{Q}^c_n P_L L_m] = \lambda'_{ijk} \lambda'_{mnj}\, [\bar{d}_k P_L \nu_i][\bar{u}^c_n P_L e_m - \bar{d}^c_n P_L \nu_m].$$ These operators cannot account for the NuTeV anomaly: they do not interfere with the SM amplitude and contribute to both NC and CC, leading to a positive correction to $g_L$.
In table \[tab:LQ\], we list the effective four-fermion operators, and the contribution to $g_L^2$, of $S_0$, $S_1$, $V_0$ and $V_1$. In the ${\mathscr{L}\,}_{\rm eff}$ column we have assumed that the members of triplet leptoquarks are degenerate. Only the vector $\SU(2)_L$ triplet leptoquark gives a negative contribution to $g_L^2$. In all cases ${\cal O}_{LQ}$ is generated together with the unwanted ${\cal O}'_{LQ}$ operator, that overcontributes to the $\pi\to \mu\bar{\nu}_\mu$ decay, as discussed in the previous section. These features are also shown in fig. \[fig:plot3\], where we plot the deviations from the SM prediction induced by the $S_0$, $S_1$, $V_0$, $V_1$ leptoquarks imposing that they should not overcontribute to $R_\pi$ by more than $1\sigma$.
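The $\delta g_L^2$ column of table \[tab:LQ\] can be reproduced numerically. The sketch below is ours: it assumes the shift takes the form $\delta g_L^2 = 2\,(g_L^u \epsilon^{LL}_{\bar{\nu}\nu\bar{u}u} + g_L^d \epsilon^{LL}_{\bar{\nu}\nu\bar{d}d} - g_L^2\, \epsilon_{\rm CC})$, with tree-level couplings $g_L^q = T_3^q - Q_q s_W^2$ and $s_W^2 \approx 0.2277$ supplied by us, and sums the split rows of the degenerate triplets:

```python
# Reproduce the delta g_L^2 column of the leptoquark table (our sketch).
# Assumed shift: delta gL2 = 2*(gLu*eps_uu + gLd*eps_dd - gL2*eps_CC),
# with tree-level SM couplings gLq = T3 - Q*s2w and s2w ~ 0.2277.
s2w = 0.2277
gLu = 0.5 - (2 / 3) * s2w
gLd = -0.5 + (1 / 3) * s2w
gL2 = gLu**2 + gLd**2

# (eps_uu, eps_dd, eps_CC) in units of alpha, summing split triplet rows.
lq = {
    "S0": (0.0, -0.5, 0.5),
    "S1": (-1.0, -0.5, -0.5),
    "V0": (1.0, 0.0, 1.0),
    "V1": (1.0, 2.0, -1.0),
}
results = {
    name: round(2 * (gLu * e_uu + gLd * e_dd - gL2 * e_cc), 2)
    for name, (e_uu, e_dd, e_cc) in lq.items()
}
print(results)  # {'S0': 0.12, 'S1': 0.03, 'V0': 0.09, 'V1': -0.4}
```

The output matches the $0.12\alpha$, $0.03\alpha$, $0.09\alpha$ and $-0.40\alpha$ entries of the table, and makes explicit why only $V_1$ pushes $g_L^2$ in the direction required by NuTeV.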
In the subsequent columns of table \[tab:LQ\] we generalize the effective Lagrangian assuming that $\SU(2)_L$ breaking effects split the triplets in a general way. In this situation, the scalar and vector $\SU(2)_L$ triplet leptoquarks can explain the NuTeV anomaly. In the scalar (vector) case, NuTeV can be fit by reducing the mass of the triplet member that induces $\epsilon^{LL}_{ \bar{\nu}_2 \nu_2 \bar{u} u}$ ($\epsilon^{LL}_{ \bar{\nu}_2 \nu_2 \bar{d} d}$), by a factor of $\sqrt{2 } $. From [@rho], we expect that such split multiplets are consistent with precision electroweak measurements.
We conclude that the NuTeV anomaly cannot be generated by $\SU(2)_L$ singlet or doublet leptoquarks, or by triplet leptoquarks with degenerate masses. However, triplet leptoquarks with carefully chosen mass splittings between the triplet members can fit the NuTeV data — and this explanation should be tested at Run II of the Tevatron or at LHC.
Unmixed extra U(1) $Z'$ boson {#Z'}
=============================
The sign of the dimension 6 lepton/quark operators generated by an extra $Z'$ vector boson depends on the lepton and quark charges under the ${\rm U}(1)'$ gauge symmetry. Therefore, with generic charges, it is possible to generate a correction to neutrino/nucleon scattering with the sign suggested by the NuTeV anomaly. In order to focus on theoretically appealing $Z'$ bosons, we require that
- Quark and lepton mass terms are neutral under the extra ${\rm U}(1)'$. We make this assumption because experimental bounds on flavour and CP-violating processes suggest that we do not have a flavour symmetry at the electroweak scale.
- The $Z'$ couples only to second-generation leptons. Bounds from (mainly) LEP2 [@LEP2combined] and older $e\bar{e}$ colliders would prevent explaining the NuTeV anomaly in the presence of couplings to first generation leptons. We avoid couplings to third generation leptons just for simplicity.
- The extra ${\rm U}(1)'$ does not have anomalies that require extra light fermions charged under the SM gauge group.
The only gauge symmetry that satisfies these conditions is $B-3L_\mu$ (for related work see [@chang]), where $B$ is the baryon number and $L_\mu$ is the muon number.[^6] Under these restrictions, the sign of the $Z'$ correction to neutrino/nucleon scattering is fixed, and this $Z'$ allows a fit of the NuTeV anomaly. In fact, the four-fermion operators generated by $Z'$ exchange are $$\begin{aligned}
{\mathscr{L}\,}_{Z'} &=& -\frac{g_{Z'}^2}{2(M_{Z'}^2-t)}\bigg[\bar{Q}\gamma_\mu Q -\bar{U}^c \gamma_\mu U^c
-\bar{D}^c\gamma_\mu D^c -9
\bar{L}_2 \gamma_\mu L_2+9\bar{E}_\mu^c \gamma_\mu E_\mu^c\bigg]^2\end{aligned}$$ where $t$ is the momentum transfer: $t \sim - 20{\,{\rm GeV}}^2 $ at NuTeV. The best fit of the NuTeV anomaly is obtained for (see fig. \[fig:plot3\]) $$\label{eq:Z'x}
\sqrt{M_{Z'}^2-t}\approx g_{Z'}\,3{\,{\rm TeV}}.$$ We now discuss the experimental bounds on such a $Z'$.
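This scale can be checked with a rough estimate of our own ($G_F$ and $s_W^2$ are inputs we supply): the $Z'$ shifts the effective left-handed $\nu$-$u$ and $\nu$-$d$ couplings by a common amount $\epsilon$, with $|\epsilon| = 9 g_{Z'}^2/[2\sqrt{2}\,G_F (M_{Z'}^2-t)]$, so that $\delta g_L^2 = 2(g_L^u + g_L^d)\,\epsilon$. Solving for the NuTeV deficit $\delta g_L^2 \approx -0.0037$:

```python
from math import sqrt

# Solve for the scale sqrt(M_Z'^2 - t)/g_Z' that fits the NuTeV deficit
# (our sketch; GF and s2w are external inputs, not from the text).
GF = 1.16637e-5          # Fermi constant, GeV^-2
s2w = 0.2277
gLu = 0.5 - (2 / 3) * s2w
gLd = -0.5 + (1 / 3) * s2w

dgL2 = -0.0037           # NuTeV deficit in g_L^2 (~1.2% of ~0.30)
eps = abs(dgL2 / (2 * (gLu + gLd)))          # common shift of the nu-q couplings
scale = sqrt(9 / (2 * sqrt(2) * GF * eps))   # sqrt(M^2 - t)/g, in GeV

print(round(eps, 3), round(scale / 1e3, 1), "TeV")  # 0.024  3.3 TeV
```

The $\epsilon \approx 0.024$ obtained here matches the best-fit coefficient $-0.024\pm 0.009$ quoted for ${\cal O}_{LQ}$ earlier, and the scale is consistent with eq. (\[eq:Z'x\]).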
### Collider bounds {#collider-bounds .unnumbered}
The bounds from Tevatron [@TevatronZ'] $$\sigma(p\bar{p}\to Z'X\hbox{ at $\sqrt{s} = 1.8{\,{\rm TeV}}$})\hbox{BR}(Z'\to
\mu\bar{\mu}) < 40\,\hbox{fb}\quad(95\%\hbox{CL})$$ and LEP [@LEP1][^7] $$\hbox{BR}(Z\to \mu\bar{\mu}Z')\hbox{BR}(Z'\to \mu\bar{\mu})\circa{<}
\hbox{few}\times 10^{-6}$$ imply that $M_{Z'}$ cannot be comparable to $M_Z$. One needs either a light $Z'$, $M_{Z'}\circa{<} 10 {\,{\rm GeV}}$, or a heavy $Z'$, $M_{Z'}\circa{>}
600{\,{\rm GeV}}$ [@TevatronZ']. Perturbativity implies $M_{Z'}\circa{<} 5{\,{\rm TeV}}$.
### The anomalous magnetic moment of the muon {#the-anomalous-magnetic-moment-of-the-muon .unnumbered}
The $Z'$ gives a correction to the anomalous magnetic moment of the muon. Assuming $M_{Z'}\gg m_\mu$, we get $$a_\mu = a_\mu^{\rm SM} +\frac{27g_{Z'}^2}{4\pi^2}\frac{m_\mu^2}{M_{Z'}^2} = a_\mu^{\rm SM} +8.4~10^{-10} \bigg(\frac{3{\,{\rm TeV}}}{M_{Z'}/g_{Z'}}\bigg)^2.$$ At the moment, the $a_\mu$ measured by the $g-2$ collaboration [@g-2] is slightly higher than the SM prediction, $a_\mu^{\rm exp}- a_\mu^{\rm SM} = (20\pm 18)10^{-10}$, if one employs $a_\mu^{\rm had} = (697\pm 10)10^{-10}$ for the hadronic polarization contribution [@had] and $a_\mu^{\rm lbl} = (9\pm 2)\,10^{-10}$ for the light-by-light contribution [@lbl]. One could still prefer to estimate it as $a_\mu^{\rm lbl} = (0\pm 10)\,10^{-10}$, obtaining a larger discrepancy and error.
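The size of this shift is easy to verify numerically (a sketch of ours; the muon mass is an external input):

```python
from math import pi

# Z' contribution to the muon anomalous magnetic moment,
# delta a_mu = 27 g'^2 m_mu^2 / (4 pi^2 M_Z'^2), evaluated at M_Z'/g' = 3 TeV.
m_mu = 0.10566       # GeV
M_over_g = 3000.0    # GeV, the scale suggested by the NuTeV fit

delta_a_mu = 27 * m_mu**2 / (4 * pi**2 * M_over_g**2)
print(delta_a_mu)  # close to the 8.4e-10 quoted in the text
```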
Therefore, if $M_{Z'}/g_{Z'}\approx 3{\,{\rm TeV}}$, as suggested by the NuTeV anomaly in the heavy $Z'$ case, the $Z'$ correction to the $g-2$ is comparable to the sensitivity of present experiments. If instead $M_{Z'} \circa{<} 5{\,{\rm GeV}}$, the $Z'$ that fits the NuTeV anomaly gives a larger correction to the muon $g-2$, see eq. (\[eq:Z’x\]). For example, for $M_{Z'}\approx 3{\,{\rm GeV}}$ one can fit the central value of $a_\mu^{\rm exp} - a_\mu^{\rm SM}$. On the other hand, a $Z'$ lighter than $(1\div2){\,{\rm GeV}}$ cannot explain the NuTeV anomaly without overcontributing to $a_\mu$. Similar light $Z'$ models were proposed [@russi] as a possible source of the discrepancy between $a_\mu^{\rm exp}$ and previous SM computations [@g-2].
### Other bounds {#other-bounds .unnumbered}
Quantum corrections generate an unwanted kinetic mixing between the $Z'$ and the SM hypercharge boson [@Holdom]. A light $Z'$ needs a small gauge coupling $g_{Z'}$, making these quantum effects negligible.
The $Z'$ could contribute to the decay of $q \bar{q}$ mesons into $\bar{\mu} \mu$. This is negligibly small for $g_{Z'} \sim M_{Z'}/ 3{\,{\rm TeV}}$, unless $M_{Z'}$ is very close to the meson mass $m_{q \bar{q}}$. There are various $c \bar{c}$ mesons in the few GeV mass range, but $\Gamma_{Z'}$ is very narrow, so the $Z'$ is only ruled out in narrow windows $M_{Z'} \approx m_{q \bar{q}}
\pm \Gamma_{{q \bar{q}}}$ [@Bailey:1995qv].
The $Z'$ that can fit the NuTeV anomaly does not give significant corrections to rare $K$, $D$ and $B$ decays. Let us consider for example the $K^+\to \pi^+ \nu\bar{\nu}$ decay. This is a sensitive probe because the dominant SM $Z$ penguins that know that $\SU(2)_L$ is broken (and can therefore generate the FCNC vertex $m_t^2 [\bar{s}_L \gamma_\mu Z_\mu d_L]$, with GIM cancellations spoiled by the large top mass) are suppressed by the small mixing $V_{ts}^*V_{td}$. On the contrary, penguin loops of quarks that do not know that ${\rm U}(1)'$ is broken (that therefore only generate the $q^2[\bar{s}_L \gamma_\mu Z'_\mu d_L]$ operator, where $q$ is the $Z'$ momentum and the GIM cancellations is only spoiled by logarithms of quark masses) are suppressed only by the larger Cabibbo mixing $V_{cd}$. The $Z'$ suggested by NuTeV gives a negligible correction even to this favorable $K^+\to \pi^+\nu\bar{\nu}$ decay.[^8]
Ref. [@dimuon] claims a statistically significant hint of dimuon events (three events, versus an estimated background of $0.07\pm0.01$ events), possibly generated by some neutral long lived particle with production cross section $\sim 10^{-10}/{\,{\rm GeV}}^2$ of few GeV mass (see also [@dimuonint]). The $Z'$ suggested by NuTeV can have the right mass and cross section, but is not sufficiently long lived. However, it could partially decay in sufficiently long lived neutral fermions. Extra light neutral fermions are required to cancel gravitational anomalies and ${\rm U}(1)^{\prime 3}$ anomalies.
### $Z'$ burst and the GZK cutoff {#z-burst-and-the-gzk-cutoff .unnumbered}
The $Z'$ gives a narrow resonant contribution to $\nu\bar{\nu}$ scattering, which could perhaps generate ultrahigh energy cosmic ray events with $E \sim 10^{20}$ eV. The analogous $Z$ resonance [@ringwald] has been considered as a possible source of the observed events above the Greisen-Zatsepin-Kuzmin (GZK) cutoff [@GZK]. A cosmic ray neutrino that scatters with a nonrelativistic cosmic microwave background neutrino would encounter the $Z'$ resonance at the energy $$E_\nu^{Z'\,\rm res} = \frac{M_{Z'}^2}{2m_\nu} = 10^{20}{\,{\rm eV}}\, \bigg(\frac{M_{Z'}}{3{\,{\rm GeV}}}\bigg)^2 \frac{0.05{\,{\rm eV}}}{m_\nu}
\label{CR}$$ where we have used a neutrino mass suggested by atmospheric oscillation data [@SKatm]. A resonance at a $Z'$ mass of a few GeV is more suitable than a resonance on the $Z$, where a larger incident neutrino energy $E_\nu^{Z\,\rm res} =M_Z^2/2m_\nu \circa{>} 4 \times 10^{21}$ eV would be required, even if neutrinos are as heavy as possible, $m_\nu\circa{<} 1{\,{\rm eV}}$. The $Z$ burst scenario is problematic because it seems difficult to imagine a cosmological source that produces enough very energetic neutrinos without producing, at the same time, too many photons.
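The resonant energies in this paragraph follow from two-body kinematics on a relic neutrino at rest, $E_\nu^{\rm res} = M^2/2m_\nu$. A numerical sketch of ours (the masses are inputs we supply):

```python
# Resonant energy for a cosmic-ray neutrino annihilating on a relic
# neutrino at rest: E_res = M^2 / (2 m_nu). All energies in eV.
def e_res(M_eV, m_nu_eV):
    return M_eV**2 / (2 * m_nu_eV)

m_nu = 0.05  # eV, the scale suggested by atmospheric oscillation data
print(e_res(3e9, m_nu))     # Z' with M_Z' = 3 GeV: ~9e19 eV, just above the GZK cutoff
print(e_res(91.19e9, 1.0))  # Z resonance even for m_nu = 1 eV: ~4.2e21 eV
```

The comparison makes the point quantitative: a few-GeV $Z'$ sits at the GZK scale for the atmospheric mass scale, while the $Z$ burst needs either much heavier neutrinos or much more energetic primaries.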
Although the $Z'$ requires less energetic cosmic neutrinos than the $Z$, roughly the same total flux [@ringwald] is required in the two cases, because the $Z$ and the $Z'$ have comparable energy-averaged cross sections. In fact the NuTeV and $g-2$ data suggest $\Gamma_{Z'}/M_{Z'}\sim \Gamma_Z/M_Z$ and $$\sigma(\nu\bar{\nu}\to Z'\to f)
\approx \frac{12\pi^2\Gamma_{Z'}}{M_{Z'}}
\hbox{BR}(Z'\to f) \hbox{BR}(Z'\to \nu\bar{\nu}) \delta(s-M_{Z'}^2)$$ where $f$ is any final state. In conclusion, a $Z'$ burst could generate the observed cosmic rays above the GZK cutoff more easily than the $Z$ burst.
Summary
=======
We have studied the possible origin of the NuTeV anomaly. Our main results are:
- [**QCD effects**]{}. Because of the approximate use of the Paschos-Wolfenstein relation, the discrepancy between the NuTeV result and the SM prediction is fairly independent of which set of standard parton distribution functions is employed. This is no longer true if one drops some of the simplifying assumptions usually made in the PDF fits. A small asymmetry between the momentum carried by strange and antistrange quarks in the nucleon, suggested by a recent analysis of neutrino data [@BPZ], could explain half of the discrepancy between NuTeV and the SM. Such an asymmetry has been set to zero in more recent global parton sets, but the value suggested in [@BPZ] seems compatible with the data used in these fits. It would be desirable to update the analysis of [@BPZ] including more recent di-muon data from NuTeV [@Goncharov], though settling the issue is likely to require more detailed information, such as could be provided by a neutrino factory [@nufact]. A tiny violation of isospin symmetry of parton distributions, largely compatible with current data, would have a similar impact. Both these effects may have to be taken into account in the evaluation of the systematic error.
- [**Generic corrections to the propagators or couplings of the SM gauge bosons**]{} can only produce a small fraction of the NuTeV anomaly, as shown in fig. \[fig:plot2\]. In order to perform such a general analysis we have extracted the ‘oblique’ parameters and the SM gauge couplings directly from a fit of precision data, without imposing the SM predictions. We have assumed that the $Z$ couplings are generation universal and respect $\SU(2)_L$, as in the SM. In principle, the NuTeV anomaly could be explained by new physics that only shifts the $\bar{\nu} Z \nu$ couplings. However this situation is not realized by mixing the $Z$ with extra vector bosons, nor by mixing the neutrino with extra fermions.
- [**MSSM.**]{} Loop corrections in the MSSM have generally the wrong sign and are far too small to contribute significantly to the NuTeV observables.
- [**Contact operators**]{}. Dimension six quark-quark-lepton-lepton operators can fit the NuTeV anomaly consistently with other data. The desired operators are neutral current, left-handed four fermion vertices of the form $ \eta (\bar{\nu}_{\mu} \gamma^{\alpha} {\nu}_{\mu})
( \bar{q} \gamma_{\alpha} P_L q)$, where $q = \{u,d\}$. The coefficient $\eta$ must be of order $0.01 \times 2 \sqrt{2}
G_F$, and the sign is fixed by requiring a negative interference with the SM. Effects of these operators should be seen at run II of Tevatron, unless they are generated by very weakly coupled light particles. If one restricts the analysis to $\SU(2)_L$-invariant operators, only the operator $[\bar{Q}_1 \gamma^\alpha Q_1][\bar{L}_2 \gamma_\alpha L_2]$ can fit NuTeV.
- [**Leptoquarks**]{}. $\SU(2)_L$ singlet and triplet leptoquarks can induce these operators — but if the leptoquark masses are $\SU(2)_L$ degenerate, either the sign is wrong or other unacceptable operators are also generated. A $\SU(2)_L$ triplet leptoquark of spin one is a partial exception (see fig. \[fig:plot3\]) at least from a purely phenomenological perspective. Non degenerate triplet leptoquarks could fit the NuTeV results, but squarks in $R$-parity violating supersymmetry cannot.
- [**Extra U(1) vector bosons**]{}. A $Z'$ boson that does not mix with the $Z$ boson can generate the NuTeV anomaly (see fig. \[fig:plot3\]), if its gauge group is $B-3L_\mu$, the minimal choice suggested by theoretical and experimental inputs. The $Z'$ can be either heavy, $600{\,{\rm GeV}}\circa{<}M_{Z'}\circa{<} 5{\,{\rm TeV}}$, or light, $1{\,{\rm GeV}}\circa{<}M_{Z'}\circa{<}10{\,{\rm GeV}}$. The $Z'$ that fits the NuTeV anomaly also increases the muon $g-2$ by $\sim 10^{-9}$ and (if light) gives a $Z'$ burst to cosmic rays just above the GZK cutoff without requiring neutrino masses heavier than those suggested by oscillation data.
#### Acknowledgments
We thank E. Barone, G. Giudice, G. Isidori, M. Mangano, K. McFarland, R.G. Roberts, A. Rossi and I. Tkachev for useful discussions. A.S. thanks R. Rattazzi for his insight into ionization corrections to the $\mu^+$ range in iron. This work is partially supported by Spanish MCyT grants PB98-0693 and FPA2001-3031, by the Generalitat Valenciana under grant GV99-3-1-01, by the TMR network contract HPRN-CT-2000-00148 of the European Union, and by EU TMR contract FMRX-CT98-0194 (DG 12-MIHT). S.D. is supported by the Spanish Ministry of Education in the program “Estancias de Doctores y Tecnólogos Extranjeros en España” and P.G. by a EU Marie Curie Fellowship.
#### Note added
The $\nu_\mu$ ($\bar{\nu}_\mu$) NuTeV beam contains a $1.7\%$ contamination of $\nu_e$ ($\bar{\nu}_e$). A recent paper [@Giunti] suggests an explanation of the NuTeV anomaly assuming that $20\%$ of $\nu_e,\bar{\nu}_e$ oscillate into sterile neutrinos. The suggested oscillation parameters are not consistent with disappearance experiments (unless [Bugey]{} and [Chooz]{} underestimated the theoretical error on their reactor fluxes). Furthermore, NuTeV not only predicts the $\nu_e$ and $\bar{\nu}_e$ fluxes through a Monte Carlo simulation, but also measures them directly (see page 26 of the transparencies in [@NuTeV]). The agreement between the two determinations, at the few $\%$ level, contradicts the oscillation interpretation.
#### Note added after publication (June 2002)
In a recent publication [@new], the NuTeV collaboration investigated the effect of a strange quark asymmetry and of isospin violation on their electroweak result. Specifically, they claim that the strange quark asymmetry is severely constrained by the dimuon data of Ref. [@Goncharov], and that the effect of strange quark asymmetry and isospin violation might be considerably diluted due to the fact that NuTeV does not measure directly the Paschos-Wolfenstein ratio $R_{\rm PW}$, Eq. (\[eq:PW\]).
The claim that the dimuon (i.e. $\nu s\to \mu c$ scattering) data [@Goncharov; @new] provide evidence against the strange asymmetry $s^-\approx +2\times 10^{-3}$ of the BPZ global fit [@BPZ] (and in fact suggest a [*negative*]{} asymmetry $s^{-} = -(2.7\pm 1.3)\times 10^{-3}$ [@Goncharov; @new]) appears dubious, for the following reasons:
1. The parametrization of the strange and antistrange distributions assumed by NuTeV is unphysical, in that it violates the constraint that the proton and neutron carry no net strangeness. Furthermore, because it has too few free parameters, it artificially relates the $s/\bar{s}$ asymmetry at $x<0.5$ (where NuTeV have significant data) to the one at $x>0.5$ (where NuTeV have few events).
2. NuTeV did not make a global fit allowing a strange asymmetry, but rather fitted their dimuon data using a set of parton distributions based on pre-existing (now obsolete) fits, obtained neglecting NLO QCD corrections and optimized under the assumption $s=\bar{s}$. This is especially worrisome because the strange asymmetry found by NuTeV appears to depend very sensitively on the underlying set of parton distributions (see table I of [@Goncharov]).
3. The dimuon data are considerably less sensitive to $\bar s$ than to $s$. In fact, the claim [@Goncharov; @new] that a strange asymmetry at $x>0.5$ is excluded at high confidence level should rather be read as the statement that NuTeV rules out a [*total*]{} strangeness at $x>0.5$ of the magnitude found by BPZ [@BPZ]. However, what matters for the NuTeV anomaly is the strange [*asymmetry*]{}.
The BPZ global fit [@BPZ] is not subject to these drawbacks, but it did not include the recent dimuon data. The BPZ fit is characterized by a relatively large strange sea at large $x$, driven mainly by CDHSW data, which agrees well with positivity constraints derived from polarized DIS [@Forte:2001ph]. Hence, the only conclusion that can be drawn by comparing these two analyses is that the size of the large-$x$ strange sea suggested by CDHSW as analyzed by BPZ seems larger than allowed by the NuTeV data [@Goncharov]. The origin of this discrepancy, and its impact on the best-fit strange asymmetry, could only be assessed by performing a global NLO fit which includes all available data. Our statement in section 3 therefore remains unchanged: the impact of the data [@Goncharov] on the strange asymmetry is unclear.
NuTeV also comment on isospin violating parton distributions [@new], taking the model by Thomas et al. [@Thomas] as reference. This model predicts a small effect on ${s_{\rm W}}^2$, as a result of a subtle cancellation between high and low $x$ regions. This conclusion is model-dependent. The fact remains that ${\cal O}(1\%)$ isospin violation effects could generate the NuTeV anomaly, without conflicting with any other existing data.
Coming to the possible dilution of the strangeness asymmetry or isospin violation due to experimental cuts, we would like to point out that, contrary to what is stated in [@new], we did include charm threshold effects and some experimental cuts in our analysis. We found moderate dilution effects, as discussed in section 3 of this paper. On the other hand, we cannot simulate the full NuTeV experimental set-up. However, if what NuTeV really measures differs from $R_{\rm PW}$ in a way which is significant at the required level of accuracy, then the cancellation of NLO effects that occurs in the Paschos-Wolfenstein relation can no longer be taken for granted. In particular, NuTeV claims to be less sensitive to $R_{\bar{\nu}}$ than is $R_{\rm PW}$. In general, any asymmetry between charged-current and neutral-current events, or between $\nu$ and $\bar \nu$ events, spoils the cancellation of the NLO corrections in Eq. (\[eq:PWNLO\]). Such asymmetries can also be induced by experimental cuts and by different $\nu$, $\bar{\nu}$ spectra. If any of these effects were significant, only a full NLO analysis of the NuTeV data could provide a reliable determination of ${s_{\rm W}}^2$ at the desired level of accuracy and settle this issue.
[2]{}
[nn]{}
; K. McFarland, seminar available at the internet address www.pas.rochester.edu/\~ksmcf/NuTeV/seminar-only-fnaloct26.pdf
; ;
; ; , and refs. therein.
.
; ; ; ; .
; see also . For reviews see e.g. and . C. S. Wood, S. C. Bennett, D. Cho, B. P. Masterson, J. L. Roberts, C. E. Tanner and C. E. Wieman, [*Science*]{} [**275**]{} (1997) 1759.
; ; . .
.
.
.
.
.
.
.
erratum [*ibid.*]{} [**D31**]{}, 213 (1980).
and refs. therein.
.
M. Grünewald, private communication. See also [@LEPEWWG].
.
. W. Furmanski, R. Petronzio, [*Z. Phys.*]{} [**C11**]{} (1982) 293.
A. D. Martin, R. G. Roberts, W. J. Stirling and R. S. Thorne, [*Eur. Phys. J.*]{} [**C4**]{} (1998) 463; [*Eur. Phys. J. C*]{} [**14**]{} (2000) 133. S. Forte, [*Phys. Rev*]{} [**D47**]{} (1993) 1842; F. G. Cao and A. I. Signal, [*Phys. Rev.*]{} [**C62**]{} (2000) 015203.
S. J. Brodsky, P. Hoyer, C. Peterson and N. Sakai, [*Phys. Lett.*]{} [**B93**]{} (1980) 451.
;
.
.
R. G. Roberts [*private communication*]{}
.
.
.
.
; and erratum [*ibid.*]{} [**66**]{} (1990) 2967.
.
The LEP electroweak working group, see http://www.web.cern.ch/LEPEWWG.
See e.g.; and references therein.
For a review see e.g..
For a relatively recent analysis, performed when atomic parity violation seemed to contain an anomaly, see .
.
; ; .
.
; ; ;
.
.
.
.
.
.
.
and erratum [*ibid.*]{} [**B448**]{} (1987) 320.
See e.g..
.
; .
.
.
; See also .
.
.
.
.
; .
and erratum [*ibidem*]{} [**12**]{} (2000) 379; ; . For a recent review see e.g. and ref.s therein.
.
.
.
.
.
.
[^1]: On leave from INFN, Sezione di Torino, Italy
[^2]: QED radiative corrections depend sensitively on the experimental setup and are taken into account in the NuTeV analysis. The charged current amplitude receives a small residual renormalization which depends on the implementation of QED corrections and is therefore neglected in our analysis.
[^3]: Deep-inelastic charm production events are known as dimuon events because their experimental signature is a pair of opposite–sign muons. Since they mainly proceed through scattering of the neutrino off a strange quark, they are a sensitive probe of the strange distribution.
[^4]: On the contrary, if we allow for generation universal but non $\SU(2)_L$-invariant corrections of the $Z$ couplings to $u_L$ and $d_L$ and to $\nu$ and $e_L$, we get a statistically significant reduction in the global $\chi^2$ ($\Delta\chi^2 \approx 21$ with 7 more parameters), because various anomalies, including of course the NuTeV anomaly, can be explained in this artificial context. Without including the NuTeV data, the best fit regions in the ($g_L^2, g_R^2$) plane are shifted towards the NuTeV region.
[^5]: Both at NuTeV and at CCFR these operators may induce deviations from the SM values of $g_L,g_R$. Within their errors, the CCFR data are consistent with both the SM and with NuTeV, as clearly shown in fig. 3 of [@CCFR]. Therefore CCFR bounds on non renormalizable operators, reported by the PDG [@PDG], cannot conflict with NuTeV.
[^6]: The neutrino masses suggested by oscillation data [@SKatm] do not respect $L_\mu$. If we allow third generation leptons to be charged under the extra ${\rm U}(1)'$ symmetry, we could have a $B- c L_\mu - (3-c)L_\tau$ gauge group ($c$ is a constant). However, even a $L_\mu \pm L_\tau$ symmetry would not force a successful pattern of neutrino masses and mixings, which rather suggest a flavour symmetry containing $L_e - L_\mu - L_\tau$ [@numasses].
[^7]: The total number of measured $Z\to\mu\bar{\mu}\mu\bar{\mu}$ events agrees with the SM prediction and there is no peak in the $\mu\bar{\mu}$ invariant mass. In order to extract a precise bound on the $Z'$ mass from these data one should take into account the experimental cuts and resolution.
[^8]: A $Z'$ boson with mass $M_{Z'}\sim 100\MeV$ (this case is not motivated by NuTeV) would mediate the resonant decay $K^+\to \pi^+ Z'\to \pi^+ \nu \bar{\nu}$, producing an excess of monoenergetic $\pi^+$ in the $K^+$ rest frame, consistent with the first experimental data [@Kpinuexp].
---
author:
- |
Jan-Hinrich Kämper[^1]\
Universität Oldenburg, Germany
- |
Stephen G. Kobourov[^2]\
University of Arizona, Tucson, Arizona, USA
- |
Martin Nöllenburg[^3]\
Karlsruhe Institute of Technology, Germany
bibliography:
- 'abbrv.bib'
- 'masterReferences.bib'
- 'missingRefs.bib'
title: 'Circular-Arc Cartograms'
---
A *cartogram*, or *value-by-area diagram*, is a thematic cartographic visualization, in which the areas of countries are modified in order to represent a given set of values, such as population, gross-domestic product, or other geo-referenced statistical data. Red-and-blue population cartograms of the United States were often used to illustrate the results in the 2000 and 2004 presidential elections. A geographically accurate map seemed to show an overwhelming victory for George W. Bush; see Fig. \[fig:uselection-geo\]. The population cartograms effectively communicate the near even split, by deflating the rural and suburban central states. The rectilinear cartogram shows the correct distribution of red and blue squares, each representing one vote in the electoral college, but many characteristic shapes and adjacencies are compromised; see Fig. \[fig:uselection-recti\]. For example, Idaho and Washington are no longer neighbors, and the mirror-image shapes of New Hampshire and Vermont are lost. The balloon cartogram also shows the correct areas, but at the cost of distorted shapes and changes in above/below, left/right relationships; see Fig. \[fig:uselection-newman\].
The challenge in creating a good cartogram is thus to shrink or grow the regions in a map so that they faithfully reflect the set of pre-specified area values, while still retaining their characteristic shapes, relative positions, and adjacencies as much as possible. In this paper we introduce a new [*circular-arc cartogram*]{} model, where circular arcs can be used in place of straight-line segments, and corners of the polygons defining each country remain fixed. Intuitively, a region that grows is inflated and becomes cloud-shaped, whereas a region that shrinks is deflated and becomes snowflake-shaped.
Consider the circular-arc cartogram for the 2004 US presidential election: like a traditional cartogram, it also inflates densely populated states (which become cloud-shaped) and deflates sparsely populated ones (which become snowflake shaped); see Fig. \[fig:uselection-circ\]. Note that the circular-arc cartogram preserves adjacencies, and the general shape of the states. Moreover, the circular-arc cartogram makes it easy to see that nearly all blue states are densely populated and nearly all red states are sparsely populated, something that is not apparent in the rectilinear cartogram. Finally, exceptions from this pattern are also easy to spot: Oregon is blue but sparse and North Carolina is red but dense. Of course, there is no such thing as a free lunch: the advantages of the circular-arc cartogram come at the expense of some cartographic errors, where accurate inflation and deflation cannot be guaranteed.
There are many design and implementation aspects that determine the effectiveness of a cartogram. Here we consider four of the main aesthetic and computational criteria:
1. It is important that the cartogram is [*readable*]{}, in that it is possible to find every country in the map. Moreover, a readable cartogram makes it possible to visually answer approximate queries about the relative size of the shown countries.
2. It is important to ensure that the cartogram keeps the underlying map structure [*recognizable*]{}. This criterion can be expressed by insisting that the country adjacencies in the original map and the cartogram remain unchanged. An even stronger version of this requirement is to ensure that the relative positions between pairs of countries (e.g., North-South, East-West) are not disturbed.
3. It is important that the cartogram faithfully represents the given weight function. This criterion is often expressed by the [ *cartographic error*]{}, defined as the absolute or relative difference between the given weight and the area of a country.
4. The [*complexity*]{} of a cartogram also impacts its effectiveness. Here, the complexity is often measured by the maximum number of vertices (or edges) defining the boundary of any country in the cartogram. Highly schematized cartograms use as few as three or four vertices per country, while geographically more accurate and recognizable cartograms may have arbitrarily high complexity.
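The cartographic error of criterion 3 translates directly into a small helper; the sketch below (our own code, not from the paper) supports both the absolute and the relative variant:

```python
def cartographic_error(areas, targets, relative=False):
    """Sum over all countries of the difference between the achieved
    area and the prescribed weight (absolute or relative variant)."""
    total = 0.0
    for a, t in zip(areas, targets):
        err = abs(a - t)
        if relative:
            err /= t  # assumes strictly positive target weights
        total += err
    return total
```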
It is easy to see that there is no perfect method for generating cartograms, that is, there is no method that satisfies all of the main criteria. Most existing methods aim for no cartographic error and low complexity, while sacrificing recognizability (e.g., by allowing adjacencies to be modified) and/or readability (e.g., by using arbitrary country shapes). Circular-arc cartograms ensure readability by keeping the corners of the countries undisturbed and easily convey the type of area change by the cloud shape and snowflake shape of the countries. They are also recognizable as they retain all adjacencies and also preserve the relative positions of countries. The complexity is exactly the same as that of the input map: a highly schematized input map directly results in low complexity of the resulting cartogram, which at the same time has the advantage that longer edges allow for larger area changes and thus potentially lower cartographic error. These advantages come at a cost: it is possible that a given map with pre-specified areas cannot be realized as a circular-arc cartogram, and determining whether such a realization exists is NP-hard. However, if we are willing to tolerate moderate cartographic errors, we can use a heuristic algorithm which, while not perfectly accurate, achieves many of the desired areas in our real-world examples.
Related Work
------------
The problem of representing additional information on top of a geographic map dates back to the 19th century, and highly schematized rectangular cartograms can be found in the 1934 work of Raisz [@raisz]. With rectangular cartograms it is not always possible to preserve all country adjacencies and realize all areas accurately [@DBLP:conf/infovis/HeilmannKPS04; @ks-rc-07]. Eppstein [*et al.*]{} studied area-universal rectangular layouts and characterized the class of rectangular layouts for which all area assignments can be achieved with combinatorially equivalent layouts [@DBLP:conf/compgeom/EppsteinMSV09]. If the requirement that rectangles are used is relaxed to allow the use of rectilinear regions, then de Berg [*et al.*]{} [@DBLP:journals/dm/BergMS09] showed that all adjacencies can be preserved and all areas can be realized with 40-sided regions. In a series of papers, the polygon complexity sufficient to realize any rectilinear cartogram was decreased from 40 corners via 34 corners [@kn-odpgwsfa-07], 12 corners [@bv-ocwcf-11], and 10 corners [@abfgkk-lapcgr-11] down to 8 corners [@abfkku-ccwoc-11], which is best possible due to the earlier lower bound of 8-sided regions [@ys-fgd2rm-93].

More general cartograms, without restrictions to rectangular or rectilinear shapes, have also been studied. Dougenik [*et al.*]{} introduced a method based on force fields, where the map is divided into cells and every cell has a force related to its data value that affects the other cells [@PROG:PROG75]. Dorling used a cellular automaton approach, where regions exchange cells until an equilibrium has been achieved, i.e., each region has attained the desired number of cells [@Dorling]. This technique can result in significant distortions, thereby reducing readability and recognizability.
Keim [*et al.*]{} defined a distance between the original map and the cartogram with a metric based on Fourier transforms, and then used a scan-line algorithm to reposition the edges so as to optimize the metric [@DBLP:journals/tvcg/KeimNP04]. Edelsbrunner and Waupotitsch generated cartograms using a sequence of homeomorphic deformations and measured the quality with local distance distortion metrics [@DBLP:journals/comgeo/WelzlEW97]. Kocmoud and House [@House:1998:CCC:288216.288250] described a technique that combines the cell-based approach of Dorling [@Dorling] with the homeomorphic deformations of Edelsbrunner and Waupotitsch [@DBLP:journals/comgeo/WelzlEW97].
A popular method by Gastner and Newman [@Gastner] projects the original map onto a distorted grid, calculated so that cell areas match the pre-defined values. This method relies on a physical model in which the desired areas are achieved via an iterative diffusion process. Flow moves from one country to another until a balanced distribution is reached, i.e., the density is the same everywhere. The cartograms produced this way are mostly readable and have no cartographic error. However, some countries may be deformed into shapes very different from those in the original map, and the complexity of the polygons can increase significantly.
This brief review of related work is woefully incomplete; a survey by Tobler [@Tobler04thirtyfive] provides a more comprehensive overview.
Our Contributions
-----------------
Our model combines aspects of existing cartogram types, but at the same time tries to avoid some of the common shortcomings. By pinning the vertices at their input positions and only modifying edge shapes, regions are not displaced and we avoid the strong positional distortions that are common, e.g., in the popular diffusion cartograms. On the other hand, the shapes of the regions are not as severely schematized as in rectangular or rectilinear cartograms, and recognizability of characteristic shapes is preserved, at least for moderate area changes. The use of the inflation/deflation metaphor makes it possible to immediately recognize regions with positive/negative area changes.
Our results in this paper are as follows. In Section \[sec:model\] we formally introduce the circular-arc cartogram model and state the associated algorithmic problem. In Section \[sec:complexity\] we show that the circular-arc cartogram problem is NP-hard. In Section \[sec:algorithm\] we describe a first heuristic algorithm using network flow and the straight skeleton to minimize the cartographic error in circular-arc cartograms. In Section \[sec:conclusion\] we summarize our results and describe several open problems.
Model {#sec:model}
=====
Geometrically, a map of countries or administrative regions is a subdivision $S$ of the plane into a set of disjoint regions or *faces* $\mathcal{F} = \{f_1, \dots, f_n\}$. In our model we assume that each face is a simple polygon. The topological structure of the map can be described by its *face graph* or *dual graph* $G$, which contains a vertex for each face and an edge between adjacent faces. In order to construct a cartogram of $S$, we additionally need to specify a weight vector $t=(t_1, \dots, t_n)$, where for each $i=1, \dots, n$ the value $t_i$ is the target area of face $f_i$ in the cartogram. An *accurate* cartogram of the input pair $(S,t)$ is a subdivision $S'$ that is homeomorphic to $S$ and in which the area of every face $f_i$ equals its given weight $t_i$.
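To make the quantities just defined concrete, the face areas compared against the weight vector can be computed with the standard shoelace formula. The following sketch (function name ours; the paper does not prescribe a data structure) assumes each face is given as a list of vertex coordinates:

```python
def polygon_area(vertices):
    """Area of a simple polygon given as a list of (x, y) vertex pairs,
    computed via the shoelace formula; vertex order may be CW or CCW."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```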
In this paper, we are interested in the special class of *circular-arc cartograms*, i.e., cartograms that can be obtained from the input $S$ by bending each polygon edge $e$ into a circular arc whose endpoints coincide with the endpoints of $e$. No two circular arcs are allowed to cross, but two arcs may touch. Bending an edge between two faces $f_i$ and $f_j$ has the effect of transferring a certain area from one face to the other. This exchange of area between faces can be seen as a discrete diffusion process similar to the model of Gastner and Newman [@Gastner]. The algorithmic problem in creating a circular-arc cartogram is thus to compute a bending radius for each edge of the input subdivision so that the resulting circular-arc subdivision $S'$ remains topologically equivalent to the polygonal input subdivision $S$ and each face $f_i$ has area $t_i$. We define a *bending configuration* of $S$ to be an assignment of a bend radius (including radius $r=\infty$ to represent straight-line arcs) to each edge of $S$. A bending configuration is *valid* if no two circular arcs cross and the input topology of $S$ is preserved. We say that a cartogram $S'$ is a *strong* circular-arc cartogram if for every region $f_i$ with a net decrease (increase) in area no incident edge is bent outward (inward); otherwise we call it a *weak* circular-arc cartogram. An immediate consequence is that in a strong cartogram, edges bounding two regions with the same sign of area change must remain straight.
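The area exchanged by a single bend can be quantified explicitly: bending an edge of length $L$ into an arc of radius $r \ge L/2$ transfers the area of a circular segment between the two incident faces. This is standard circle geometry rather than a formula from the paper; a sketch (helper name ours):

```python
import math

def segment_area(chord, radius):
    """Area of the circular segment cut off by a chord of length `chord`
    from a circle of radius `radius` (minor segment, radius >= chord/2).
    This is the area transferred between the two incident faces when a
    straight edge of that length is bent into an arc of that radius."""
    if radius < chord / 2.0:
        raise ValueError("radius must be at least half the chord length")
    theta = 2.0 * math.asin(chord / (2.0 * radius))  # central angle
    return 0.5 * radius * radius * (theta - math.sin(theta))
```

For $r \to \infty$ the transferred area tends to $0$ (a straight edge), and the maximum for a minor segment is the half-circle; e.g., a chord of length $4$ bent with radius $2$ transfers area $2\pi$, which is the half-circle area used in the variable gadget of Section \[sec:complexity\].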
In real-world maps there are often regions (e.g., oceans or seas) whose target area in the cartogram is unspecified. Our model allows sea faces in $S$ with no specified target area. Note that if there is a single sea face then its target area change is implicitly given by the sum of the target area changes of the other faces.
The <span style="font-variant:small-caps;">Circular-Arc Cartogram</span> (CAC) decision problem then is:
\[<span style="font-variant:small-caps;">CAC</span>\] \[prob:cac\] Given a planar polygonal subdivision $S$ and a weight vector $t$, is there a valid bending configuration so that the resulting subdivision $S'$ is an accurate circular-arc cartogram, i.e., all face areas in $S'$ comply with $t$?
While the decision version is mainly of theoretical interest, there is also a corresponding optimization version of [<span style="font-variant:small-caps;">Circular-Arc Cartogram</span>]{}. Here the algorithmic problem is to compute a bending configuration that minimizes the *cartographic error*, i.e., the sum of the differences between the target areas and the actual areas of all faces. In Section \[sec:complexity\] we show that CAC is NP-hard and in Section \[sec:algorithm\] we describe a heuristic algorithm that successfully minimizes the cartographic error in practice.
NP-hardness {#sec:complexity}
===========
First note that positive as well as negative CAC instances can be constructed easily: Only polygons whose vertices are cocircular can be made arbitrarily small by bending edges; all other polygons have some positive lower bound on their area in a circular-arc cartogram. Hence, for example, no simple non-convex polygon can attain area close to $0$ by replacing straight edges with circular arcs. On the other hand, any subdivision with a target area vector that contains the exact initial face areas is a positive instance.
[<span style="font-variant:small-caps;">Circular-Arc Cartogram</span>]{} is NP-hard.
Our reduction is from the NP-complete problem <span style="font-variant:small-caps;">Planar Monotone 3-Sat</span> [@bk-obspp-10]. This problem is a special variant of the <span style="font-variant:small-caps;">Planar 3-Sat</span> problem [@l-pftu-82]: We are given a Boolean formula $\varphi$, in which every clause consists of three literals. Each clause, however, must be monotone, i.e., it may contain either only positive or only negative literals. The planarity of the formula refers to the planarity of the associated bipartite variable-clause graph $G_\varphi$ (with a vertex for every clause and variable of $\varphi$ and an edge between a variable vertex and a clause vertex if and only if the variable appears in the clause). It is known that for every instance of <span style="font-variant:small-caps;">Planar Monotone 3-Sat</span> the graph $G_\varphi$ can be drawn in a planar rectilinear fashion by placing the variable vertices on a horizontal line, the positive clauses above that line, and the negative clauses below; see Fig. \[fig:planmon3sat\].
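To make the source problem concrete, here is a naive exponential-time satisfiability check for a monotone formula (for illustration only; the clause representation as index triples is our own, not part of the reduction):

```python
from itertools import product

def monotone_3sat_satisfiable(pos_clauses, neg_clauses, n_vars):
    """Brute-force check of a monotone 3-SAT formula: each positive clause
    (a triple of variable indices) needs at least one True variable, each
    negative clause at least one False variable."""
    for assignment in product([False, True], repeat=n_vars):
        if (all(any(assignment[v] for v in c) for c in pos_clauses)
                and all(any(not assignment[v] for v in c) for c in neg_clauses)):
            return True
    return False
```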
![A planar rectilinear drawing of a <span style="font-variant:small-caps;">Planar Monotone 3-Sat</span> instance.[]{data-label="fig:planmon3sat"}](Images/plan_mon_3sat)
Our reduction constructs a subdivision $S_\varphi$ for the Boolean formula $\varphi$ that resembles the general structure of the rectilinear drawing of $G_\varphi$. The weight vector $t_\varphi$ is chosen so that $S_\varphi$ can be transformed into a valid circular-arc cartogram if and only if $\varphi$ is satisfiable. The subdivision consists of three types of gadgets: the *variable*, *literal*, and *clause* gadgets, which we describe below.
A basic building block in all three gadgets is a triangle with target area $0$. It is easy to verify that there are exactly three configurations that realize a $0$-area circular-arc triangle, all of which consist of circular arcs of the unique circle defined by the three points; see Fig. \[fig:area0\]. This building block is used to control the possible shapes of regions in the cartogram.
![The three possibilities to realize a circular-arc triangle with area $0$.[]{data-label="fig:area0"}](Images/area0){width="\columnwidth"}
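The unique circle underlying all three $0$-area configurations is the circumcircle of the triangle's vertices, which can be computed directly (a sketch with our own naming; the standard determinant formula, not code from the paper):

```python
def circumcircle(p1, p2, p3):
    """Center and radius of the unique circle through three non-collinear
    points; arcs of this circle realize the three 0-area configurations."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("points are (nearly) collinear")
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    r = ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5
    return (ux, uy), r
```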
#### Variable gadget
The variable gadget consists of a horizontal row of rectangles with height $4$ and width $2$, except for some taller rectangles of height $5$ in between, which serve as *connectors* to the literal gadgets; see Fig. \[fig:var-gadget\]. With the exception of the connector rectangles, all rectangles are enclosed on their two short sides by skinny triangles with a base side of length $2$. These triangles have target area $0$. They are designed so that two of the three possibilities to achieve area $0$ would require edges to become circular arcs that pass beyond some of the input vertices of the rectangles. Hence only a single configuration remains feasible. This immediately fixes the shape of the rectangles’ short edges by bending them slightly outward and increases the area of each rectangle by the area of two circular segments. We define the area of the circular segment thus attached to each rectangle as $c_1$. We also need a scaled-down version of this triangle with base length $1$ instead of $2$, whose corresponding circular segment thus has area $c_2 = c_1/4$.
There is one special *decision rectangle* (purple) in the center of the gadget. The target area change of this rectangle is set to $2c_1-2\pi$, where $2\pi$ is exactly the area of a half-circle with radius $2$. All other rectangles of height $4$ have a target area change of $2c_1$, i.e., they can be extended by the two circular segments at their short sides but otherwise want to keep their area constant. Finally, the taller connector rectangles (which actually consist of six vertices) are adjacent to a literal gadget on one of their short sides (indicated by dots in Fig. \[fig:var-gadget\]) and to a right triangle on the other short side. This triangle has target area $0$, but unlike the skinny triangles described before, all three possible $0$-area configurations in Fig. \[fig:area0\] are feasible. The area change of the connector rectangles is $2c_2$, the area gained from the two small skinny triangles adjacent to the left and right sides of the length-1 edges that stick out of the variable row.
Let us consider the purple decision rectangle in the center, with its two short edges fixed by the shape of the attached skinny triangles. If one of its long edges is bent inside the rectangle as exactly a half-circle and the opposite edge remains a straight-line segment, then the specified area constraint is satisfied. It is, however, geometrically impossible to achieve the given target area by bending both edges simultaneously inside the rectangle, as in a concave lens. Hence we can use the two possible configurations of the decision rectangle to encode the two truth values of the variable; see Fig. \[fig:var-gadget\]b and \[fig:var-gadget\]c. Since by pulling one long edge inside the decision rectangle the area of the adjacent rectangle in the gadget enlarges, that adjacent rectangle must in turn pull its opposite long edge inwards by the same amount. So the semi-circle arcs propagate, similar to negative air pressure in a physical model, on one side of the gadget, namely the side whose connecting literals evaluate to *false* in the current state.
It remains to describe the behavior of the connector rectangles. Since the long edges are bent into half-circles and no two edges of the subdivision may cross, the right triangle attached to the connector rectangle must be in the state that forms a half-circle and thus increases the area of the connector rectangle. In order to balance this area increase, the opposite short edge must be bent inwards and form an identical half-circle. This gives us a means to transmit the negative pressure from the center of the variable towards all literals that evaluate to *false*.
There is no negative pressure on the positive side of the variable gadget, i.e., the side whose literals evaluate to *true*. Hence the long edges of the rectangles on this side can remain straight, and there are two possible configurations for the short edges of each connector rectangle, one of which pushes the half-circles towards the literal gadgets rather than pulling them away as is the case on the negative side.
#### Literal gadget
The main task of the literal gadget is to maintain and transmit the truth state that is found at the variable gadget towards the clause gadget. The gadget can be seen as a pipe composed of chains of rectangles that connects variable and clause. In the pipe the truth value is transmitted by pulling or pushing the long edges of the rectangles into half-circles similarly as in the variable gadget. Three literal gadgets are depicted (together with a clause gadget) in Fig. \[fig:clause-gadget\].
There is one notable difference from the transmission of the truth value in the variable gadget since two of the incoming literals for each clause make a turn of 90$^\circ$. The turn is realized by a square of side length $2$ with target area $4$ and two right triangles with target area $0$ (as those in the connector rectangles of the variable gadget). Since the two right triangles are placed on adjacent sides of the square, one of them must bend to the outside of the square while the other one must bend to the inside. If the literal is in state *false* (left and right in Fig. \[fig:clause-gadget\]b) and the half-circles are pulled towards the variable gadget, then both the left and right edges of the square are bent inward while the top and bottom edges are bent outward. This is exactly what is needed to transmit negative pressure to the horizontal part of the literal gadget. For a literal in state *true* we observe exactly the opposite behavior. Fig. \[fig:clause-gadget\]c shows a *true* literal on the left and a *false* literal on the right.
#### Clause gadget
The clause gadget consists of a cross-shaped rectilinear *clause polygon* joining the three incoming literal gadgets; see Fig. \[fig:clause-gadget\]. In its top part there are three right triangles with target area $0$. The target area increase of the clause polygon is $8c_2$, the area increase caused by the eight skinny triangles attached to some of its edges. Note that of the three right triangles at most two can simultaneously bend as half-circles inside the polygon, while they all can bend to the outside independently. As long as one of the incoming literals is *true*, i.e., it pushes a half-circle inside the clause polygon, the three triangles in the top part can balance the area change of the clause polygon caused by any other combination of the remaining two literals; see Fig. \[fig:clause-gadget\]c. However, if all three literals are *false*, the area of three half-circles is added to the clause region (indicated by dotted line segments). Consequently, the area of three half-circles must be removed from the clause region, but at most two half-circles can be removed by the right triangles; see Fig. \[fig:clause-gadget\](b). This shows that the area requirement of the clause polygon can be realized if and only if the clause evaluates to *true* in the given truth assignment.
#### Reduction
From the construction of the gadgets it follows that if the Boolean formula $\varphi$ has a satisfying variable assignment, then the subdivision $S_\varphi$ and the weights $t_\varphi$ are a positive instance of [<span style="font-variant:small-caps;">Circular-Arc Cartogram</span>]{}. On the other hand, we can immediately obtain a satisfying truth assignment for the variables of $\varphi$ from a valid circular-arc cartogram of $S_\varphi$. The vertices of the subdivision $S_\varphi$ all lie on a grid of polynomial size and the target weights are either $0$ or can be encoded algebraically in polynomial space. This concludes the proof.
We note that all vertices in the subdivision $S_\varphi$ either belong to triangles or have degree at least 3. Thus the complexity of $S_\varphi$ cannot be decreased further and the hardness result continues to hold for maps that are minimal in that sense.
Heuristic Method for Computing Circular-Arc Cartograms {#sec:algorithm}
======================================================
Here we describe a versatile heuristic method for generating circular-arc cartograms based on network flows and polygonal straight skeletons. In practice we may assume that our input map is already a simplified or even schematized map that retains the characteristic shapes of the countries but at the same time strongly reduces the polygon complexities. The more simplified the shapes, the longer the edges and, consequently, the larger the potential area changes that can be realized by bending the edges. Buchin *et al.* [@bms-msswaug-11] and de Berg *et al.* [@bks-ass-95] described suitable algorithms for computing topologically correct subdivision simplifications and schematizations. Recall that in our model a map is a subdivision $S$ of the plane into a set of disjoint faces, $\mathcal{F} = \{f_1, \dots, f_n\}$, where each face is a simple polygon. The topological structure of the map is described by its dual *face graph* $G$, which contains a vertex $v_i$ for each face $f_i$ and an edge $\{v_i,v_j\}$ between adjacent faces $f_i$ and $f_j$. Here we convert $G$ into a directed graph: for any two adjacent countries in $S$ the corresponding vertices in $G$ are connected with two edges, one for each direction. The initial face areas are described by the vector $a=(a_1, \dots, a_n)$ and the target areas are given by the vector $t=(t_1, \dots, t_n)$. Without loss of generality, we can assume that both vectors are normalized, i.e., $\sum_{i=1}^n a_i = \sum_{i=1}^n t_i = 1$. This means that the total area of the map remains the same. From $a$ and $t$ we can obtain the vector $\Delta = (\Delta_1, \dots, \Delta_n)$ of *desired area changes*, where $\Delta_i = t_i - a_i$ for each $i=1, \dots, n$. Note that $\sum_{i=1}^n \Delta_i = 0$.
The goal of our algorithm is to compute a valid bending configuration in which the resulting face areas $b=(b_1, \dots, b_n)$ are as close to the given target areas $t$ as possible. More precisely, we aim to minimize the error $\sum_{i=1}^n |b_i - t_i|$.
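The bookkeeping above is simple enough to sketch directly; the function names below are ours, not from the authors' implementation:

```python
def desired_changes(areas, targets):
    """Normalize initial and target areas and return the vector Delta."""
    a = [x / sum(areas) for x in areas]       # normalized initial areas a_i
    t = [x / sum(targets) for x in targets]   # normalized target areas t_i
    return [ti - ai for ai, ti in zip(a, t)]  # Delta_i = t_i - a_i, sums to 0

def total_error(b, t):
    """Objective minimized by the heuristic: sum_i |b_i - t_i|."""
    return sum(abs(bi - ti) for bi, ti in zip(b, t))
```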
A Network Flow Model for Circular-Arc Cartograms
------------------------------------------------
We use the directed face graph $G$ to define a flow network in which the flow along an edge $e=(u,v)$ corresponds to the area exchange from the face vertex $u$ to the face vertex $v$. We define the capacity $c(e)$ to be equal to an area that can be safely transferred from $u$ to $v$. To compute valid capacities we use the geometry of the face polygons that specify the countries. If we want to restrict ourselves to strong circular-arc cartograms we set $c(e)=0$ for all edges $e$ between two regions $f_i$ and $f_j$ for which $\Delta_i \cdot \Delta_j \ge 0$; for weak cartograms there is no such restriction.
The [*straight skeleton*]{} of a simple $m$-edge polygon, $P$, is made of straight-line segments and partitions the interior of $P$ into $m$ disjoint regions, each corresponding to exactly one edge of $P$ [@DBLP:journals/jucs/AichholzerAAG95]. The straight skeleton is similar to the medial axis but does not require parabolic curves and can be efficiently computed in subquadratic time [@DBLP:journals/algorithmica/ChengV07]. Because the straight skeleton partitions a polygon into disjoint regions, we can define a “safe” bending limit for each edge of the polygon by requiring that the circular arcs remain inside their skeleton regions; see Fig. \[fig:skeleton\]. This guarantees that no two circular arcs cross. For each edge $e=(u,v)$ we can thus define the capacity $c(e)$ as the maximally transferable area from face $u$ to face $v$ subject to the constraint that every circular arc on the boundary between $u$ and $v$ remains inside its skeleton regions. The capacities are by definition static and independent of each other. We note that there is still room for enlarging the capacities over this definition, e.g., by removing some degree-2 vertices and merging their incident boundary edges and skeleton regions. This yields longer boundary edges that allow larger arcs with larger transferable areas.
![The straight skeleton of two adjacent polygons and the maximally realizable circular-arcs within the safe bending limits of each edge.[]{data-label="fig:skeleton"}](Images/straightSkeleton){width="\columnwidth"}
Once we have computed a set of valid edge capacities for $G$, we create a new vertex $v_i'$ for every vertex $v_i$ in $G$. If $\Delta_i > 0$ we make $v_i'$ a source vertex and add the edge $(v_i',v_i)$ with capacity $c(v_i',v_i) = \Delta_i$ to $G$; otherwise if $\Delta_i < 0$ we make $v_i'$ a sink vertex and add the edge $(v_i,v_i')$ with capacity $c(v_i,v_i') = - \Delta_i$ to $G$. Let $\mathcal{S}$ be the set of sources and $\mathcal{T}$ the set of sinks.

The quadruple $\mathcal{N}=(G,c,\mathcal{S},\mathcal{T})$ now forms a multiple-source multiple-sink flow network, which is planar since the original face graph of the subdivision $S$ was planar. If a maximum flow in $\cal N$ with a value of $D=\sum_{\Delta_i > 0} \Delta_i$ can be found, we know that all target areas can be achieved. Furthermore, even if the maximum flow has a value of less than $D$, it still corresponds to a bending configuration that minimizes the cartographic error $\sum_{i=1}^n |b_i - t_i|$ under the given safety constraints for the circular arcs.
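The construction can be prototyped with any maximum-flow routine. The sketch below is our own minimal Edmonds-Karp implementation; a single super-source and super-sink replace the individual vertices $v_i'$, which is equivalent. It returns the maximal total area transfer that the capacities admit:

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a capacity dict {(u, v): capacity}."""
    res = defaultdict(float)             # residual capacities
    adj = defaultdict(set)               # adjacency (both directions) for BFS
    for (u, v), c in cap.items():
        res[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)
    flow = 0.0
    while True:
        parent = {s: None}               # BFS for a shortest augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:     # walk back to the source
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)  # bottleneck capacity
        for u, v in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug           # residual edge allows cancellation
        flow += aug

def realizable_transfer(capacities, deltas):
    """Hook growing/shrinking faces to a super-source/super-sink and solve."""
    cap = dict(capacities)
    for i, d in enumerate(deltas):
        if d > 0:
            cap[('src', i)] = d          # growing faces receive area
        elif d < 0:
            cap[(i, 'snk')] = -d         # shrinking faces give up area
    return max_flow(cap, 'src', 'snk')
```

If the returned value equals $D=\sum_{\Delta_i>0}\Delta_i$, all target areas are reachable under the safety constraints.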
![Cartogram of the population in Italy; the first number indicates the success rate, the second number the relative cartographic error.[]{data-label="fig:ital-pop"}](Images/Results/italy-pop-numbers){width="\columnwidth"}
![Cartogram of the agricultural use area in Italy; the first number indicates the success rate, the second number the relative cartographic error.[]{data-label="fig:ital-agri"}](Images/italy_agriSurface){width="\columnwidth"}
The expected running time for computing the straight skeleton of a $k$-vertex polygon is $O(k \log^2 k)$ [@DBLP:journals/algorithmica/ChengV07]. So if the input subdivision $S$ consists of $n$ faces with $N$ vertices in total, we can compute the straight skeletons in $O(N \log^2 N)$ expected time in total. To solve the multiple-source multiple-sink maximum-flow problem in our flow network based on the planar face graph of $S$ we can use the recent $O(n \log^3 n)$-time algorithm of Borradaile *et al.* [@bkmnw-mmmfdpgnt-11].
Implementation and Results
--------------------------
We implemented a prototype of our method in C++ using the CGAL library [@cgal] for computing the straight skeletons and Boost [@boost] for solving the max-flow problem. In this section we present four examples of circular-arc cartograms produced with our implementation, which currently supports only weak circular-arc cartograms. As input subdivisions we used octilinear and rectilinear schematized maps generated with the algorithm of Buchin *et al.* [@bms-msswaug-11] for area-preserving subdivision schematization. We note here that keeping the vertex positions fixed in our method only makes sense if these vertices are actually characteristic corners of the original shape. This is an interesting problem in its own right, which could be addressed with a shape simplification method that identifies and retains characteristic points, but it is beyond the scope of this paper. To demonstrate our circular-arc approach, we can assume that the vertices have been chosen in a meaningful way so that the polygonal shapes represent the corresponding countries well. In the appendix, we present four additional examples using a manually simplified input map.
For each example below, we measure both the *success rate*, which is defined as $(b_i-a_i)/\Delta_i$, i.e., the relative achieved area change, and the relative *cartographic error*, which is defined as $|b_i-t_i|/t_i$ [@ks-rc-07]. We show the input polygons in gray, overlay the circular-arc cartogram, and label each country with the pair (success rate, cartographic error).
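Per region, the two metrics can be computed as follows (a sketch with our own helper names):

```python
def success_rate(a, b, t):
    """Relative achieved area change of one region (1.0 = target reached)."""
    delta = t - a
    return 1.0 if delta == 0 else (b - a) / delta

def cartographic_error(b, t):
    """Relative deviation of the achieved area b from the target area t."""
    return abs(b - t) / t
```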
Figures \[fig:ital-pop\] and \[fig:ital-agri\] show cartograms of the regions in Italy. Figure \[fig:ital-pop\] represents the population distribution in Italy[^4] and Figure \[fig:ital-agri\] represents the agricultural use areas in each region[^5]. This is an example of a map where our algorithm performs well. The average success rate in Figure \[fig:ital-pop\] is 0.78 and the average cartographic error is 0.3. In Figure \[fig:ital-agri\] the average success rate is as high as 0.97 and the average error is 0.11; here only two regions have non-zero error. In the case of Italy most regions have access to the external sea face, where the maximum size of circular arcs is less restricted. Moreover, the desired area changes are relatively moderate. With the removal of a few degree-2 vertices, i.e., a further simplification of the input subdivision, we could improve the area accuracy in Figure \[fig:ital-pop\] even more (in Sardegna or Campania) without the need to displace vertices.
![Cartogram of the population in the Netherlands. []{data-label="fig:ned_pop"}](Images/Results/netherlands_pop_nums){width="\columnwidth"}
![Cartogram of the length of the main roads in Europe. []{data-label="fig:europe_mainroads_img"}](Images/europeOnly_octo_mainRoads){width="\columnwidth"}
Figure \[fig:ned\_pop\] shows a cartogram for the population distribution in the Netherlands[^6]. This cartogram is based on a rectilinear rather than an octilinear schematization. The Netherlands are quite unevenly populated: for example, the three provinces of Noord-Brabant, Zuid-Holland and Noord-Holland (containing all important urban areas) contribute more than half of the Dutch population. This imbalance between south and north can be seen well in the cartogram. The regions of the metropolitan south are cloud-shaped, while the northern rural areas are more snowflake-shaped. The imbalance in the data leads to slightly worse performance than in the previous example of Italy; the average success rate is 0.61 and the average cartographic error is 0.37. Further simplification of the polygons and potentially some vertex displacements would help to increase the area accuracy.
Figure \[fig:europe\_mainroads\_img\] shows a cartogram based on the length of main roads per country[^7]. All regions in the output are recognizable. We can easily identify the different countries as the overall shapes are not very distorted. Even the aspect ratios of most regions remain mostly unchanged. Every border is at least as long as in the input, which improves the readability of adjacencies.
The results in Figure \[fig:europe\_mainroads\_img\] show an average success rate of 0.69 and an average cartographic error of 0.4, with small and landlocked countries affected the most. Groups of landlocked countries, such as Switzerland and Austria, that all need to increase (decrease) their sizes pose significant difficulties. While being adjacent to the sea helps, it does not always suffice to reach the target area: autobahn and motorway giants such as Germany and the UK need to further increase their areas but are eventually blocked by other countries. This example suggests that for cartograms with large area changes it is necessary to allow additional distortions, e.g., vertex movement, in order to decrease the cartographic error; we have not considered such an approach yet.
In summary, the examples illustrate the utility of circular-arc cartograms: they are readable (countries are where they should be), they are recognizable (the adjacencies between neighbors are preserved), they have low complexity, and they yield visually appealing country shapes that immediately communicate whether regions increase or decrease. Moreover, as the US presidential election example in Fig. \[fig:usa\] shows, circular-arc cartograms make it possible to spot patterns in the data when another parameter is encoded with color. On the other hand, with our current heuristic we cannot guarantee low cartographic error if drastic area changes are required by the data. This is not necessarily a downside of circular-arc cartograms themselves, but rather due to the heuristic used to compute these examples. Other more abstract cartogram types, e.g., rectangular cartograms [@ks-rc-07; @raisz] or circle cartograms [@Dorling], typically achieve very low area errors but come at the cost of lower recognizability as they often change adjacency relationships between neighboring countries. Finally, all of the examples in this paper are weak circular-arc cartograms, and strong circular-arc cartograms might be preferable. In the next section we briefly discuss several possible approaches to decrease cartographic errors for circular-arc cartograms.
Conclusions and Future Work {#sec:conclusion}
===========================
In this paper we introduced circular-arc cartograms as a new model for value-by-area diagrams. We showed that the [<span style="font-variant:small-caps;">Circular-Arc Cartogram</span>]{} problem is NP-hard and presented a heuristic algorithm to produce valid circular-arc cartograms with fairly low cartographic error. The results from our implementation indicate that circular-arc cartograms are readable, recognizable, have low complexity and are generally visually appealing. While for many countries in our examples the cartographic error is low, this cannot be guaranteed. There are several natural directions for future algorithmic and experimental work on circular-arc cartograms.
First, we note that the potential area change for each edge depends on its input length. Thus the fewer and longer the edges of a face boundary are, the larger is the range of realizable areas of the face. While we assumed that a fixed (simplified) subdivision is given as input, we can also allow further simplification on demand, i.e., the larger the required area changes the more polygon vertices are discarded in order to create longer and fewer edges. Such an approach preserves the shape and complexity of regions with small area changes well, while strongly distorted regions with large area changes become more strongly simplified. We could also allow introducing gaps in the shape of biconvex lenses between two neighboring countries that both need to decrease their areas; this idea corresponds to splitting their shared boundary edges into two, each of which can then be bent inwards. Second, while it is generally undesirable to displace regions, it often seems possible to obtain lower cartographic errors by displacing just a few boundary vertices. It is natural to consider the trade-off between minimizing the overall cartographic error and minimizing overall vertex movement.
Third, we need to further study the effect of weak and strong circular-arc cartograms on error rates and perception. Recall that in the strong version all edges of a deflated country point inwards, while in the weak model (used in this paper) we allow some edges to point out.
Fourth, one of the appealing features of circular-arc cartograms is the easily-interpretable cloud-like shape of countries that have increased area and snowflake-shape of countries with decreased area. Generalizing circular arcs to other types of smooth curves, e.g., cubic splines, may result in visually similar cartograms which allow for more flexibility and better accuracy.
Ultimately, there is a need for a formal evaluation of the utility of cartograms in general and of circular-arc cartograms in particular. It would be natural to expect that readability, recognizability, faithfulness and complexity vary in importance, depending on the given task. Determining the “best” cartograms would be a difficult but worthwhile goal.
**Acknowledgments.** We thank Wouter Meulemans for providing us with schematized map instances. Research supported in part by NSF grants CCF-1115971, DEB 1053573 and by the *Concept for the Future* of KIT within the framework of the German Excellence Initiative.
Further examples
================
![Population cartogram for the states of Germany. []{data-label="fig:germany_pop"}](Images/germany_95_pop){width="\columnwidth"}
Figures \[fig:germany\_pop\]–\[fig:germany\_rail\_fine\] show cartograms of the states of Germany. We used three different data sets: population data[^8], number of craft enterprises[^9], and railroad kilometers[^10]. The underlying input map was simplified by hand in a way that the most characteristic shapes are preserved and yet only relatively few edges per polygon remain. Unlike in the previous examples, we did not restrict the edge slopes.
The average success rate in Figure \[fig:germany\_pop\] is 0.67 and the average cartographic error is 0.3. While several states are error-free or perform fairly well, there are notable examples of densely populated states like Berlin that need to grow a lot further and states like Mecklenburg-Vorpommern in Northern Germany that are sparsely populated and need to shrink a lot more. In both cases low success rates and high errors are observed; similarly the landlocked states perform worse than those on the boundary.
![Cartogram for the number of crafts enterprises in the states of Germany. []{data-label="fig:germany_craft"}](Images/germany_95_craft){width="\columnwidth"}
Figure \[fig:germany\_craft\] shows interesting statistical data in the sense that there is a clear difference between the states in the North (except the city states Bremen, Hamburg and Berlin) with fewer enterprises and the states in the South with more enterprises. The average success rate in this example is 0.75 and the average error rate is 0.23. Thus we see slightly better results than for the population data. As expected, the problematic states are again those that are very densely populated and the sparse state of Mecklenburg-Vorpommern. Although the accuracy is not yet fully satisfying, the overall trend in the data is nicely conveyed. The southern states as well as the cities are all cloud-shaped having many craft enterprises and the northern states are all snowflake-shaped meaning fewer craft enterprises.
![Cartogram for the railroad kilometers in the states of Germany. []{data-label="fig:germany_rail"}](Images/germany_95_tracklength){width="\columnwidth"}
![Cartogram for the railroad kilometers in the states of Germany using a more detailed input map. []{data-label="fig:germany_rail_fine"}](Images/germany_95_tracklength_fine){width="\columnwidth"}
Finally, Figures \[fig:germany\_rail\] and \[fig:germany\_rail\_fine\] show two cartograms for the same data set of railroad kilometers per state. Figure \[fig:germany\_rail\] uses the same input map as the previous two examples and Figure \[fig:germany\_rail\_fine\] uses a more detailed input map with many more and much shorter edges defining the state polygons. In Figure \[fig:germany\_rail\] we see only a single state with an area error, namely Berlin, which has a very extensive railway network compared to its area. Consequently the average success rate is more than 0.99 and the average area error is less than 0.01. Even all the landlocked states achieve their target areas completely. Since this example performs so well, it is interesting to study the effects of increasing the shape complexity of the input map by adding back in more details. The map in Figure \[fig:germany\_rail\_fine\] uses more than four times as many edges as Figure \[fig:germany\_rail\]. While the performance of this cartogram is still reasonably good with an average success rate of 0.78 and an average error of 0.11, the direct comparison makes it clear that the capacity of bending the shorter edges is not sufficient to reach all target areas, both for growing and for shrinking states. On the other hand, the similarity to the true geographic shapes is higher in this cartogram. Nonetheless, the stylized appearance of Figure \[fig:germany\_rail\] with fewer and longer arcs makes this cartogram more appealing for the purpose of depicting statistical data on a very abstract map. The comparison of the two cartograms and their performance shows that strong simplification of the input shapes is generally advisable, both for better aesthetics and to achieve lower cartographic errors.
[^1]: e-mail: [email protected]
[^2]: e-mail: [email protected]
[^3]: e-mail: [email protected]
[^4]: 2010 population data from <http://demo.istat.it/pop2010>.
[^5]: 2010 superficie agricola utilizzata data from Noi Italia 2012, <http://www3.istat.it/dati/catalogo/20120215_00/Noi_Italia_2012.pdf>
[^6]: 2004 population data from <http://en.wikipedia.org/wiki/Ranked_list_of_Dutch_provinces>.
[^7]: 2012 road length data from <http://ec.europa.eu/transport>
[^8]: 2011 population data from <http://www.statistik-portal.de/Statistik-Portal/de_jb01_jahrtab1.asp>.
[^9]: 2009 craft enterprise data from <http://www.statistik-portal.de/Statistik-Portal/de_jb19_jahrtab1.asp>
[^10]: 2010 railroad data from <https://www.destatis.de/DE/ZahlenFakten/Wirtschaftsbereiche/TransportVerkehr/UnternehmenInfrastrukturFahrzeugbestand/Tabellen/Schieneninfrastruktur.html>
---
abstract: |
The [*classical Heun equation*]{} has the form $$\left\{Q(z)\frac
{d^2}{dz^2}+P(z)\frac{d}{dz}+V(z)\right\}S(z)=0,$$ where $Q(z)$ is a cubic complex polynomial, $P(z)$ is a polynomial of degree at most $2$ and $V(z)$ is at most linear. In the second half of the nineteenth century E. Heine and T. Stieltjes in [@He], [@St] initiated the study of the set of all $V(z)$ for which the above equation has a polynomial solution $S(z)$ of a given degree $n$. The main goal of the present paper is to study the union of the roots of the latter set of $V(z)$’s when $n\to\infty$. We formulate an intriguing conjecture of K. Takemura describing the limiting set and give a substantial amount of additional information obtained using some technique developed in [@KvA].
address:
- 'Department of Mathematics, Stockholm University, SE-106 91 Stockholm, Sweden'
- 'Department of Theoretical Physics, Nuclear Physics Institute, Academy of Sciences, 25068 Řež near Prague, Czech Republic'
author:
- Boris Shapiro
- Miloš Tater
title: 'On spectral polynomials of the Heun equation. I.'
---
Introduction and Main Results
=============================
A [*generalized Lamé equation*]{} is a second order differential equation of the form $$\label{eq:comLame}
\left\{Q(z)\frac
{d^2}{dz^2}+P(z)\frac{d}{dz}+V(z)\right\}S(z)=0,$$ where $Q(z)$ is a complex polynomial of degree $l$ and $P(z)$ is a complex polynomial of degree at most $l-1$, see [@WW]. It was first shown by Heine [@He] that if the coefficients of $Q(z)$ and $P(z)$ are algebraically independent, i.e. do not satisfy any algebraic equation with integer coefficients, then for an arbitrary positive integer $n$ there are exactly $\binom{n+l-2}{n}$ polynomials $V(z)$ such that the equation has a solution $S(z)$ which is a polynomial of degree $n$. As was recently shown in [@Sh], for any equation with $\deg Q(z)=l, \deg P(z)\le l-1$, and any positive $n$ the set ${\mathfrak V}_n$ of all $V(z)$ giving a polynomial solution $S(z)$ of degree $n$ is always finite and its cardinality is at most $\binom{n+l-2}{n}$. Below we concentrate on the classical case $l=\deg Q(z)=3$, which is better known under the name [*the Heun differential equation*]{}, see e.g. [@Heun], and study the union of all roots of polynomials $V(z)$ belonging to ${\mathfrak V}_n$ as $n\to\infty$. Note that if $l=\deg Q(z)=3$ then $V(z)$ is at most linear and that for a given value of the positive integer $n$ there are at most $n+1$ such polynomials. No essential results in this direction seem to be known. One of the few exceptions is a classical proposition of Pólya [@Po], claiming that if the rational function $\frac{P(z)}{Q(z)}$ has all positive residues, then any root of any $V(z)$ as above and of any $S(z)$ as above lies within $Conv_Q$, where $Conv_Q$ is the convex hull of the set of all roots of $Q(z)$.
Before we move further let us formulate appropriate versions of two main results of [@Sh] generalizing the above statements of Heine and Pólya.
\[th:my\] For any polynomial $Q(z)$ of degree $l$ and any polynomial $P(z)$ of degree at most $l-1$
- there exists $N$ such that for any $n\ge N$ there exist exactly $\binom{n+l-2}{n}$ polynomials $V(z), \deg V(z)=l-2$ counted with appropriate multiplicity such that has a polynomial solution $S(z)$ of degree exactly $n$;
- for any $\epsilon >0$ there exists $N_\epsilon$ such that for any $n\ge N_\epsilon$ any root of any of the above $V(z)$ and $S(z)$ lies in the $\epsilon$-neighborhood of $Conv_Q$.
Applying the latter result to the situation $l=3$, i.e. to the Heun equation, we can introduce the set $\mathcal V_n$ consisting of the polynomials $V(z)$ giving a polynomial solution $S(z)$ of degree $n$, each such $V(z)$ appearing a number of times equal to its multiplicity. Then by the above results the set $\mathcal V_n$ will contain exactly $n+1$ linear polynomials for all sufficiently large $n$. It will be convenient to introduce a sequence $\{Sp_n(\lambda)\}$ of [*spectral polynomials*]{}, where the $n$-th spectral polynomial is defined by $$Sp_n(\lambda)=\prod_{j=1}^{n+1}(\lambda-t_{n,j}),$$ where $t_{n,j}$ is the unique root of the $j$-th polynomial in $\mathcal V_n$ in any fixed ordering. ($Sp_n(\lambda)$ will be well-defined for all sufficiently large $n$.)
Associate to $Sp_n(\lambda)$ the finite measure $$\mu_n=\frac{1}{n+1}\sum_{j=1}^{n+1}{\delta(z-t_{n,j})},$$ where $\delta(z-a)$ is the Dirac measure supported at $a$. The measure $\mu_n$ obtained in this way is clearly a real probability measure which one usually refers to as the [*root-counting measure*]{} of the polynomial $Sp_n(\lambda)$.
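For concrete $Q$ and $P$ the roots $t_{n,j}$, and hence $\mu_n$, can be computed numerically: substituting a degree-$n$ polynomial with unknown coefficients into the equation and collecting powers of $z$ yields a linear system whose matrix depends linearly on the root $\lambda$ of $V(z)=-\theta_n(z-\lambda)$, where $\theta_n=n(n-1+\alpha)$ and $\alpha$ is the leading coefficient of $P$; the $n+1$ admissible $\lambda$ are then the eigenvalues of a fixed matrix. The sketch below is our own numerical illustration (it assumes $Q$ monic cubic with $Q(0)=0$, as in the proof section), not code from the paper:

```python
import numpy as np
from numpy.polynomial import polynomial as npoly

def spectral_roots(n, Q, P):
    """Roots t_{n,j} of Sp_n for Q S'' + P S' - theta_n (z - lam) S = 0.

    Q, P are coefficient arrays in increasing degree order, deg Q = 3,
    deg P <= 2. Returns the n+1 values lam for which the system is singular.
    """
    alpha = P[2] if len(P) > 2 else 0.0   # coefficient of z^2 in P
    theta = n * (n - 1 + alpha)           # makes the z^{n+1} coefficient vanish
    A = np.zeros((n + 1, n + 1))
    for k in range(n + 1):                # column for the monomial z^{n-k}
        m = n - k
        col = np.zeros(n + 2)             # coefficients of z^0 ... z^{n+1}
        if m >= 2:                        # Q(z) * (z^m)''
            prod = npoly.polymul(Q, [0.0] * (m - 2) + [m * (m - 1.0)])
            col[: len(prod)] += prod
        if m >= 1:                        # P(z) * (z^m)'
            prod = npoly.polymul(P, [0.0] * (m - 1) + [float(m)])
            col[: len(prod)] += prod
        col[m + 1] -= theta               # -theta_n * z * z^m
        A[:, k] = col[n::-1]              # rows = coefficients of z^n ... z^0
    # Full matrix is A + lam*theta*I, so det = 0 at eigenvalues of -A/theta.
    return np.linalg.eigvals(-A / theta)
```

For the Lamé case $Q=z^3-z$, $P=Q'/2$ (all residues $1/2>0$), the $n+1$ roots come out real and inside $Conv_Q=[-1,1]$, consistent with Pólya's proposition.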
The starting point of this project was some numerical results for the distribution of roots of $Sp_n(\lambda)$ obtained by the first author about 5 years ago and illustrated on the next figure.
Extensive numerical experiments strongly suggest that the following holds.
\[sh-ta1\] For any equation the sequence $\{\mu_n\}$ of the root-counting measures of its spectral polynomials converges to a probability measure $\mu$ supported on the union of three curved segments located inside $Conv_Q$ and connecting the three roots of $Q(z)$ with a certain interior point, see Fig. \[fig1\]. Moreover, the limiting measure $\mu$ depends only on $Q(z)$, i.e. is [*independent*]{} of $P(z)$.
An elegant description of the support of $\mu$ was suggested to us by Professor K. Takemura, [@TaM].
Denote the three roots of $Q(z)$ by $a_1,a_2,a_3$. For $i\in\{1,2,3\}$ consider the curve $\gamma_i$ given as the set of all $b$ satisfying the relation: $$\label{TK}
\int_{a_j}^{a_k}\sqrt{\frac{b-t}{(t-a_1)(t-a_2)(t-a_3)}}dt\in {\mathbb R},$$ here $j$ and $k$ are the remaining two indices in $\{1,2,3\}$ in any order and the integration is taken over the straight interval connecting $a_j$ and $a_k$. One can see that $a_i$ belongs to $\gamma_i$ and that these three curves connect the corresponding $a_i$ with a common point within $Conv_Q$. Take the segment of $\gamma_i$ connecting $a_i$ with the common intersection point of all $\gamma$’s. Let us denote the union of these three segments by $\Gamma_Q$.
\[takemura\] The support of the limiting root-counting measure $\mu$ coincides with the above $\Gamma_Q$.
The above description of $\Gamma_Q$ led us to the following reformulation of Takemura’s conjecture.
\[sh-ta2\] The above set $\Gamma_Q$ coincides with the continuum of minimal logarithmic capacity connecting the roots of $Q(z)$.
Notice that Goluzin’s classical problem of finding the continuum of minimal capacity connecting a given $n$-tuple of points in ${\mathbb C}$ was completely solved for $n=3$ by G. Kuzmina in [@Ku1], see also [@Ku2].
In the follow-up [@STT] to the present paper, joint with Professor Takemura, we will completely settle the above Conjecture \[takemura\] and Proposition \[sh-ta2\] using some methods and results presented below. In the present paper, generalizing the technique of [@KvA], we study a different probability measure which is easily described and from which the measure $\mu$ (if it exists) is obtained by inverse balayage, i.e. the support of $\mu$ will be contained in the support of the measure which we construct and they have the same logarithmic potential outside the support of the latter one. This measure will be uniquely determined by the choice of a root of $Q(z)$, and thus we are in fact constructing three different measures having the same measure $\mu$ as their inverse balayage.
Constructing the measure
------------------------
For each of the three vertices $a_i,\; i\in\{1,2,3\}$, consider the unique ellipse $E_i$ which: a) passes through $a_i$ and b) has $a_j, a_k$ as its foci. The constructed probability measure $M_i$ is supported on the elliptic domain $\tilde E_i$ bounded by $E_i$. We need the following notion.
Given two distinct points $\alpha_1\neq \alpha_2$ on ${\mathbb C}$ define the [*arcsine measure*]{} $\omega_{[\alpha_1, \alpha_2]}$ of the interval $[\alpha_1,\alpha_2]$ as the measure supported on $[\alpha_1,\alpha_2]$ and whose density at a point $t\in [\alpha_1,\alpha_2]$ equals $\frac{1}{\pi \sqrt{|(t-\alpha_1)(t-\alpha_2)|}}$.
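Numerically the arcsine measure is easy to sample: if $U$ is uniform on $[0,1]$ then $\sin^2(\pi U/2)$ has the arcsine (Beta$(1/2,1/2)$) distribution on $[0,1]$, which can then be mapped affinely onto the segment. A sketch of ours:

```python
import numpy as np

def arcsine_samples(alpha1, alpha2, size, rng=None):
    """Sample from the arcsine measure on the segment [alpha1, alpha2].

    The density along the segment is 1/(pi*sqrt(|t - alpha1||t - alpha2|));
    complex endpoints are allowed, the samples then lie on the segment in C.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(0.0, 1.0, size)
    s = np.sin(np.pi * u / 2.0) ** 2   # arcsine-distributed on [0, 1]
    return alpha1 + (alpha2 - alpha1) * s
```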
To describe the measure $M_i$ consider the family of straight lines parallel to the tangent line to the ellipse $E_i$ at $a_i$. Take now the family $\Phi_i$ of intervals obtained by intersection of the latter straight lines with the elliptic domain $\tilde E_i$. Denote by $-v_i$ the vector connecting $a_i$ with its opposite point on $E_i$, i.e. draw the straight line through $a_i$ and the center of $E_i$ till it hits $E_i$ again and take the difference of the latter and the former points. (One can easily check that if we introduce a new variable $z_i=z-a_i$ and express $Q(z)=z_i^3+v_iz^2_i+w_iz_i$ then the above vector will be exactly $-v_i$ in the expression for $Q(z)$ which explains our notation.) Now parameterize the above family $\Phi_i$ of the intervals by their middle points using the formula $-v_i\theta^2, \; \theta\in[0,1].$ Consider the family $\mu_\theta$ of arcsine measures of these intervals. Finally the required measure $M_i$ is obtained by the averaging of $\mu_\theta$ w.r.t. parameter $\theta$, i.e. $M_i=\int_0^1\mu_\theta d\theta$, see Fig. \[fig2\]b).
Now we can finally formulate the main results of this paper.
\[th:main\] If the measure $\mu$ in Conjecture \[sh-ta1\] exists then each of the measures $M_i,\; i\in\{1,2,3\}$ have $\mu$ as its inverse balayage, i.e. $\mu$ and $M_i$ have the same logarithmic potential (or the same Cauchy transform) outside the ellipse $E_i$ and the support of $\mu$ is contained inside the support of $M_i$.
By definition the Cauchy transform ${\mathcal C}_\nu(z)$ and the logarithmic potential $pot_\nu(z)$ of a (complex-valued) measure $\nu$ supported in ${\mathbb C}$ are given by: $${\mathcal C}_\nu(z)=\int_{{\mathbb C}}\frac{d\nu(\xi)}{z-\xi}\quad\text{ and }\quad pot_\nu(z)=\int_{{\mathbb C}}\log|z-\xi|{d\nu(\xi)}.$$ For the properties of the Cauchy transform and the logarithmic potential of a measure, consult e.g. [@Ga].
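For an empirical (root-counting) measure both quantities reduce to averages over the support points; a small sketch of ours:

```python
import numpy as np

def cauchy_transform(points, z):
    """Cauchy transform at z of the uniform probability measure on `points`."""
    return np.mean(1.0 / (z - np.asarray(points)))

def log_potential(points, z):
    """Logarithmic potential at z of the same measure."""
    return np.mean(np.log(np.abs(z - np.asarray(points))))
```

For the uniform measure on the unit circle these give approximately $1/z$ and $\log|z|$ outside the disk: the same "seen from outside" equivalence that underlies the inverse balayage in Theorem \[th:main\].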
Theorem \[th:main\] is so far a conditional statement. For technical reasons complete proofs of the existence, uniqueness and several other properties of $\mu$ are postponed until [@STT].
Denote by ${\mathcal C}_{Q_i}(z)$ the Cauchy transform of the measure $M_i,\;i=1,2,3$. The next result shows that each Cauchy transform ${\mathcal C}_{Q_i}(z)$ satisfies outside the elliptic domain $\tilde{E}_i$ the following nice linear non-homogeneous second order differential equation (similar to the one obtained earlier in [@BSh]).
\[th:eq\] The Cauchy transforms ${\mathcal C}_{Q_i}(z)$ of the measures $M_i,\;i=1,2,3$ defined in Theorem \[th:main\] satisfy outside the ellipses $E_i$ one and the same linear non-homogeneous differential equation: $$\label{eq:Heun}
Q(z){\mathcal C}''_{Q_i}(z)+Q'(z){\mathcal C}'_{Q_i}(z)+\frac{Q''(z)}{8}{\mathcal C}_{Q_i}(z)+\frac{Q^{\prime\prime\prime}(z)}{24}=0.$$
We are very grateful to Professor K. Takemura of Yokohama City University for a number of illuminating discussions prior, during, and after his visit to Stockholm in September 2007. We are obliged to Professor A. Kuijlaars for clarification of his joint paper [@KvA] and to Professor A. Martínez-Finkelshtein for the interest in our work. We want to thank Professor G. V. Kuzmina for the patient explanation of her related results and Professor J.-E. Björk for the help in manipulations with complicated integrals depending on parameters. Research of the second author was supported by the Czech Ministry of Education, Youth and Sports within the project LC06002.
Proof of Theorem \[th:main\]
============================
It essentially follows from the stronger version of the main result of [@KvA] which we present below. First we express the polynomial $Sp_n(\lambda)$ as the characteristic polynomial of a certain matrix. In order to make this matrix tridiagonal we assume as above that the root $a_i$ is placed at the origin. In order to simplify the notation we drop the index $i$ assuming that $z$ is already the appropriate coordinate. Set $$Q(z)=z^3+vz^2+wz.$$ Consider the operator $$T=(z^3+vz^2+wz)\frac{d^2}{dz^2}+({\alpha}z^2+{\beta}z+{\gamma})\frac{d}{dz}
-\theta_n(z-\lambda),$$ where $v,w,{\alpha},{\beta},{\gamma}$ are fixed coefficients of $Q(z)$ and $P(z)$ respectively and $\theta_n, \lambda$ are variables. Assuming that $S(z)=u_0z^n+u_1z^{n-1}+\ldots+u_n$ with undetermined coefficients $u_i$, $0\le i\le n$, and in order to solve the Heine-Stieltjes problem described in the introduction we will be looking for the values of $\theta_n, \lambda$ and $u_i$, $0\le i\le n$, such that $T(S(z))=0$. Note that $T(S(z))$ is in general a polynomial of degree $n+1$ whose leading coefficient equals $u_0[n(n-1)+{\alpha}n -\theta_n]$. To get a non-trivial solution we therefore set $$\theta_n=n(n-1+{\alpha}).$$ Straightforward computations show that the coefficients of the successive powers $z^n, z^{n-1},\ldots, z^0$ in $T(S(z))$ can be expressed in the form of a matrix product $M_nU$, where $U=(u_0,u_1,\ldots,u_n)^T$ and $M_n$ is the following tridiagonal $(n+1)\times(n+1)$ matrix $$M_n:=\begin{pmatrix}
\lambda-\xi_{n,1}&{\alpha}_{n,2}&0&0&\cdots&0&0\\
{\gamma}_{n,2}&\lambda-\xi_{n,2}&{\alpha}_{n,3}&0&\cdots&0&0\\
0&{\gamma}_{n,3}&\lambda-\xi_{n,3}&{\alpha}_{n,4}&\cdots&0&0\\
\vdots&\vdots&\ddots&\ddots&\ddots&\vdots&\vdots\\
0&0&0&\ddots&\ddots&{\alpha}_{n,n}&0\\
0&0&0&\cdots&{\gamma}_{n,n}&\lambda-\xi_{n,n}&{\alpha}_{n,n+1}\\
0&0&0&\cdots&0&{\gamma}_{n,n+1}&\lambda-\xi_{n,n+1}
\end{pmatrix}$$ with $$\label{eq:extra}
\begin{split}
\xi_{n,i}&=-\frac{v(n-i)(n-i+1)+{\beta}(n-i+1)}{\theta_n},
\quad i\in\{1,\ldots,n+1\}, \\
{\alpha}_{n,i}&=\frac{(n-i)(n-i+1)+{\alpha}(n-i+1)}{\theta_n}-1,
\quad i\in\{2,\ldots,n+1\}, \\
{\gamma}_{n,i}&=\frac{w(n-i+1)(n-i+2)+{\gamma}(n-i+2)}{\theta_n},
\quad i\in\{2,\ldots,n+1\}.
\end{split}$$
A similar matrix can be found in [@He] and also in [@Tu]. The matrix $M_n$ depends linearly on the indeterminate $\lambda$ which appears only on its main diagonal. Obviously if the linear homogeneous system $M_nU=0$ is to have a nontrivial solution $U=(u_0,u_1,...,u_n)^T$ the determinant of $M_n$ has to vanish. This gives the required polynomial equation $$Sp_n(\lambda)=\det(M_n)=0.$$
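Numerically, the roots of $Sp_n(\lambda)$ are simply the eigenvalues of the tridiagonal matrix obtained by removing $\lambda$ from the diagonal of $M_n$ (the determinant depends on the off-diagonal entries only through the products ${\alpha}_{n,i}{\gamma}_{n,i}$, so the common sign flip is harmless). A sketch with a Lamé-type sample $Q(z)=z^3-z$, $P(z)=Q'(z)/2$ (an illustrative choice, not from the text):

```python
import numpy as np

def spectral_matrix(n, v, w, alpha, beta, gamma):
    """Tridiagonal matrix whose eigenvalues are the roots of Sp_n(lambda).

    det(M_n) depends on the off-diagonals only through the products
    alpha_{n,i} * gamma_{n,i}, so the spectrum is unaffected by signs."""
    theta_n = n * (n - 1 + alpha)
    i = np.arange(1, n + 2, dtype=float)          # i = 1, ..., n+1
    xi = -(v * (n - i) * (n - i + 1) + beta * (n - i + 1)) / theta_n
    j = np.arange(2, n + 2, dtype=float)          # i = 2, ..., n+1
    a = ((n - j) * (n - j + 1) + alpha * (n - j + 1)) / theta_n - 1.0
    g = (w * (n - j + 1) * (n - j + 2) + gamma * (n - j + 2)) / theta_n
    return np.diag(xi) + np.diag(a, 1) + np.diag(g, -1)

# Lame-type example Q(z) = z^3 - z, P(z) = Q'(z)/2: here all products
# alpha*gamma are positive, so the spectrum is real.
T = spectral_matrix(6, v=0.0, w=-1.0, alpha=1.5, beta=0.0, gamma=-0.5)
roots = np.linalg.eigvals(T)          # the n+1 roots of Sp_6(lambda)
```

For this real-coefficient example the matrix is similar to a symmetric tridiagonal one, so all roots come out real, in line with the classical Heine-Stieltjes picture.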
The sequence of polynomials $\{Sp_n(\lambda)\}_{n\in{\mathbb Z}_+}$ does not seem to satisfy any reasonable recurrence relation. In order to overcome this difficulty and to be able to use the technique of $3$-term recurrence relations with variable coefficients (which is applicable since $M_n$ is tridiagonal) we extend the above polynomial sequence by introducing an additional parameter. Namely, define $$Sp_{n,i}(\lambda)=\det M_{n,i},\quad i\in\{1,\ldots,n+1\},$$ where $M_{n,i}$ is the upper $i\times i$ principal submatrix of $M_n$. One can easily check (see, e.g., [@Ar p. 20]) that the following $3$-term relation holds $$\label{eq:3term}
Sp_{n,i}(\lambda)=(\lambda-\xi_{n,i}) Sp_{n,i-1}(\lambda) - \psi_{n,i} Sp_{n,i-2}(\lambda),
\quad i\in\{1,\ldots,n+1\},$$ where $\xi_{n,i}$ is as in (\[eq:extra\]) and $$\label{eq:coeffs}
\psi_{n,i}={\alpha}_{n,i}{\gamma}_{n,i},\quad i\in\{2,\ldots,n+1\}.$$ Here we use the (standard) initial conditions $Sp_{n,0}(\lambda)=1$, $Sp_{n,-1}(\lambda)=0$. It is well-known that if all $\xi_{n,i}$’s are real and all $\psi_{n,i}$’s are positive then the polynomials $Sp_{n,i}(\lambda)$, $i\in\{0,\ldots, n+1\}$, form a finite sequence of orthogonal polynomials. In particular, all their roots are real. In our case, however, these coefficients are complex. To complete the proof of Theorem \[th:main\] we state the following generalization of [@KvA Theorem 1.4] which, translated into our notation, claims the following.
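The recurrence itself is straightforward to run; a minimal pure-Python sketch (sample parameters are an illustrative choice) that evaluates $Sp_{n,n+1}(\lambda)$ and checks that $Sp_n$ is monic of degree $n+1$:

```python
def sp_eval(lam, n, v, w, alpha, beta, gamma):
    """Sp_{n,n+1}(lam) via the three-term recurrence
    Sp_{n,i} = (lam - xi_{n,i}) Sp_{n,i-1} - psi_{n,i} Sp_{n,i-2}."""
    theta_n = n * (n - 1 + alpha)
    p_prev, p = 0.0, 1.0                       # Sp_{n,-1}, Sp_{n,0}
    for i in range(1, n + 2):
        xi = -(v * (n - i) * (n - i + 1) + beta * (n - i + 1)) / theta_n
        psi = 0.0
        if i >= 2:                             # psi_{n,i} defined for i >= 2
            a = ((n - i) * (n - i + 1) + alpha * (n - i + 1)) / theta_n - 1.0
            g = (w * (n - i + 1) * (n - i + 2) + gamma * (n - i + 2)) / theta_n
            psi = a * g
        p_prev, p = p, (lam - xi) * p - psi * p_prev
    return p

# Sp_n is monic of degree n+1: for large lam, Sp_n(lam)/lam^(n+1) -> 1
ratio = sp_eval(1e6, 4, v=0.0, w=-1.0, alpha=1.5, beta=0.0, gamma=-0.5) / 1e6**5
assert abs(ratio - 1.0) < 1e-4
```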
\[KvA\] If there exist two continuous functions $\xi(\tau)$ and $\psi(\tau)$, $\tau\in [0,1]$, such that $$\lim_{i/(n+1)\to \tau} \xi_{n,i}=\xi(\tau),\quad
\lim_{i/(n+1)\to \tau} \psi_{n,i}=\psi(\tau),\quad
\,\,\,\,\forall\tau\in [0,1],$$ then the asymptotic root-counting measure $\mu$ of the polynomial sequence $\{Sp_n(\lambda)\}_{n\in{\mathbb Z}_+}=\{Sp_{n,n+1}(\lambda)\}_{n\in{\mathbb Z}_+}$ (if it exists) and the average $M$ of the arcsine measures given by $$M=\int_0^1
\omega_{\left[\xi(\tau)-2\sqrt{\psi(\tau)},
\xi(\tau)+2\sqrt{\psi(\tau)}\right]}d\tau,$$ have the same logarithmic potential outside the union of their supports.
Recall that for a pair of distinct complex numbers $\alpha_1\neq \alpha_2$ the arcsine measure $\omega_{[\alpha_1, \alpha_2]}$ is the measure supported on the segment $[\alpha_1,\alpha_2]$ whose density at a point $t\in [\alpha_1,\alpha_2]$ equals $\frac{1}{\pi \sqrt{|(t-\alpha_1)(t-\alpha_2)|}}$.
Although Theorem \[KvA\] is not explicitly stated in [@KvA], it is very similar to, and its proof is completely parallel to that of, Theorem 1.4 of that paper.
From the explicit formulas for $\xi_{n,i}$ and $\psi_{n,i}$ (see and ) one easily gets $$\begin{split}
\xi(\tau)&=\lim_{i/(n+1)\to \tau} \xi_{n,i}=-v(1-\tau)^2, \\
\psi(\tau)&=\lim_{i/(n+1)\to \tau} \psi_{n,i}=-w(1-(1-\tau)^2)(1-\tau)^2.
\end{split}$$ Notice that the above limits are independent of the coefficients ${\alpha},{\beta},{\gamma}$ of the polynomial $P(z)$.
\[ellipse\] The parametric curve $\Gamma$ given in the above notation by the formula $\xi(\tau)\pm 2\sqrt{\psi(\tau)},\;\tau\in [0,1]$ is the ellipse passing through the origin and given in coordinates $x=Re(z), y=Im(z)$ by the equation $$\label{ell}
a_{11}x^2+2a_{12}xy+a_{22}y^2+2a_{13}x+2a_{23}y=0$$ where
$a_{11}=C^2+4D^2,\quad a_{12}=-(AC+4BD),\quad a_{22}=A^2+4B^2,$\
$a_{13}=2D(BC-AD),\quad a_{23}=-2B(BC-AD)$
and $A=-Re(v), B=-Im(u), C=-Im(v), D=Re(u)$.
We express the functions $\xi$ and $\psi$ as $$\begin{cases}
\xi(\tau)=-v(1-\tau)^2=-v\theta^2=-v\sin^2\varphi,\\
\psi(\tau)=-w(1-(1-\tau)^2)(1-\tau)^2=-w(1-\theta^2)\theta^2=-w\sin^2\varphi\cos^2\varphi,
\end{cases}$$ where $\tau\in[0,1]$, $\theta:=1-\tau\in[0,1]$, and $\sin\varphi:=\theta$, $\varphi\in[0,\pi/2]$. Then
$\xi(\tau)\pm2\sqrt{\psi(\tau)}=-v\sin^2\varphi\pm\sqrt{-w}\sin2\varphi.$
Thus the curve $\Gamma\subset {\mathbb C}$ is given by the parametrization $ \Gamma(\varphi)=-v \sin^2
\varphi \pm \sqrt{-w}\sin2\varphi $, where $v,w,z\in{\mathbb{C}}$ and $\varphi\in [0,\pi/2]$. Set $w=u^2$, so that $\sqrt{-w}={\mathrm{i}}u$. Then $\Gamma$ has the form: $$\Gamma(\varphi) =-v \sin^2 \varphi \pm {\mathrm{i}}u\sin2\varphi=(-Re(v)-{\mathrm{i}}\, Im(v))
\sin^2\varphi \pm {\mathrm{i}}(Re(u)+{\mathrm{i}}\, Im(u))\sin2\varphi.$$ We, therefore, get the following system for its real and imaginary parts: $$\label{prm}
\begin{cases}
x(\varphi)=A \sin^2 \varphi + B \sin2\varphi \\
y(\varphi)=C \sin^2 \varphi + D \sin2\varphi.
\end{cases}$$ Here $A=-Re(v), B=-Im(u), C=-Im(v), D=Re(u)$ and $\varphi\in
[-\pi/2,\pi/2]$ since $\Gamma$ is $\pi$-periodic.
To show that $\Gamma$ is an ellipse passing through the origin and satisfying (\[ell\]) substitute (\[prm\]) into the expression $
a_{11}x^2(\varphi)+2a_{12}x(\varphi)y(\varphi)+a_{22}y^2(\varphi)+
2a_{13}x(\varphi)+2a_{23}y(\varphi)$, where the coefficients $a_{i,j}$ are defined in the statement of Lemma \[ellipse\]. Simple calculations then show that the latter expression vanishes identically, i.e. for all values of $\varphi$.
To prove that (\[ell\]) describes a real ellipse (and not some other real affine quadric) consider the determinant
$$\Delta:=\left|
\begin{array}{c c c}
a_{11} & a_{12} & a_{13} \\
a_{12} & a_{22} & a_{23} \\
a_{13} & a_{23} & 0
\end{array}
\right| =-4(BC-AD)^4.$$
It is well-known that if $\Delta$ is negative then we have a real ellipse ($\Delta>0$ corresponds to an imaginary ellipse, i.e. an empty set of solutions). Thus unless $BC-AD=0$ (which describes the situation with all three roots of $Q(z)$ being collinear), $\Gamma$ is a real ellipse. To find its semiaxes $a$ and $b$ we calculate the following quantities: $$\delta:=\left|
\begin{array}{c c}
a_{11} & a_{12} \\
a_{12} & a_{22}
\end{array}
\right| =4(BC-AD)^2; \qquad
\iota:=a_{11}+a_{22}=A^2+C^2+4(B^2+C^2).$$ It is known that the roots $\lambda_{1,2}$ of the characteristic equation $\lambda^2-\iota\lambda+\delta=0$ are equal to $2a^2$ and $2b^2$, (in particular, both need to be positive) where $a,b$ are the semiaxes of the ellipse under consideration. We arrive therefore at $$\begin{cases}
a=\frac{1}{2} \sqrt{\iota+\sqrt{\iota^2-4\delta}} \\
b=\frac{1}{2} \sqrt{\iota-\sqrt{\iota^2-4\delta}}
\end{cases}$$ and $\sqrt{\iota^2-4\delta}=\sqrt{((A-2D)^2+(C+2B)^2)((A+2D)^2+(C-2B)^2)}$. For the sake of completeness the eccentricity $c$ of our ellipse can be expressed as
$c=\sqrt{\frac{-\Delta}{\delta^2}\sqrt{\iota^2-4\delta}}=
\frac{1}{2}\sqrt[4]{((A-2D)^2+(C+2B)^2)((A+2D)^2+(C-2B)^2)}$.
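These quantities can be cross-checked numerically against the parameterization (\[prm\]) and Lemma \[foci\] below: every point of $\Gamma$ must have the constant sum of distances $2a$ to the two non-zero roots of $Q(z)$. A sketch with arbitrarily chosen sample values of $v,w$, using $4a^2=(\iota+\sqrt{\iota^2-4\delta})/2$:

```python
import cmath, math

v, w = 1.0 + 0.5j, -0.3 + 0.8j        # sample cubic with non-collinear roots
u = cmath.sqrt(w)                      # w = u^2 (either branch gives the same ellipse)
A, C = -v.real, -v.imag
B, D = -u.imag, u.real

iota = A*A + C*C + 4.0*(B*B + D*D)
delta = 4.0 * (B*C - A*D)**2
s = math.sqrt(iota*iota - 4.0*delta)   # = sqrt(iota^2 - 4*delta) >= 0
a = 0.5 * math.sqrt((iota + s) / 2.0)  # semi-major axis

# foci = the two non-zero roots of Q(z), cf. Lemma [foci]
disc = cmath.sqrt(v*v - 4.0*w)
f1, f2 = (-v + disc) / 2.0, (-v - disc) / 2.0

# every point of Gamma(phi) must satisfy |z - f1| + |z - f2| = 2a
worst = 0.0
for k in range(201):
    phi = 0.5 * math.pi * k / 200.0
    for sign in (1.0, -1.0):
        z = -v * math.sin(phi)**2 + sign * cmath.sqrt(-w) * math.sin(2.0*phi)
        worst = max(worst, abs(abs(z - f1) + abs(z - f2) - 2.0*a))

assert worst < 1e-9
```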
\[foci\] The foci of the ellipse coincide with the two roots of the polynomial $Q(z)$ different from the origin.
The coordinates of the centre ${\mathfrak c}=(x_{\mathfrak c},y_{\mathfrak c})$ of our ellipse satisfy: $$\left.
\begin{aligned}
a_{11}x_{\mathfrak c}+a_{12}y_{\mathfrak c}+a_{13}=0 \\
a_{12}x_{\mathfrak c}+a_{22}y_{\mathfrak c}+a_{23}=0
\end{aligned}
\right\} \quad \Rightarrow \quad x_{\mathfrak c}=\frac{A}{2} \quad
y_{\mathfrak c}=\frac{C}{2}.$$ Recalling that $Q(z)=z(z^2+vz+w)= z(z^2+vz+u^2)$ we need to show that the coordinates $(x_f,y_f)$ of the foci $f$ of $\Gamma$ satisfy the equation: $$f=x_f+{\mathrm{i}}\, y_f=\frac{-v\pm \sqrt{v^2-4u^2}}{2}.$$ To do this we express them through $A,B,C,D$. First, we see that $Re(v^2-4u^2)=A^2+4B^2-C^2-4D^2$. Using the relation:
$\sqrt{\xi+{\mathrm{i}}\, \eta}=\sqrt{\frac{r+\xi}{2}}+{\mathrm{i}}\,
\sqrt{\frac{r-\xi}{2}}$,
where $r=\sqrt{\xi^2+\eta^2}$ we get
$r=\sqrt{(Re(v^2-4u^2))^2+(Im(v^2-4u^2))^2}=\sqrt{((A-2D)^2+(C+2B)^2)((A+2D)^2+(C-2B)^2)}=4c^2$
and $$\begin{aligned}
x_f=\frac{A}{2}\pm\frac{1}{2\sqrt{2}}\sqrt{4c^2+(A^2+4B^2-C^2-4D^2)} \\
y_f=\frac{C}{2}\pm\frac{1}{2\sqrt{2}}\sqrt{4c^2-(A^2+4B^2-C^2-4D^2)}
\end{aligned}$$ Straightforward calculation shows that the centre ${\mathfrak c}$ and the foci $f_1$ and $f_2$ lie on the same line given by the equation: $$\label{ma}
y=\frac{4c^2-(A^2+4B^2-C^2-4D^2)}{2(AC+4BD)}\left(x-\frac{A}{2}\right)+\frac{C}{2}.$$
Finally we check that the distance between the centre and either focus equals $c$, which settles the lemma. This follows, for example, from the expression for the coordinates of the intersection points between (\[ma\]) and the circle $(x-x_{\mathfrak c})^2+(y-y_{\mathfrak c})^2=c^2$.
Proof of Theorem \[th:eq\]
==========================
We start with the following integral representation of the required Cauchy transform.
\[lm:CT\] The Cauchy transform ${\mathcal C}_0(z)$ of the measure $M_0$ associated with the root of the polynomial $Q(z)=z(z^2+vz+w)$ at the origin is given by $$\label{eq:CT}
{\mathcal C}_0(z)=\int_0^1\frac{d\theta}{\sqrt{(v^2-4w)\theta^4+(2vz+4w)\theta^2+z^2}}.$$
Indeed, recall that the Cauchy transform ${\mathcal C}_{[\alpha_1, \alpha_2]}$ of the arcsine measure $\omega_{[\alpha_1, \alpha_2]}$ of the interval $[\alpha_1,\alpha_2]$ equals $${\mathcal C}_{[\alpha_1, \alpha_2]}=\frac{1}{\sqrt{(z-\alpha_1)(z-\alpha_2)}}.$$ The measure $M_0$ is obtained by the averaging of the family of arcsine measures, namely $$M_0=\int_0^1
\omega_{\left[\xi(\tau)-2\sqrt{\psi(\tau)},
\xi(\tau)+2\sqrt{\psi(\tau)}\right]}d\tau,$$ where $\xi(\tau)=-v(1-\tau)^2=-v\theta^2$, $\psi(\tau)=-w(1-(1-\tau)^2)(1-\tau)^2=-w(1-\theta^2)\theta^2$ and $\theta=1-\tau$. Since the Cauchy transform of the average of a family of measures equals the average of the family of their Cauchy transforms one gets after obvious simplifications: $$\begin{aligned}
{\mathcal C}_0(z)=&\int_0^1\frac{d\tau}{\sqrt{\left(z-\xi(\tau)+2\sqrt{\psi(\tau)}\right)\left(z-\xi(\tau)-2\sqrt{\psi(\tau)}\right)}}=\\
=&\int_0^1\frac{d\theta}{\sqrt{(v^2-4w)\theta^4+(2vz+4w)\theta^2+z^2}}.
\end{aligned}$$
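Since $M_0$ is a probability measure, the representation (\[eq:CT\]) must satisfy $z\,{\mathcal C}_0(z)\to 1$ as $z\to\infty$. A quadrature sanity check (an illustration only; the real sample values $v=1$, $w=5$ keep the radicand positive):

```python
import numpy as np

def C0(z, v, w, n=20000):
    # trapezoid rule for the integral (eq:CT); valid here since the
    # radicand stays positive on [0, 1] for these parameters
    th = np.linspace(0.0, 1.0, n + 1)
    f = 1.0 / np.sqrt((v*v - 4.0*w)*th**4 + (2.0*v*z + 4.0*w)*th**2 + z*z)
    return (f[0]/2 + f[1:-1].sum() + f[-1]/2) / n

val = 1000.0 * C0(1000.0, v=1.0, w=5.0)   # z * C0(z) for large z
assert abs(val - 1.0) < 2e-3               # total mass of M_0 is 1
```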
Special case
------------
We first provide the proof of Theorem \[th:eq\] for the specific case $Q(z)=z(4z^2-1)$, where the calculations are somewhat simpler, and then address the general case. By Lemma \[lm:CT\] the Cauchy transform ${\mathcal C}_0(z)$ of the measure $M_0$ associated with the root of $Q(z)$ at the origin is then given by the integral
$$\label{eq:CI}
{\mathcal C}_0(z):=\int_0^1 \frac{d\theta}{\sqrt{\theta^4-\theta^2+z^2}}.$$
We want to find a differential equation satisfied by ${\mathcal C}_0(z)$ w.r.t. the variable $z$. Unfortunately, we do not know how to do it directly and our proof requires a number of intricate variable changes and manipulations. We first change variables by setting $t=2\theta^2-1$ and consider $$\label{CCII}
{\mathcal C}_0(z)=I_0(s)=\frac{1}{\sqrt{2}}\int_{-1}^1
\frac{dt}{\sqrt{t+1}\sqrt{t^2+s}},$$
where $s:=4z^2-1$. Introduce now a family of functions $I_{\nu}(s)$ indexed by $\nu\ge 0$ and defined by:
$$I_{\nu}(s):=\frac{1}{\sqrt{2}}\int_{-1}^1
\frac{t^{\nu}dt}{\sqrt{t+1}\sqrt{t^2+s}}.$$
\[lm:relat\] For $\nu\ge 0$ the following three relations are satisfied:
$$\label{rec}
\frac{\partial
I_{\nu+2}}{\partial s}=-\frac{1}{2}I_{\nu}-s\frac{\partial
I_{\nu}}{\partial s},$$
$$\label{two}
\frac{\partial}{\partial s}(I_2+I_1)=-\frac{1}{4}I_0+\frac{1}{2\sqrt{1+s}},$$
$$\label{three}
\frac{\partial}{\partial
s}(I_3-I_1)=-\frac{3}{4}I_1-\frac{1}{4}I_0.$$
Relation (\[rec\]) can be proved directly:
$$\begin{aligned}
\frac{\partial I_{\nu+2}}{\partial s}
&=-\frac{1}{2\sqrt{2}}\int_{-1}^1 \frac{t^{\nu+2}dt}{\sqrt{t+1}(t^2+s)^{3/2}}
\\
&=-\frac{1}{2\sqrt{2}}\int_{-1}^1
\frac{(t^{\nu+2}+st^{\nu})dt}{\sqrt{t+1}(t^2+s)^{3/2}}+
\frac{1}{2\sqrt{2}}\int_{-1}^1
\frac{st^{\nu}dt}{\sqrt{t+1}(t^2+s)^{3/2}}
\\
&=-\frac{1}{2}I_{\nu}-s\frac{\partial I_{\nu}}{\partial s}.
\end{aligned}$$
Relation (\[two\]) is easy to verify by integration by parts. Indeed,
$$\begin{aligned}
\frac{\partial}{\partial s}(I_2+I_1)
&=-\frac{1}{2\sqrt{2}}\int_{-1}^1
\frac{t+1}{\sqrt{t+1}}\frac{tdt}{(t^2+s)^{3/2}}
=\frac{1}{2\sqrt{1+s}}-\frac{1}{4}I_0.
\end{aligned}$$
Similarly, by integration by parts one gets:
$$\begin{aligned}
\frac{\partial}{\partial s}(I_3-I_1)
&=-\frac{1}{2\sqrt{2}}\int_{-1}^1
\frac{t+1}{\sqrt{t^2-1}}\frac{tdt}{(t^2+s)^{3/2}}
=-\frac{3}{4}I_1-\frac{1}{4}I_0.
\end{aligned}$$
Now, we express ${\partial I_2}/{\partial s}$ from (\[rec\]), substitute it in (\[two\]), and single out $\partial I_1/\partial s$:
$$\label{der1}
\frac{\partial I_1}{\partial s}=s\frac{\partial I_0}{\partial
s}+\frac{1}{4}I_0+\frac{1}{2\sqrt{1+s}}.$$
Adding (\[two\]) and (\[three\]), reducing $\partial
I_3/\partial s$, $\partial I_2/\partial s$ with the help of (\[rec\]), and using (\[der1\]) we obtain:
$$4(1+s)\frac{\partial I_1}{\partial s}=I_0+I_1.$$
Through (\[der1\]) we get:
$$I_1=4s(1+s)\frac{\partial I_0}{\partial s}+s I_0+2\sqrt{s+1}.$$
Differentiating both sides of the latter relation w.r.t. $s$ and using (\[der1\]) again we obtain the required linear non-homogeneous differential equation satisfied by $I_0(s)$:
$$\label{eq}
16s(1+s)\frac{\partial^2 I_0}{\partial s^2}+16(1+2s)\frac{\partial
I_0}{\partial s}+3I_0=-\frac{2}{\sqrt{1+s}}.$$
In order to recover the required equation for ${\mathcal C}_0(z)$ we have to change $s$ back to $z$. Using straightforward relations $$\frac{\partial I_0}{\partial s}=\frac{1}{8z}\frac{\partial {\mathcal C}_0}{\partial z}\text{ and }
\frac{\partial^2 I_0}{\partial s^2}=\frac{1}{64z^3}\left(z\frac{\partial^2 {\mathcal C}_0}{\partial z^2}-\frac{\partial {\mathcal C}_0}{\partial z}\right)$$ we obtain after some obvious simplifications the equation: $$z(4z^2-1)\frac{\partial^2 {\mathcal C}_0}{\partial z^2}+(12z^2-1)\frac{\partial {\mathcal C}_0}{\partial z}+3z{\mathcal C}_0(z)+1=0$$ which coincides with for $Q(z)=z(4z^2-1)$. Thus our special case of Theorem \[th:eq\] is settled.
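The resulting equation can also be verified numerically from the integral (\[eq:CI\]); a finite-difference sketch (an illustration, not a proof) at the sample point $z=2$, which lies outside the ellipse:

```python
import numpy as np

def C0(z, n=4000):
    # trapezoid rule for C0(z) = int_0^1 dtheta / sqrt(theta^4 - theta^2 + z^2),
    # valid for real z with z^2 > 1/4 (radicand stays positive)
    th = np.linspace(0.0, 1.0, n + 1)
    f = 1.0 / np.sqrt(th**4 - th**2 + z*z)
    return (f[0]/2 + f[1:-1].sum() + f[-1]/2) / n

z, h = 2.0, 3e-3
c  = C0(z)
c1 = (C0(z + h) - C0(z - h)) / (2.0*h)           # central first derivative
c2 = (C0(z + h) - 2.0*c + C0(z - h)) / h**2      # central second derivative

# residual of z(4z^2-1) C'' + (12z^2-1) C' + 3z C + 1 = 0
residual = z*(4.0*z*z - 1.0)*c2 + (12.0*z*z - 1.0)*c1 + 3.0*z*c + 1.0
assert abs(residual) < 1e-3
```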
Notice also that can be solved explicitly. The general solution of the corresponding linear homogeneous equation is an arbitrary linear combination of a complete elliptic integral of the first kind $y_1(s)$ and of an associated Legendre function of the second kind $y_2(s)$ given by:
$$\begin{cases}
y_1(s)=&
\frac{2}{\pi\sqrt[4]{1+s}}\mathbb{K}\left(\frac{\sqrt{1+s}-1}{2\sqrt{1+s}}\right)
\\
y_2(s)=& \mathbb{Q}_{-1/4}(1+2s),
\end{cases}$$
here $\mathbb{K}(x)$ and $\mathbb{Q}_\nu(x)$ are the complete elliptic integral of the first kind and the associated Legendre function of the second kind, respectively. The general solution to (\[eq\]) depends on two arbitrary constants $C_1, C_2$ and is given by:
I_0=C_1y_1+C_2y_2+y_2\int y_1\frac{g}{f_2}\frac{ds}{W}-y_1\int
y_2\frac{g}{f_2}\frac{ds}{W},$$
where $g(s)=-\frac{2}{\sqrt{1+s}}, \quad f_2(s)=16s(1+s), \quad
W(s)=y_1(s)y'_2(s)-y_2(s)y'_1(s).$ However, we need its particular solution and thus have to determine the corresponding particular values of $C_1, C_2$. (To find them we evaluated the integral (\[CCII\]) for two different values of $s$. Moreover, analyzing the polynomial $(t+1)(t^2+s)=t^3+t^2+st+s$, we observed that it is positive on $(-1,1]$ for $s>0$ and that (\[CCII\]) is divergent when $s=0$.)
The next figure compares the appropriate solution of (\[eq\]) giving $I_0(s)$ with the values of $I_0(s)$ calculated numerically using the integral (\[CCII\]) for a number of values of $s$ (which are shown by dots below).
\[fig3\]
General case
------------
The scheme of this proof is exactly the same as in the above special case but calculations are somewhat messier. Assuming that $v^2-4w\neq0$ we need to find a differential equation satisfied by the integral . We change variables as follows: $$s=-16w\frac{z^2+vz+w}{(v^2-4w)^2}, \qquad u=v\frac{v+2z}{v^2-4w},
\qquad a=v^2-4w$$ and denote $I_0(s,u,a)={\mathcal C}_0(z)$. (Here as above we assume that $v$ and $w$ are some fixed complex numbers.) It also helps to change the variable $\theta$ in by using $2\theta^2=t+1$ and then we finally get $${\mathcal C}_0(z)=I_0(s)=\frac{1}{\sqrt{2a}}\int_{-1}^1\frac{dt}{\sqrt{t+1}\sqrt{(t+u)^2+s}}.$$ As above we introduce a family of functions $I_{\nu}(s),\;s\ge0$ given by the formula: $$I_\nu(s):=\frac{1}{\sqrt{2a}}\int_{-1}^1\frac{(t+u)^\nu
dt}{\sqrt{t+1}\sqrt{(t+u)^2+s}}.$$ Analogously to Lemma \[lm:relat\] one can prove the next statement.
The following relations are valid for $I_{\nu}(s),\;s\ge0$: $$\label{srec}
\frac{\partial
I_{\nu+2}}{\partial s}=-\frac{1}{2}I_{\nu}-s\frac{\partial
I_{\nu}}{\partial s},$$
$$\label{stwo}
\frac{\partial}{\partial s}(I_2+I_1)=u\frac{\partial I_1}{\partial s}
-\frac{1}{4}I_0+\frac{1}{2\sqrt{2}\sqrt{(u+1)^2+s}},$$
$$\label{sthree}
\frac{\partial}{\partial
s}(I_3-I_1)=(u^2-2u)\frac{\partial I_1}{\partial s}-\frac{3}{4}I_1+
\frac{u-1}{4}I_0+\frac{u}{\sqrt{2}\sqrt{(u+1)^2+s}}.$$
Now, we use (\[srec\]) to express ${\partial I_2}/{\partial
s}$ and then we single out ${\partial I_1}/{\partial s}$ from (\[stwo\]): $$\label{sder1}
\frac{\partial I_1}{\partial s}=\frac{s}{1-u}\frac{\partial
I_0}{\partial
s}+\frac{I_0}{4(1-u)}+\frac{1}{2\sqrt{a}(1-u)\sqrt{(u+1)^2+s}}.$$ Adding (\[stwo\]) and (\[sthree\]), employing (\[srec\]) again, and using (\[sder1\]) we get the relation: $$\label{sI1}
(u-1)I_1=-4s(s+(u-1)^2)\frac{\partial I_0}{\partial
s}-sI_0-\frac{2(s-u^2+1)}{\sqrt{a}\sqrt{(u+1)^2+s}}.$$ Eventually, taking the derivative of both sides of the latter equation w.r.t. $s$ and using (\[sder1\]) again we finally get a linear differential equation in the variable $s$ satisfied by $I_0(s)$: $$\label{seq}
16s(s+(u-1)^2)\frac{\partial^2 I_0}{\partial
s^2}+16(2s+(u-1)^2)\frac{\partial I_0}{\partial
s}+3I_0+\frac{2}{\sqrt{2}}\frac{s+(u+1)(5u+1)}{\sqrt{(u+1)^2+s}}=0.$$
In order to get an equation for ${\mathcal C}_0(z)$ w.r.t. the variable $z$, we use: $$\frac{\partial {\mathcal C}_0}{\partial z}=\frac{\partial s}{\partial
z}\frac{\partial I_0}{\partial s}+\frac{\partial u}{\partial
z}\frac{\partial I_0}{\partial u}$$ and $$\frac{\partial^2 {\mathcal C}_0}{\partial z^2}=\frac{\partial^2 s}{\partial z^2}
\frac{\partial I_0}{\partial s}+\left(\frac{\partial s}{\partial
z}\right)^2\frac{\partial^2 I_0}{\partial s^2}+2\frac{\partial s}{\partial z}
\frac{\partial u}{\partial z}\frac{\partial^2 I_0}{\partial s \partial
u}+\left(\frac{\partial u}{\partial
z}\right)^2\frac{\partial^2 I_0}{\partial u^2}.$$ With the help of $\frac{\partial I_0}{\partial u}=2I_1$ and (\[sder1\]) we obtain $$\frac{\partial {\mathcal C}_0}{\partial z}=\left(\frac{\partial s}{\partial
z}+2\frac{\partial u}{\partial
z}\frac{s}{1-u}\right)\frac{\partial I_0}{\partial s}+2\frac{\partial u}{\partial
z}\frac{I_0}{4(1-u)}+\frac{\partial u}{\partial
z}\frac{1}{\sqrt{a}(1-u)\sqrt{(u+1)^2+s}}.$$ Now, we get $$\label{J0z}
\frac{\partial I_0}{\partial
s}=\frac{(v^2-4w)(vz+2w)}{16wz}\frac{\partial {\mathcal C}_0}{\partial
z}+\frac{v(v^2-4w)}{32wz}{\mathcal C}_0+\frac{v(v^2-4w)}{32wz(v+z)}.$$
Further, we use $$\frac{\partial^2 I_0}{\partial u^2}=-4\frac{\partial I_0}{\partial
s}-4s\frac{\partial^2 I_0}{\partial s^2}$$ and $$\begin{aligned}
\frac{\partial^2 I_0}{\partial s \partial
u}=&2(u-1)\frac{\partial^2 I_0}{\partial
s^2}+\frac{3s+4(u-1)^2}{2s(u-1)}\frac{\partial I_0}{\partial
s}+
\frac{3I_0}{8s(u-1)}+\frac{3s+5u^2+6u+1}{4\sqrt{a}(u-1)s(s+(u-1)^2)^{3/2}}.
\end{aligned}$$ We can now express ${\partial^2 {\mathcal C}_0}/{\partial z^2}$ through ${\partial^2 I_0}/{\partial s^2}$, ${\partial
I_0}/{\partial s}$, and $I_0$ as follows: $$\begin{aligned}
\frac{\partial^2 {\mathcal C}_0}{\partial z^2}=&\left(\left(\frac{\partial
s}{\partial z}\right)^2+4(u-1)\left(\frac{\partial s}{\partial
z}\right)\left(\frac{\partial u}{\partial
z}\right)-4s\left(\frac{\partial u}{\partial
z}\right)^2\right)\frac{\partial^2 I_0}{\partial s^2}+
\\
&+\left(\frac{\partial^2 s}{\partial z^2}+\left(\frac{\partial s}{\partial
z}\right)\left(\frac{\partial u}{\partial
z}\right)\frac{3s+4(u-1)^2}{s(u-1)}-4\left(\frac{\partial u}{\partial
z}\right)^2\right)\frac{\partial I_0}{\partial s}+
\\
&+\left(\frac{\partial s}{\partial
z}\right)\left(\frac{\partial u}{\partial
z}\right)\frac{3I_0}{4s(u-1)}+\left(\frac{\partial s}{\partial
z}\right)\left(\frac{\partial u}{\partial
z}\right)\frac{3s+5u^2+6u+1}{4\sqrt{a}(u-1)s(s+(u-1)^2)^{3/2}}.
\end{aligned}$$ This leads to:
$$\begin{aligned}
\frac{\partial^2 {\mathcal C}_0}{\partial
z^2}=&-\frac{256wz^2}{(v^2-4w)^3}\frac{\partial^2 I_0}{\partial
s^2}-
\\
&-16\frac{4w^2(w+z^2)+4vwz(w+2z^2)+v^2w(w+5z^2)+v^3(2wz-z^3)}
{(v^2-4w)^2(2w+vz)(w+z(v+z))}\frac{\partial I_0}{\partial s}+
\\
&+\frac{3v(v+2z)}{4(2w+vz)(w+z(v+z))}I_0+
\\
&+\frac{v(v+2z)(3v^4+8v^3z-24vwz-4w(2w+3z^2)
+v^2(5z^2-8w))}{4(v^2-4w)(v+z)^3(2w+vz)(w+z(v+z))}.
\end{aligned}$$
From the latter equation and (\[J0z\]) we finally get: $$\begin{aligned}
\frac{\partial^2 I_0}{\partial
s^2}=&-\frac{(v^2-4w)^3}{256wz^2}\frac{\partial^2 {\mathcal C}_0}{\partial
z^2}- \frac{(v^2-4w)^2 c_1}
{256w^2z^3(w+z(v+z))}\frac{\partial {\mathcal C}_0}{\partial z}-
\\
&-\frac{v(v^2-4w)^2c_2}{1024wz^2(w+z(v+z))}{\mathcal C}_0-
\frac{v(v^2-4w)^2c_3}
{1024w^2z^3(v+z)^3(w+z(v+z))},
\end{aligned}$$ where $$\begin{aligned}
c_1=&4w^2(w+z^2)+4vwz(w+2z^2)+v^2w(w+5z^2)+v^3(2wz-z^3),
\\
c_2=&8vwz+v^2(w-2z^2)+4w(w+4z^2),
\\
c_3=&v^4(w-2z^2)+12vwz(w+3z^2)+4wz^2(3w+4z^2)+v^3(8wz-4z^3)+
\\
&+v^2(4w^2+27wz^2-2z^4).
\end{aligned}$$
Plugging these formulae into (\[seq\]) we arrive at: $$4z(z^2+vz+w){\mathcal C}_0^{''}(z)+4(3z^2+2vz+w){\mathcal C}_0^{'}(z)+(3z+v){\mathcal C}_0(z)+1=0,$$ which can be equivalently expressed as (\[eq:Heun\])
with $Q(z)=4z(z^2+vz+w)$. (Notice that the multiplication of $Q(z)$ by a non-vanishing constant is irrelevant in our considerations.)
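As in the special case, the final equation can be checked numerically against the integral representation of Lemma \[lm:CT\]; a finite-difference sketch with the arbitrarily chosen real sample values $v=1$, $w=5$ and a point $z=10$ far outside the ellipse:

```python
import numpy as np

def C0(z, v=1.0, w=5.0, n=4000):
    # trapezoid rule for the integral representation of Lemma [lm:CT];
    # the radicand stays positive on [0, 1] for these parameters
    th = np.linspace(0.0, 1.0, n + 1)
    f = 1.0 / np.sqrt((v*v - 4.0*w)*th**4 + (2.0*v*z + 4.0*w)*th**2 + z*z)
    return (f[0]/2 + f[1:-1].sum() + f[-1]/2) / n

v, w, z, h = 1.0, 5.0, 10.0, 3e-3
c  = C0(z)
c1 = (C0(z + h) - C0(z - h)) / (2.0*h)
c2 = (C0(z + h) - 2.0*c + C0(z - h)) / h**2

# residual of 4z(z^2+vz+w) C'' + 4(3z^2+2vz+w) C' + (3z+v) C + 1 = 0
residual = (4.0*z*(z*z + v*z + w)*c2 + 4.0*(3.0*z*z + 2.0*v*z + w)*c1
            + (3.0*z + v)*c + 1.0)
assert abs(residual) < 1e-3
```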
Final remarks
=============
It is very tempting to extend the methods and results of the present paper to the case of the ‘generalized’ Heun equations which are of the form $$\left\{Q_{k+1}(z)\frac{d^k}{dz^k}+Q_{k}(z)\frac{d^{k-1}}{dz^{k-1}}+...+Q_2(z)\frac{d}{dz}+V(z)\right\}S(z)=0,$$ where $\deg Q_{k+1}(z)=k+1$ and $\deg Q_i(z)\le i$ for $i=2,3,...,k$. As in the introduction, for each positive (and sufficiently large) integer $n$ there exist $n+1$ polynomials $V(z)$, counted with appropriate multiplicities, such that for each of these $V(z)$ the above equation has a polynomial solution $S(z)$ of degree $n$. Thus one can define the corresponding spectral polynomials and study the asymptotics of their root-counting measures. Large scale numerical experiments support the following.
For any ‘generalized’ Heun equation the sequence $\{\mu_n\}$ of the root-counting measures of its spectral polynomials converges to a probability measure $\mu$ supported on a curvilinear planar tree located inside $Conv_{Q_{k+1}}$ and whose set of leaves (i.e. vertices of valency $1$) coincides with the set of all roots of $Q_{k+1}(z)$, see Fig. \[fig4\]. Moreover, the limiting measure $\mu$ depends only on $Q_{k+1}(z)$, i.e. is [*independent*]{} of the other coefficients of the equation.
We finish our paper with the following problem.
Under the assumption that the latter conjecture holds (which is very likely) is it true that the Cauchy transform ${\mathcal C}_\mu$ of the limiting root-counting measure $\mu$ satisfies a linear ODE of the form: $$Q_{k+1}(z){\mathcal C}_\mu^{(k)}(z)+a_1Q'_{k+1}(z){\mathcal C}_\mu^{(k-1)}(z)+a_2Q_{k+1}^{\prime\prime}(z){\mathcal C}_\mu^{(k-2)}(z)+...+a_{k+1}Q_{k+1}^{(k+1)}(z)=0,$$ where $a_1,...,a_{k+1}$ are some universal constants, i.e. independent of $Q_{k+1}(z)$ (but maybe dependent on the order $k$ of the operator).
[99]{}
M. Abramowitz, I. A. Stegun (eds.), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards Appl. Math. Ser. 55, Dover Publications, New York, xiv+1046 pp., 1964.
F. M. Arscott, Periodic differential equations, Pergamon Press, vii+283 pp.,1964.
J. Borcea, B. Shapiro, [*Root asymptotics of spectral polynomials for the Lamé operator*]{}, Commun. Math. Phys, [**282**]{} (2008), 323–337.
J. Garnett, [*Analytic capacity and measure.*]{} Lecture Notes in Mathematics, [**297**]{} Springer-Verlag, Berlin-New York, 1972. iv+138 pp.
E. Heine, Handbuch der Kugelfunctionen, Vol. 1, pp. 472–479, G. Reimer Verlag, Berlin, 1878.
A. Ronveaux (Ed.), Heun’s Differential Equations, Oxford University Press, Oxford, 1995.
A. B. J. Kuijlaars, W. Van Assche, [*The asymptotic zero distribution of orthogonal polynomials with varying recurrence coefficients,*]{} J. Approx. Theory [**99**]{} (1999), 167–197.
G. V. Kuzmina, [*Estimates of the transfinite diameter of a certain family of continua and covering theorems for schlicht functions.*]{} (Russian) Trudy Mat. Inst. Steklov [**94**]{} (1968), 47–65.
G. V. Kuzmina, Moduli of families of curves and quadratic differentials. A translation of Trudy Mat. Inst. Steklov. 139 (1980). Proc. Steklov Inst. Math. 1982 (1), vii+231 pp.
G. Pólya, [ Sur un théoreme de Stieltjes]{}, C. R. Acad. Sci Paris [**155**]{} (1912), 767–769.
B. Shapiro, [*Algebraic aspects of Heine-Stieltjes theory*]{}, submitted.
B. Shapiro, K. Takemura, and M. Tater, [*On spectral polynomials of the Heun equation. II*]{}, in preparation.
T. Stieltjes, [*Sur certains polynômes qui vérifient une équation différentielle linéaire du second ordre et sur la théorie des fonctions de Lamé*]{}, Acta Math. [**8**]{} (1885), 321–326.
K. Strebel, [Quadratic differentials]{}, Ergebnisse der Mathematik und ihrer Grenzgebiete, 5, Springer-Verlag, Berlin, (1984), xii+184 pp.
K. Takemura, [*Private communication*]{}, October 2007.
A. Turbiner, [*Quasi-exactly solvable differential equations,*]{} in CRC Handbook of Lie group analysis of differential equations Vol. 3, ed. N. H. Ibragimov, CRC Press, Boca Raton, FL, xvi+536 pp., 1996.
E. T. Whittaker, G. Watson, A course of modern analysis. An introduction to the general theory of infinite processes and of analytic functions with an account of the principal transcendental functions. Reprint of the 4-th (1927) edition, Cambridge Mathematical Library, Cambridge Univ. Press, Cambridge, UK, vi+608 pp., 1996.
---
author:
- Ming Li
- Eite Tiesinga
- Svetlana Kotochigova
bibliography:
- 'Refs.bib'
title: 'Orbital Quantum Magnetism in Spin Dynamics of Strongly Interacting Magnetic Lanthanide Atoms: Supplemental Material'
---
Relative Motion of two bosonic lanthanide atoms {#relative-motion-of-two-bosonic-lanthanide-atoms .unnumbered}
-----------------------------------------------
A single site of an optical lattice potential is well approximated by a cylindrically-symmetric harmonic trap. The Hamiltonian for the relative motion of two ground-state bosonic lanthanide atoms is then $H_{\rm rel} =H_0+U$, where $$H_0 ={\vec p\,}^2\!/(2\mu)+ \mu (\omega_\rho^2\rho^2 +\omega_z^2 z^2)/2 + H_{\rm Z}$$ with relative momentum $\vec p$, relative coordinate $\vec
r=(r,\theta,\varphi)=(\rho, \theta, z)$ between the atoms in spherical and cylindrical coordinates, respectively, and reduced mass $\mu$. Moreover, $\omega_\rho$ and $\omega_z$ are trapping frequencies and $H_{\rm Z}=g\mu_{\rm B} {(\vec \jmath_a+\vec \jmath_b)\cdot \vec B}$ is the Zeeman Hamiltonian with atomic g-factor $g$ and Bohr magneton $\mu_{\rm B}$. Zeeman states $|j_a, m_a\rangle|j_b, m_b\rangle$ are eigenstates of $H_{\rm Z}$, where $\vec\jmath_\alpha$ is the total atomic angular momentum of atom $\alpha=a$ or $b$ and $m_\alpha$ is its projection along $\vec B$.
We have assumed that the harmonic lattice-site potential is the same for all atomic states and a $\vec B$ field directed along the axial or $z$ direction of the trap. Small tensor light shifts, proportional to $(m_a)^2$ [@Laburthe2013], induced by the lattice lasers, have been omitted. Here and throughout, we use dimensionless angular momenta, i.e. $\vec
\jmath/\hbar$ is implied when we write $\vec \jmath$ with $\hbar=h/2\pi$ and Planck constant $h$. For angular momentum algebra we follow Ref. [@BrinkSatchler].
The Hamiltonian term $U$ describes the molecular interactions. It contains an isotropic contribution $U^{\rm iso}( r)$ that only depends on $r$ as well as an anisotropic contribution $U^{\rm aniso}(\vec r)$ that also depends on the orientation of the internuclear axis. In fact, we have $U^{\rm iso}(r) = V_0(r) + V^{\rm jj}(r) \, {\vec\jmath_a\cdot \vec \jmath_b} + \cdots$, where for ${r\to\infty}$ the angular-momentum independent ${V_0(r)\to-C_6/r^6}$ with dispersion coefficient ${C_6 =1723E_{\rm h} a^6_0}$ for Er$_2$ [@Frisch2014; @Maier2015]. For small $r$ the potential $V_0(r)$ has a repulsive wall and an attractive well with depth ${D_e/(hc)\approx790}$ cm$^{-1}$. We use $V^{\rm jj}(r)= -c^{\rm jj}_6/r^6$ for all $r$ with dispersion coefficient $c^{\rm jj}_6=-0.1718 E_{\rm h} a^6_0$. Furthermore, $U^{\rm aniso}(\vec r)=U^{\rm orb}(\vec r)+U^{\rm dip}(\vec r)$ with $$\begin{aligned}
U^{\rm orb}(\vec r) &=& V^{\rm orb}(r)\sum_{i=a,b} \frac{3(\hat r \cdot \vec \jmath_i)(\hat r \cdot \vec \jmath_i)-\vec \jmath_i \cdot \vec \jmath_i }{\sqrt{6}}+ \cdots \,, \nonumber\end{aligned}$$ and $$\begin{aligned}
U^{\rm dip}(\vec r)&=&- \frac{\mu_0(g\mu_{\rm B})^2}{4\pi} \frac{3(\hat r \cdot \vec \jmath_a)(\hat r \cdot \vec \jmath_b)-\vec \jmath_a \cdot \vec \jmath_b}{r^3}\end{aligned}$$ is the magnetic dipole-dipole interaction. We use $ V^{\rm
orb}(r)=-c^{\rm orb}_6/r^6 $ for all $r$ with $c^{\rm orb}_6 =
-1.904E_{\rm h} a^6_0$. Finally, $E_{\rm h}$ is the Hartree energy, $c$ is the speed of light in vacuum, and $\mu_0$ is the magnetic constant.
The coefficient $C_6$ is by far the largest dispersion coefficient, making the van der Waals length $R_6=\sqrt[4]{2\mu C_6/\hbar^2}$ the natural length scale for the dispersive interactions. The contribution to $U^{\rm orb}(\vec r)$ with strength $V^{\rm orb}(r)$ is the strongest anisotropic orbital interaction. It couples the angular momentum of each atom to the rotation of the molecule. For future reference, and following the convention in the literature, the natural length scale for the magnetic dipole-dipole interaction is $a_{\rm dd}=(1/3)\times 2\mu C_3/\hbar^2$ with coefficient $C_3=\mu_0(g\mu_{\rm B}j)^2/(4\pi)$, where $j=6$ and $g=1.16381$ for Er.
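As a quick numerical check, the two length scales defined above can be evaluated from the quoted $^{168}$Er parameters. The sketch below (not part of the original; CODATA constants via `scipy`, with $C_6=1723\,E_{\rm h}a_0^6$, $g=1.16381$, and $j=6$ as given in the text) yields $R_6$ of roughly $150\,a_0$ and $a_{\rm dd}$ of roughly $66\,a_0$:

```python
# Numerical sketch of the length scales for 168Er (values quoted in the text).
from math import pi
from scipy.constants import physical_constants, hbar, mu_0, atomic_mass

a0 = physical_constants['Bohr radius'][0]          # m
Eh = physical_constants['Hartree energy'][0]       # J
muB = physical_constants['Bohr magneton'][0]       # J/T

mu = 168 * atomic_mass / 2                         # reduced mass of the pair

C6 = 1723 * Eh * a0**6
R6 = (2 * mu * C6 / hbar**2) ** 0.25               # van der Waals length

g, j = 1.16381, 6
C3 = mu_0 * (g * muB * j) ** 2 / (4 * pi)
a_dd = (1 / 3) * 2 * mu * C3 / hbar**2             # dipolar length

print(R6 / a0, a_dd / a0)                          # ~150 a0 and ~66 a0
```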
The Hamiltonian $H_{\rm rel}$ commutes with $J_{z}$ and couples only even or only odd values of $\vec \ell$, the relative orbital angular momentum or partial wave. Here, $J_{z}$ is the $z$ projection of the total angular momentum $\vec J =\vec \ell + \vec \jmath\,$ with $\vec \jmath=\vec \jmath_a + \vec \jmath_b$. For $B=0$, $H_{\rm rel}$ also commutes with $J^2$. Eigenstates $| i, M \rangle$ of $H_{\rm rel}$ with energy $E_{i,M}$ are labeled by projection quantum number $M$ and index $i$. These eigenpairs have been computed in the spin basis $$|(j_aj_b)j \ell; J M \rangle\equiv
\sum_{m_j m} \langle j\ell m_j m | J M \rangle
|(j_aj_b)j m_j\rangle Y_{\ell m}(\theta,\varphi)$$ with $|(j_aj_b)j
m_j\rangle=\sum_{m_a m_b}\langle j_a j_b m_a m_b | j m_j \rangle |j_a
m_a\rangle |j_b m_b\rangle$, spherical harmonic function $Y_{\ell
m}(\theta,\varphi)$, and $\langle j_1 j_2 m_1 m_2 | j m \rangle$ are Clebsch-Gordan coefficients. For our bosonic system, only basis states with even $\ell+j$ exist. We use a discrete variable representation (DVR) [@ColbertMiller; @Tiesinga1998] to represent the radial coordinate $r$. The largest $r$ value is a few times the largest of the harmonic oscillator lengths $\sqrt{\hbar/(\mu\omega_{\rho,z})}$, and for typical traps $R_6\ll \sqrt{\hbar/(\mu\omega_{\rho,z})}$. We further characterize eigenstates by computing overlap amplitudes with those at different $B$ field values. In particular, overlaps with $B=0$ eigenstates give us the approximate $J$ value. The expectation values of the operators $j^2$ and $\ell^2$ are also computed.
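The DVR technique referenced above can be illustrated in one dimension with the textbook sinc-function (Colbert-Miller) grid. The sketch below uses hypothetical grid parameters and units with $\hbar=m=\omega=1$, and recovers the harmonic-oscillator spectrum:

```python
import numpy as np

# Sinc-DVR (Colbert-Miller) kinetic-energy matrix on a uniform 1D grid,
# applied to a harmonic oscillator as a minimal illustration.
n = 201
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
with np.errstate(divide='ignore'):
    # diagonal pi^2/3, off-diagonal 2(-1)^(i-j)/(i-j)^2, all over 2 dx^2
    T = np.where(i == j, np.pi**2 / 3.0,
                 2.0 * (-1.0) ** (i - j) / (i - j) ** 2) / (2.0 * dx**2)

H = T + np.diag(0.5 * x**2)        # V(x) = x^2/2
E = np.linalg.eigvalsh(H)
print(E[:3])                       # ~ [0.5, 1.5, 2.5]
```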
Eigenstates in an isotropic lattice site {#eigenstates-in-an-isotropic-lattice-site .unnumbered}
----------------------------------------
![Near-threshold energy levels of the relative motion of two harmonically trapped and interacting $^{168}$Er atoms with ${M =-10}$, $-11$, and $-12$ in panel a), b), and c), respectively. The trap is isotropic with $\omega/(2\pi)$ = 0.4 MHz. In the three panels the zero of energy corresponds to the Zeeman energy of two atoms at rest with ${m_a+m_b=-10}$, $-11$, and $-12$, respectively. Dashed red lines correspond to the first and second harmonic-oscillator levels with energies $(3/2)\hbar\omega$ and $(7/2)\hbar\omega$, respectively. Blue curves indicate the energy levels that are involved in the spin-changing oscillations. Some of the eigenstates have been labeled by their dominant $\ell$ and $j$ contribution. Orange arrows (not to scale) and a red rf pulse indicate single-color two-photon rf transitions that initiate the spin oscillations starting from two atoms in the $|6,-6\rangle$ state. []{data-label="IsoTrap"}](Fig4)
Figure \[IsoTrap\] shows even-$\ell$, above-threshold $^{168}$Er$_2$ energy levels in an isotropic harmonic trap for $M=-12$, $-11$, or $-10$ as functions of $B$ up to 0.05 mT. The energies of two harmonic oscillator levels are also shown. The energetically lowest is an $\ell=0$ or $s$-wave state with energy $(3/2)\hbar \omega$; the second is degenerate, comprising one $s$- and multiple $\ell=2$, $d$-wave states with energy $(7/2)\hbar \omega$. In each panel there exist one or two eigenstates whose energies run nearly parallel to these oscillator energies. In fact, their energy, just above $(3/2)\hbar \omega$, indicates a repulsive effective atom-atom interaction [@Busch1998; @Tiesinga2000]. For our weak $B$ fields and [*away*]{} from avoided crossings, their wavefunctions are well described by, or are correlated to, a single ${J =10}$ or 12 zero-$B$ eigenstate. Further, they have an $s$-wave dominated spatial function, ${j\approx J }$, and ${m_j\approx M }$. Away from avoided crossings these eigenstates will be labeled by $| s; j m_j \rangle\rangle$.
A single bound state with $E<0$ at $B=0$ and a negative magnetic moment can be inferred in each panel of Fig. \[IsoTrap\]. It undergoes avoided crossings with several trap states when $B>0.015$ mT and has mixed $g$- and $i$-wave character away from these crossings. In free space this bound state would induce a narrow Feshbach resonance near $B=6$ $\mu$T (not shown).
Figure \[IsoTrap\] also shows eigenstates with energies close to $E=(7/2)\hbar \omega$ when $B=0$. For $B>2$ $\mu$T and away from avoided crossings, these states have $d$-wave character, are well described by a single $j,m_j$ pair, and have a magnetic moment, $-dE_{i,M}/dB$, that is an integer multiple of $g\mu_{\rm B}$. $D$-wave states with a positive magnetic moment in the figure have avoided crossings with $s$-wave states $| s; j m_j \rangle\rangle$ and play an important role in our analysis of spin oscillations. We focus on the three-state avoided crossing in panel a) near ${B=
0.020}$ mT, where the corresponding $d$-wave state has $j=12$ and $m_j=-12$ and will be labeled by $| d; j m_j\rangle\rangle$. Close to the avoided crossings the three eigenstates of $H_{\rm rel}$ are superpositions of the $B$-independent $| s; j m_j \rangle\rangle$ and $| d; j m_j\rangle\rangle$ states. The mixing coefficients follow from the overlap amplitudes with eigenstates well away from the avoided crossing. In other words $ | i, {M =-10}\rangle= \sum_k U_{i,k}(B)\, | k
\rangle\rangle $ with $k=\{s;10,-10\}$, $\{s;12,-10\}$, and $\{d;12,-12\}$ and $U(B)$ is a $B$-dependent $3\times 3$ unitary matrix.
Finally, Fig. \[IsoTrap\] shows the rf pulse that initiates spin oscillations. We choose a non-zero $B$ field and start in $|s; 12\,{-12}\rangle\rangle$ with ${M =-12}$ in panel c). After the pulse, via near-resonant intermediate states with ${M
=-11}$, the atom pair is in a superposition of two or three eigenstates with ${M =-10}$. The precise superposition depends on experimental details such as carrier frequency, polarization, and pulse shape. We, however, can use the following observations. The initial $|s; 12\,{-12}\rangle\rangle$ state can also be expressed as the uncoupled product state $|6,-6\rangle|6,-6\rangle$ independent of $B$. As rf photons only induce transitions in atoms (and do not couple to $\vec \ell$ of the atom pair), the absorption of one photon by each atom leads to the product $s$-wave state $|6,-5\rangle|6,-5\rangle$, neglecting changes to the spatial wavefunction of the atoms. To prevent population in atomic Zeeman levels with ${m_{a,b}>-5}$ we follow Ref. [@Laburthe2013] and assume that light shifts induced by optical photons briefly break the resonance condition for transitions to such states.
The $s$-wave $|6,-5\rangle|6,-5\rangle$ state then evolves under the molecular Hamiltonian. It is therefore convenient to express this state in terms of ${M =-10}$ eigenstates. First, by coupling the atomic spins to $\vec \jmath$ we note $|6,-5\rangle|6,-5\rangle \to c_{10}
| s; 10,{-10}\rangle\rangle + c_{12}|s; 12,{-10}\rangle\rangle$, where $c_j=\langle 6 6 \,{-5} {-5} | j \,{-10}\rangle$ and each $|k\rangle\rangle$ is a superposition of three eigenstates $| i, {M
=-10}\rangle$ as given by the inverse of $U(B)$. After free evolution for time $t$, we measure the population remaining in the initial state.
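The coefficients $c_j=\langle 6\, 6 \,{-5}\,{-5} | j \,{-10}\rangle$ introduced above follow from standard angular-momentum algebra; a small check using `sympy` (an assumed tool choice, not from the original):

```python
# Clebsch-Gordan check of the spin decomposition of |6,-5>|6,-5>.
from sympy import S
from sympy.physics.quantum.cg import CG

def cg(j1, m1, j2, m2, j, m):
    return float(CG(S(j1), S(m1), S(j2), S(m2), S(j), S(m)).doit())

c10 = cg(6, -5, 6, -5, 10, -10)
c11 = cg(6, -5, 6, -5, 11, -10)
c12 = cg(6, -5, 6, -5, 12, -10)

# m_a = m_b, so only even j contribute; the j = 10 and 12 terms exhaust
# the state, consistent with the two-term expansion in the text.
print(c10, c12, c10**2 + c12**2)
```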
Anisotropic harmonic traps {#anisotropic-harmonic-traps .unnumbered}
--------------------------
The lattice laser beams along the independent spatial directions do not need to have the same intensity, and thus the potential of a lattice site can be anisotropic. This gives us a means to extract further information about the anisotropic molecular interaction potentials. A first such experiment for Er provided evidence of the effect of the orientation-dependent intra-site magnetic dipole-dipole interaction on the particle-hole excitation frequency in the doubly-occupied Mott state [@Baier2016].
Figure \[AnisoTrap\] shows the near-threshold energy ${M
=-10}$ levels in an anisotropic harmonic trap as a function of trap aspect ratio $\omega_z/\omega_\rho$ at fixed $\omega/(2\pi)= 0.4$ MHz for two magnetic field strengths. Dashed lines correspond to non-interacting levels, including only the harmonic trap and Zeeman energies. For $\omega_z/\omega_\rho\to 0$ and $\infty$ the trap is quasi-1D and quasi-2D, respectively. Energy levels involved in spin oscillations are highlighted in blue, and for finite $B$ their avoided crossings with other states can be studied.
![Near-threshold energy levels of $^{168}$Er$_2$ with ${M =-10}$ in an anisotropic harmonic trap as a function of trap aspect ratio $\omega_z/\omega_\rho$ at fixed $\omega/(2\pi)= 0.4$ MHz for $B=0.1$ $\mu$T and 0.05 mT in panels a) and b), respectively. The red dashed lines correspond to non-interacting levels. Blue lines are relevant for spin-oscillation experiments. The zero of energy in a panel is that of two free atoms at rest with $m_a+m_b=-10$ at the corresponding $B$ field. []{data-label="AnisoTrap"}](Fig5)
---
abstract: |
Intuitively, human readers cope easily with errors in text; typos, misspelling, word substitutions, etc. do not unduly disrupt natural reading. Previous work indicates that letter transpositions result in increased reading times, but it is unclear if this effect generalizes to more natural errors. In this paper, we report an eye-tracking study that compares two error types (letter transpositions and naturally occurring misspelling) and two error rates (10% or 50% of all words contain errors). We find that human readers show unimpaired comprehension in spite of these errors, but error words cause more reading difficulty than correct words. Also, transpositions are more difficult than misspellings, and a high error rate increases difficulty for all words, including correct ones. We then present a computational model that uses character-based (rather than traditional word-based) surprisal to account for these results. The model explains that transpositions are harder than misspellings because they contain unexpected letter combinations. It also explains the error rate effect: upcoming words are more difficult to predict when the context is degraded, leading to increased surprisal.
**Keywords:** human reading, eye-tracking, errors, computational modeling, surprisal, neural networks.
bibliography:
- 'references.bib'
title: |
Character-based Surprisal as a Model of\
Human Reading in the Presence of Errors
---
Introduction
============
Human reading is both effortless and fast, with typical studies reporting reading rates around 250 words per minute [@Rayner:ea:06]. Human reading is also adaptive: readers vary their strategy depending on the task they want to achieve, with experiments showing clear differences between reading for comprehension, proofreading, or skimming [@Kaakinen:Hyona:10; @Schotter:ea:14; @Hahn:Keller:18].
Another remarkable aspect of human reading is its robustness. A lot of the texts we read are carefully edited and contain few errors, for example articles in newspapers and magazines, or books. However, readers also frequently encounter texts that contain errors, e.g., in hand-written notes, emails, text messages, and social media posts. Intuitively, such errors are easy to cope with and impede understanding only in a minor way. In fact, errors often go unnoticed during normal reading, which is presumably why proofreading is difficult.
The aim of this paper is to experimentally investigate reading in the face of errors, and to propose a simple model that can account for our experimental results. Specifically, we focus on errors that change the form of a word, i.e., that alter a word’s character sequence. This includes letter transposition (e.g., [*innocetn*]{} instead of [*innocent*]{}) and misspellings (e.g., [*inocent*]{}). Importantly, we will not consider whole-word substitutions, nor will we deal with morphological, syntactic, or semantic errors.
We know from the experimental literature that letter transpositions cause difficulty in reading [@Rayner:ea:06; @Johnson:ea:07; @White:ea:08]. However, transpositions are artificial errors (basically they are an artifact of typing), and are comparatively rare.[^1] It is not surprising that such errors slow down reading. This contrasts with misspellings, i.e., errors that writers make because they are unsure about the orthography of a word. These are natural errors that should be easier to read, because they occur more frequently and are linguistically similar to real words ([*inocent*]{} conforms to the phonotactics of English, while [*innocetn*]{} does not). This is our first prediction, which we will test in an eye-tracking experiment that compares the reading of texts with transpositions and misspellings.
Readers’ prior exposure to misspellings might explain why reading is mostly effortless, even in the presence of errors. The fact remains, however, that all types of errors are relatively rare in everyday texts. All previous research has studied isolated sentences that contain a single erroneous word. This is a situation with which the human language processor can presumably cope easily. However, what happens when humans read a whole text which contains a large proportion of errors? It could be that normal reading becomes very difficult if, say, half of all words are erroneous. In fact, this is what we would expect based on theories of language processing that assume prediction, such as surprisal [@levy_expectation-based_2008]: the processor constantly uses the current context to predict the next word, and difficulty ensues if these predictions are incorrect. However, if the context is degraded by a large number of errors, then predictions become unreliable, and reading slows down. Crucially, we should see this effect on all words, not just on those words that contain errors. This is the second prediction that we will test in our eye-tracking experiment by comparing texts with high and low error rates.
In the second part of this paper, we present a surprisal model that can account for the patterns of difficulty observed in our experiment on reading texts with errors. We start by showing that standard word-based surprisal does not make the right predictions, as it essentially treats words with errors as out of vocabulary items. We therefore propose to estimate surprisal with a character-based language model. We show that this model successfully predicts human reading times for texts with errors and accounts for both the effect of error type and the effect of error rate that we observed in our reading experiment.
Eye-tracking Experiment
=======================
Methods
-------
### Participants
Sixteen participants took part in the experiment after giving informed consent. They were paid £10 for their participation, had normal or corrected-to-normal vision, and were self-reported native speakers of English.
### Materials
We used the materials of (no-preview condition only), but introduced errors into the texts. These materials contain twenty newspaper texts from the DeepMind question answering corpus [@hermann_teaching_2015]. Texts are comparable in length (between 149 and 805 words; mean 323) and represent a balanced selection of topics. Each text comes with a question and a correct answer. The questions are formulated as sentences with a blank to be completed with a named entity, such that a statement implied by the text is obtained. Three incorrect answers (distractors) are included for each question; these are also named entities, chosen so that correctly answering the question would likely be impossible without reading the text.
We introduced errors into these materials following the method suggested by . These errors are automatically generated and are either transpositions (i.e., two adjacent letters are swapped) or natural errors that replicate actual misspellings. For the latter, we used a corpus of human edits [@geertzen2014automatic] and introduced errors into our experimental materials by replacing correct words with known misspellings from the edit corpus. The percentages of the different types of misspellings are listed in Table \[tab:error\_types\]. Generating texts with errors automatically ensures that both error conditions (transpositions and misspellings) contain the same percentage of erroneous words. For both error conditions, we generated texts in which either 10% or 50% of tokens are erroneous.
phonetics deletion swap/repeat keyboard insertion other
----------- ---------- ------------- ---------- ----------- -------
36.2 16.7 11.0 10.5 8.3 17.3
: Percentages of different types of misspellings in the natural error condition.[]{data-label="tab:error_types"}
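The transposition condition described above can be generated along these lines (an illustrative sketch; the actual generation scripts of the study are not reproduced here):

```python
import random

def transpose(word, rng=random):
    """Swap two adjacent letters of a word (transposition error)."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def corrupt(tokens, rate=0.10, rng=random):
    """Corrupt roughly `rate` of the tokens, leaving short words intact
    (the length threshold here is an assumption for illustration)."""
    return [transpose(w, rng) if len(w) > 3 and rng.random() < rate else w
            for w in tokens]

print(corrupt("reading is robust against errors".split(),
              rate=0.5, rng=random.Random(0)))
```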
### Procedure
Participants received written instructions and went through two practice trials whose data were discarded. Then, each participant read and responded to all 20 items (texts with questions and answer choices); the items were the same for all participants, but were presented in a new random order for each participant. The order of the answer choices was also randomized. Participants pressed buttons on a response pad to advance to the next page, and to select one of the four answers once they had finished reading a given text.
Eye-movements were recorded using an Eyelink 2000 tracker (SR Research, Ottawa). The tracker recorded the dominant eye of the participant (as established by an eye-dominance test) with a sampling rate of 2000 Hz. Before the experiment started, the tracker was calibrated using a nine-point calibration procedure; at the start of each trial, a fixation point was presented and drift correction was carried out.
### Data Analysis
For data analysis, each word in the text was defined as a region of interest. Punctuation was included in the region of the word it followed or preceded without intervening whitespace. If a word was preceded by a whitespace, then that space was included in the region for that word. We report data for the following eye-movement measures in the critical regions: *First pass time* (often called gaze duration for single-word regions) is the sum of fixation durations beginning with the first fixation in the region and ending with the first saccade out of the region, either to the left or to the right. *Fixation rate* is the proportion of trials in which the region was fixated (rather than skipped) on first-pass reading. For first pass time, trials in which the region was skipped on first-pass reading were excluded from the analysis.
Results
-------
                   Prior study   No error   Error
  ---------------- ------------- ---------- -------
  First fixation   221.3         211.8      225.1
  First pass       260.7         242.5      265.2
  Total time       338.0         306.9      342.1
  Fixation rate    0.50          0.45       0.48
  Accuracy         70%           72%        

  : Mean reading measures in this experiment (no-error and error words) and in the prior study using the same texts without errors. Accuracy is measured at the text level and therefore not split by error condition.[]{data-label="tab:desc_stats"}
In Table \[tab:desc\_stats\], we present some basic reading measures for our experiment and compare them to the reading experiments of , which used the same texts but did not include any errors (the data are taken from their no-preview condition, which corresponds to our experimental setup). Even in the error condition, the reading measures in our experiment differ only minimally from the ones reported by . In the no-error condition, we find slightly faster reading times and lower fixation rates than . Also, the accuracy (which can only be measured at the text level, hence we do not distinguish error and no-error conditions) is essentially unchanged. This provides good evidence for the claim that human readers cope very well with errors in text, with essentially no detriment in terms of reading time, fixation rate, and question accuracy.[^2]
In the following, we analyze two reading measures in more detail: first pass time and fixation rate. We analyzed per-word reading measures using mixed-effects models, considering the following predictors:
1. <span style="font-variant:small-caps;">ErrorType</span>: Does the text contain misspellings ($-0.5$) or transpositions ($+0.5$)?
2. <span style="font-variant:small-caps;">ErrorRate</span>: Does the text contain 10% ($-0.5$) or 50% ($+0.5$) erroneous words overall?
3. <span style="font-variant:small-caps;">Error</span>: Is the word correct ($-0.5$) or erroneous ($+0.5$)?
4. <span style="font-variant:small-caps;">WordLength</span>: Length of the word in characters.
5. <span style="font-variant:small-caps;">LastFix</span>: Was the preceding word fixated ($+0.5$) or not ($-0.5$)?
All predictors were centered. Word length was scaled to unit variance. We selected binary interactions using forward model selection with a $\chi^2$ test, running the R package `lme4` [@bates-fitting-2015-1] with a maximally convergent random effects structure. We then re-fitted the best model with a full random effects structure as a Bayesian generalized multivariate multilevel model using the R package `brms`; this method is slower but allows fitting large random effects structures even when traditional methods do not converge. Resulting Bayesian models are shown in Table \[tab:mixed-models\].[^3]
  Predictor            First Pass              Fixation Rate
  -------------------- ----------------------- ----------------------
  (Intercept)          248.41 (6.34)$^{***}$   $-0.16$ (0.12)
  ErrType              1.41 (1.32)             0.08 (0.02)$^{***}$
  ErrRate              7.20 (1.60)$^{***}$     0.16 (0.02)$^{***}$
  Error                23.77 (4.12)$^{***}$    0.21 (0.07)$^{***}$
  WLength              22.18 (2.02)$^{***}$    0.83 (0.04)$^{***}$
  LastFix              3.10 (4.18)             0.22 (0.18)
  ErrRate × LastFix    6.71 (2.77)$^{*}$       0.16 (0.04)$^{***}$
  Error × LastFix      —                       0.26 (0.10)$^{**}$
  WLength × LastFix    —                       0.74 (0.10)$^{***}$

  : Coefficient estimates (with standard errors) of the mixed-effects models for first pass time and fixation rate; $^{***}p<0.001$, $^{**}p<0.01$, $^{*}p<0.05$.[]{data-label="tab:mixed-models"}
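The per-word mixed-effects analysis described above can be sketched in Python, with `statsmodels` standing in for `lme4`/`brms` (synthetic data; all variable names and coefficient values are illustrative, chosen only to mimic the reported effect sizes):

```python
# Hypothetical sketch of a per-word mixed-effects analysis on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "subject": rng.integers(0, 16, n),          # 16 participants
    "error": rng.choice([-0.5, 0.5], n),        # centered: correct vs error
    "err_rate": rng.choice([-0.5, 0.5], n),     # centered: 10% vs 50% texts
    "wlength": rng.integers(2, 12, n).astype(float),
})
df["wlength"] = (df["wlength"] - df["wlength"].mean()) / df["wlength"].std()

# synthetic first-pass times with effects resembling those in the table
df["fp"] = (248 + 24 * df["error"] + 7 * df["err_rate"]
            + 22 * df["wlength"] + rng.normal(0, 40, n))

model = smf.mixedlm("fp ~ error + err_rate + wlength", df,
                    groups=df["subject"]).fit()
print(model.params)
```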
The main effects of <span style="font-variant:small-caps;">WordLength</span> replicate the well-known positive correlation between word length and reading time [@demberg_data_2008]. We also find main effects of <span style="font-variant:small-caps;">Error</span>, indicating that erroneous words are read more slowly and are more likely to be fixated. The main effects of <span style="font-variant:small-caps;">ErrorRate</span> show that a higher text error rate leads to longer reading times and higher fixation rates for all words (whether correct or erroneous). Additionally, we find a main effect of <span style="font-variant:small-caps;">ErrType</span> in fixation rate, showing that transposition errors lead to higher fixation rates. This is consistent with our hypothesis that misspellings are easier to process than transpositions, as they are real errors that participants have been exposed to in their reading experience.
Figure \[fig:fp\] graphs mean first pass times and fixation rates by error type and error rate. The most important effect is that error words take longer to read and are fixated more than non-error words. The effect of error rate is also clearly visible: the 50% error condition causes longer reading times and more fixations than the 10% one, even for non-error words. We also observe a small effect of error type.
![First pass time (top) and fixation rate (bottom) when reading texts with transposition errors or misspelling.[]{data-label="fig:fp"}](figures/fp.pdf "fig:"){width="0.9\columnwidth"} ![First pass time (top) and fixation rate (bottom) when reading texts with transposition errors or misspelling.[]{data-label="fig:fp"}](figures/fix.pdf "fig:"){width="0.9\columnwidth"}
Turning now to the interactions, we found that <span style="font-variant:small-caps;">ErrorRate</span> and <span style="font-variant:small-caps;">LastFix</span> interact in both reading measures, which indicates that reading times and fixation rates increase in the high-error condition if the previous word has been fixated.
Only in fixation rate, there was also an interaction of <span style="font-variant:small-caps;">Error</span> and <span style="font-variant:small-caps;">LastFix</span>, indicating that fixation rate goes up for error words if the preceding word was fixated, presumably because of preview of the erroneous words, which is then more likely to be fixated in order to identify the error.
For fixation rate, <span style="font-variant:small-caps;">WordLength</span> interacts with <span style="font-variant:small-caps;">LastFix</span>: longer words are more likely to be fixated if the preceding word was fixated; again, this is likely an effect of preview. While Figure \[fig:fp\] seems to suggest an interaction of <span style="font-variant:small-caps;">Error</span> and <span style="font-variant:small-caps;">Error Type</span>, this was not significant in the mixed model.
Discussion
----------
We have found four main results: (1) Erroneous words show longer reading times and are more likely to be fixated. (2) Higher error rates lead to increased reading times and more fixations, even on words that are correct. (3) Transpositions lead to an increased fixation rate compared to misspellings. (4) Whether the previous word is fixated or not modulates the effect of error and error rate.
However, it is conceivable that the effects of error and error rate are actually artifacts of word length. All else being equal, longer words take longer to read and are more likely to be fixated. So if error words and non-error words in our texts differ in mean length, then that would be an alternative explanation for the effects that we found.
For transposition errors, error words by definition have the same length as their non-error versions. For misspellings, a mixed-effects analysis with word forms as random effects showed no significant difference in the lengths of error words and their correct versions (mean difference $-0.011$, SE $0.029$, $t = -0.393$). Comparing the erroneous words of the two error types, we found that they differ in mean length (misspellings 5.44, transpositions 6.06 characters); however, this difference was not significant in a mixed-effects analysis predicting word length of erroneous words from error type, with items as a random effect (mean difference $0.015$, SE $0.010$, $t = 1.449$).
Surprisal Model
===============
Most models of human reading do not explicitly deal with reading in the face of errors. In fact, reading models that use a lexicon to look up word forms (e.g., to retrieve word frequencies) cannot deal with erroneous words without further assumptions. We can use the surprisal model of processing difficulty [@levy_expectation-based_2008] to illustrate this: in its original, word-based formulation, surprisal is forced to treat all error words as out of vocabulary items; it therefore cannot distinguish between different types of errors or between different error rates.
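This limitation can be made concrete with a toy word-based model (an assumed miniature corpus and vocabulary, purely for illustration): every word outside the vocabulary is mapped to a single `<oov>` token and therefore receives the identical surprisal, regardless of the kind of error it contains:

```python
import math
from collections import Counter

tokens = "the man was found innocent of the crime".split()
vocab = {w for w, _ in Counter(tokens).most_common(5)}   # tiny stand-in
                                                         # for a 10k vocab

def lookup(w):
    return w if w in vocab else "<oov>"

counts = Counter(lookup(w) for w in tokens)
total = sum(counts.values())

def word_surprisal(w):
    return -math.log2(counts[lookup(w)] / total)

# 'inocent' and 'innocetn' collapse onto the same <oov> surprisal
print(word_surprisal("inocent"), word_surprisal("innocetn"))
```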
Intuitively, a more fine-grained version of surprisal is required that makes prediction in terms of characters, not words. In such a setting, the word [*inocent*]{} would be more surprising than [*innocent*]{} in the same context, but not as surprising as a completely unfamiliar letter string. In other words, the surprisal of the same word with and without misspellings or letter transpositions would be similar but not the same. To achieve this, we can use character-based language models, which are standard tools in natural language processing for dealing with errors in the input (e.g., the work by on errors in machine translation).
Crucially, once we have a character-based surprisal model, we can derive predictions regarding how errors should affect reading. We predict that transpositions should be more surprising than misspellings, as they involve character sequences that are unfamiliar to the model (e.g., [*innocetn*]{} contains the rare character sequence [*tn*]{}). Also, we predict that words that occur in texts with a high error rate are more difficult to read than words in texts with a low error rate: if the context of a word contains few errors, then we are able to predict that word confidently (resulting in low surprisal). If the context contains lots of errors then our prediction in degraded (resulting in high surprisal). We will now test the predictions regarding error type and error rate using a character-based version of surprisal.
Methods
-------
We trained a character-based neural language model using LSTM cells [@hochreiter_long_1997]. Such models can assign probabilities to any sequence of characters, and thus are capable of computing surprisal even for words never seen in the training data, such as erroneous words. For training, we used the Daily Mail portion of the DeepMind corpus. We used a vocabulary consisting of the 70 most frequent characters, mapping others to an out-of-vocabulary token.
The hyperparameters of the language model were selected on an English corpus based on Wikipedia text.[^4] We then used the resulting model to compute surprisal on the texts used in the eye-tracking experiment for each experimental condition.
The model estimates, for each element of a character sequence, the probability of seeing this character given the preceding context. We compute the surprisal of a word as the sum of the surprisals of the individual characters, as prescribed by the product rule of probability. For a word consisting of characters $x_{t}\dots x_{t+T}$ following a context $x_1...x_{t-1}$, its surprisal is: $$-\log P(x_{t}\dots x_{t+T}|x_1...x_{t-1}) = \sum_{i=t}^{t+T} -\log P(x_i|x_{1}...x_{i-1})$$ In this computation, we take whitespace characters to belong to the preceding word. To control for the impact of the random initialization of the neural network at the beginning of training, we trained seven models with identical settings but different random initializations.
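The per-word sum above can be illustrated with a toy character-bigram model standing in for the LSTM (an assumed miniature corpus; with such a short context the toy model only captures the transposition effect, e.g. the rare letter sequence [*tn*]{}):

```python
import math
from collections import Counter

corpus = "the innocent man was found innocent of the crime "
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
V = len(set(corpus))

def char_surprisal(prev, ch):
    # add-one smoothing keeps unseen pairs (e.g. 't' -> 'n') finite
    return -math.log2((bigrams[(prev, ch)] + 1) / (unigrams[prev] + V))

def word_surprisal(word, prev=' '):
    total = 0.0
    for ch in word + ' ':          # trailing whitespace belongs to the word
        total += char_surprisal(prev, ch)
        prev = ch
    return total

print(word_surprisal("innocent"), word_surprisal("innocetn"))
```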
The quality of character-based language models is conventionally measured in bits per character (BPC), i.e., the average surprisal, to the base 2, per character. On held-out data, our model achieves a mean BPC of 1.28 (SD 0.025), competitive with the BPC achieved by state-of-the-art systems on similar datasets (e.g., report BPC = 1.23 on Wikipedia text).
In the introduction we predicted that word-based surprisal is not able to model the reading time pattern we found in our eye-tracking experiment. In order to test this prediction, we compare our character-level surprisal model to surprisal computed using a conventional word-based neural language model. Word-based models have a fixed vocabulary, consisting of the most common words in the training data; a typical vocabulary size is 10,000. Words that were not seen in the training data, and rare words, are represented by a special out-of-vocabulary (OOV) token. From a cognitive perspective, this corresponds to assuming that all unknown words (whether they contain errors or not) are treated in the same way: they are recognized as unknown, but not processed any further. We used a vocabulary size of 10,000. The hyperparameters of the word-based model were selected on the same English Wikipedia corpus as the character-based model.[^5]
Results and Discussion
----------------------
In this section, we show that surprisal computed by a character-level neural language model (<span style="font-variant:small-caps;">CharSurprisal</span>) is able to account for the effects of errors on reading observed in our eye-tracking experiments. We compute character-based surprisal for the texts used in our experiments, and expect to obtain mean surprisal scores for each experimental condition that resemble mean reading times. We will also verify our prediction that word-based surprisal (<span style="font-variant:small-caps;">WordSurprisal</span>) is not able to account for the effects observed in our experimental data, due to the way it treats unknown words.
Figure \[fig:surp\] shows the mean surprisal values across the different error conditions. We note that the pattern of reading time predicted by <span style="font-variant:small-caps;">CharSurprisal</span> (solid lines) matches the first-pass times observed experimentally very well (see Figure \[fig:fp\]), while <span style="font-variant:small-caps;">WordSurprisal</span> (dotted line) shows a clearly divergent pattern, with error words receiving *lower* surprisal than non-error words. This is explained by the fact that a word-based model does not process error words beyond recognizing them as unknown; the presence of an unknown word is not in itself a high-surprisal event (even without errors, 17% of the words in our texts are unknown to the model, given its 10,000-word vocabulary).
To confirm this observation statistically, we fitted linear mixed-effects models with <span style="font-variant:small-caps;">CharSurprisal</span> and <span style="font-variant:small-caps;">WordSurprisal</span> as dependent variables. We enter the seven random initializations of each model as a random factor, analogously to the participants in the eye-tracking experiment. We use the same predictors that we used for the reading measures, except for <span style="font-variant:small-caps;">LastFix</span>, which is not meaningful for the surprisal models, as they do not skip any words.
The results of the mixed model for <span style="font-variant:small-caps;">CharSurprisal</span> (see Table \[tab:mixed-models-surp\]) replicated the effects of <span style="font-variant:small-caps;">ErrorRate</span>, <span style="font-variant:small-caps;">Error</span>, and <span style="font-variant:small-caps;">WordLength</span> found in first pass and fixation rate, as well as the effect of <span style="font-variant:small-caps;">ErrorType</span> found only in fixation rate (see Table \[tab:mixed-models\]). The same analysis for <span style="font-variant:small-caps;">WordSurprisal</span> (see again Table \[tab:mixed-models-surp\]), however, does not yield the correct pattern of results: Crucially, the coefficients of <span style="font-variant:small-caps;">Error</span> and <span style="font-variant:small-caps;">ErrorType</span> have the opposite sign compared to both <span style="font-variant:small-caps;">CharSurprisal</span> and the experimental data.
![<span style="font-variant:small-caps;">CharSurprisal</span> (solid lines) and <span style="font-variant:small-caps;">WordSurprisal</span> (dotted lines) as a function of error type and error rate, for correct (left) and erroneous (right) words. For <span style="font-variant:small-caps;">CharSurprisal</span>, we show the means of all seven random initializations of our neural surprisal model.[]{data-label="fig:surp"}](figures/surpByCondition-surp-char.pdf){width="0.9\columnwidth"}
**Table \[tab:mixed-models-surp\].** Coefficients (standard errors in parentheses) of the mixed-effects models predicting <span style="font-variant:small-caps;">CharSurprisal</span> and <span style="font-variant:small-caps;">WordSurprisal</span>.

| Predictor | <span style="font-variant:small-caps;">CharSurpr</span> | <span style="font-variant:small-caps;">WordSurpr</span> |
|---|---|---|
| (Intercept) | 10.47 (0.09)$^{***}$ | 5.06 (0.07)$^{***}$ |
| <span style="font-variant:small-caps;">ErrType</span> | 1.27 (0.02)$^{***}$ | $-0.40$ (0.02)$^{***}$ |
| <span style="font-variant:small-caps;">ErrRate</span> | 1.57 (0.02)$^{***}$ | 0.01 (0.00)$^{***}$ |
| <span style="font-variant:small-caps;">Error</span> | 13.88 (0.03)$^{***}$ | $-2.96$ (0.02)$^{***}$ |
| <span style="font-variant:small-caps;">WLength</span> | 3.02 (0.05)$^{***}$ | 0.25 (0.01)$^{***}$ |

$^{***}p<0.001$, $^{**}p<0.01$, $^*p<0.05$
**Table \[tab:mixed-models-plus-surp\].** Coefficients (standard errors in parentheses) of the mixed-effects models predicting first-pass time and fixation rate, and fit differences between the models with <span style="font-variant:small-caps;">ResidCharSurp</span> and <span style="font-variant:small-caps;">ResidCharSurpOracle</span>.

| Predictor | First Pass | Fixation Rate |
|---|---|---|
| (Intercept) | 248.73 (5.55)$^{***}$ | $-0.15$ (0.09) |
| <span style="font-variant:small-caps;">WLength</span> | 22.22 (0.79)$^{***}$ | 0.75 (0.01)$^{***}$ |
| <span style="font-variant:small-caps;">LastFix</span> | 2.65 (1.34) | 0.22 (0.02)$^{***}$ |
| <span style="font-variant:small-caps;">WLength</span> $\times$ <span style="font-variant:small-caps;">LastFix</span> | — | 0.60 (0.19)$^{***}$ |
| <span style="font-variant:small-caps;">ResidCharSurpOracle</span> | 9.89 (0.78)$^{***}$ | 0.09 (0.01)$^{***}$ |
| <span style="font-variant:small-caps;">ResidCharSurp</span> | 13.82 (0.66)$^{***}$ | 0.14 (0.01)$^{***}$ |
| $\Delta$AIC | $-273.88$ | $-205.83$ |
| $\Delta$BIC | $-273.88$ | $-205.83$ |

$^{***}p<0.001$, $^{**}p<0.01$, $^*p<0.05$
We have shown that character-based surprisal computed on the texts used in our experiment is qualitatively similar to the experimental results. As a next step we will test its quantitative predictions, i.e., we will correlate surprisal scores with reading times. For this, we performed mixed-effects analyses in which first-pass time and fixation rate are predicted by <span style="font-variant:small-caps;">WLength</span>, <span style="font-variant:small-caps;">LastFix</span>, and character-based surprisal residualized against word length (<span style="font-variant:small-caps;">ResidCharSurp</span>). Note that we did not enter the error factors (<span style="font-variant:small-caps;">ErrorType</span>, <span style="font-variant:small-caps;">ErrorRate</span>, <span style="font-variant:small-caps;">Error</span>) into this analysis, as we predict that surprisal will simulate the effect of errors in reading.
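Residualizing surprisal against word length keeps only the variance in surprisal that word length cannot explain, so the two predictors do not compete in the mixed model. A minimal OLS sketch (assuming numpy; not the actual analysis code):

```python
import numpy as np

def residualize(y, x):
    """Residuals of y after regressing out x (with an intercept),
    via ordinary least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Surprisal that is a perfect linear function of word length
# residualizes to zero.
surp = np.array([3.0, 5.0, 7.0, 9.0])
wlen = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(residualize(surp, wlen), 0.0))  # -> True
```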
It is known that surprisal predicts reading times in ordinary text not containing errors [@demberg_data_2008]; thus, it is important to disentangle the specific contribution of modeling errors correctly from the general contribution of surprisal in our model. We do this by constructing a baseline version of character-based surprisal that is computed using an oracle (<span style="font-variant:small-caps;">ResidCharSurpOracle</span>). For this, we replace erroneous words with their correct counterparts before computing surprisal, and again residualize against word length. If <span style="font-variant:small-caps;">ResidCharSurp</span> correctly accounts for the effects of errors on reading, then we expect that <span style="font-variant:small-caps;">ResidCharSurp</span> – which has access to the erroneous word forms – will improve the fit with our reading data compared to <span style="font-variant:small-caps;">ResidCharSurpOracle</span>.
For <span style="font-variant:small-caps;">ResidCharSurpOracle</span>, we use the same seven models as for <span style="font-variant:small-caps;">ResidCharSurp</span>, only exchanging the character sequences on which surprisal is computed. This ensures that any difference in model fit between the two predictors can be attributed entirely to the way <span style="font-variant:small-caps;">ResidCharSurp</span> is affected by the presence of errors in the texts.
The resulting models are shown in Table \[tab:mixed-models-plus-surp\]. For <span style="font-variant:small-caps;">WLength</span> and <span style="font-variant:small-caps;">LastFix</span>, we see the same pattern of results as in the experimental data (see Table \[tab:mixed-models\]). Furthermore, regular surprisal (<span style="font-variant:small-caps;">ResidCharSurp</span>) and oracle surprisal (<span style="font-variant:small-caps;">ResidCharSurpOracle</span>) significantly predict both first pass time and fixation rate. This is in line with the standard finding that surprisal predicts reading time [@demberg_data_2008], but has so far not been demonstrated for texts containing errors. We compare model fit using AIC and BIC. Both measures indicate that <span style="font-variant:small-caps;">ResidCharSurp</span> fits the experimental data better than <span style="font-variant:small-caps;">ResidCharSurpOracle</span>. Thus, character-level surprisal provides an account of our data going beyond the known contribution of surprisal to reading times, and correctly predicts reading in the presence of errors.
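AIC and BIC penalize the maximized log-likelihood by the number of model parameters $k$ (and, for BIC, the number of observations $n$); a negative $\Delta$ favors the second model. A sketch of the standard formulas:

```python
import math

def aic(log_lik, k):
    # Akaike information criterion: 2k - 2 ln(L); lower is better.
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    # Bayesian information criterion: k ln(n) - 2 ln(L); lower is better.
    return k * math.log(n) - 2 * log_lik

# For two models with the same number of parameters, the AIC delta
# reduces to twice the difference in log-likelihood.
print(aic(-95.0, 3) - aic(-100.0, 3))  # -> -10.0
```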
Conclusion
==========
We investigated reading with errors in texts that contain either letter transpositions or real misspellings. We found that transpositions cause more reading difficulty than misspellings and explained this using a character-based surprisal model, which assigns higher surprisal to rare letter sequences as they occur in transpositions. We also found that in texts with a high error rate, all words are more difficult to read, even the ones without errors. Again, character-based surprisal explains this: word prediction is harder when the context of a word is degraded by errors, resulting in increased surprisal.
In future work, we plan to integrate character-based surprisal with existing neural models of human reading [@Hahn:Keller:18]. Models at the character level are necessary not only to account for errors, but also to model landing position effects, parafoveal preview, and word length effects, all of which word-based models are unable to capture.
[^1]: For example, in the error corpus we use [@geertzen2014automatic], only 11% of errors are letter swaps or repetitions; see Table \[tab:error\_types\].
[^2]: Note that participants are not performing at ceiling in question answering; our pattern of results therefore cannot be explained by asserting that the questions were too easy.
[^3]: An analogous analysis for log-transformed first-pass times led to the same pattern of significant effects and their directions.
[^4]: 1024 units, 3 layers, batch size 128, embedding size 200, learning rate 3.6 with plain SGD, multiplied by 0.95 at the end of each epoch; BPTT length 80; DropConnect with rate 0.01 for hidden units; replacing entire character embeddings by zero with rate 0.001.
[^5]: 1024 units, batch size 128, embedding size 200, learning rate 0.2 with plain SGD, multiplied by 0.95 at the end of each epoch; BPTT length 50; DropConnect with rate 0.2 for hidden units; Dropout 0.1 for input layer; replacing words by random samples from the vocabulary with rate 0.01 during training.
---
bibliography:
- 'bibl2b.bib'
---
Charles University in Prague
Faculty of Mathematics and Physics
[**DOCTORAL THESIS**]{}
[Anton Repko]{}
[**Theoretical description of nuclear collective excitations**]{}
Institute of Particle and Nuclear Physics
------------------------------------ -------------------------------
Supervisor of the doctoral thesis: prof. RNDr. Jan Kvasil, DrSc.
Study programme: Physics
Specialization: Nuclear Physics
------------------------------------ -------------------------------
Prague 2015
I would like to thank my supervisor prof. Jan Kvasil for his patience in explaining and discussing the field of nuclear theory, and for various helpful suggestions. I am also grateful to Valentin Nesterenko for his encouragement, for hosting me during my two stays in Dubna, and for pointing out promising research directions.
I declare that I carried out this doctoral thesis independently, and only with the cited sources, literature and other professional sources.
I understand that my work relates to the rights and obligations under the Act No. 121/2000 Coll., the Copyright Act, as amended, in particular the fact that the Charles University in Prague has the right to conclude a license agreement on the use of this work as a school work pursuant to Section 60 paragraph 1 of the Copyright Act.
Introduction
============
A microscopic quantum-theoretical description of nuclei (their ground-state and excited-state properties, transition probabilities, and nuclear reactions) is a difficult task, owing to our poor knowledge of the nuclear interaction (compared to electronic systems) and various other obstacles. Although the main features of the nucleon-nucleon interaction can be deduced from scattering data, their straightforward application to a product wavefunction (Slater determinant) in the Hartree-Fock method, which is numerically the simplest approach to a quantum many-body problem, runs into the problem of the strongly repulsive short-range part of the $N$-$N$ interaction. This obstacle can be circumvented by using a renormalized in-medium interaction, giving rise to the *Brückner-Hartree-Fock* method [@Ring1980]. However, this approach did not give satisfactory results, partly due to the need for three-body interactions, which are difficult to measure. The BHF method has recently attracted revived attention [@Ring2015], because its relativistic version [@Muther1988] does not need the three-body interaction.
An alternative approach is to diagonalize the Hamiltonian in the full configuration space of many-body wavefunctions constructed from a given single-particle basis; the resulting method is called the *large-scale shell model* [@Caurier2005]. Because the numerical cost grows exponentially with the basis size, the shell model either has to use a strongly truncated model space with phenomenological corrections to the interaction (usable for $A < 100$), or restrict itself to very light nuclei (up to $^{12}$C), giving rise to the *ab-initio* no-core shell model. The convergence and reach of the no-core shell model can be somewhat improved either by softening of the interaction (based on chiral forces) by the similarity renormalization group [@Barett2013], or by a group-theoretical preselection of the basis in the symmetry-adapted no-core shell model [@Dytrych2013].
The shell model is not suitable for heavier nuclei, which are therefore most often treated with *density functionals*. These were at first inspired by the Brückner-Hartree-Fock method, so they are mean-field methods, based on a product wavefunction determined iteratively, but the interaction is phenomenological and no longer derived from the bare $N$-$N$ data. The energy density functional in nuclear physics is thus a self-consistent microscopic approach for calculating nuclear properties and structure over the whole periodic table (except the lightest nuclei) [@Bender2003]. The method is analogous to the Kohn-Sham density functional theory (DFT) used in electronic systems. Three types of functionals are frequently used nowadays: the non-relativistic Skyrme functional [@Skyrme1959; @Vautherin1972; @Reinhard2011] with a zero-range two-body and density-dependent interaction, the finite-range Gogny force [@Gogny1980; @Gogny2009], and the relativistic (covariant) mean field [@Walecka1986; @Vretenar2005; @Niksic2014]. A typical approach employs a Hartree-Fock-Bogoliubov or HF+BCS calculation scheme to obtain the ground state and single-(quasi)particle wavefunctions and energies. These results are then utilized to fit the parameters of the functional to experimental data, thus obtaining various parametrizations suitable for specific aims, such as the calculation of mass tables, charge radii, fission barriers, spin-orbit splitting, and giant resonances. The mean-field calculation can be extended by taking a superposition of several Slater determinants and by restoration of broken symmetries (particle number, angular momentum), leading to the *generator coordinate method*, which is suitable for the description of shape coexistence and low-energy excited states (including rotational ones) [@Rodriguez2010; @Yao2011].
The Random Phase Approximation (RPA) is the textbook standard [@Ring1980] for calculating one-phonon excitations of the nucleus, suitable also for the mean-field functionals. In practice, it is widely utilized for the calculation of giant dipole resonances and other strength functions (giant monopole, quadrupole, and M1 spin-flip resonances). Increasing computing power has made it possible to employ a fully self-consistent residual interaction derived from the same density functional as the underlying ground state. While spherical nuclei can be treated directly (by matrix diagonalization) [@Reinhard1992; @Terasaki2005; @Colo2013], axially deformed nuclei still pose certain difficulties due to large matrix dimensions [@Terasaki2010; @Yoshida2013]. Our group developed a separable RPA (SRPA) approach for the Skyrme functional [@Nesterenko2002; @Nesterenko2006], which greatly reduces the computational cost for deformed nuclei by utilizing a separable residual interaction, entirely derived from the underlying functional by means of multi-dimensional linear response theory.
Skyrme RPA is used in our group mainly in its separable form and assuming axial symmetry. Therefore, the primary aim of my work was the derivation and implementation of RPA in spherical symmetry. To clarify the remaining issues, I also developed the full RPA in axial symmetry and a spherical Hartree-Fock code for closed-shell nuclei.
The present work derives a convenient formalism for the rotationally invariant treatment of spherical Skyrme RPA (both full and separable). Both time-even and time-odd terms of the Skyrme functional are employed, so the method is suitable for various electric and magnetic multipolarities. The corresponding computer codes were then written. Programs `sph_qrpa` and `sph_srpa` take wavefunctions from Reinhard’s `haforpa`, which is a grid-based Skyrme HF+BCS code. Due to the restricted model space in `haforpa` (22–23 major shells), I also wrote a Skyrme Hartree-Fock code (without pairing) based on the spherical-harmonic-oscillator (SHO) basis, which allows the model space to be extended to over 100 major shells. The subsequent RPA then almost completely eliminates the spurious center-of-mass contribution in E1 transitions.
Detailed expressions for matrix elements, applicable to full RPA, were also derived for axial symmetry. The RPA code `skyax_qrpa` was written to handle wavefunctions from Reinhard’s `skyax`, a Skyrme HF+BCS code for deformed nuclei working on a cylindrical coordinate grid. Due to the large computational demands, special care was taken to vectorize and parallelize the code, to make it suitable for routine calculations on the available multi-processor workstations (with 12 CPU cores and more than 32 GB of RAM).
The new codes were first tuned with respect to the basis parameters and the size of the configuration space, with the aim of obtaining consistent RPA results. Full and separable RPA are then compared to find a set of the most efficient input operators. Selected nuclear properties were then calculated and compared to experimental data, such as the giant electric dipole (E1) resonance (GDR) with its low-energy “pygmy” part, the isoscalar giant monopole (E0) resonance (GMR), and M1 and E2 strength functions in spherical and deformed nuclei. The importance of the spin and tensor terms of the Skyrme functional is demonstrated for M1 and toroidal E1 resonances. Since the strength functions are calculated in the long-wave approximation, a comparison with the exact transition operator is presented as well. Finally, the toroidal nature of the low-energy E1 (pygmy) transitions is demonstrated, as published in our recent papers [@Repko2013; @Reinhard2014].
The thesis is organized as follows. First, chapter *\[ch\_theory\] Theoretical formalism* gives a detailed treatment of the individual terms of the nuclear density functional: $$\label{full_hamil}
\mathcal{H} = \mathcal{H}_\mathrm{kin} + \mathcal{H}_\mathrm{Sk} + \mathcal{H}_\mathrm{coul} + \mathcal{H}_\mathrm{xc} + \mathcal{H}_\mathrm{pair} + \mathcal{H}_\mathrm{c.m.}$$ Kinetic and direct Coulomb terms are $$\begin{aligned}
\mathcal{H}_\mathrm{kin} &= \int\mathrm{d}^3r \bigg(\frac{\hbar^2}{2m_p}\tau_p(\vec{r}) + \frac{\hbar^2}{2m_n}\tau_n(\vec{r})\bigg), \\
\mathcal{H}_\mathrm{coul} &= \frac{1}{2}\frac{e^2}{4\pi\epsilon_0}
\iint\mathrm{d}^3r_1\mathrm{d}^3r_2
\frac{\rho_p(\vec{r}_1)\rho_p(\vec{r}_2)}{|\vec{r}_1-\vec{r}_2|},\end{aligned}$$ where the densities ($\rho,\,\tau$) will be defined in (\[Jd\_gs\]). Skyrme functional $\mathcal{H}_\mathrm{Sk}$, including its implementation in RPA, is treated for spherical symmetry in section \[sec\_skyr\_sph\] and for axial symmetry in section \[sec\_skyr\_ax\]. Its derivation from the two-body interaction is given in appendix \[app\_skyr-dft\]. Direct Coulomb interaction and numerical integration in general are discussed in section \[sec\_coul\], and the exchange Coulomb interaction is taken in Slater approximation [@Slater1951]: $$\label{xc}
\mathcal{H}_\mathrm{xc} = -\frac{3}{4}\bigg(\frac{3}{\pi}\bigg)^{\!1/3}
\frac{e^2}{4\pi\epsilon_0}\int\mathrm{d}^3 r \rho_p^{4/3}(\vec{r})$$ Pairing interaction $\mathcal{H}_\mathrm{pair}$ is given in section \[sec\_pair\], and finally, the subtraction of center-of-mass energy is described in section \[sec\_kin-cm\]. The computer programs and their tuning are discussed in chapter *\[ch\_num\] Numerical codes* and the physical results of the calculations, mainly in terms of strength functions and transition currents, are given in chapter *\[ch\_results\] Physical results*. SRPA formalism adapted to spherical symmetry is given in appendix \[app\_SRPA\].
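To make the Slater exchange term (\[xc\]) concrete: for a spherically symmetric proton density it reduces to a one-dimensional radial quadrature. The following sketch (assuming numpy) is illustrative only — the constant $e^2/4\pi\epsilon_0 \approx 1.44$ MeV fm and the uniform-sphere density are chosen for the example, not taken from the codes described here:

```python
import numpy as np

E2 = 1.44  # e^2/(4*pi*eps0) in MeV*fm (approximate)

def slater_exchange(r, rho_p):
    """Slater exchange energy -(3/4)(3/pi)^(1/3) e^2 * 4*pi *
    integral of r^2 rho_p^(4/3) dr, for a spherically symmetric
    proton density rho_p given on the radial grid r (fm),
    evaluated with the trapezoidal rule."""
    f = 4.0 * np.pi * r**2 * rho_p**(4.0 / 3.0)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
    return -0.75 * (3.0 / np.pi)**(1.0 / 3.0) * E2 * integral

# Uniform proton density 0.08 fm^-3 inside R = 5 fm; the radial
# integral is then analytically (4*pi/3) R^3 * 0.08^(4/3).
r = np.linspace(0.0, 5.0, 2001)
print(slater_exchange(r, np.full_like(r, 0.08)))  # about -19.2 MeV
```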
Theoretical formalism {#ch_theory}
=====================
This chapter gives a detailed account of the calculation of Skyrme Hartree-Fock and RPA (i.e., the nuclear ground state and small-amplitude excitations) [@Ring1980] by means of single-particle (s.p.) wavefunctions decomposed under an assumed rotational symmetry – either spherical or axial (cylindrical). Besides the Skyrme functional, it was necessary to treat also the Coulomb interaction, pairing interaction, transition operators, and the kinetic center-of-mass term [@Reinhard2011]. The derived formulae were implemented in the computer programs mentioned in the introduction, with the exception of the Coulomb integral in cartesian coordinates, which is given only as a toy model. More specifically, the programs include: a spherical closed-shell HF in the SHO basis, spherical full and separable RPA in the SHO basis and on the radial grid, and axial full RPA on the 2D grid – the results of these calculations are given in chapters \[ch\_num\] and \[ch\_results\]. Separable RPA, which is a numerically efficient method based on linear response theory [@Nesterenko2002; @Nesterenko2006], is treated only in appendix \[app\_SRPA\], to avoid unnecessary details in this chapter.
Since the primary aim was to derive the spherical RPA, the formalism given below is optimized in this direction. Particular attention was given also to the precise evaluation of the Coulomb integral by means of Euler-Maclaurin corrections, and to the evaluation of kinetic center-of-mass term for HF and RPA. Both of these topics seem to have little coverage in the literature on nuclear density functionals.
The notation of Clebsch-Gordan coefficients and most of the formulae used in the derivation are taken from the book of Varshalovich [@Varshalovich1988]. A detailed derivation of the utilized formulae can also be found in my notes on special functions in quantum mechanics [@Repko-specf_qm] (in Slovak).
Before coming to the theory itself, a few preliminary comments are given here to clarify the use of bra-ket notation in what follows. When applying single-particle expressions to a many-body system, it is necessary to distinguish whether a bra-ket matrix element is understood in the antisymmetrized many-body sense, described by Slater determinants (or equivalently by creation and annihilation operators; $P$ is a permutation of indices)
\[slater\] $$\begin{aligned}
{\langle\vec{r}_1,\vec{r}_2,\ldots\vec{r}_n|1,2,3,\ldots n\rangle}_\mathrm{Slater}
&= \frac{1}{\sqrt{n!}}\sum_P \mathrm{sign}(P)
\psi_{P(1)}(\vec{r}_1)\psi_{P(2)}(\vec{r}_2)\ldots\psi_{P(n)}(\vec{r}_n) \\
\Leftrightarrow\quad
{|1,2,3,\ldots n\rangle}_\mathrm{Slater} &= \hat{a}_1^+\hat{a}_2^+\ldots\hat{a}_n^+|\rangle\end{aligned}$$
or they are meant only as a shortcut for non-symmetrized integral $$\label{nonsym}
{\langle\alpha\beta|\hat{V}|\gamma\delta\rangle}_\mathrm{nonsym} =
\int \psi_\alpha^\dagger(\vec{r}_1) \psi_\beta^\dagger(\vec{r}_2)
\hat{V}(\vec{r}_1,\vec{r}_2)
\psi_\gamma(\vec{r}_1) \psi_\delta(\vec{r}_2)\,\mathrm{d}^3 r_1\mathrm{d}^3 r_2$$ In most cases below, the bra-ket notation is meant as (\[nonsym\]), with the exception of sections *\[sec\_fullrpa\] Full RPA* and *\[sec\_pair\] Pairing*, where many-body Slater states (\[slater\]) or their linear combinations are used. Many-body matrix element is presumed also in the following shortcut for commutators, which is evaluated in Hartree-Fock (or HF+BCS) ground state: $$\langle[\hat{A},\hat{B}]\rangle \equiv
{\langle\mathrm{HF}|\hat{A}\hat{B}-\hat{B}\hat{A}|\mathrm{HF}\rangle}_\mathrm{Slater}$$ Conversion between Slater and non-symmetrized two-body matrix element is $$\label{V_2ph}
\langle\alpha\beta|\frac{1}{2}\sum_{i,j}^N\hat{V}(\vec{r}_i,\vec{r}_j)
{|\gamma\delta\rangle}_\mathrm{Slater}
= {\langle\alpha\beta|\hat{V}(\vec{r}_1,\vec{r}_2)|\gamma\delta\rangle}_\mathrm{nonsym}
- {\langle\alpha\beta|\hat{V}(\vec{r}_1,\vec{r}_2)|\delta\gamma\rangle}_\mathrm{nonsym}$$ on the condition that all s.p. states $\alpha,\beta,\gamma,\delta$ are different (with zero overlap) and $\hat{V}(\vec{r}_i,\vec{r}_j) = \hat{V}(\vec{r}_j,\vec{r}_i)$. Notation $|\alpha\beta\rangle_\mathrm{Slater}$ can mean either a two-particle state ($N=2$) or a many-particle state ($N\geq2$), where the undisclosed states are the same as in $|\gamma\delta\rangle_\mathrm{Slater}$. When the matrix element is calculated between the same many-body Slater states, as is the case of Hartree-Fock total energy, the result is the following (with sums running over the occupied single-particle states): $$\begin{aligned}
\langle\mathrm{HF}|\sum_i\hat{T}(\vec{r}_i)+{}&\frac{1}{2}\sum_{i,j}\hat{V}(\vec{r}_i,\vec{r}_j){|\mathrm{HF}\rangle}_\mathrm{Slater} = \nonumber\\
\label{V_HF}
&= \sum_\gamma{\langle\gamma|\hat{T}(\vec{r})|\gamma\rangle}_\mathrm{nonsym}
+\frac{1}{2}\sum_{\alpha\beta}{\langle\alpha\beta|\hat{V}(\vec{r}_1,\vec{r}_2)|\alpha\beta\rangle}_\mathrm{nonsym} \nonumber\\
&\hspace{85pt}{}-\frac{1}{2}\sum_{q=p,n}\sum_{\alpha\beta\in q}
{\langle\alpha\beta|\hat{V}(\vec{r}_1,\vec{r}_2)|\beta\alpha\rangle}_\mathrm{nonsym}\end{aligned}$$ Prescriptions (\[V\_2ph\]) and (\[V\_HF\]) can be unified by means of creation and annihilation operators: $$\frac{1}{2}\sum_{i,j}^N\hat{V}(\vec{r}_i,\vec{r}_j) = \frac{1}{2}
\sum_{\alpha\beta\gamma\delta}
{\langle\alpha\beta|\hat{V}(\vec{r}_1,\vec{r}_2)|\gamma\delta\rangle}_\mathrm{nonsym}
\hat{a}^+_\alpha\hat{a}^+_\beta\hat{a}_\delta^{\phantom{|}}\hat{a}_\gamma^{\phantom{|}}$$
Skyrme interaction and density functional {#sec_skyrme}
-----------------------------------------
Skyrme interaction is a phenomenological approach to nuclear potential, which includes spatial derivatives in addition to the local densities. Its definition usually starts with a two-body density-dependent interaction [@Vautherin1972] $$\begin{aligned}
\hat{V}_\mathrm{Sk}(\vec{r}_1,\vec{r}_2) & =
t_0(1+x_0\hat{P}_\sigma)\delta(\vec{r}_1-\vec{r}_2)
-\frac{1}{8}t_1(1+x_1\hat{P}_\sigma) \nonumber\\
&\qquad\qquad\qquad{}\times \big[
(\overleftarrow{\nabla}_1-\overleftarrow{\nabla}_2)^2\delta(\vec{r}_1-\vec{r}_2) + \delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)^2
\big] \nonumber\\
& \quad{}+\frac{1}{4}t_2(1+x_2\hat{P}_\sigma)(\overleftarrow{\nabla}_1-\overleftarrow{\nabla}_2) \cdot \delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)
\nonumber\\
& \quad{}+\frac{1}{6}t_3(1+x_3\hat{P}_\sigma)\delta(\vec{r}_1-\vec{r}_2) \rho^\alpha\Big(\frac{\vec{r}_1+\vec{r}_2}{2}\Big) \nonumber\\
\label{V_skyrme}
& \quad {}+\frac{\mathrm{i}}{4}t_4
(\vec{\sigma}_1+\vec{\sigma}_2)\cdot\big[(\overleftarrow{\nabla}_1-\overleftarrow{\nabla}_2) \times \delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)\big]\end{aligned}$$ with parameters $t_0,t_1,t_2,t_3,t_4,x_0,x_1,x_2,x_3,\alpha$ and a spin-exchange operator $$\label{P_sigma}
\hat{P}_\sigma = \frac{1}{2}(1+\vec{\sigma}_1\cdot\vec{\sigma}_2) =
\frac{1+\sigma_{1z}\sigma_{2z}}{2} + \sigma_{1+}\sigma_{2-} + \sigma_{1-}\sigma_{2+}, \quad
\sigma_\pm = \frac{\sigma_x\pm\mathrm{i}\sigma_y}{2}.$$ Since it is a zero-range interaction, the solution of a many-body problem by Hartree-Fock can be equivalently reformulated as a density functional theory [@Vautherin1972; @Reinhard1992] (given in detail in appendix \[app\_skyr-dft\]), and the complete density functional is
$$\begin{aligned}
\mathcal{H}_\mathrm{Sk} & = \frac{1}{2}\sum_{\alpha\beta} \langle \alpha\beta|\hat{V}_\mathrm{Sk}|\alpha\beta\rangle -
\frac{1}{2}\sum_{q=p,n}\sum_{\alpha\beta\in q} \langle \alpha\beta|\hat{V}_\mathrm{Sk}|\beta\alpha\rangle \nonumber\\
& = \int\mathrm{d}^3 r\bigg\{ \frac{b_0}{2}\rho^2 - \frac{b_0'}{2}\sum_q\rho_q^2
+b_1(\rho\tau\!-\!\vec{j}^{\,2}) - b_1'\sum_q(\rho_q\tau_q\!-\!\vec{j}_q^{\,2})
+\frac{b_2}{2}(\vec{\nabla}\rho)^2 - \frac{b_2'}{2}\sum_q(\vec{\nabla}\rho_q)^2 \nonumber\\
&\qquad {}+\tilde{b}_1\Big(\vec{s}\cdot\vec{T} - \sum_{ij} \mathcal{J}_{ij}^2\Big)
+\tilde{b}_1'\sum_q\Big(\vec{s}_q\cdot\vec{T}_q-\sum_{ij} \mathcal{J}_{q;ij}^2\Big)
+\frac{b_3}{3}\rho^{\alpha+2}-\frac{b_3'}{3}\rho^\alpha\sum_q\rho_q^2 \nonumber\\
&\qquad {}-b_4\big[\rho\vec{\nabla}\cdot\vec{\mathcal{J}}
+\vec{s}\cdot(\vec{\nabla}\times\vec{j})\big]
-b_4'\sum_q\big[\rho_q\vec{\nabla}\cdot\vec{\mathcal{J}}_q+\vec{s}_q\cdot (\vec{\nabla}\times\vec{j}_q)\big] \nonumber\\
\label{Skyrme_DFT}
&\qquad {}+\frac{\tilde{b}_0}{2}\vec{s}^2-\frac{\tilde{b}_0'}{2}\sum_q\vec{s}_q^{\,2}
+\frac{\tilde{b}_2}{2}\sum_{ij}(\nabla_i s_j)^2-\frac{\tilde{b}_2'}{2}\sum_q \sum_{ij}(\nabla_i s_j)_q^2
+\frac{\tilde{b}_3}{3}\rho^\alpha\vec{s}^{\,2}-\frac{\tilde{b}_3'}{3}\rho^\alpha\sum_q \vec{s}_q^{\,2} \bigg\}\end{aligned}$$
\
where the last line contains the spin terms, which are usually omitted. However, they have quite important contribution for magnetic excitations, as will be shown in section \[sec\_spin-tens\], so I am using them in all calculations. Parameters $b_j$ depend on the parameters $t_j,x_j$ from (\[V\_skyrme\]): $$\begin{aligned}
b_0 &= \tfrac{t_0(2+x_0)}{2}, \quad b_0' = \tfrac{t_0(1+2x_0)}{2}, \quad
\tilde{b}_0 = \tfrac{t_0 x_0}{2}, \quad \tilde{b}_0' = \tfrac{t_0}{2}, \nonumber\\
b_1 &= \tfrac{t_1(2+x_1)+t_2(2+x_2)}{8}, \quad
b_1' = \tfrac{t_1(1+2x_1)-t_2(1+2x_2)}{8}, \quad
\tilde{b}_1 = \tfrac{t_1 x_1 + t_2 x_2}{8}, \quad
\tilde{b}_1' = \tfrac{-t_1+t_2}{8}, \nonumber\\
b_2 &= \tfrac{3t_1(2+x_1)-t_2(2+x_2)}{16}, \quad
b_2' = \tfrac{3t_1(1+2x_1)+t_2(1+2x_2)}{16}, \quad
\tilde{b}_2 = \tfrac{3t_1 x_1 - t_2 x_2}{16}, \quad
\tilde{b}_2' = \tfrac{3t_1 + t_2}{16}, \nonumber\\
b_3 &= \tfrac{t_3(2+x_3)}{8}, \quad b_3' = \tfrac{t_3(1+2x_3)}{8}, \quad
\tilde{b}_3 = \tfrac{t_3 x_3}{8}, \quad \tilde{b}_3' = \tfrac{t_3}{8}, \quad
b_4 = b_4' = \tfrac{t_4}{2}\end{aligned}$$ Most Skyrme parametrizations set explicitly $\tilde{b}_1=\tilde{b}_1'=0$ and this fact is denoted here as exclusion of the “tensor term” (not to be confused with spin-tensor term utilized in the shell model). There are also parametrizations fitted with the tensor term included, e.g. SGII [@SGII], SLy7 [@SLy6], SkT6 [@SkT6].
The ground state densities (denoted in general as $J_d(\vec{r}) = \langle\hat{J}_d(\vec{r})\rangle$) are defined: $$\begin{aligned}
\rho_q(\vec{r}) &= \sum_{\alpha\in q}v_\alpha^2 \psi_\alpha^\dagger(\vec{r})\psi_\alpha^{\phantom{|}}(\vec{r}),\quad
\rho(\vec{r}) = \sum_{q=p,n}\rho_q(\vec{r}),\quad
\tau(\vec{r}) = \sum_{\alpha}v_\alpha^2 [\vec{\nabla}\psi_\alpha(\vec{r})]^\dagger
\cdot[\vec{\nabla}\psi_\alpha(\vec{r})], \nonumber\\
\mathcal{J}_{jk}(\vec{r}) &= \frac{\mathrm{i}}{2}\sum_{\alpha}v_\alpha^2
\big\{[\partial_j\sigma_k\psi_\alpha(\vec{r})]^\dagger\psi_\alpha(\vec{r}) -
\psi_\alpha^\dagger(\vec{r})[\partial_j\sigma_k\psi_\alpha(\vec{r})]\big\},
\quad \mathcal{J}_i(\vec{r}) = \sum_{jk} \varepsilon_{ijk}\mathcal{J}_{jk}(\vec{r}), \nonumber\\
\vec{\mathcal{J}}(\vec{r}) &= \frac{\mathrm{i}}{2}\sum_{\alpha}v_\alpha^2
\big\{[(\vec{\nabla}\times\vec{\sigma})\psi_\alpha(\vec{r})]^\dagger
\psi_\alpha(\vec{r}) - \psi_\alpha^\dagger(\vec{r})[(\vec{\nabla}\times\vec{\sigma})\psi_\alpha(\vec{r})]\big\}, \nonumber\\
\vec{j}(\vec{r}) &= \frac{\mathrm{i}}{2}\sum_{\alpha}v_\alpha^2
\big\{[\vec{\nabla}\psi_\alpha(\vec{r})]^\dagger\psi_\alpha(\vec{r}) -
\psi_\alpha^\dagger(\vec{r})[\vec{\nabla}\psi_\alpha(\vec{r})]\big\},\quad
\vec{s}(\vec{r}) = \sum_{\alpha} v_\alpha^2
\psi_\alpha^\dagger(\vec{r})\vec{\sigma}\psi_\alpha^{\phantom{|}}(\vec{r}), \nonumber\\
\label{Jd_gs}
\vec{T}(\vec{r}) &= \sum_{\alpha} v_\alpha^2 \sum_j
[\partial_j\psi_\alpha(\vec{r})]^\dagger\vec{\sigma}[\partial_j\psi_\alpha(\vec{r})]\end{aligned}$$ The densities $\rho,\tau,\mathcal{J}$ are time-even, the currents $\vec{j},\vec{s},\vec{T}$ are time-odd, and $v_\alpha^2$ is the occupation probability defined later in (\[bogoliubov\]). The time-odd currents vanish in the ground state ($0^+$) of even-even nuclei. The operators corresponding to the densities and currents are
$$\begin{aligned}
\textrm{density:}\quad & \hat{\rho}(\vec{r}_0) = \delta(\vec{r}-\vec{r}_0), \qquad
\textrm{kinetic energy:} \quad \hat{\tau}(\vec{r}_0) =
\overleftarrow{\nabla}\cdot\delta(\vec{r}-\vec{r}_0)\overrightarrow{\nabla}, \nonumber\\
\textrm{spin-orbital:}\quad & \hat{\mathcal{J}}_{jk}(\vec{r}_0) = \tfrac{\mathrm{i}}{2}
\big[\overleftarrow{\nabla}_{\!j}\sigma_k\delta(\vec{r}-\vec{r}_0)
-\delta(\vec{r}-\vec{r}_0)\overrightarrow{\nabla}_{\!j}\sigma_k\big], \nonumber\\
\textrm{vector spin-orbital:}\quad & \hat{\vec{\mathcal{J}}}(\vec{r}_0) = \tfrac{\mathrm{i}}{2} \big[\overleftarrow{\nabla}\times\vec{\sigma}\delta(\vec{r}-\vec{r}_0)
-\delta(\vec{r}-\vec{r}_0)\overrightarrow{\nabla}\times\vec{\sigma}\big],\quad
\hat{\mathcal{J}}_i = \textstyle\sum_{jk}\varepsilon_{ijk}\hat{\mathcal{J}}_{jk}, \nonumber\\
\textrm{current:}\quad & \hat{\vec{j}}(\vec{r}_0) = \tfrac{\mathrm{i}}{2}
\big[\overleftarrow{\nabla}\delta(\vec{r}-\vec{r}_0)
-\delta(\vec{r}-\vec{r}_0)\overrightarrow{\nabla}\big],\quad
\textrm{spin:}\quad \hat{\vec{s}}(\vec{r}_0) = \vec{\sigma}\delta(\vec{r}-\vec{r}_0), \nonumber\\
\label{Jd_op}
\textrm{kinetic energy-spin:}\quad & \hat{T}_j(\vec{r}_0) =
\overleftarrow{\nabla}\cdot\sigma_j\delta(\vec{r}-\vec{r}_0)\overrightarrow{\nabla}\end{aligned}$$
\
and they are understood as single-particle operators in a many-body system; a more explicit notation would be, e.g., $$\hat{\rho}_q(\vec{r}_0) = \sum_{i\in q}\delta(\vec{r}_i-\vec{r}_0)$$
The spin-orbital current $\mathcal{J}_{jk}$ and the current $\nabla_j s_k$ carry two indices, so they can be interpreted as spherical tensor operators and decomposed into a scalar, a vector and a (symmetric) rank-2 tensor part, using the orthogonality of Clebsch-Gordan coefficients, with the components corresponding to angular momentum quantum numbers 0, 1 and 2 (i.e., the total number of components is $1+3+5=9=3^2$).
$$\sum_{i,j=x,y,z} \mathcal{J}_{ij}^2 = \sum_{\mu,\nu=-1}^{1} (-1)^{\mu+\nu} \mathcal{J}_{\mu\nu} \mathcal{J}_{-\mu,-\nu}$$
$$\mathcal{J}_{\tilde{\mu}\tilde{\nu}} = C_{1\tilde{\mu}1\tilde{\nu}}^{00}{[\mathcal{J}_{\mu\otimes\nu}]}_0
+ \sum_{M=-1}^1 C_{1\tilde{\mu}1\tilde{\nu}}^{1M}{[\mathcal{J}_{\mu\otimes\nu}]}_{1M}
+ \sum_{M=-2}^2 C_{1\tilde{\mu}1\tilde{\nu}}^{2M}{[\mathcal{J}_{\mu\otimes\nu}]}_{2M}$$
$$\begin{aligned}
\,{[\mathcal{J}_{\mu\otimes\nu}]}_0 &= \sum_{\mu\nu}C_{1\mu1\nu}^{00}\mathcal{J}_{\mu\nu} =
-\sum_{\mu=-1}^1 \frac{(-1)^\mu}{\sqrt{3}} \mathcal{J}_{\mu,-\mu} =
-\frac{1}{\sqrt{3}}\sum_i \mathcal{J}_{ii} = -\frac{1}{\sqrt{3}}\mathcal{J}_s \nonumber\\[-2pt]
\,{[\mathcal{J}_{\mu\otimes\nu}]}_{1M} &= \sum_{\mu\nu}C_{1\mu1\nu}^{1M}\mathcal{J}_{\mu\nu} =
\frac{\mathrm{i}}{\sqrt{2}} \Big[\sum_{ij} \varepsilon_{ijk}\mathcal{J}_{ij}\Big]_{k\rightarrow M} =
\frac{\mathrm{i}}{\sqrt{2}} {[\vec{\mathcal{J}}]}_M \\
\,{[\mathcal{J}_{\mu\otimes\nu}]}_{2M} &= \sum_{\mu\nu}C_{1\mu1\nu}^{2M}\mathcal{J}_{\mu\nu} =
\mathcal{J}_{tM} = {[\boldsymbol{\mathcal{J}}_{\!t}]}_{M} \nonumber\end{aligned}$$
$$\label{Jsq-decom}
\sum_{ij} \mathcal{J}_{ij}^2 = \frac{1}{3}\Big(\sum_i \mathcal{J}_{ii}\Big)^2 +
\frac{1}{2}\vec{\mathcal{J}}^2 + \sum_{m=-2}^2 (-1)^m
{[\boldsymbol{\mathcal{J}}_{\!t}^{\vphantom{*}}]}_m {[\boldsymbol{\mathcal{J}}_{\!t}^{\vphantom{*}}]}_{-m} =
\frac{1}{3}\mathcal{J}_s^2 + \frac{1}{2}\vec{\mathcal{J}}^2 + \boldsymbol{\mathcal{J}}_{\!t}^2$$
\
The decomposition of the vector spin-orbital current in the tensor-operator convention is (see [@Varshalovich1988 (1.2.28)] for the vector-product formula) $${[\vec{\mathcal{J}}]}_\mathrm{sph} = \begin{pmatrix} (-\mathcal{J}_x-\mathrm{i}\mathcal{J}_y)/\sqrt{2} \\
\mathcal{J}_z \\ (\mathcal{J}_x-\mathrm{i}\mathcal{J}_y)/\sqrt{2} \end{pmatrix} = \begin{pmatrix}
\mathrm{i}(-\mathcal{J}_{10} + \mathcal{J}_{01}) \\ \mathrm{i}(-\mathcal{J}_{1,-1} + \mathcal{J}_{-1,1}) \\
\mathrm{i}(\mathcal{J}_{-1,0} - \mathcal{J}_{0,-1}) \end{pmatrix}$$ The tensor part is then
$$\begin{aligned}
{[\boldsymbol{\mathcal{J}}_{\!t}]}_{\pm2} &= \mathcal{J}_{\pm1,\pm1}
= \frac{\mathcal{J}_{xx} \pm \mathrm{i}\mathcal{J}_{xy} \pm \mathrm{i}\mathcal{J}_{yx} - \mathcal{J}_{yy}}{2} \nonumber\\[2pt]
{[\boldsymbol{\mathcal{J}}_{\!t}]}_{\pm1} &= \frac{\mathcal{J}_{\pm1,0}+\mathcal{J}_{0,\pm1}}{\sqrt{2}}
= \frac{\mp\mathcal{J}_{xz} - \mathrm{i}\mathcal{J}_{yz} \mp \mathcal{J}_{zx} - \mathrm{i}\mathcal{J}_{zy}}{2} \\
{[\boldsymbol{\mathcal{J}}_{\!t}]}_0 &= \frac{\mathcal{J}_{1,-1} + 2\mathcal{J}_{00} + \mathcal{J}_{-1,1}}{\sqrt{6}} =
\frac{-\mathcal{J}_{xx} - \mathcal{J}_{yy} + 2\mathcal{J}_{zz}}{\sqrt{6}} \nonumber
% \mathcal{J}_s &= -\mathcal{J}_{1,-1} + \mathcal{J}_{00} - \mathcal{J}_{-1,1} = \mathcal{J}_{xx} + \mathcal{J}_{yy} + \mathcal{J}_{zz}\end{aligned}$$
\
To check the decomposition (\[Jsq-decom\]), I can substitute the above expressions into it: $$\begin{aligned}
\mathcal{J}_s^2 &= \mathcal{J}_{xx}^2 + \mathcal{J}_{yy}^2 + \mathcal{J}_{zz}^2
+2(\mathcal{J}_{xx}\mathcal{J}_{yy} + \mathcal{J}_{yy}\mathcal{J}_{zz} + \mathcal{J}_{zz}\mathcal{J}_{xx}) \qquad\qquad \times\frac{1}{3} \\
\vec{\mathcal{J}}^2 &= \mathcal{J}_{yz}^2 + \mathcal{J}_{zy}^2 + \mathcal{J}_{zx}^2 + \mathcal{J}_{xz}^2 + \mathcal{J}_{xy}^2 + \mathcal{J}_{yx}^2
-2(\mathcal{J}_{yz}\mathcal{J}_{zy} + \mathcal{J}_{zx}\mathcal{J}_{xz} + \mathcal{J}_{xy}\mathcal{J}_{yx}) \quad \times\frac{1}{2} \\
\boldsymbol{\mathcal{J}}_{\!t}^2 &= \frac{2}{3}(\mathcal{J}_{xx}^2 + \mathcal{J}_{yy}^2 + \mathcal{J}_{zz}^2
-\mathcal{J}_{xx}\mathcal{J}_{yy} - \mathcal{J}_{yy}\mathcal{J}_{zz} - \mathcal{J}_{zz}\mathcal{J}_{xx}) + \frac{1}{2}(\mathcal{J}_{xy}+\mathcal{J}_{yx})^2 \\
&\quad{}+\frac{1}{2}[(\mathcal{J}_{zx}+\mathcal{J}_{xz})^2 + (\mathcal{J}_{yz}+\mathcal{J}_{zy})^2]\end{aligned}$$
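The decomposition (\[Jsq-decom\]) can also be checked numerically for an arbitrary real matrix $\mathcal{J}_{ij}$, identifying the three parts with the trace, the antisymmetrized (vector) part and the symmetric traceless part — a short NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 3))               # arbitrary Cartesian J_ij

Js = np.trace(J)                              # scalar part J_s = sum_i J_ii
Jvec = np.array([J[1, 2] - J[2, 1],           # vector part eps_ijk J_jk
                 J[2, 0] - J[0, 2],
                 J[0, 1] - J[1, 0]])
Jt = (J + J.T) / 2 - np.eye(3) * Js / 3       # symmetric traceless (rank-2) part

lhs = np.sum(J**2)
rhs = Js**2 / 3 + Jvec @ Jvec / 2 + np.sum(Jt**2)
assert np.isclose(lhs, rhs)                   # the identity (Jsq-decom)
```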
Skyrme RPA in the spherically symmetric case {#sec_skyr_sph}
--------------------------------------------
The complete treatment of the various terms of the Skyrme density functional, and of the residual interaction derived from it, is given below for spherical symmetry. Some of these concepts remain valid also for axial symmetry, so the corresponding section will be accordingly shorter.
### Notation for one-body matrix elements {#sec_notation}
Spherical decomposition of a single-particle wavefunction (spin 1/2) is $$\langle\vec{r}|\alpha\rangle = \psi_\alpha(\vec{r}) =
R_\alpha(r)\,\Omega_{j_\alpha m_\alpha}^{l_\alpha}(\vartheta,\varphi) =
R_\alpha(r) \sum_{\nu \xi}
C_{l_\alpha,\nu,\frac{1}{2},\xi}^{j_\alpha,m_\alpha}
Y_{l_\alpha\nu}^{\phantom{|}}(\vartheta,\varphi)\,
\chi_{\xi}^{\phantom{|}}$$ with $\Omega_{j_\alpha m_\alpha}^{l_\alpha}$ denoting the spin-orbitals and $\chi_{\xi}^{\phantom{|}}$ the spinors. Greek letters will be used to label single-particle and single-quasiparticle states; they should not be confused later with the creation and annihilation operators for quasiparticles, which are always denoted by a hat (i.e. $\hat{\alpha}_\alpha,\,\hat{\alpha}_\beta^+$).
Time reversal of a nucleon wavefunction is defined as $$\label{t_inv}
\psi_{\bar{\alpha}}(\vec{r}) = \hat{T}\psi_\alpha(\vec{r}) =
\mathrm{i}\sigma_y\psi_\alpha^*(\vec{r}) =
(-1)^{l_\alpha+j_\alpha+m_\alpha}\psi_{-\alpha}(\vec{r}),\quad
\hat{T}|\bar{\alpha}\rangle = -|\alpha\rangle$$ where ${-}\alpha\equiv\{n_\alpha,j_\alpha,l_\alpha,-m_\alpha\}$. Time-parity of an operator $\hat{A}$, denoted as $\gamma_T^A$, is defined by the relation $$\hat{T}^{-1}\hat{A}\hat{T} = \gamma_T^A\hat{A}^\dagger, \quad
\gamma_T^A = \Big\{
\begin{array}{l} +1:\quad\textrm{time-even} \\ -1:\quad\textrm{time-odd}\end{array}$$ Single-particle matrix elements then satisfy
\[time-inv\] $$\begin{aligned}
\langle\bar{\alpha}|\hat{A}|\bar{\beta}\rangle &=
\langle\bar{\alpha}|\hat{A}\hat{T}|\beta\rangle =
\langle\alpha|\hat{T}^{-1}\hat{A}\hat{T}|\beta\rangle^* =
\gamma_T^A\langle\beta|\hat{A}|\alpha\rangle \\
\langle\alpha|\hat{A}|\bar{\beta}\rangle &=
\langle\alpha|\hat{A}\hat{T}|\beta\rangle =
-\langle\bar{\alpha}|\hat{T}^{-1}\hat{A}\hat{T}|\beta\rangle^* =
-\gamma_T^A\langle\beta|\hat{A}|\bar{\alpha}\rangle\end{aligned}$$
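Restricted to the spin sector, where $\hat{T}=\mathrm{i}\sigma_y\hat{K}$ and the orbital phases play no role, the relations (\[time-inv\]) can be verified numerically ($\gamma_T^A=+1$ for the unit matrix, $-1$ for the Pauli matrices); this is only an illustrative sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def bar(psi):
    """Time-reversed spinor |psi_bar> = i sigma_y K |psi>."""
    return 1j * sy @ np.conj(psi)

rng = np.random.default_rng(1)
a = rng.standard_normal(2) + 1j * rng.standard_normal(2)
b = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# gamma_T = +1 (time-even) for the identity, -1 (time-odd) for the spin operators
for A, gamma in [(I2, +1), (sx, -1), (sy, -1), (sz, -1)]:
    lhs = np.conj(bar(a)) @ A @ bar(b)    # <a_bar| A |b_bar>
    rhs = gamma * (np.conj(b) @ A @ a)    # gamma_T <b| A |a>
    assert np.isclose(lhs, rhs)
```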
To account for pairing (treated in more detail in section *\[sec\_pair\] Pairing interaction*), quasiparticles are introduced by the Bogoliubov transformation [@Ring1980 p. 234] $$\label{bogoliubov}
\begin{array}{ll}
\hat{a}_\beta^+ = u_\beta^{\phantom{|}} \hat{\alpha}_\beta^+
+ v_\beta^{\phantom{|}} \hat{\alpha}_{\bar{\beta}}^{\phantom{|}},\quad &
\{\hat{\alpha}_\alpha^{\phantom{|}},\hat{\alpha}_\beta^+\} =
\delta_{\alpha\beta},\\
\hat{a}_{\bar{\beta}}^+ = u_\beta^{\phantom{|}} \hat{\alpha}_{\bar{\beta}}^+
- v_\beta^{\phantom{|}} \hat{\alpha}_\beta^{\phantom{|}},\quad &
\{\hat{\alpha}_\alpha^{\phantom{|}},\hat{\alpha}_\beta^{\phantom{|}}\} = 0
\end{array}$$ with real positive coefficients $u_\beta,v_\beta$ satisfying $u_\beta^2+v_\beta^2 = 1$. The states $\alpha,\beta,\ldots$ are obtained from the Skyrme Hartree-Fock iteration, which is supplemented by a solution of the BCS equations to obtain $u_\beta,v_\beta$. The HF+BCS ground state of an even-even nucleus is then a vacuum with respect to the quasiparticle annihilation operators $\hat{\alpha}_\beta$. A one-body operator can be expressed as $$\begin{aligned}
\hat{A} = \sum_{\alpha\beta}
\langle\alpha|\hat{A}|\beta\rangle\hat{a}_\alpha^+\hat{a}_\beta^{\phantom{+}}
= \sum_{\alpha\beta} &\langle\alpha|\hat{A}|\beta\rangle
\big(u_\alpha^{\phantom{|}}v_\beta^{\phantom{|}}
\hat{\alpha}_\alpha^+\hat{\alpha}_{\bar{\beta}}^+ + v_\alpha^{\phantom{|}}u_\beta^{\phantom{|}} \hat{\alpha}_{\bar{\alpha}}^{\phantom{|}}\hat{\alpha}_\beta^{\phantom{|}}
\nonumber\\[-8pt]
&\qquad\ \
{}+u_\alpha^{\phantom{|}}u_\beta^{\phantom{|}}\hat{\alpha}_\alpha^+\hat{\alpha}_\beta^{\phantom{|}}
-v_\alpha^{\phantom{|}}v_\beta^{\phantom{|}}\hat{\alpha}_{\bar{\beta}}^+\hat{\alpha}_{\bar{\alpha}}^{\phantom{|}} + v_\alpha^2\delta_{\alpha\beta}\big)\end{aligned}$$ In the RPA, the operators are evaluated only in ground-state commutators, $\langle[\hat{A},\hat{B}]\rangle$, where the non-zero contributions come only from $\hat{\alpha}_{\alpha}^{\phantom{|}}\hat{\alpha}_\beta^{\phantom{|}},\hat{\alpha}_{\alpha}^+\hat{\alpha}_\beta^+$. I will therefore drop all other terms and symmetrize according to (\[time-inv\]) to obtain
\[2qp\_operator\] $$\begin{aligned}
\hat{A} &= \frac{1}{2}\sum_{\alpha\beta}u_{\alpha\beta}^{(\gamma_T^A)}
\langle\alpha|\hat{A}|\bar{\beta}\rangle
(-\hat{\alpha}_\alpha^+\hat{\alpha}_\beta^+ + \gamma_T^A
\hat{\alpha}_{\bar{\alpha}}^{\phantom{*}}\hat{\alpha}_{\bar{\beta}}^{\phantom{*}}) \\
&= \frac{1}{2}\sum_{\alpha\beta}u_{\alpha\beta}^{(\gamma_T^A)}
\langle\bar{\alpha}|\hat{A}|\beta\rangle
(\hat{\alpha}_{\bar{\alpha}}^+\hat{\alpha}_{\bar{\beta}}^+ - \gamma_T^A \hat{\alpha}_\alpha^{\phantom{|}}\hat{\alpha}_\beta^{\phantom{|}})\end{aligned}$$
where the pairing factors were abbreviated as $$\label{u_ab}
u_{\alpha\beta}^{(+)} = u_\alpha^{\phantom{|}}v_\beta^{\phantom{|}}+ v_\alpha^{\phantom{|}} u_\beta^{\phantom{|}}, \qquad
u_{\alpha\beta}^{(-)} = u_\alpha^{\phantom{|}}v_\beta^{\phantom{|}}- v_\alpha^{\phantom{|}} u_\beta^{\phantom{|}}$$ The sums are not restricted with respect to double counting, so the diagonal matrix elements are treated correctly. Ordering of the pairs $\alpha\beta$ with properly included diagonal matrix elements will be discussed at (\[order2qp\]).
Vector hermitian operators can be rewritten as tensor operators of rank 1 $$\hat{A}_1 = (-\hat{A}_x-\mathrm{i}\hat{A}_y)/\sqrt{2},\quad
\hat{A}_0 = \hat{A}_z,\quad
\hat{A}_{-1} = (\hat{A}_x-\mathrm{i}\hat{A}_y)/\sqrt{2}$$ I therefore define hermiticity of the tensor operator by the condition $$\label{hermit}
\hat{A}_m^\dagger = (-1)^m\hat{A}_{-m}$$ The same rule applies also to higher-rank tensor operators. Besides scalars and vectors, I will use rank-2 tensors (denoted by boldface, $\boldsymbol{A}$). When the rank is not specified, I will use upright bold symbols ($\mathbf{A}$). By the term “rank”, I refer to the “spin” part of an operator (i.e., to its multi-component nature; its meaning is closer to the photon spin than to the nucleon spin).
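The hermiticity condition (\[hermit\]) is easy to verify numerically for the spherical components of a vector operator with hermitian Cartesian components (a NumPy sketch):

```python
import numpy as np

def to_spherical(Ax, Ay, Az):
    """Spherical components {m: A_m} of a Cartesian vector operator."""
    return {+1: (-Ax - 1j * Ay) / np.sqrt(2),
             0: Az + 0j,
            -1: (Ax - 1j * Ay) / np.sqrt(2)}

rng = np.random.default_rng(2)
# hermitian (here real symmetric) Cartesian components A_x, A_y, A_z
A = [(M + M.T) / 2 for M in rng.standard_normal((3, 4, 4))]
Am = to_spherical(*A)
for m in (+1, 0, -1):
    assert np.allclose(Am[m].conj().T, (-1) ** m * Am[-m])   # condition (hermit)
```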
Since the density and current operators depend on position, their angular part will be decomposed by orbital angular momentum ($L$) and total angular momentum ($J$ or $\lambda$) in terms of scalar ($Y_{LM}$), vector ($\vec{Y}_{JM}^L$) and tensor ($\boldsymbol{Y}_{\!JM}^L$) spherical harmonics, whose decomposition in terms of Clebsch-Gordan coefficients and tensor-operator-like components (denoted by $[\ ]_m$) is in general $$\begin{aligned}
\label{sph_vectors}
\mathbf{Y}_{JM}^L(\vartheta,\varphi) &=
\sum_{m\mu}C_{Lms\mu}^{JM} Y_{Lm}\mathbf{e}_\mu =
\sum_{m=-s}^s (-1)^m \big[\mathbf{Y}_{JM}^L\big]_m\mathbf{e}_{-m} \\
&= (-1)^{J+L+M+s}\mathbf{Y}_{J,-M}^{L*}(\vartheta,\varphi)\end{aligned}$$ where I choose $\vec{e}_0 = \vec{e}_z$ and $\mathbf{e}_0 = (2\vec{e}_z\vec{e}_z-\vec{e}_x\vec{e}_x-\vec{e}_y\vec{e}_y)/\sqrt{6}$; $s$ denotes the rank (0: scalar, 1: vector, 2: tensor), and $L\in\{J-s,\ldots J+s\}$.
When $\hat{A}$ is a tensor operator with multipolarity $\lambda,\mu$, then, according to the Wigner-Eckart theorem, I can factorize a Clebsch-Gordan coefficient out of $\langle\alpha|\hat{A}|\beta\rangle$ and obtain a reduced matrix element, which will be denoted by $A_{\alpha\beta}$ (with the pairing factor included) instead of by a bra-ket, so as not to cause confusion in the many-body quasiparticle formalism. $$\label{A_rme}
u_{\alpha\beta}^{(\gamma_T^A)}\langle\alpha|\hat{A}|\bar{\beta}\rangle =
\frac{(-1)^{l_\beta+j_\beta+m_\beta}A_{\alpha\beta}}{\sqrt{2j_\alpha+1}}
C_{j_\beta,-m_\beta,\lambda,\mu}^{j_\alpha,m_\alpha} =
\frac{(-1)^{l_\beta}A_{\alpha\beta}}{\sqrt{2\lambda+1}}
C_{j_\alpha m_\alpha j_\beta m_\beta}^{\lambda\mu}$$
$$\begin{aligned}
\hat{A} & = \frac{1}{2}\sum_{\alpha\beta}
\frac{(-1)^{l_\beta}A_{\alpha\beta}}{\sqrt{2\lambda+1}}
C_{j_\alpha m_\alpha j_\beta m_\beta}^{\lambda\mu}
(-\hat{\alpha}_\alpha^+\hat{\alpha}_\beta^+ + \gamma_T^A
\hat{\alpha}_{\bar{\alpha}}^{\phantom{*}}
\hat{\alpha}_{\bar{\beta}}^{\phantom{*}}) \\
& = \frac{1}{2}\sum_{\alpha\beta}
\frac{(-1)^{l_\alpha+\lambda+\mu+1}A_{\alpha\beta}}{\sqrt{2\lambda+1}}
C_{j_\alpha m_\alpha j_\beta m_\beta}^{\lambda,-\mu}
(\hat{\alpha}_{\bar{\alpha}}^+\hat{\alpha}_{\bar{\beta}}^+ - \gamma_T^A \hat{\alpha}_\alpha^{\phantom{|}}\hat{\alpha}_\beta^{\phantom{|}})\end{aligned}$$
The commutator in the ground state then evaluates as $$\label{comm}
\langle[\hat{A},\hat{B}]\rangle = \frac{1}{2}\sum_{\alpha\beta}
\frac{(-1)^{l_\alpha+l_\beta+\lambda+\mu}}{2\lambda+1}
(\gamma_T^A-\gamma_T^B)A_{\alpha\beta}B_{\alpha\beta}$$ where the sum no longer runs over $m_\alpha,m_\beta$, and the operators are assumed to have the same $\lambda$ and opposite $\mu$.
The formalism of reduced matrix elements needs to be generalized to the density and current operators (\[Jd\_op\]), which are position-dependent, in contrast with the usual tensor operators (\[A\_rme\]). The outcome will first be demonstrated for the ordinary density, using (7.2.40) in [@Varshalovich1988]: $$\begin{aligned}
\langle\alpha|\hat{\rho}(\vec{r})|\bar{\beta}\rangle &= R_\alpha(r) R_\beta(r)
(-1)^{l_\beta+j_\beta+m_\beta}
\Omega_{j_\alpha m_\alpha}^{l_\alpha\dagger}(\vartheta,\varphi)
\Omega_{j_\beta,-m_\beta}^{l_\beta}(\vartheta,\varphi) \nonumber\\
\Omega_{j_\alpha m_\alpha}^{l_\alpha\dagger}
\Omega_{j_\beta,-m_\beta}^{l_\beta} &=
\sum_L (-1)^{j_\alpha+m_\alpha+j_\beta+L+\frac{1}{2}}
\sqrt{\tfrac{(2j_\alpha+1)(2j_\beta+1)(2l_\alpha+1)(2l_\beta+1)}{4\pi(2L+1)}}
\begin{Bmatrix} l_\alpha & l_\beta & L \\ j_\beta & j_\alpha & \frac{1}{2} \end{Bmatrix}
\nonumber\\[-5pt]
&\qquad\qquad{}\times C_{l_\alpha 0 l_\beta 0}^{L 0}
C_{j_\alpha,-m_\alpha,j_\beta,-m_\beta}^{L,m_\beta-m_\alpha}
Y_{L,-m_\beta-m_\alpha} \nonumber\\
&= \sum_L (-1)^{j_\beta+\frac{1}{2}}
\sqrt{\tfrac{(2j_\alpha+1)(2j_\beta+1)(2l_\alpha+1)(2l_\beta+1)}{4\pi(2j_\alpha+1)}}
\begin{Bmatrix} l_\alpha & l_\beta & L \\ j_\beta & j_\alpha & \frac{1}{2} \end{Bmatrix}
\nonumber\\[-5pt]
\label{rho_rme_example}
&\qquad\qquad{}\times C_{l_\alpha 0 l_\beta 0}^{L 0}
C_{j_\beta,-m_\beta,L,m_\beta+m_\alpha}^{j_\alpha,m_\alpha}
Y^*_{L,m_\beta+m_\alpha}(\vartheta,\varphi)\end{aligned}$$ As can be seen, besides the Clebsch-Gordan coefficient and numerical factors, there is a radial function and a complex-conjugated spherical harmonic (the appearance of $Y^*$ can be understood as coming from the multipolar decomposition of the delta function into $\delta(r)Y(\hat{r})Y^*(\hat{r})$). In the generalization of the Wigner-Eckart theorem (\[A\_rme\]), I will absorb the radial dependence into the reduced matrix element, which will be denoted like $\rho_{\alpha\beta}^L(r)$. In general (see appendix \[app\_Jab\]), the multipolar expansion of the density and current operators $\hat{\mathbf{J}}_d(\vec{r})$ (\[Jd\_op\]) contains spherical harmonics in scalar, vector or tensor form: $\mathbf{Y}_{J,M}^{L*}(\vartheta,\varphi)$. I then define the reduced matrix element $J_{d;\alpha\beta}^{JL}(r)$ as $$\begin{aligned}
u_{\alpha\beta}^{(\gamma_T^d)}\langle\alpha|\hat{\mathbf{J}}_d|\bar{\beta}\rangle
& = \sum_{LJ} \frac{(-1)^{l_\beta+j_\beta+m_\beta}
J_{d;\alpha\beta}^{JL}(r)}{\sqrt{2j_\alpha+1}}
C_{j_\beta,-m_\beta,J,m_\alpha+m_\beta}^{j_\alpha,m_\alpha}
\mathbf{Y}_{J,m_\alpha+m_\beta}^{L*}(\vartheta,\varphi) \nonumber\\
\label{dens_rme}
& = \sum_{LJ} J_{d;\alpha\beta}^{JL}(r)\frac{(-1)^{l_\beta}}{\sqrt{2J+1}}
C_{j_\alpha m_\alpha j_\beta m_\beta}^{J,m_\alpha+m_\beta}
\mathbf{Y}_{J,m_\alpha+m_\beta}^{L*}(\vartheta,\varphi)\end{aligned}$$ The operators are then expressed in terms of quasiparticles (\[2qp\_operator\])
\[rme\] $$\begin{aligned}
\!\!\hat{\mathbf{J}}_d(\vec{r}) & = \tfrac{1}{2}\!\!\!\sum_{\alpha\beta LJM}\!\!\!
J_{d;\alpha\beta}^{JL}(r)\frac{(-1)^{l_\beta}}{\sqrt{2J+1}}
C_{j_\alpha m_\alpha j_\beta m_\beta}^{JM} \mathbf{Y}_{JM}^{L*}(\vartheta,\varphi)
(-\hat{\alpha}_{\alpha}^+ \hat{\alpha}_{\beta}^+ + \gamma_T^d
\hat{\alpha}_{\bar{\alpha}} \hat{\alpha}_{\bar{\beta}}) \\
& = \tfrac{1}{2}\!\!\!\sum_{\alpha\beta LJM}\!\!\!
J_{d;\alpha\beta}^{JL}(r)\frac{(-1)^{l_\alpha+L+s+1}}{\sqrt{2J+1}}
C_{j_\alpha m_\alpha j_\beta m_\beta}^{JM} \mathbf{Y}_{JM}^L(\hat{r})
(\hat{\alpha}_{\bar{\alpha}}^+ \hat{\alpha}_{\bar{\beta}}^+ - \gamma_T^d
\hat{\alpha}_{\alpha}^{\phantom{*}} \hat{\alpha}_{\beta}^{\phantom{*}})\end{aligned}$$
All the density and current operators are hermitian and their reduced matrix elements satisfy $$\label{rme_hermit}
J_{d;\alpha\beta}^{JL}(r) = \gamma_T^{d}(-1)^{l_\alpha+l_\beta+L+s}
J_{d;\alpha\beta}^{JL*}(r) = (-1)^{l_\alpha+l_\beta+j_\alpha+j_\beta+J+1}
J_{d;\beta\alpha}^{JL}(r)$$
### Reduced matrix elements of densities and currents {#sec_rme}
To simplify the expressions for the reduced matrix elements, it is convenient to absorb certain numerical factors into the radial wavefunctions, e.g. the factor $\sqrt{(2j+1)(2l+1)}$ in the ordinary density (\[rho\_rme\_example\]). The other densities and currents also involve derivative operators, and the following shorthand notation for the radial wavefunctions turns out to be convenient
\[Rpm\] $$\begin{aligned}
R_{\alpha}^{(0)} &\equiv \sqrt{(2j_\alpha+1)(2l_\alpha+1)}\, R_\alpha(r) \phantom{\bigg(} \\
R_{\alpha}^{(+)} &\equiv -\sqrt{(2j_\alpha+1)(l_\alpha+1)(2l_\alpha+3)}\,\bigg( \frac{\mathrm{d}R_\alpha(r)}{\mathrm{d}r} - \frac{l_\alpha}{r} R_\alpha(r)\bigg) \\
R_{\alpha}^{(-)} &\equiv \sqrt{(2j_\alpha+1)l_\alpha(2l_\alpha-1)}\,\bigg( \frac{\mathrm{d}R_\alpha(r)}{\mathrm{d}r} + \frac{l_\alpha+1}{r} R_\alpha(r)\bigg)\end{aligned}$$
and a shifted angular momentum will be denoted by $$l_\alpha^+ = l_\alpha+1,\quad l_\alpha^- = l_\alpha-1$$ An example of the derivation, for the vector spin-orbital current $\mathcal{J}_{\alpha\beta}^{JL}(r)$, is given in appendix \[app\_Jab\]; it illustrates the main steps involved also for the remaining densities/currents.
Precise differentiation of the wavefunctions in (\[Rpm\]), which are defined on an equidistant grid (with spacing $\Delta$, going from $-n\Delta$ to $n\Delta$), is achieved through their discrete Fourier transformation. $$\begin{aligned}
R(r) &= \sum_{k=-n}^{n-1}\tilde{R}_k\mathrm{e}^{\mathrm{i}\pi kr/n\Delta}
= \frac{1}{2n}\sum_{k=-n}^{n-1}\sum_{j=-n}^{n-1}
\mathrm{e}^{\mathrm{i}\pi k(r/\Delta-j)/n} R(j\Delta)
\nonumber\\
\frac{\mathrm{d}R(r)}{\mathrm{d}r}\bigg|_{r=m\Delta} &=
\frac{1}{2n}\sum_{k=-n+1}^{n-1}\sum_{j=-n}^{n-1}
\frac{\mathrm{i}\pi k}{n\Delta}\mathrm{e}^{\mathrm{i}\pi k(m-j)/n} R(j\Delta)
\nonumber\\
&= \sum_{j=-n}^{n-1}\bigg(\frac{-\pi}{n^2\Delta}\sum_{k=1}^{n-1}
k\sin\frac{\pi k(m-j)}{n}\bigg) R(j\Delta)\end{aligned}$$ In practice, the convolution matrix (in large parentheses) is calculated in advance for the two cases of even and odd $R(r)$, and then applied to the functions $R_\alpha(r)$. Alternatively, the expressions (\[Rpm\]) can be calculated analytically, if the radial wavefunctions are expressed in the basis of the spherical harmonic oscillator (see later (\[Rpm\_sho\])).
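The construction of the spectral differentiation matrix can be sketched in NumPy and tested on a grid-periodic function, for which the result is exact up to roundoff (an illustration of the method, not the production code):

```python
import numpy as np

def fourier_diff_matrix(n, delta):
    """D[m,j] = -pi/(n^2 delta) * sum_{k=1}^{n-1} k sin(pi k (m-j)/n)."""
    idx = np.arange(-n, n)                       # grid indices j = -n ... n-1
    m, j = np.meshgrid(idx, idx, indexing="ij")
    k = np.arange(1, n)[:, None, None]
    return -np.pi / (n**2 * delta) * np.sum(k * np.sin(np.pi * k * (m - j) / n), axis=0)

n, delta = 32, 0.25
r = np.arange(-n, n) * delta
L = 2 * n * delta                                # period of the grid
f = np.sin(2 * np.pi * r / L)                    # band-limited, grid-periodic test function
df = fourier_diff_matrix(n, delta) @ f
assert np.allclose(df, 2 * np.pi / L * np.cos(2 * np.pi * r / L))
```

In practice the matrix would be precomputed once per parity of $R(r)$, exactly as described above.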
The reduced matrix elements (\[dens\_rme\],\[rme\]) of the quantities used in the Skyrme functional are listed below. I will later supplement the r.m.e. with an index $q\in\{p,n\}$ (e.g. $\rho_{q;\alpha\beta}^{L}(r)$, where $\alpha\beta\in q$).
$$\begin{aligned}
% density
\rho_{\alpha\beta}^L(r) & =
u_{\alpha\beta}^{(+)}R_\alpha^{(0)}(r) R_\beta^{(0)}(r)
\frac{(-1)^{j_\beta+\frac{1}{2}}}{\sqrt{4\pi}}
\begin{Bmatrix}
l_\alpha & \!l_\beta\! & L \\ j_\beta & \!j_\alpha\! & \frac{1}{2} \end{Bmatrix}
C_{l_\alpha 0 l_\beta 0}^{L 0} \\
% kinetic energy
\tau_{\alpha\beta}^L(r) & = u_{\alpha\beta}^{(+)}
\bigg[ \sum_{ss'}^{\pm\pm} R_\alpha^{(s)}(r) R_\beta^{(s')}(r) \begin{Bmatrix}
l_\alpha^s & l_\beta^{s'} & L \\ l_\beta & l_\alpha & 1 \end{Bmatrix}
C_{l_\alpha^s 0 l_\beta^{s'} 0}^{L 0} \bigg]
\frac{(-1)^{j_\beta-\frac{1}{2}}}{\sqrt{4\pi}}
\begin{Bmatrix} l_\alpha & l_\beta & L \\ j_\beta & j_\alpha & \frac{1}{2} \end{Bmatrix} \\
% spin-orbital - vector
\label{spin-orb_me}
\mathcal{J}_{\alpha\beta}^{JL}(r) & =
\frac{1}{2}u_{\alpha\beta}^{(+)}\bigg[\sum_{ss'}^{0\pm,\pm0}
\mathcal{A}_{\alpha\beta LJ}^{\vec{\mathcal{J}},ss'} R_\alpha^{(s)}(r) R_\beta^{(s')}(r) \bigg]
(-1)^{j_\beta+\frac{1}{2}}
\sqrt{\frac{2J+1}{4\pi}} \\
& \mathcal{A}_{\alpha\beta LJ}^{\vec{\mathcal{J}},0\pm} =
C_{l_\alpha 0 l_\beta^\pm 0}^{L 0}
\bigg[
\begin{Bmatrix} l_\alpha & \!l_\beta\! & J \\ j_\beta & \!j_\alpha\! & \frac{1}{2} \end{Bmatrix}
\begin{Bmatrix} l_\alpha & \!l_\beta\! & J \\ 1 & \!L\! & l_\beta^\pm \end{Bmatrix}
-\frac{2\sqrt{3}}{\sqrt{2j_\beta+1}}
\begin{Bmatrix} j_\alpha & j_\beta & J \\ l_\alpha & l_\beta^\pm & L \\ \frac{1}{2} & \!\frac{1}{2}\! & 1 \end{Bmatrix}
\bigg] \nonumber\\
& \mathcal{A}_{\alpha\beta LJ}^{\vec{\mathcal{J}},\pm0} =
(-1)^{J+L+1} C_{l_\alpha^\pm 0 l_\beta 0}^{L 0}
\bigg[
\begin{Bmatrix} l_\alpha & \!l_\beta\! & J \\ j_\beta & \!j_\alpha\! & \frac{1}{2} \end{Bmatrix}
\begin{Bmatrix} l_\beta & \!l_\alpha\! & J \\ 1 & \!L\! & l_\alpha^\pm \end{Bmatrix}
-\frac{2\sqrt{3}}{\sqrt{2j_\alpha+1}}
\begin{Bmatrix} j_\beta & \!j_\alpha\! & J \\ l_\beta & \!l_\alpha^\pm\! & L \\ \frac{1}{2} & \!\frac{1}{2}\! & 1 \end{Bmatrix}
\bigg] \nonumber\\
% spin-orbital - scalar
\label{spin-orb-s_me}
\mathcal{J}_{s;\alpha\beta}^L(r) & = \mathrm{i}
u_{\alpha\beta}^{(+)}\frac{(-1)^{j_\beta+\frac{1}{2}}}{\sqrt{8\pi}}\Big[
\frac{{\mp}R_\alpha^{(0)}R_\beta^{(\pm)}\!\!\!}{\sqrt{2j_\beta+1}}
C_{l_\alpha 0 l_\beta^\pm 0}^{L0} \begin{Bmatrix}
l_\alpha & l_\beta^\pm & L \\ j_\beta & j_\alpha & \frac{1}{2} \end{Bmatrix}
\pm \frac{R_\alpha^{(\pm)}R_\beta^{(0)}}{\sqrt{2j_\alpha+1}}
C_{l_\alpha^\pm 0 l_\beta 0}^{L0} \begin{Bmatrix}
l_\alpha^\pm & l_\beta & L \\ j_\beta & j_\alpha & \frac{1}{2} \end{Bmatrix}
\Big] \\
% spin-orbital - tensor
\label{spin-orb-t_me}
\mathcal{J}_{t;\alpha\beta}^{JL}(r) & = \mathrm{i} u_{\alpha\beta}^{(+)}
\bigg[\sum_{ss'}^{0\pm,\pm0} \mathcal{A}_{\alpha\beta LJ}^{\boldsymbol{\mathcal{J}},ss'}
R_\alpha^{(s)}(r)R_\beta^{(s')}(r) \bigg]
(-1)^{j_\beta+\frac{1}{2}} \sqrt{\frac{5(2J+1)}{6\pi}} \\
& \mathcal{A}_{\alpha\beta LJ}^{\boldsymbol{\mathcal{J}},0\pm} =
(j_\beta-l_\beta)
{\textstyle \sqrt{\frac{(4j_\beta-2l_\beta+1\pm1)[2\mp2(j_\beta-l_\beta)]}
{(2l_\beta+1)(2j_\beta+1)}} }
\begin{Bmatrix} j_\alpha & j_\beta & J \\ l_\alpha & l_\beta^\pm & L \\ \frac{1}{2} & \frac{3}{2} & 2 \end{Bmatrix} C_{l_\alpha 0 l_\beta^\pm 0}^{L0} \nonumber\\
& \mathcal{A}_{\alpha\beta LJ}^{\boldsymbol{\mathcal{J}},\pm0} =
(-1)^{J+L+1}(j_\alpha-l_\alpha)
{\textstyle \sqrt{\frac{(4j_\alpha-2l_\alpha+1\pm1)[2\mp2(j_\alpha-l_\alpha)]}
{(2l_\alpha+1)(2j_\alpha+1)}} }
\begin{Bmatrix} j_\beta & j_\alpha & J \\ l_\beta & l_\alpha^\pm & L \\ \frac{1}{2} & \frac{3}{2} & 2 \end{Bmatrix} C_{l_\alpha^\pm 0 l_\beta 0}^{L0} \nonumber \\
% divergence of spin-orbital
\!\!\!\!\!\!\!(\nabla\cdot \mathcal{J})_{\alpha\beta}^L(r) & = \tau_{\alpha\beta}^L(r) -
u_{\alpha\beta}^{(+)}\frac{2(-1)^{j_\alpha+L+\frac{1}{2}}
R_\alpha^{(\pm)}(r) R_\beta^{(\pm)}(r)
}{\sqrt{4\pi(2j_\alpha+1)(2j_\beta+1)}}
\begin{Bmatrix} l_\alpha^\pm & \!l_\beta^\pm\! & L \\ j_\beta & \!j_\alpha\! & \frac{1}{2} \end{Bmatrix}
C_{l_\alpha^\pm 0 l_\beta^\pm 0}^{L 0} \bigg|_{\pm:\,j=l\pm\frac{1}{2}}\! \\
% current
\label{j_me}
j_{\alpha\beta}^{JL}(r) & =
\frac{\mathrm{i}}{2}u_{\alpha\beta}^{(-)}
\bigg[ \sum_{ss'}^{0\pm,\pm0}
\mathcal{A}_{\alpha\beta LJ}^{\vec{j},ss'} R_\alpha^{(s)}(r) R_\beta^{(s')}(r) \bigg]
(-1)^{j_\beta-\frac{1}{2}}
\sqrt{\frac{2J+1}{4\pi}}
\begin{Bmatrix} l_\alpha & \!l_\beta\! & J \\ j_\beta & \!j_\alpha\! & \frac{1}{2} \end{Bmatrix} \\
& \mathcal{A}_{\alpha\beta LJ}^{\vec{j},0\pm} =
\begin{Bmatrix} l_\alpha & \!l_\beta\! & J \\ 1 & \!L\! & l_\beta^\pm \end{Bmatrix}
C_{l_\alpha 0 l_\beta^\pm 0}^{L 0},
\qquad
\mathcal{A}_{\alpha\beta LJ}^{\vec{j},\pm0} = (-1)^{L+J}
\begin{Bmatrix} l_\beta & \!l_\alpha\! & J \\ 1 & \!L\! & l_\alpha^\pm \end{Bmatrix}
C_{l_\alpha^\pm 0 l_\beta 0}^{L 0} \nonumber\\
% curl of current
\!\!\!\!\!\!\!\!(\nabla\times j)_{\alpha\beta}^{JL}(r) & =
u_{\alpha\beta}^{(-)}\bigg[ \sum_{ss'}^{\pm\pm}
R_\alpha^{(s)}R_\beta^{(s')}
\begin{Bmatrix} l_\alpha & \!l_\beta\! & J \\ l_\alpha^{s} & \!l_\beta^{s'}\! & L \\ 1 & \!1\! & 1 \end{Bmatrix} C_{l_\alpha^s 0 l_\beta^{s'} 0}^{L0} \bigg]
(-1)^{j_\beta+L+J-\frac{1}{2}}\sqrt{\frac{3(2J+1)}{2\pi}}
\begin{Bmatrix} l_\alpha & \!l_\beta\! & J \\ j_\beta & \!j_\alpha\! & \frac{1}{2} \end{Bmatrix} \\
% spin
s_{\alpha\beta}^{JL}(r) & =
u_{\alpha\beta}^{(-)}R_\alpha^{(0)}(r) R_\beta^{(0)}(r)
(-1)^{l_\beta} \sqrt{\frac{3(2J+1)}{2\pi}}\,
\begin{Bmatrix} j_\alpha & \!j_\beta\! & J \\ l_\alpha & \!l_\beta\! & L \\ \frac{1}{2} & \!\frac{1}{2}\! & 1 \end{Bmatrix}
C_{l_\alpha 0 l_\beta 0}^{L 0} \\
% kinetic energy-spin
T_{\alpha\beta}^{JL}(r) & =
u_{\alpha\beta}^{(-)}\bigg[\sum_{ss'}^{\pm\pm}
R_\alpha^{(s)} R_\beta^{(s')} \begin{Bmatrix}
l_\alpha^s & l_\beta^{s'} & L \\ l_\beta & l_\alpha & 1 \end{Bmatrix}
C_{l_\alpha^s 0 l_\beta^{s'} 0}^{L 0} \bigg]
(-1)^{l_\beta+1} \sqrt{\frac{3(2J+1)}{2\pi}}\,
\begin{Bmatrix} j_\alpha & \!j_\beta\! & J \\ l_\alpha & \!l_\beta\! & L \\ \frac{1}{2} & \!\frac{1}{2}\! & 1 \end{Bmatrix}\end{aligned}$$
I am interested in electric and magnetic transitions of multipolarity $\lambda$, so the relevant matrix elements follow the selection rules $$\label{EM_sel_rules}
J=\lambda,\quad(-1)^{l_\alpha+l_\beta+\lambda}=\Big\{\begin{array}{l}
{+}1:\quad\textrm{electric} \\ {-}1:\quad\textrm{magnetic} \end{array}$$ These selection rules together with (\[rme\_hermit\]) lead to the conditions on the non-zero $L$-components listed in Table \[tab-L\].
| $(-1)^{l_\alpha+l_\beta+\lambda}$ | $\rho$ | $\tau$ | $\mathcal{J},\vec{\mathcal{J}},\boldsymbol{\mathcal{J}}$ | $\vec{j}$ | $\vec{s},\,\vec{T},\,\vec{\nabla}\times\vec{j}$ |
|:---:|:---:|:---:|:---:|:---:|:---:|
| $+1$ | $L=\lambda$ | $L=\lambda$ | $L=\lambda\pm1$ | $L=\lambda\pm1$ | $L=\lambda$ |
| $-1$ | 0 | 0 | $L=\lambda,\lambda\pm2$ | $L=\lambda$ | $L=\lambda\pm1$ |
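For bookkeeping, Table \[tab-L\] can be encoded as a small lookup function (the density labels are my own shorthand; $\rho$ and $\tau$ have no magnetic components, and negative $L$ values produced for low $\lambda$ must be discarded):

```python
def allowed_L(lam, parity, density):
    """Non-zero L-components from Table tab-L; parity = (-1)^(l_a + l_b + lam)."""
    electric = parity == +1
    if density in ("rho", "tau"):
        return [lam] if electric else []
    if density in ("Js", "Jvec", "Jt"):       # scalar/vector/tensor spin-orbital
        return [lam - 1, lam + 1] if electric else [lam - 2, lam, lam + 2]
    if density == "j":
        return [lam - 1, lam + 1] if electric else [lam]
    if density in ("s", "T", "curl_j"):
        return [lam] if electric else [lam - 1, lam + 1]
    raise ValueError(density)
```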
The reduced matrix elements of $\vec{\nabla}\rho$ and $(\nabla s)$ are not given here, since they are simply related to $\vec{j}$ and $\mathcal{J}$ (\[Jd\_op\]), differing only by a relative sign and an imaginary constant, e.g. $$(\vec{\nabla}\rho)(\vec{r}_0) =
\overleftarrow{\nabla}_{\!(\vec{r})}\delta(\vec{r}-\vec{r}_0)
+\delta(\vec{r}-\vec{r}_0)\overrightarrow{\nabla}_{\!(\vec{r})}$$ The differentiation in the definitions above does not spoil the hermiticity of the corresponding operators, because the resulting operators can be given equivalently as commutators, e.g. $$\label{diff_op}
(\vec{\nabla}\rho)(\vec{r}) = -\sum_j[\vec{\nabla}_j,\hat{\rho}(\vec{r})] = -\mathrm{i}\sum_j[\hat{\vec{p}}_j,\hat{\rho}(\vec{r})]/\hbar$$ where $j$ labels the particles.
Most of the $9j$ symbols given above do not need to be calculated explicitly, since their product with $C_{l_\alpha 0 l_\beta 0}^{L 0}$ can be expressed through Clebsch-Gordan coefficients, e.g. $C_{j_\alpha,-\frac{1}{2},j_\beta,\frac{1}{2}}^{J,0}$, see [@Varshalovich1988 eq. 10.9.10–12].
### Hartree-Fock in the basis of spherical harmonic oscillator
The Hartree-Fock solution (in its density-functional form) corresponds to a variation of the full Hamiltonian $\mathcal{H}$ with respect to the densities $J_d(\vec{r})$, which yields the single-particle Hamiltonian $\hat{h}$: $$\hat{h} = \int\mathrm{d}\vec{r}\,\sum_d\frac{\delta\mathcal{H}}{\delta J_d(\vec{r})}
\hat{J}_d(\vec{r})$$ The ground-state densities, which are contained in $\frac{\delta\mathcal{H}}{\delta J_d}$, are non-zero in spherical even-even nuclei only in their monopole component ($J=0$) and only in the time-even case. They can be calculated from the reduced matrix elements of the previous section by re-evaluating (\[dens\_rme\]), assuming $j_a = j_b = j$, $l_a=l_b=l$, $m_a=m_b=-m_\beta$, or, more precisely, $|\bar{\beta}\rangle\mapsto|\overline{-b}\rangle = (-1)^{l_b+j_b-m_b}|b\rangle$. $$\label{HF_me}
\langle a|\hat{\mathbf{J}}_d|b\rangle =
\frac{J_{d;ab}^{0L\mathrm{(HF)}}(r)}{\sqrt{2j+1}} \mathbf{Y}_{00}^{L*}$$ Terms with $J>0$ cancel in the summation over $m$ during the calculation of the ground-state densities. $L$ is irrelevant for scalar densities, but is fixed as $L=1$ for vector densities and $L=2$ for tensor densities (due to the triangular inequality in the coupling of orbital and spin angular momentum). The index (HF) and the Latin letters emphasize that the indices $a,\,b$ correspond to the basis of the spherical harmonic oscillator instead of the HF basis, so the factor $u_{ab}^{(+)}$ is absent here (instead, the factors $v^2$ will be included later)
$$\begin{aligned}
\rho_{ab}^{0\mathrm{(HF)}}(r) &= \frac{1}{\sqrt{4\pi(2j+1)}}
\frac{\displaystyle R_a^{(0)} R_b^{(0)}}{2l+1} = \sqrt{\frac{2j+1}{4\pi}}\,R_a(r) R_b(r) \\
\tau_{ab}^{0\mathrm{(HF)}}(r) &= \frac{1}{\sqrt{4\pi(2j+1)}}\frac{1}{2l+1}
\bigg(\frac{\displaystyle R_a^{(+)}R_b^{(+)}}{2l^+ + 1}+\frac{\displaystyle R_a^{(-)}R_b^{(-)}}{2l^- + 1}\bigg) \\
\!\!(\nabla\!\cdot\!\mathcal{J})_{ab}^{0\mathrm{(HF)}}(r) &= \tau_{ab}^{0\mathrm{(HF)}}(r)
- \frac{1}{\sqrt{4\pi(2j+1)}}\frac{\displaystyle 2R_a^{(\pm)}R_b^{(\pm)}}{(2j+1)(2l^\pm+1)}
\bigg|_{\pm:\,j=l\pm\frac{1}{2}} \\
(\nabla\rho)_{ab}^{01\mathrm{(HF)}}(r) &= \frac{1}{\sqrt{4\pi(2j+1)(2l+1)}}
\frac{1}{2l+1} \bigg[
\sqrt{\frac{l+1}{2l^++1}}\big(R_a^{(+)}R_b^{(0)}+R_a^{(0)}R_b^{(+)}\big) \nonumber\\
&\qquad\quad\qquad\qquad\qquad{}-\sqrt{\frac{l}{2l^-+1}}
\big(R_a^{(-)}R_b^{(0)}+R_a^{(0)}R_b^{(-)}\big) \bigg] \\
\mathcal{J}_{ab}^{01\mathrm{(HF)}}(r) &= \frac{(\nabla\rho)_{ab}^{01\mathrm{(HF)}}(r)}{2}
\mp \frac{\displaystyle R_a^{(\pm)}R_b^{(0)}+R_a^{(0)}R_b^{(\pm)}}{\sqrt{8\pi(2l+1)(2l^\pm+1)}\,(2j+1)} \bigg|_{\pm:\,j=l\pm\frac{1}{2}}\end{aligned}$$
where $l^\pm = l\pm1$. Scalar and tensor spin-orbital currents are zero due to Clebsch-Gordan coefficient $C_{l_\alpha 0 l_\alpha^\pm 0}^L$ in (\[spin-orb-s\_me\]), (\[spin-orb-t\_me\]) with $L=0$ or $2$, respectively.
The basis of the spherical harmonic oscillator (SHO) is defined by the oscillator length $b$ (not to be confused with the w.f. labels above), the orbital angular momentum $l$ and the radial quantum number $\nu\in\{0,1,2,\ldots\}$. $$\label{SHO}
\psi_{\nu l m_l}^\mathrm{SHO}(r) = R_{\nu l}(r)Y_{lm_l}(\vartheta,\varphi),\quad
E_{\nu l}^\mathrm{SHO} = \hbar\omega\bigg(2\nu+l+\frac{3}{2}\bigg),\quad
b = \sqrt{\frac{\hbar}{m\omega}},$$ The radial part of the s.p. HF matrix elements is evaluated directly in the SHO basis, and the derivatives in the definition of $R^{(\pm)}$ (\[Rpm\]) can be calculated analytically $$\begin{aligned}
{-}\tfrac{\mathrm{d}R_{\nu l}(r)}{\mathrm{d}r}+\tfrac{l}{r}R_{\nu l}(r) &=
\tfrac{1}{b}\big[\sqrt{\nu+l+3/2}\,R_{\nu,l+1}(r)+\sqrt{\nu}\,R_{\nu-1,l+1}(r)\big]\\
\tfrac{\mathrm{d}R_{\nu l}(r)}{\mathrm{d}r}+\tfrac{l+1}{r}R_{\nu l}(r) &=
\tfrac{1}{b}\big[\sqrt{\nu+l+1/2}\,R_{\nu,l-1}(r)+\sqrt{\nu+1}\,R_{\nu+1,l-1}(r)\big]\end{aligned}$$ so the expressions for $R^{(\pm)}$ are
\[Rpm\_sho\] $$\begin{aligned}
R_{\nu l}^{(0)} &= \sqrt{(2j+1)(2l+1)}\,R_{\nu l}(r) \\
\!R_{\nu l}^{(+)} &= \sqrt{(2j+1)(l+1)(2l^++1)}\,\tfrac{1}{b}
\Big[\sqrt{\nu+l+3/2}\,R_{\nu,l+1}(r)+\sqrt{\nu}\,R_{\nu-1,l+1}(r)\Big] \\
\!R_{\nu l}^{(-)} &= \sqrt{(2j+1)l(2l^-+1)}\,\tfrac{1}{b}
\Big[\sqrt{\nu+l+1/2}\,R_{\nu,l-1}(r)+\sqrt{\nu+1}\,R_{\nu+1,l-1}(r)\Big]\end{aligned}$$
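The ladder relations above are easy to verify numerically. The following sketch (not part of the formalism; it assumes the standard convention $R_{\nu l}(r) \propto (r/b)^l\,\mathrm{e}^{-r^2/2b^2}\,L_\nu^{l+1/2}(r^2/b^2)$ with positive normalization, and $b=1$) checks the first relation against a finite-difference derivative:

```python
import numpy as np
from scipy.special import genlaguerre, gammaln

def R(nu, l, r, b=1.0):
    """Normalized SHO radial function R_{nu,l}(r), positive near r = 0."""
    x = r / b
    # N^2 = 2*nu! / (b^3 * Gamma(nu + l + 3/2)), evaluated via logs for stability
    logN = 0.5 * (np.log(2.0) + gammaln(nu + 1) - gammaln(nu + l + 1.5)) - 1.5 * np.log(b)
    return np.exp(logN) * x**l * np.exp(-0.5 * x**2) * genlaguerre(nu, l + 0.5)(x**2)

r = np.linspace(0.05, 10.0, 400)
h = 1e-6
for nu, l in [(0, 0), (1, 1), (2, 3), (3, 2)]:
    dR = (R(nu, l, r + h) - R(nu, l, r - h)) / (2 * h)   # numerical dR/dr
    lhs = -dR + (l / r) * R(nu, l, r)                    # -R' + (l/r)R  (b = 1)
    rhs = np.sqrt(nu + l + 1.5) * R(nu, l + 1, r)
    if nu > 0:
        rhs += np.sqrt(nu) * R(nu - 1, l + 1, r)
    assert np.allclose(lhs, rhs, atol=1e-5)
```

The second relation (lowering $l$) can be checked the same way.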
Kinetic energy can be evaluated in a similar way, and the only non-zero matrix elements are
\[HF\_kinetic\] $$\begin{aligned}
\langle\nu-1,l|\nabla^2|\nu,l\rangle &= {-}\sqrt{\nu(\nu+l+1/2)}/b^2 \\
\langle\nu,l|\nabla^2|\nu,l\rangle &= {-}\big(2\nu+l+\tfrac{3}{2}\big)/b^2 \\
\langle\nu+1,l|\nabla^2|\nu,l\rangle &= {-}\sqrt{(\nu+1)(\nu+l+3/2)}/b^2\end{aligned}$$
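These tridiagonal matrix elements admit a quick consistency check: in the same basis, the matrix of $r^2$ is tridiagonal with the same off-diagonal elements up to sign (the standard result $\langle\nu+1,l|r^2|\nu,l\rangle = -\sqrt{(\nu+1)(\nu+l+3/2)}\,b^2$, quoted here as an assumption), so the oscillator Hamiltonian $-\frac{\hbar^2}{2m}\nabla^2 + \frac{1}{2}m\omega^2 r^2$ built from both must come out exactly diagonal with eigenvalues $\hbar\omega(2\nu+l+\frac{3}{2})$. A sketch in units $\hbar=m=\omega=b=1$:

```python
import numpy as np

def sho_hamiltonian(l, nmax):
    """H = -(1/2)grad^2 + (1/2)r^2 in the SHO basis (hbar = m = omega = b = 1)."""
    nu = np.arange(nmax)
    off = -np.sqrt((nu[:-1] + 1) * (nu[:-1] + l + 1.5))   # <nu+1|grad^2|nu> = <nu+1|r^2|nu>
    lap = np.diag(-(2 * nu + l + 1.5)) + np.diag(off, 1) + np.diag(off, -1)
    r2 = np.diag(2 * nu + l + 1.5) + np.diag(off, 1) + np.diag(off, -1)
    return -0.5 * lap + 0.5 * r2

for l in (0, 1, 2):
    H = sho_hamiltonian(l, 8)
    expected = 2 * np.arange(8) + l + 1.5
    assert np.allclose(H, np.diag(expected))   # off-diagonals cancel exactly
```

The off-diagonal parts of the kinetic and potential terms cancel term by term, which is a useful sanity check when coding the matrix elements.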
The Skyrme HF calculation then proceeds in a straightforward iterative way:
1. HF wavefunctions $R_\alpha(r)$ are evaluated on the radial grid from the orthogonal matrices $U_{a\alpha\phantom{b}}^{(j,l)}$ in each subspace of $j$ and $l$ (index $a$ is essentially equivalent to $\nu$ in spherical-harmonic-oscillator basis).
2. Ground-state densities are calculated from $R_\alpha(r)$, taking into account the pairing factors $v_\alpha^2$ (given by a separate BCS step, or taken as $0/1$ according to occupancy). Densities from the previous iteration are admixed to the new densities (by 50%) to stabilize the convergence. The Coulomb potential is calculated by folding with $1/|\vec{r}_1-\vec{r}_2|$ according to section \[sec\_sph\_coul\]. The total energy can be calculated here as well.
3. Matrix elements of the single-particle HF Hamiltonian are calculated by radial integration of a product of the ground-state densities and the matrix elements of densities (\[HF\_me\]) in the SHO basis. The kinetic term (\[HF\_kinetic\]) is shown separately from $\mathcal{H}$ in the formula below. Moreover, it is possible to include the center-of-mass correction for the kinetic energy (see section \[sec\_kin-cm\]), if correction-before-variation is needed – this option also requires the calculation of the density matrix $D_{ab}^{(j,l)} = \sum_\alpha v_\alpha^2 U_{a\alpha\phantom{b}}^{(j,l)} U_{b\alpha}^{(j,l)}$, which is $(2j+1)$-times degenerate in the quantum number $m$.
4. Diagonalization of the single-particle HF Hamiltonian $\hat{h}$ to get single-particle energies and matrices $U_{a\alpha}^{(j,l)}$ (with eigenvectors in columns).
$$\begin{aligned}
&R_\alpha(r)=\sum_{a}U_{a\alpha}^{(j,l)}R_a(r) \quad\Rightarrow\quad
J_d(r) = \sum_{j,l} \sum_{\alpha\in(j,l)} (2j+1)v_\alpha^2
\frac{J_{d;\alpha\alpha}^{0L\mathrm{(HF)}}(r)}{\sqrt{2j+1}} \\
&\qquad\qquad\Rightarrow\quad
\langle a|\hat{h}|b\rangle = -\frac{\hbar^2}{2m_q}\langle a|\nabla^2|b\rangle
+ \int\mathrm{d}^3 r \sum_d \frac{\delta\mathcal{H}}{\delta J_d(r)}
\frac{J_{d;ab}^{0L\mathrm{(HF)}}(r)}{\sqrt{2j+1}}\end{aligned}$$
The iterations are repeated until the relative difference in the total energy becomes lower than $10^{-14}$ (this takes from 50 iterations for Ca up to 90 iterations for Pb). Then, four iterations are done without admixing the previous densities.
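The 50% admixing of the previous densities is a linear mixing that damps the self-consistency iteration. Its effect can be illustrated on a toy fixed-point problem (the map `f` below is a stand-in for one HF iteration and is not part of the actual code); plain iteration diverges when the map's slope exceeds 1 in magnitude, while the damped iteration converges:

```python
def solve_fixed_point(f, x0, mix=0.5, tol=1e-14, itmax=200):
    """x_{n+1} = (1-mix)*x_n + mix*f(x_n): linear mixing as a damped fixed point."""
    x = x0
    for it in range(itmax):
        x_new = (1 - mix) * x + mix * f(x)
        if abs(x_new - x) < tol * max(1.0, abs(x_new)):
            return x_new, it + 1
        x = x_new
    raise RuntimeError("no convergence")

f = lambda x: 2.0 - 1.8 * x        # toy "self-consistency map" with slope -1.8
x, nit = solve_fixed_point(f, 0.0, mix=0.5)
assert abs(x - 2.0 / 2.8) < 1e-12  # converges to the fixed point x = f(x)
# with mix = 1.0 (no admixing) the same iteration would diverge, since |f'| > 1
```

With 50% mixing the effective slope is $\frac{1}{2}(1-1.8) = -0.4$, so the error shrinks geometrically instead of growing.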
### Full RPA {#sec_fullrpa}
Excitations of a given multipolarity will be treated as RPA phonons. A one-phonon state is denoted as $|\nu\rangle$, with energy $E_\nu = \hbar\omega_\nu$ above the ground state, and is created by the action of the operator $\hat{C}_\nu^+$ on the RPA ground state $|\textrm{RPA}\rangle$. $$\hat{C}_\nu^+|\textrm{RPA}\rangle=|\nu\rangle,\quad
\hat{C}_\nu^{\phantom{|}}|\textrm{RPA}\rangle=0$$ Operator $\hat{C}_\nu^+$ is a two-quasiparticle ($2qp$) operator defined by real coefficients $c_{\alpha\beta}^{(\nu\pm)}$ $$\label{phonon_sph}
\hat{C}_\nu^+ = \frac{1}{2}\sum_{\alpha\beta}
C_{j_\alpha m_\alpha j_\beta m_\beta}^{\lambda_\nu\mu_\nu}\Big(
c_{\alpha\beta}^{(\nu-)}\hat{\alpha}_{\alpha}^+\hat{\alpha}_{\beta}^+ +
c_{\alpha\beta}^{(\nu+)}\hat{\alpha}_{\bar{\alpha}}^{\phantom{*}}
\hat{\alpha}_{\bar{\beta}}^{\phantom{*}} \Big)$$ (in the following, I will drop the index $\nu$ in $\lambda_\nu,\:\mu_\nu$), its normalization is given by $$\label{RPA_norm}
\langle[\hat{C}_\nu^{\phantom{|}},\hat{C}_{\nu'}^+]\rangle = \delta_{\nu\nu'} \quad\Rightarrow\quad
\frac{1}{2}\sum_{\alpha\beta} \Big(\big|c_{\alpha\beta}^{(\nu-)}\big|^2
- \big|c_{\alpha\beta}^{(\nu+)}\big|^2 \Big) = 1$$ and it satisfies the RPA equation $$\label{RPA_eq}
{[\hat{H},\hat{C}_\nu^+]}_{2qp} = E_\nu\hat{C}_\nu^+$$ where the index $2qp$ means that I take only the two-quasiparticle portion of the commutator (after normal ordering). Although all commutators should be evaluated in the RPA ground state, I evaluate them in the HF+BCS ground state (i.e., I am using the quasi-boson approximation), which is a common practice, as the contribution of $4qp$ and higher correlations in the ground state to the expectation value of the commutators is assumed to be low [@Ring1980].
The Hamiltonian is taken as a sum of the mean-field part (HF+BCS) and the residual interaction given by the second functional derivative of the energy density functional (\[full\_hamil\]). $$\hat{H} = \hat{H}_0 + \hat{V}_\mathrm{res} =
\sum_\gamma\varepsilon_\gamma\hat{\alpha}_\gamma^+\hat{\alpha}_\gamma^{\phantom{|}}
+ \frac{1}{2}\sum_{dd'}\int\!\!\!\int\mathrm{d}^3 r\,\mathrm{d}^3 r'
\frac{\delta^2\mathcal{H}}{\delta J_d(\vec{r})\delta J_{d'}({\vec{r}\,}')}
:\!\hat{J}_{d}(\vec{r})\hat{J}_{d'}({\vec{r}\,}')\!:$$ The left-hand side of (\[RPA\_eq\]) is then evaluated as (with $\varepsilon_{\alpha\beta} = \varepsilon_\alpha + \varepsilon_\beta$) $$\begin{aligned}
{[\hat{H}_0,\hat{C}_\nu^+]} & = \frac{1}{2}\sum_{\alpha\beta}
\varepsilon_{\alpha\beta} C_{j_\alpha m_\alpha j_\beta m_\beta}^{\lambda \mu}
\Big( c_{\alpha\beta}^{(\nu-)} \hat{\alpha}_\alpha^+ \hat{\alpha}_\beta^+
- c_{\alpha\beta}^{(\nu+)} \hat{\alpha}_{\bar{\alpha}}^{\phantom{*}} \hat{\alpha}_{\bar{\beta}}^{\phantom{*}} \Big) \\
{[\hat{V}_\mathrm{res},\hat{C}_\nu^+]}_{2qp} & = \sum_{dd'}
\int\!\!\!\int\mathrm{d}^3 r\,\mathrm{d}^3 r'
\frac{\delta^2\mathcal{H}}{\delta J_d(\vec{r})\delta J_{d'}({\vec{r}\,}')}
\langle[\hat{J}_d(\vec{r}),\hat{C}_\nu^+]\rangle\hat{J}_{d'}({\vec{r}\,}') \\
& = \sum_{dd'}\gamma_T^d\,
\frac{1}{4}\sum_{\alpha\beta\gamma\delta L}
\int_0^\infty r^2\mathrm{d}r \frac{\delta^2\mathcal{H}}{\delta J_d\delta J_{d'}}
J_{d;\alpha\beta}^{\lambda L*}(r)J_{d';\gamma\delta}^{\lambda L}(r)
\frac{(-1)^{l_\beta+l_\delta+1}}{2\lambda+1}
C_{j_\gamma m_\gamma j_\delta m_\delta}^{\lambda\mu} \nonumber\\
& \qquad\times
\Big(\gamma_T^d c_{\alpha\beta}^{(\nu-)}+c_{\alpha\beta}^{(\nu+)}\Big)
(-\hat{\alpha}_{\gamma}^+ \hat{\alpha}_{\delta}^+ + \gamma_T^{d'}
\hat{\alpha}_{\bar{\gamma}}^{\phantom{*}} \hat{\alpha}_{\bar{\delta}}^{\phantom{*}})\end{aligned}$$
At this point, I will remove duplicate $2qp$ pairs. To do it consistently, I will rescale diagonal pairing factors $$\label{order2qp}
\frac{1}{2}\sum_{\alpha\beta}\mapsto\sum_{\alpha\geq\beta},\quad
u_{\alpha\alpha}^{(+)} = \sqrt{2}\,u_\alpha v_\alpha\quad\textrm{(instead of $2u_\alpha v_\alpha$)}$$ and $c_{\alpha\alpha}^{(\nu\pm)}$ will be rescaled automatically. Diagonal matrix elements contribute only to electric transitions with $\lambda$ even. Then, a comparison of the coefficients of $\hat{\alpha}_{\gamma}^+ \hat{\alpha}_{\delta}^+$ and $\hat{\alpha}_{\bar{\gamma}}^{\phantom{*}} \hat{\alpha}_{\bar{\delta}}^{\phantom{*}}$ in (\[RPA\_eq\]) leads to
$$\begin{aligned}
(E_\nu-\varepsilon_{\gamma\delta})c_{\gamma\delta}^{(\nu-)} & =
\sum_{dd'L} \sum_{\alpha\geq\beta} \int_0^\infty
\frac{\delta^2\mathcal{H}}{\delta J_d\delta J_{d'}}
J_{d;\alpha\beta}^{\lambda L*}(r)J_{d';\gamma\delta}^{\lambda L}(r)
r^2\mathrm{d}r \frac{(-1)^{l_\beta+l_\delta}}{2\lambda+1}
\Big(c_{\alpha\beta}^{(\nu-)}+\gamma_T^d c_{\alpha\beta}^{(\nu+)}\Big) \\
(E_\nu+\varepsilon_{\gamma\delta})c_{\gamma\delta}^{(\nu+)} & =
-\sum_{dd'L} \sum_{\alpha\geq\beta} \int_0^\infty
\frac{\delta^2\mathcal{H}}{\delta J_d\delta J_{d'}}
J_{d;\alpha\beta}^{\lambda L*}(r)J_{d';\gamma\delta}^{\lambda L}(r)
r^2\mathrm{d}r \frac{(-1)^{l_\beta+l_\delta}}{2\lambda+1}
\Big(\gamma_T^d c_{\alpha\beta}^{(\nu-)}+c_{\alpha\beta}^{(\nu+)}\Big)\end{aligned}$$
\
and these equations can be expressed in a compact matrix form $$\label{fullRPA_eq}
\begin{pmatrix} A & B \\ B & A \end{pmatrix} \binom{c^{(\nu-)}}{c^{(\nu+)}} =
\begin{pmatrix} E_\nu & 0 \\ 0 & -E_\nu \end{pmatrix}
\binom{c^{(\nu-)}}{c^{(\nu+)}}$$ where the real matrices $A,\,B$ in the ordered $2qp$ basis ($p\equiv\alpha\beta,\,p'\equiv\gamma\delta$) are
$$\begin{aligned}
\label{fullRPA_A}
A_{pp'} & = \delta_{pp'}\varepsilon_p + \sum_{dd'L}\frac{(-1)^{l_\beta+l_\delta}}{2\lambda+1}
\int_0^\infty \frac{\delta^2\mathcal{H}}{\delta J_d\delta J_{d'}}
J_{d;p}^{\lambda L}(r)J_{d';p'}^{\lambda L*}(r)r^2\mathrm{d}r \\
\label{fullRPA_B}
B_{pp'} & = \sum_{dd'L} \gamma_T^d \frac{(-1)^{l_\beta+l_\delta}}{2\lambda+1}
\int_0^\infty \frac{\delta^2\mathcal{H}}{\delta J_d\delta J_{d'}}
J_{d;p}^{\lambda L}(r)J_{d';p'}^{\lambda L*}(r)
r^2\mathrm{d}r\end{aligned}$$
The expression $\frac{\delta^2\mathcal{H}}{\delta J_d\delta J_{d'}}$ is symbolic and includes the integration over the delta function, which sets ${\vec{r}\,}' = \vec{r}$. The exchange Coulomb interaction can be treated by the Slater approximation as a density functional (\[xc\]) $$\frac{\delta^2\mathcal{H}_\mathrm{xc}}{\delta\rho_p\delta\rho_p}
= \frac{-1}{\sqrt[3\,]{9\pi}} \frac{e^2}{4\pi\epsilon_0} \rho_{0p}^{-2/3}(r)$$ where $\rho_{0p}(r)$ is the ground-state proton density. However, the direct Coulomb interaction gives rise to a double integral instead (see also corrections in (\[coul\_EM2\])) $$\begin{aligned}
\!\!\int_0^\infty\frac{\delta^2\mathcal{H}}{\delta J_d\delta J_{d'}}
J_{d;\alpha\beta}^{\lambda L*}(r)J_{d';\gamma\delta}^{\lambda L}(r)
r^2\mathrm{d}r \quad\mapsto \nonumber\\
\frac{e^2}{4\pi\epsilon_0} \frac{4\pi}{2\lambda+1}
\int_0^\infty\! r^2\mathrm{d}r \int_0^\infty\! r'^2\mathrm{d}r'
&\rho_{\alpha\beta}^\lambda(r)\rho_{\gamma\delta}^\lambda(r')
\times \bigg\{\!\!\begin{array}{l}
r^\lambda / r'^{\lambda+1} \ \ (r<r') \phantom{\big|}\\
r'^\lambda / r^{\lambda+1} \ \ (r\geq r')\phantom{\big|} \end{array}\end{aligned}$$
Matrix equation (\[fullRPA\_eq\]) can be reduced to a diagonalization of a symmetric matrix of half dimension. I define $$\label{half_RPA}
\!\!\begin{array}{l} x_p = c_p^{(\nu-)} + c_p^{(\nu+)}\phantom{\big|} \\
y_p = c_p^{(\nu-)} - c_p^{(\nu+)}, \end{array} \ \
c_p^{(\nu-)} = \tfrac{x_p+y_p}{2}, \ \
c_p^{(\nu+)} = \tfrac{x_p-y_p}{2}, \ \
\begin{array}{l} Q = A + B \\
P = A - B = CC^T \end{array}$$ where the lower-triangular matrix $C$ is defined as a square root of $P$, i.e., by its Cholesky decomposition (which requires $P$ to be positive definite). Equation (\[fullRPA\_eq\]) then turns into $$Q\vec{x}=E_\nu\vec{y}, \quad P\vec{y}=E_\nu\vec{x}$$ and the eigenvalue problem can be formulated in terms of a symmetric matrix $C^T Q C$ with eigenvalues $E_\nu^2$ and eigenvectors $\vec{R}_\nu$. $$\label{CQC}
\vec{x} = C\vec{R}_\nu,\quad C^T\vec{y}=E_\nu\vec{R}_\nu,\quad
C^T Q C\vec{R}_\nu = E_\nu^2\vec{R}_\nu$$ Normalization condition (\[RPA\_norm\]) then becomes $$\vec{x}\cdot\vec{y} = 1,\quad E_\nu = E_\nu\vec{x}\cdot\vec{y} =
\vec{x}\cdot Q\vec{x} = \vec{R}_\nu\cdot C^T Q C\vec{R}_\nu\quad\rightarrow\quad
\vec{R}_\nu^2 = 1/E_\nu$$
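This reduction is simple to exercise numerically. The sketch below uses random matrices standing in for $A$ and $B$ (constructed so that $A\pm B$ are positive definite, as in a stable RPA): the half-dimension symmetric eigenproblem $C^TQC$ reproduces the positive eigenvalues of the full non-symmetric $2N\times 2N$ problem, and the normalization $\vec{R}_\nu^2 = 1/E_\nu$ yields $\vec{x}\cdot\vec{y}=1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
sym = lambda m: (m + m.T) / 2
A = np.diag(rng.uniform(5.0, 10.0, n)) + 0.2 * sym(rng.standard_normal((n, n)))
B = 0.2 * sym(rng.standard_normal((n, n)))

# Reference: (A B; B A)(c-;c+) = (E, -E)(c-;c+) is equivalent to the
# non-symmetric eigenproblem (A B; -B -A)(c-;c+) = E(c-;c+), with +/-E pairs.
full = np.block([[A, B], [-B, -A]])
E_full = np.sort(np.linalg.eigvals(full).real)
E_pos = E_full[E_full > 0]

# Half-dimension reduction: P = A - B = C C^T (Cholesky), diagonalize C^T Q C.
Q, P = A + B, A - B
C = np.linalg.cholesky(P)                 # requires P positive definite
E2, Rmat = np.linalg.eigh(C.T @ Q @ C)
E_half = np.sqrt(E2)
assert np.allclose(np.sort(E_half), E_pos)

# Amplitudes of the lowest state: R^2 = 1/E gives the normalization x.y = 1.
Rn = Rmat[:, 0] / np.sqrt(E_half[0])
x = C @ Rn                                # x = C R_nu
y = Q @ x / E_half[0]                     # y = Q x / E_nu
assert np.isclose(x @ y, 1.0)
```

Besides halving the dimension, the symmetric form allows the use of standard symmetric eigensolvers, which are faster and numerically more robust than general non-symmetric ones.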
### Transition operators
After calculation of the RPA states, yielding $E_\nu$ and $c_{\alpha\beta}^{(\nu\pm)}$, I am interested in the matrix elements of electric and magnetic transition operators and in the transition densities and currents. $$\label{trans_me}
\langle\nu|\hat{M}_{\lambda\mu}|\textrm{RPA}\rangle =
\langle[\hat{C}_\nu^{\phantom{|}},\hat{M}_{\lambda\mu}]\rangle =
\sum_{\alpha\geq\beta} \frac{(-1)^{l_\beta+1}}{\sqrt{2\lambda+1}}
M_{\lambda;\alpha\beta}
\Big( c_{\alpha\beta}^{(\nu-)} + \gamma_T^M c_{\alpha\beta}^{(\nu+)} \Big)^{\!*}$$ $$\begin{aligned}
\label{trans_rho}
\!\delta\rho_{q;\nu}(\vec{r}) & =
\langle[\hat{C}_\nu^{\phantom{|}},\hat{\rho}_{q}(\vec{r})]\rangle =
\sum_{\alpha\geq\beta} \frac{(-1)^{l_\beta+1}}{\sqrt{2\lambda+1}}
\rho_{q;\alpha\beta}^\lambda(r)
\big( c_{\alpha\beta}^{(\nu-)} + c_{\alpha\beta}^{(\nu+)} \big)^{\!*}
Y_{\lambda\mu}^*(\vartheta,\varphi) \\
\label{trans_cur}
\!\delta\vec{j}_{q;\nu}(\vec{r}) & =
\langle[\hat{C}_\nu^{\phantom{|}},\hat{\vec{j}}_{q}(\vec{r})]\rangle =
\sum_L\sum_{\alpha\geq\beta} \frac{(-1)^{l_\beta+1}}{\sqrt{2\lambda+1}}
j_{q;\alpha\beta}^{\lambda L}(r)
\big( c_{\alpha\beta}^{(\nu-)} - c_{\alpha\beta}^{(\nu+)} \big)^{\!*}
\vec{Y}_{\lambda\mu}^{L*}(\vartheta,\varphi)\!\end{aligned}$$ Besides the electric ($\gamma_T^{\mathrm{E}\lambda} = 1$) and magnetic ($\gamma_T^{\mathrm{M}\lambda} = -1$) operators in the long-wave approximation ($k \equiv E_\nu/\hbar c \ll 1/r$), I will also use the electric vortical, toroidal and compression operators [@Kvasil2011]
\[tran\] $$\begin{aligned}
\label{M_E}
\hat{M}_{\lambda\mu}^\mathrm{E} & = \sum_i \hat{M}^{\mathrm{E}}_{\lambda \mu}(\vec{r}_i)
= e\sum_q z_q\sum_{i\in q}
\Big( r^\lambda Y_{\lambda\mu}(\vartheta,\varphi) \Big)_i \\
\label{M_M}
\hat{M}_{\lambda\mu}^\mathrm{M} & = \frac{\mu_N}{c}\sqrt{\lambda(2\lambda+1)}\,
\sum_q \sum_{i\in q} \bigg(
\bigg[ \frac{g_q}{2}\vec{\sigma}
+ \frac{2 z_q}{\lambda+1} \hat{\vec{l}}\,\bigg]\,
r^{\lambda-1}\vec{Y}_{\lambda\mu}^{\lambda-1}(\vartheta,\varphi)
\bigg)_i \\
\hat{M}_{\mathrm{vor};\lambda\mu}^\mathrm{E} & = \frac{-\mathrm{i}/c}{2\lambda+3}
\sqrt{\frac{2\lambda+1}{\lambda+1}}\int\!\mathrm{d}^3 r\,
\hat{\vec{j}}_\mathrm{nuc}(\vec{r})r^{\lambda+1}
\vec{Y}_{\lambda\mu}^{\lambda+1}(\vartheta,\varphi) =
\hat{M}_{\mathrm{tor};\lambda\mu}^\mathrm{E} + \hat{M}_{\mathrm{com};\lambda\mu}^E \\
\label{M_tor}
\hat{M}_{\mathrm{tor};\lambda\mu}^\mathrm{E} & =
\frac{-1}{2c(2\lambda+3)}\sqrt{\frac{\lambda}{\lambda+1}}
\int\mathrm{d}^3 r\,\hat{\vec{j}}_\mathrm{nuc}(\vec{r})\cdot\vec{\nabla}\times
\big[r^{\lambda+2}\vec{Y}_{\lambda\mu}^\lambda(\vartheta,\varphi)\big] \\
% \frac{-\mathrm{i}\sqrt{\lambda}}{2\sqrt{2\lambda+1}} \int\mathrm{d}^3 r\,
% \hat{\vec{j}}_\mathrm{nuc}(\vec{r})\cdot r^{\lambda+1} \bigg[
% \vec{Y}_{\lambda\mu}^{\lambda-1}(\vartheta,\varphi) +
% \sqrt{\frac{\lambda}{\lambda+1}}\frac{2
% \vec{Y}_{\lambda\mu}^{\lambda+1}(\vartheta,\varphi)}{2\lambda+3}\bigg] \\
\label{M_com}
\hat{M}_{\mathrm{com};\lambda\mu}^\mathrm{E} & =
\frac{\mathrm{i}}{2c(2\lambda+3)}
\int\mathrm{d}^3 r\,\hat{\vec{j}}_\mathrm{nuc}(\vec{r})\cdot
\vec{\nabla}\big[r^{\lambda+2} Y_{\lambda\mu}(\vartheta,\varphi)\big]\quad\
\big(\approx -k\hat{M}_{\mathrm{com};\lambda\mu}^{\mathrm{E}\,\prime}\big) \\
% \frac{\mathrm{i}\sqrt{\lambda}}{2\sqrt{2\lambda+1}} \int\mathrm{d}^3 r\,
% \hat{\vec{j}}_\mathrm{nuc}(\vec{r})\cdot r^{\lambda+1} \bigg[
% \vec{Y}_{\lambda\mu}^{\lambda-1}(\vartheta,\varphi) -
% \sqrt{\frac{\lambda+1}{\lambda}}\frac{2
% \vec{Y}_{\lambda\mu}^{\lambda+1}(\vartheta,\varphi)}{2\lambda+3}\bigg]
\hat{M}_{\mathrm{com};\lambda\mu}^{\mathrm{E}\,\prime} & =
\sum_i \hat{M}_{\mathrm{com};\lambda\mu}^{\mathrm{E}\,\prime}(\vec{r}_i) =
\frac{e}{2(2\lambda+3)}
\sum_q z_q \sum_{i\in q} \Big( r^{\lambda+2} Y_{\lambda\mu}(\vartheta,\varphi) \Big)_i\end{aligned}$$
\
where $z_q$ are the effective charges of the nucleons, $g_{q}$ are the spin g-factors (reduced by a quenching factor of 0.7), $\hat{\vec{l}} = -\mathrm{i}\vec{r}\times\vec{\nabla}$, and $\vec{j}_\mathrm{nuc}$ is the nuclear current composed of a convective and a magnetization part $$\label{j_nuc}
\hat{\vec{j}}_\mathrm{nuc}(\vec{r}) = \frac{e\hbar}{m_p}\sum_{q=p,n}\sum_{i\in q}
\Big[ z_q\hat{\vec{j}}_i(\vec{r}) + \frac{1}{4} g_q\vec{\nabla}_{\!(\vec{r})}\times\hat{\vec{s}}_i(\vec{r})\Big]$$ where (convective) current and spin one-body operators are the same as in Skyrme functional (\[Jd\_op\]): $$\hat{\vec{j}}(\vec{r}_0) = \tfrac{\mathrm{i}}{2}
\big[\overleftarrow{\nabla}\delta(\vec{r}-\vec{r}_0)
-\delta(\vec{r}-\vec{r}_0)\overrightarrow{\nabla}\big],\qquad
\hat{\vec{s}}(\vec{r}_0) = \vec{\sigma}\delta(\vec{r}-\vec{r}_0)$$ Formula (\[j\_nuc\]) can be derived by the non-relativistic reduction of the Dirac current $\vec{j} = ec\psi^\dagger\vec{\alpha}\psi$, replacing the electron-like factor $g=2$ by the generic $g_q$. The reduced matrix element of the orbital-angular-momentum-like operator $$\hat{l}(\vec{r}) = \sum_j\delta(\vec{r}_j-\vec{r})\hat{l}_j
= -\mathrm{i}\sum_j\delta(\vec{r}_j-\vec{r})\vec{r}_j\times\vec{\nabla}_j$$ involved in $\hat{M}_{\lambda\mu}^\mathrm{M}$ (\[M\_M\]) is evaluated as $$\begin{aligned}
l_{\alpha\beta}^{JL}(r) &=
u_{\alpha\beta}^{(-)}R_\alpha^{(0)}(r)R_\beta^{(0)}(r)\frac{(-1)^{j_\beta+\frac{1}{2}}}{\sqrt{4\pi}}
\sqrt{(2\lambda+1)(2l_\alpha+1)(l_\alpha+1)l_\alpha} \nonumber\\
&\qquad\qquad{}\times\begin{Bmatrix} L & J & 1 \\ l_\alpha & l_\alpha & l_\beta \end{Bmatrix}
\begin{Bmatrix} l_\alpha & l_\beta & J \\ j_\beta & j_\alpha & \frac{1}{2} \end{Bmatrix}
C_{l_\alpha 0 l_\beta 0}^{L 0}\end{aligned}$$ usually with $J=\lambda$ and $L=\lambda-1$.
The operators (\[tran\]) can be derived by the long-wave approximation ($kr\ll 1$) of the exact transition operators [@Greiner1996], using $\vec{\nabla}\cdot\delta\vec{j} = -\partial_t\delta\rho = -\mathrm{i}kc\delta\rho$.
\[exactM\] $$\begin{aligned}
\hat{M}_{\lambda\mu}^\mathrm{exactE} & =
-\frac{(2\lambda+1)!!}{ck^{\lambda+1}} \sqrt{\frac{\lambda}{\lambda+1}}
\int\mathrm{d}^3 r\,\hat{\vec{j}}_\mathrm{nuc}(\vec{r})\cdot\vec{\nabla}\times
\big[j_\lambda(kr)\vec{Y}_{\lambda\mu}^\lambda(\vartheta,\varphi)\big] \\
&\approx \hat{M}_{\lambda\mu}^\mathrm{E} - k\hat{M}_{\mathrm{tor};\lambda\mu}^\mathrm{E}
+ \ldots \nonumber\\
\hat{M}_{\lambda\mu}^\mathrm{exactM} & =
-\mathrm{i}\frac{(2\lambda+1)!!}{ck^\lambda} \sqrt{\frac{\lambda}{\lambda+1}}
\int\mathrm{d}^3 r\,\hat{\vec{j}}_\mathrm{nuc}(\vec{r})\cdot
\big[j_\lambda(kr)\vec{Y}_{\lambda\mu}^\lambda(\vartheta,\varphi)\big] \\
&\approx \hat{M}_{\lambda\mu}^\mathrm{M} + \ldots \nonumber\end{aligned}$$
where $j_\lambda(kr)$ is the spherical Bessel function and $k = E_\nu/\hbar c$. $$j_\lambda(kr) = \sum_{n=0}^\infty
\frac{(-1)^n (kr)^{\lambda+2n}}{\displaystyle2^n n!(2\lambda+2n+1)!!}
= \frac{(kr)^\lambda}{(2\lambda+1)!!}\bigg(1
-\frac{(kr)^2}{2(2\lambda+3)}+O[(kr)^4]\bigg)$$ The quantity $k$ is not a constant: it depends on the particular transition and also changes sign under hermitian conjugation. For this reason, the electric operators containing an odd power of $k$ (including $j_\lambda(kr)$) are time-even, despite the time-odd nature of the current $\hat{\vec{j}}_\mathrm{nuc}$ (note that $\hat{M}_{\mathrm{tor};\lambda\mu}^\mathrm{E}$ in our definition is, strictly speaking, time-odd and non-hermitian, because it was stripped of $k$).
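The quality of the long-wave truncation is easy to probe numerically; a small sketch using `scipy.special.spherical_jn` (for nuclear transitions, $kr$ is typically of order $0.1$ or less):

```python
import numpy as np
from scipy.special import spherical_jn, factorial2

kr = 0.1
for lam in (1, 2, 3):
    exact = spherical_jn(lam, kr)
    # leading long-wave terms: (kr)^lam/(2lam+1)!! * (1 - (kr)^2/(2(2lam+3)))
    approx = kr**lam / factorial2(2 * lam + 1) * (1 - kr**2 / (2 * (2 * lam + 3)))
    assert abs(exact - approx) < kr**(lam + 4)   # residual is O((kr)^{lam+4})
```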
The constant involved in the magnetic and toroidal/compression transition operators is $$\frac{\mu_N}{ec} = \frac{\hbar}{2m_p c} = 0.10515445\ \mathrm{fm}$$ and the elementary charge $e$ (as a symbolic parameter without a specific unit system) is usually excluded from the numerical evaluation. The matrix element is then said to be in units $[e.\mathrm{fm}^\lambda]$ (or $[e.\mathrm{fm}^{\lambda+1}]$ for $\hat{M}_{\mathrm{vtc};\lambda\mu}^\mathrm{E}$, or $[e.\mathrm{fm}^{\lambda+2}]$ for $\hat{M}_{\mathrm{com};\lambda\mu}^{\mathrm{E}\,\prime}$). Magnetic transitions are often evaluated excluding the whole $\mu_N/c$ factor, and are then reported as being in units $[\mu_N.\mathrm{fm}^{\lambda-1}]$ (because $\mu_N/c$ in SI units is equivalent to $\mu_N$ in cgs units).
The gamma absorption cross section is related to the transition probability $$B(\lambda\mu,0\rightarrow\nu) = \big|\langle\nu|\hat{M}_{\lambda\mu}|\textrm{RPA}\rangle\big|^2
= \big|\langle[\hat{C}_\nu^{\phantom{|}},\hat{M}_{\lambda\mu}]\rangle\big|^2$$ by the formula [@VeselyPhD] (assuming the exact transition operators (\[exactM\])): $$\begin{aligned}
\label{cross_sec}
\sigma_\gamma(E) = \frac{8\pi^3\alpha}{e^2}\sum_\nu\sum_{\lambda\mu}
\frac{E^{2\lambda-1}}{(\hbar c)^{2\lambda-2}} \frac{\lambda+1}{\lambda[(2\lambda+1)!!]^2}
&\big[B(\mathrm{E}\lambda\mu,0\rightarrow\nu)+B(\mathrm{M}\lambda\mu,0\rightarrow\nu)\big] \nonumber\\[-8pt]
&\quad{}\times\delta_\Delta(E_\nu-E)\end{aligned}$$ where the Lorentz function $$\delta_\Delta(E_\nu-E) = \frac{\Delta}{2\pi[(E_\nu-E)^2+(\Delta/2)^2]}$$ accounts for the finite half-life of the states; in practice, a larger width $\Delta$ is chosen to mimic other effects as well (finite experimental energy resolution, the inability to calculate the fragmentation of the states due to complex configurations, etc.). The observed absorption cross-section is mostly dominated by long-wave isovector E1 transitions, so the larger multipolarities (and also monopole and isoscalar transitions) can be measured only indirectly, for example by electron or alpha scattering. The individual states are usually not distinguishable, and the distribution of the transition probability is depicted by means of a *strength function* $$\label{sf}
S_n(\mathrm{E/M}\lambda\mu; E) = \sum_\nu E^n
B(\mathrm{E/M}\lambda\mu,0\rightarrow\nu)\delta_\Delta(E_\nu-E)$$ where $n$ is usually 0 or 1; $n=0$ is assumed when the index is omitted.
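A strength function of this kind is straightforward to evaluate once the discrete $(E_\nu, B_\nu)$ are known. A sketch with made-up toy data (the spectrum below is purely illustrative); since the Lorentzian integrates to one, the folded $S_0$ approximately conserves the total strength:

```python
import numpy as np

def lorentz(e, width):
    """Normalized Lorentz function delta_Delta(e)."""
    return width / (2 * np.pi * (e**2 + (width / 2)**2))

def strength_function(E_nu, B_nu, E_grid, width=1.0, n=0):
    """S_n(E) = sum_nu E^n B_nu delta_Delta(E_nu - E), Lorentz-folded."""
    S = np.zeros_like(E_grid)
    for Ei, Bi in zip(E_nu, B_nu):
        S += E_grid**n * Bi * lorentz(E_grid - Ei, width)
    return S

# toy discrete spectrum (energies in MeV; B in the units of the chosen operator)
E_nu = np.array([10.0, 14.0, 15.0, 22.0])
B_nu = np.array([0.5, 2.0, 1.5, 0.3])
E = np.linspace(0.0, 40.0, 4001)
S0 = strength_function(E_nu, B_nu, E, width=1.0)
# total strength is approximately conserved (up to Lorentzian tails):
assert abs(S0.sum() * (E[1] - E[0]) - B_nu.sum()) < 0.05 * B_nu.sum()
```

The small deficit in the integrated strength comes from the Lorentzian tails extending beyond the energy window; it grows with the chosen $\Delta$.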
The isoscalar toroidal and compression E1 transitions are very sensitive to the spurious center-of-mass motion, which can be subtracted by a correction $r^3\mapsto r^3-\frac{5}{3}r{\langle r^2\rangle}_0$ [@Kvasil2011].
\[E1vtccm\] $$\begin{aligned}
\hat{M}_{\mathrm{tor};1\mu}^{\mathrm{E},\Delta T=0} &= \frac{-1}{10\sqrt{2}\,c}
\int\mathrm{d}^3 r\,\hat{\vec{j}}_\mathrm{nuc}(\vec{r})\cdot\vec{\nabla}\times
\Big[\Big(r^3-\frac{5}{3}r{\langle r^2\rangle}_0\Big)
\vec{Y}_{1\mu}^1(\vartheta,\varphi)\Big] \nonumber\\
\label{M_torE1cm}
&= \frac{-\mathrm{i}}{2\sqrt{3}\,c} \int\mathrm{d}^3 r\,
\hat{\vec{j}}_\mathrm{nuc}(\vec{r})\cdot
\Big[\big(r^2-{\langle r^2\rangle}_0\big)\vec{Y}_{1\mu}^0
+\frac{\sqrt{2}}{5} r^2\vec{Y}_{1\mu}^2\Big] \\
\hat{M}_{\mathrm{com};1\mu}^{\mathrm{E},\Delta T=0} &= \frac{\mathrm{i}}{10c}
\int\mathrm{d}^3 r\,\hat{\vec{j}}_\mathrm{nuc}(\vec{r})\cdot\vec{\nabla}
\Big[\Big(r^3-\frac{5}{3}r{\langle r^2\rangle}_0\Big)
Y_{1\mu}(\vartheta,\varphi)\Big] \nonumber\\
\label{M_comE1cm}
&= \frac{\mathrm{i}}{2\sqrt{3}\,c} \int\mathrm{d}^3 r\,
\hat{\vec{j}}_\mathrm{nuc}(\vec{r})\cdot
\Big[\big(r^2-{\langle r^2\rangle}_0\big)\vec{Y}_{1\mu}^0
-\frac{2\sqrt{2}}{5} r^2\vec{Y}_{1\mu}^2\Big] \\
\label{M_com2E1cm}
\hat{M}_{\mathrm{com'};1\mu}^{\mathrm{E},\Delta T=0} &= \frac{e}{10} \sum_{i} \big[
\big(r^3-\tfrac{5}{3}r{\langle r^2\rangle}_0\big) Y_{1\mu}(\vartheta,\varphi)\big]_i\end{aligned}$$
The center-of-mass correction essentially integrates out and removes the contribution of the homogeneous motion of the whole nucleus, since $\vec{Y}_{1\mu}^0 = \vec{e}_\mu/\sqrt{4\pi}$. Below is a derivation, suitable also for non-isoscalar transitions (with the c.m. velocity $\vec{v}_\nu^\mathrm{\,c.m.}$ and the ground-state density $\rho_p(\vec{r})+\rho_n(\vec{r})$). For simplicity, I take $m_p=m_n$. $$\begin{aligned}
\vec{j}_{q;\nu}^\mathrm{\,c.m.}(\vec{r}) &=
\frac{\rho_q(\vec{r})m_q\vec{v}_\nu^\mathrm{\,c.m.}}{\hbar}
= \frac{\rho_q(\vec{r})}{A}\int\delta\vec{j}_\nu(\vec{r}_1)\,\mathrm{d}^3 r_1 \\
&= \frac{\rho_q(r)}{A}\vec{e}_{\mu}^{\,*}\sum_{\alpha\geq\beta}
\frac{(-1)^{l_\beta+1}}{\sqrt{3}}
\big( c_{\alpha\beta}^{(\nu-)} - c_{\alpha\beta}^{(\nu+)} \big)^{\!*}
\int j_{q;\alpha\beta}^{10}(r_1)\sqrt{4\pi}\,r_1^2\mathrm{d}r_1 \\
\delta\vec{j}_{q;\nu}^\textrm{\,corrected}(\vec{r}) &=
\delta\vec{j}_{q;\nu}(\vec{r}) - \vec{j}_{q;\nu}^\mathrm{\,c.m.}(\vec{r})\end{aligned}$$ After rearrangement of the integrals in the transition matrix element, the convective current (its lower component) and the density in the transition operator need to be substituted by
\[cmc-generic\] $$\begin{aligned}
z_q\vec{j}_i(\vec{r})\cdot r^2\vec{Y}_{1\mu}^0(\vartheta,\varphi)\quad &\mapsto\quad
\vec{j}_i(\vec{r})\cdot
\big(z_q r^2 - {\langle r^2 \rangle}_t\big)\vec{Y}_{1\mu}^0(\vartheta,\varphi) \\
z_q \big[r^3 Y_{1\mu}(\vartheta,\varphi)\big]_i\quad &\mapsto\quad
\big[\big(z_q r^3-\tfrac{5}{3}r{\langle r^2\rangle}_t\big)
Y_{1\mu}(\vartheta,\varphi)\big]_i\\
&\textrm{with }{\langle r^2\rangle}_t = \int
\frac{z_p\rho_p(r)+z_n\rho_n(r)}{A} 4\pi r^4\mathrm{d}r \nonumber\end{aligned}$$
It is not necessary to apply these corrections if the spurious mode is sufficiently well separated (e.g. by employing a large SHO basis), but then the spurious state has to be excluded from the calculation of the strength function.
The accuracy of the calculation for electric transitions can be checked by evaluation of the energy-weighted sum rule $m_1$ (EWSR), which relates certain commutators in the ground state to transition probabilities: $$m_1 = \frac{1}{2}\sum_\mu\langle\mathrm{HF}|[\hat{M}_{\lambda\mu}^\dagger,
[\hat{H},\hat{M}_{\lambda\mu}]]|\mathrm{HF}\rangle = \sum_\mu\sum_\nu E_\nu B(\mathrm{E}\lambda\mu,0\rightarrow\nu)$$ In spherical symmetry, the transition probability doesn’t depend on $\mu$ $$m_1\mathrm{(RPA)} = (2\lambda+1)\sum_\nu E_\nu
\big|\langle\nu|\hat{M}_{\lambda\mu}|\textrm{RPA}\rangle\big|^2$$ and the ground-state estimate is $$\label{EWSR-wf}
m_1 = (1 + \mathcal{K})\frac{\hbar^2}{2m}\sum_\mu\int
[\vec{\nabla},\hat{M}_{\lambda\mu}^\dagger]\cdot[\vec{\nabla},\hat{M}_{\lambda\mu}]
\rho(\vec{r}) \mathrm{d}^3 r$$ where $\mathcal{K} = 0$ for isoscalar transitions, and for isovector case it is necessary to include non-zero enhancement factor $\mathcal{K}$ acting as a reduced effective mass [@Lipparini1989]: $$\mathcal{K} =\frac{8mb_1}{\hbar^2}\frac{\int[\vec{\nabla},\hat{M}]^2
\rho_n(\vec{r})\rho_p(\vec{r})\mathrm{d}^3r}{\textstyle\int[\vec{\nabla},\hat{M}]^2
\rho(\vec{r})\mathrm{d}^3r}$$ Commutator $[\vec{\nabla},\hat{M}_{\lambda\mu}]$ leads to a simple function for long-wave and time-even compression transitions
$$\begin{aligned}
\vec{\nabla}r^\lambda Y_{\lambda\mu} &= \sqrt{\lambda(2\lambda+1)}\,
r^{\lambda-1}\vec{Y}_{\lambda\mu}^{\lambda-1} \\
\vec{\nabla}r^{\lambda+2} Y_{\lambda\mu} &= \frac{r^{\lambda+1}}{\sqrt{2\lambda+1}}
\big[(2\lambda+3)\sqrt{\lambda}\,\vec{Y}_{\lambda\mu}^{\lambda-1}
- 2\sqrt{\lambda+1}\,\vec{Y}_{\lambda\mu}^{\lambda+1}\big]\end{aligned}$$
Isoscalar E1 compressional transition ($z_p=z_n=1$) with center-of-mass correction (\[M\_com2E1cm\]) gives $$m_1\big[\hat{M}=\tfrac{1}{2}\big(r^3-\tfrac{5}{3}r{\langle r^2\rangle}_0\big) Y_{1\mu}\big]
= \frac{\hbar^2}{2m} \frac{3A}{16\pi} \bigg( 11\langle r^4 \rangle
- \frac{25}{3} \langle r^2 \rangle^2 \bigg)$$
Skyrme RPA in the axially deformed case {#sec_skyr_ax}
---------------------------------------
The full RPA was derived also for the axially deformed case, and the corresponding formalism is given below. Some of the concepts are similar to the spherical case (such as pairing factors and transition operators), so the reader is referred to the previous sections.
Cylindrical coordinates are $$\varrho=\sqrt{x^2+y^2},\ z,\ \varphi=\mathrm{arctg}\,\frac{y}{x};\qquad x=\varrho\cos\varphi,\ y=\varrho\sin\varphi$$ Calculations in axially deformed nuclei don’t conserve the total angular momentum; nevertheless, they conserve its $z$-projection and parity, so it is convenient to preserve part of the formalism from the spherical symmetry, namely the convention of $m$-components in vector and tensor operators, and the rule (\[hermit\]) for their hermitian conjugation: $$\label{hermit-copy}
\hat{A}_m^\dagger = (-1)^m\hat{A}_{-m}$$ The operators of differentiation and the spin matrices are then $$\begin{array}{rlrl}
\nabla_{+1}\!\!\!\! &=
\bigg({-}\dfrac{\partial}{\partial x}-\mathrm{i}\dfrac{\partial}{\partial y}\bigg)
= {-}\dfrac{\mathrm{e}^{\mathrm{i}\varphi}}{\sqrt{2}}
\bigg( \dfrac{\partial}{\partial\varrho} + \dfrac{\mathrm{i}}{\varrho}\dfrac{\partial}{\partial\varphi} \bigg)\quad\
& \sigma_{+1}\!\!\!\! &= \begin{pmatrix} 0 & \!{-}\sqrt{2}\, \\ 0 & 0 \end{pmatrix} \\[9pt]
\nabla_0\!\! &= \dfrac{\partial}{\partial z}\qquad
& \sigma_0\!\! &= \begin{pmatrix} 1 & 0 \\ 0 & {-1} \end{pmatrix} \\[9pt]
\nabla_{-1}\!\!\!\! &= \bigg(\dfrac{\partial}{\partial x}-\mathrm{i}\dfrac{\partial}{\partial y}\bigg)
= \dfrac{\mathrm{e}^{-\mathrm{i}\varphi}}{\sqrt{2}}
\bigg( \dfrac{\partial}{\partial\varrho}
- \dfrac{\mathrm{i}}{\varrho}\dfrac{\partial}{\partial\varphi} \bigg)\qquad
& \sigma_{-1}\!\!\!\! &= \begin{pmatrix} 0 & 0 \\ \sqrt{2} & 0 \end{pmatrix}
\end{array}$$ Single-particle wavefunction (and its time-reversal conjugate) is expressed as a spinor (with $m_\alpha^\pm = m_\alpha \pm \frac{1}{2}$) $$\psi_\alpha(\vec{r}) =
\binom{R_{\alpha\uparrow}(\varrho,z)\,\mathrm{e}^{\mathrm{i}m_\alpha^-\varphi}}
{R_{\alpha\downarrow}(\varrho,z)\,\mathrm{e}^{\mathrm{i}m_\alpha^+\varphi}}, \quad
\psi_{\bar{\alpha}}(\vec{r}) = \binom{R_{\alpha\downarrow}(\varrho,z)\,\mathrm{e}^{-\mathrm{i}m_\alpha^+\varphi}}
{-R_{\alpha\uparrow}(\varrho,z)\,\mathrm{e}^{-\mathrm{i}m_\alpha^-\varphi}},$$ and the radial parts of its derivatives will be denoted by a shorthand notation similar to (\[Rpm\]) $$\begin{aligned}
\nabla_{+1}\psi_\alpha &= {-}\frac{\mathrm{e}^{\mathrm{i}\varphi}}{\sqrt{2}}
\binom{(\partial_\varrho R_{\alpha\uparrow} - m_\alpha^- R_{\alpha\uparrow}/\varrho)\,\mathrm{e}^{\mathrm{i}m_\alpha^-\varphi}}
{(\partial_\varrho R_{\alpha\downarrow} - m_\alpha^+ R_{\alpha\downarrow}/\varrho)\,\mathrm{e}^{\mathrm{i}m_\alpha^+\varphi}}
\equiv \mathrm{e}^{\mathrm{i}\varphi}
\begin{pmatrix} R_{\alpha\uparrow}^{(+)}\mathrm{e}^{\mathrm{i}m_\alpha^-\varphi} \\
R_{\alpha\downarrow}^{(+)}\mathrm{e}^{\mathrm{i}m_\alpha^+\varphi}
\end{pmatrix} \nonumber\\
\nabla_0\psi_\alpha &=
\binom{\partial_z R_{\alpha\uparrow}\mathrm{e}^{\mathrm{i}m_\alpha^-\varphi}}
{\partial_z R_{\alpha\downarrow}\mathrm{e}^{\mathrm{i}m_\alpha^+\varphi}}
\equiv \begin{pmatrix} R_{\alpha\uparrow}^{(0)}\mathrm{e}^{\mathrm{i}m_\alpha^-\varphi} \\
R_{\alpha\downarrow}^{(0)}\mathrm{e}^{\mathrm{i}m_\alpha^+\varphi} \end{pmatrix} \\
\nabla_{-1}\psi_\alpha &= \frac{\mathrm{e}^{-\mathrm{i}\varphi}}{\sqrt{2}}
\binom{(\partial_\varrho R_{\alpha\uparrow} + m_\alpha^- R_{\alpha\uparrow}/\varrho)\,\mathrm{e}^{\mathrm{i}m_\alpha^-\varphi}}
{(\partial_\varrho R_{\alpha\downarrow} + m_\alpha^+ R_{\alpha\downarrow}/\varrho)\,\mathrm{e}^{\mathrm{i}m_\alpha^+\varphi}}
\equiv \mathrm{e}^{-\mathrm{i}\varphi}
\begin{pmatrix} R_{\alpha\uparrow}^{(-)}\mathrm{e}^{\mathrm{i}m_\alpha^-\varphi} \\
R_{\alpha\downarrow}^{(-)}\mathrm{e}^{\mathrm{i}m_\alpha^+\varphi} \end{pmatrix} \nonumber\end{aligned}$$ Let’s emphasize that the index $(\pm)$ in the axial case stands for a shift in $m$, whereas in the spherical case, there was a shift in $l$.
The radial functions $R_{\alpha\uparrow\!\downarrow}(\varrho,z),\,R_{\alpha\uparrow\!\downarrow}^{(\pm)}$ are real, and their spinor-wise products will be denoted by a dot to keep the expressions simple: $$R_\alpha \cdot R_\beta \ \equiv\ R_{\alpha\uparrow}(\varrho,z) R_{\beta\uparrow}(\varrho,z)
+ R_{\alpha\downarrow}(\varrho,z) R_{\beta\downarrow}(\varrho,z)$$ Vector currents will be decomposed in the style of rank-1 tensor operators. The vector product in the expression for the spin-orbital current leads to (for the vector product in the $m$-scheme, see [@Varshalovich1988 (1.2.28)]) $$(\vec{\nabla}\times\vec{\sigma})\psi_\alpha = \left\{\begin{array}{rl}
{+}1: & \mathrm{i}\,\mathrm{e}^{\mathrm{i}\varphi}
\begin{pmatrix} \big({-}R_{\alpha\uparrow}^{(+)} -\sqrt{2}\,R_{\alpha\downarrow}^{(0)}\big)
\mathrm{e}^{\mathrm{i}m_\alpha^-\varphi} \\
R_{\alpha\downarrow}^{(+)}\mathrm{e}^{\mathrm{i}m_\alpha^+\varphi} \end{pmatrix} \\
0: & \mathrm{i}\,\begin{pmatrix}
{-}\sqrt{2}\,R_{\alpha\downarrow}^{(-)}\mathrm{e}^{\mathrm{i}m_\alpha^-\varphi} \\
{-}\sqrt{2}\,R_{\alpha\uparrow}^{(+)}\mathrm{e}^{\mathrm{i}m_\alpha^+\varphi}
\end{pmatrix} \\
{-}1: & \mathrm{i}\,\mathrm{e}^{-\mathrm{i}\varphi}
\begin{pmatrix} R_{\alpha\uparrow}^{(-)}\mathrm{e}^{\mathrm{i}m_\alpha^-\varphi} \\
\big({-}R_{\alpha\downarrow}^{(-)} -\sqrt{2}\,R_{\alpha\uparrow}^{(0)}\big)
\mathrm{e}^{\mathrm{i}m_\alpha^+\varphi} \end{pmatrix} \end{array} \right.$$
The matrix elements of densities and currents are then
\[axial\_me\] $$\begin{aligned}
% density
\langle\alpha|\hat{\rho}|\beta\rangle &= R_\alpha \cdot R_\beta
\,\mathrm{e}^{\mathrm{i}(m_\beta-m_\alpha)\varphi} \\
% kinetic energy
\langle\alpha|\hat{\tau}|\beta\rangle &= \big( R_\alpha^{(0)} \cdot R_\beta^{(0)}
+ R_\alpha^{(+)} \cdot R_\beta^{(+)} + R_\alpha^{(-)} \cdot R_\beta^{(-)} \big)
\,\mathrm{e}^{\mathrm{i}(m_\beta-m_\alpha)\varphi}\end{aligned}$$ Factor $\mathrm{e}^{\mathrm{i}(m_\beta-m_\alpha)\varphi}$ will be omitted in the following expressions.
$$\begin{aligned}
% vector spin-orbital
\langle\alpha|\vec{\mathcal{J}}|\beta\rangle &= \left\{\!\! \begin{array}{rl}
{+}1:\!& \tfrac{1}{2}\mathrm{e}^{\mathrm{i}\varphi}\big[
\big(R_{\alpha\downarrow}^{(-)} + \sqrt{2}\,R_{\alpha\uparrow}^{(0)}\big)R_{\beta\downarrow}
- R_{\alpha\uparrow}^{(-)} R_{\beta\uparrow} \\
&\qquad\qquad{}-R_{\alpha\uparrow}
\big(R_{\beta\uparrow}^{(+)}+\sqrt{2}\,R_{\beta\downarrow}^{(0)}\big)
+ R_{\alpha\downarrow} R_{\beta\downarrow}^{(+)} \big] \\
0:\!& \tfrac{1}{2}\big[{-}\sqrt{2}\,\big( R_{\alpha\downarrow}^{(-)} R_{\beta\uparrow} + R_{\alpha\uparrow}^{(+)}R_{\beta\downarrow}
+ R_{\alpha\uparrow}R_{\beta\downarrow}^{(-)} + R_{\alpha\downarrow}R_{\beta\uparrow}^{(+)} \big) \big] \\
{-}1:\!& \tfrac{1}{2}\mathrm{e}^{-\mathrm{i}\varphi}\big[
\big( R_{\alpha\uparrow}^{(+)} + \sqrt{2}\,R_{\alpha\downarrow}^{(0)} \big) R_{\beta\uparrow}
- R_{\alpha\downarrow}^{(+)} R_{\beta\downarrow} \\
&\qquad\qquad{}-R_{\alpha\downarrow}
\big( R_{\beta\downarrow}^{(-)}+\sqrt{2}\,R_{\beta\uparrow}^{(0)} \big)
+ R_{\alpha\uparrow} R_{\beta\uparrow}^{(-)} \big]
\end{array} \right. \\
% current
\langle\alpha|\vec{j}|\beta\rangle &= \left\{\!\! \begin{array}{rl}
{+}1:\!& \tfrac{\mathrm{i}}{2}\mathrm{e}^{\mathrm{i}\varphi}
\big( {-}R_\alpha^{(-)}\cdot R_\beta - R_\alpha\cdot R_\beta^{(+)} \big) \\
0:\!& \tfrac{\mathrm{i}}{2}
\big( R_\alpha^{(0)}\cdot R_\beta - R_\alpha\cdot R_\beta^{(0)} \big) \\
{-}1:\!& \tfrac{\mathrm{i}}{2}\mathrm{e}^{-\mathrm{i}\varphi}
\big( {-}R_\alpha^{(+)}\cdot R_\beta - R_\alpha\cdot R_\beta^{(-)} \big) \end{array} \right. \\
% spin
\quad \langle\alpha|\vec{s}|\beta\rangle &= \Bigg\{\!\! \begin{array}{rl}
{+}1:\!& \mathrm{e}^{\mathrm{i}\varphi}\big({-}\sqrt{2}\,R_{\alpha\uparrow} R_{\beta\downarrow}\big) \\
0:\!& R_{\alpha\uparrow} R_{\beta\uparrow} - R_{\alpha\downarrow} R_{\beta\downarrow} \\
{-}1:\!& \mathrm{e}^{-\mathrm{i}\varphi}\big(\sqrt{2}\,R_{\alpha\downarrow} R_{\beta\uparrow}\big)
\end{array}\!\! \\
% kinetic energy-spin
\langle\alpha|\vec{T}|\beta\rangle &= \left\{\!\! \begin{array}{rl}
{+}1:\!& \mathrm{e}^{\mathrm{i}\varphi}(-\sqrt{2}\,)
\big[ R_{\alpha\uparrow}^{(0)} R_{\beta\downarrow}^{(0)}
+ R_{\alpha\uparrow}^{(+)} R_{\beta\downarrow}^{(+)}
+ R_{\alpha\uparrow}^{(-)} R_{\beta\downarrow}^{(-)} \big] \\
0:\!& R_{\alpha\uparrow}^{(0)} R_{\beta\uparrow}^{(0)} - R_{\alpha\downarrow}^{(0)} R_{\beta\downarrow}^{(0)}
+ R_{\alpha\uparrow}^{(+)} R_{\beta\uparrow}^{(+)} \\
&\qquad\qquad{}- R_{\alpha\downarrow}^{(+)} R_{\beta\downarrow}^{(+)}
+ R_{\alpha\uparrow}^{(-)} R_{\beta\uparrow}^{(-)} - R_{\alpha\downarrow}^{(-)} R_{\beta\downarrow}^{(-)} \\
{-}1:\!& \mathrm{e}^{-\mathrm{i}\varphi}\sqrt{2}\,
\big[ R_{\alpha\downarrow}^{(0)} R_{\beta\uparrow}^{(0)}
+ R_{\alpha\downarrow}^{(+)} R_{\beta\uparrow}^{(+)}
+ R_{\alpha\downarrow}^{(-)} R_{\beta\uparrow}^{(-)} \big] \end{array} \right.\\
% curl of current
\langle\alpha|\vec{\nabla}\times\vec{j}|\beta\rangle
&= -\mathrm{i}\big(\vec{\nabla}\psi_\alpha\big)^\dagger\!\times\!\vec{\nabla}\psi_\beta
= \left\{\!\! \begin{array}{rl}
{+}1:\!& \mathrm{e}^{\mathrm{i}\varphi}\big( R_\alpha^{(-)} \cdot R_\beta^{(0)}
+ R_\alpha^{(0)} \cdot R_\beta^{(+)} \big) \\
0:\!& R_\alpha^{(-)} \cdot R_\beta^{(-)} - R_\alpha^{(+)} \cdot R_\beta^{(+)} \\
{-}1:\!& \mathrm{e}^{-\mathrm{i}\varphi}\big( {-}R_\alpha^{(+)} \cdot R_\beta^{(0)}
- R_\alpha^{(0)} \cdot R_\beta^{(-)} \big) \end{array}\right. \\
% divergence of vector spin-orbital
\langle\alpha|\vec{\nabla}\cdot\vec{\mathcal{J}}|\beta\rangle
&= {-}R_{\alpha\uparrow}^{(+)} R_{\beta\uparrow}^{(+)} + R_{\alpha\downarrow}^{(+)} R_{\beta\downarrow}^{(+)}
+ R_{\alpha\uparrow}^{(-)} R_{\beta\uparrow}^{(-)} - R_{\alpha\downarrow}^{(-)} R_{\beta\downarrow}^{(-)} \nonumber\\
& \quad{}-\sqrt{2}\,\big( R_{\alpha\uparrow}^{(0)} R_{\beta\downarrow}^{(-)} + R_{\alpha\downarrow}^{(-)} R_{\beta\uparrow}^{(0)}
+ R_{\alpha\downarrow}^{(0)} R_{\beta\uparrow}^{(+)} + R_{\alpha\uparrow}^{(+)} R_{\beta\downarrow}^{(0)} \big) \\
% scalar spin-orbital
\langle\alpha|\mathcal{J}_s|\beta\rangle &= \tfrac{\mathrm{i}}{2}\big[
\big( R_{\alpha\uparrow}^{(0)} + \sqrt{2}\,R_{\alpha\downarrow}^{(-)} \big) R_{\beta\uparrow}
- \big( R_{\alpha\downarrow}^{(0)} + \sqrt{2}\,R_{\alpha\uparrow}^{(+)} \big) R_{\beta\downarrow} \nonumber\\
&\qquad {}- R_{\alpha\uparrow} \big( R_{\beta\uparrow}^{(0)} + \sqrt{2}\,R_{\beta\downarrow}^{(-)} \big)
+ R_{\alpha\downarrow} \big( R_{\beta\downarrow}^{(0)} + \sqrt{2}\,R_{\beta\uparrow}^{(+)} \big)\big] \\[5pt]
% tensor spin-orbital
\langle\alpha|\mathcal{J}_t|\beta\rangle &= \left\{\!\! \begin{array}{rl}
{+}2:\!& \tfrac{\mathrm{i}}{\sqrt{2}} \mathrm{e}^{2\mathrm{i}\varphi}
\big( R_{\alpha\uparrow}^{(-)} R_{\beta\downarrow}
+ R_{\alpha\uparrow} R_{\beta\downarrow}^{(+)} \big) \\
{+}1:\!& \tfrac{\mathrm{i}}{2\sqrt{2}} \mathrm{e}^{\mathrm{i}\varphi}
\big[ {-}R_{\alpha\uparrow}^{(-)} R_{\beta\uparrow}
+ \big(R_{\alpha\downarrow}^{(-)}
-\sqrt{2}\,R_{\alpha\uparrow}^{(0)} \big)R_{\beta\downarrow} \\
&\qquad{}- R_{\alpha\uparrow}\big(R_{\beta\uparrow}^{(+)}
-\sqrt{2}\,R_{\beta\downarrow}^{(0)}\big)
+ R_{\alpha\downarrow}R_{\beta\downarrow}^{(+)} \big] \\
0:\!& \tfrac{\mathrm{i}}{2\sqrt{3}}\big[
\big(\sqrt{2}\,R_{\alpha\uparrow}^{(0)}-R_{\alpha\downarrow}^{(-)}\big)R_{\beta\uparrow}
-\big(\sqrt{2}\,R_{\alpha\downarrow}^{(0)}-R_{\alpha\uparrow}^{(+)}\big)R_{\beta\downarrow} \\
& \qquad{}-R_{\alpha\uparrow}\big(\sqrt{2}\,R_{\beta\uparrow}^{(0)}-R_{\beta\downarrow}^{(-)}\big)
+R_{\alpha\downarrow}\big(\sqrt{2}\,R_{\beta\downarrow}^{(0)}-R_{\beta\uparrow}^{(+)}\big)
\big] \\
{-}1:\!& \tfrac{\mathrm{i}}{2\sqrt{2}} \mathrm{e}^{-\mathrm{i}\varphi}
\big[ {-}\big(R_{\alpha\uparrow}^{(+)}
-\sqrt{2}\,R_{\alpha\downarrow}^{(0)} \big)R_{\beta\uparrow}
+ R_{\alpha\downarrow}^{(+)} R_{\beta\downarrow} \\
&\qquad{}- R_{\alpha\uparrow}R_{\beta\uparrow}^{(-)}
+ R_{\alpha\downarrow}\big(R_{\beta\downarrow}^{(-)}
-\sqrt{2}\,R_{\beta\uparrow}^{(0)}\big) \big] \\
{-}2:\!& \tfrac{\mathrm{i}}{\sqrt{2}} \mathrm{e}^{-2\mathrm{i}\varphi}
\big( {-}R_{\alpha\downarrow}^{(+)} R_{\beta\uparrow}
- R_{\alpha\downarrow} R_{\beta\uparrow}^{(-)} \big)
\end{array} \right. \\
% orbital moment
\hat{\vec{L}}\psi_\beta &= -\mathrm{i}(\vec{r}\times\vec{\nabla})\psi_\beta
= \left\{\!\! \begin{array}{rl}
{+}1:\!& \mathrm{e}^{\mathrm{i}\varphi}\big( \tfrac{\varrho}{\sqrt{2}}R_\beta^{(0)}
+ z R_\beta^{(+)} \big) \\
0:\!& \tfrac{\varrho}{\sqrt{2}}\big(R_\beta^{(+)}+R_\beta^{(-)}\big) \\
{-}1:\!& \mathrm{e}^{-\mathrm{i}\varphi}\big( \tfrac{\varrho}{\sqrt{2}}R_\beta^{(0)}
- z R_\beta^{(-)} \big) \end{array} \right.\end{aligned}$$
\
In the actual calculation, it is necessary to choose the projection of angular momentum $\mu$ and the parity $\pi$ (together denoted also as $K^\pi$, where $K=\mu$). The selection of two-quasiparticle pairs is then restricted by $m_\alpha-m_\beta=\mu$. Transition operators have the form $$\hat{M}_{\lambda\mu} = \sum_i M_{\lambda\mu}(\varrho_i,z_i)\,
\mathrm{e}^{\mathrm{i}\mu\varphi_i}$$ where $M_{\lambda\mu}(\varrho,z)$ contains a function (possibly including derivatives) that does not depend on $\varphi$.
Single-particle operators (including densities and currents) can be expressed in terms of quasiparticles $$\begin{aligned}
\hat{A} &= \frac{1}{2}\sum_{\alpha\beta}u_{\alpha\beta}^{(\gamma_T^A)}
\langle\alpha|\hat{A}|\beta\rangle
\big(\hat{\alpha}_\alpha^+\hat{\alpha}_{\bar{\beta}}^+ + \gamma_T^A
\hat{\alpha}_{\bar{\alpha}}^{\phantom{*}}\hat{\alpha}_{\beta}^{\phantom{|}}\big) \\
\label{axial_dens}
\hat{\mathbf{J}}_d(\vec{r}) &= \frac{1}{2}\sum_\mu\sum_{\alpha\beta\in\mu}
\mathbf{J}_{d;\alpha\beta}(\varrho,z)
\big(\hat{\alpha}_\alpha^+\hat{\alpha}_{\bar{\beta}}^+ + \gamma_T^d
\hat{\alpha}_{\bar{\alpha}}^{\phantom{*}}\hat{\alpha}_{\beta}^{\phantom{|}}\big)
\mathrm{e}^{-\mathrm{i}\mu\varphi} \\[-6pt]
&\qquad\qquad\qquad\qquad
\textrm{with the selection rule }\ m_\alpha-m_\beta=\mu \nonumber\end{aligned}$$ Expression (\[axial\_dens\]) defines the shorthand notation $\mathbf{J}_{d;\alpha\beta}(\varrho,z)$ for the matrix elements of densities and currents, which can have scalar, vector or tensor character (hence the bold font). $\mathbf{J}_{d;\alpha\beta}(\varrho,z)$ is derived from (\[axial\_me\]) by adding a pairing factor and omitting $\mathrm{e}^{-\mathrm{i}\mu\varphi}$.
Commutators are evaluated in the quasiparticle vacuum as $$\begin{aligned}
\langle[\hat{A}^\dagger,\hat{B}]\rangle &= \frac{\gamma_T^A-\gamma_T^B}{2}
\sum_{\alpha\beta}u_{\beta\alpha}^{(\gamma_T^A)} u_{\alpha\beta}^{(\gamma_T^B)}
\langle\beta|\hat{A}^\dagger|\alpha\rangle\langle\alpha|\hat{B}|\beta\rangle \nonumber\\
&= \frac{1-\gamma_T^A\gamma_T^B}{2}\sum_{\alpha\beta}
u_{\alpha\beta}^{(\gamma_T^A)} u_{\alpha\beta}^{(\gamma_T^B)}
\langle\alpha|\hat{A}|\beta\rangle^*\,\langle\alpha|\hat{B}|\beta\rangle\end{aligned}$$
RPA phonons can be defined as $$\label{RPA_ax}
\hat{C}_\nu^+ = \frac{1}{2}\sum_{\alpha\beta}\big(
c_{\alpha\beta}^{(\nu-)} \hat{\alpha}_\alpha^+ \hat{\alpha}_{\bar{\beta}}^+
- c_{\alpha\beta}^{(\nu+)} \hat{\alpha}_{\bar{\alpha}} \hat{\alpha}_\beta \big)$$ (the factor $1/2$ is due to the double counting of $\alpha\beta$ vs. $\bar{\beta}\bar{\alpha}$) and their commutator with a hermitian density/current operator is
\[comm2\_ax\] $$\begin{aligned}
\langle[\hat{\mathbf{J}}_d(\vec{r}),\hat{C}_\nu^+]\rangle &= \frac{1}{2}\sum_{\alpha\beta}
u_{\alpha\beta}^{(\gamma_T^d)} \langle\alpha|\hat{\mathbf{J}}_d(\vec{r})|\beta\rangle^*\,
\big( c_{\alpha\beta}^{(\nu-)} + \gamma_T^d c_{\alpha\beta}^{(\nu+)} \big) \\
&= \frac{1}{2}\sum_{\alpha\beta}\mathbf{J}_{d;\alpha\beta}^\dagger(\varrho,z)
\big( c_{\alpha\beta}^{(\nu-)} + \gamma_T^d c_{\alpha\beta}^{(\nu+)} \big)\mathrm{e}^{\mathrm{i}\mu\varphi}\end{aligned}$$
where the hermitian conjugation is understood in the sense of (\[hermit-copy\]) for vector/tensor components (see the decomposition of matrix elements (\[axial\_me\])), and the factor $\mathrm{e}^{\mathrm{i}\mu\varphi}$ will be cancelled by $\mathrm{e}^{-\mathrm{i}\mu\varphi}$ from another $\hat{\mathbf{J}}_{d'}(\vec{r})$ in the Skyrme interaction or in the Coulomb integral (\[coul\_ax\]). The RPA equations $[\hat{H},\hat{C}_\nu^+] = E_\nu\hat{C}_\nu^+$ are then $$\label{fullRPA_eq-copy}
\begin{pmatrix} A & B \\ B & A \end{pmatrix} \binom{c^{(\nu-)}}{c^{(\nu+)}} =
\begin{pmatrix} E_\nu & 0 \\ 0 & -E_\nu \end{pmatrix}
\binom{c^{(\nu-)}}{c^{(\nu+)}}$$ where the matrices $A$ and $B$ are
$$\begin{aligned}
\label{axial_fullRPA_A}
A_{pp'} & = \delta_{pp'}\varepsilon_p + \sum_{dd'}
\iint\mathrm{d}\vec{r}_1\mathrm{d}\vec{r}_2
\frac{\delta^2\mathcal{H}}{\delta J_d(\vec{r}_1)\delta J_{d'}(\vec{r}_2)}
\mathbf{J}_{d;p}^\dagger(\varrho_1,z_1)\cdot\mathbf{J}_{d';p'}(\varrho_2,z_2) \\
\label{axial_fullRPA_B}
B_{pp'} & = \sum_{dd'} \gamma_T^d
\iint\mathrm{d}\vec{r}_1\mathrm{d}\vec{r}_2
\frac{\delta^2\mathcal{H}}{\delta J_d(\vec{r}_1)\delta J_{d'}(\vec{r}_2)}
\mathbf{J}_{d;p}^\dagger(\varrho_1,z_1)\cdot\mathbf{J}_{d';p'}(\varrho_2,z_2)\end{aligned}$$
The index $p$ labels the $2qp$ pair (e.g. $\alpha\beta$) satisfying $m_\alpha-m_\beta=\mu$, and the scalar product is understood in the spherical-tensor sense $$\mathbf{A}^\dagger\cdot\mathbf{B} = \sum_s (-1)^s \big[\mathbf{A}^\dagger\big]_{-s}\big[\mathbf{B}\big]_s = \sum_s \big[\mathbf{A}\big]_s^*\,\big[\mathbf{B}\big]_s$$
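The structure of the RPA eigenproblem (\[fullRPA\_eq-copy\]) can be illustrated on the simplest toy case of a single $2qp$ pair, where $A$ and $B$ are numbers and the physical eigenvalue is $E=\sqrt{A^2-B^2}$. This is a generic textbook example with illustrative values, not a calculation from this work:

```python
import math

# one-pair RPA: (A B; B A)(c-; c+) = (E c-; -E c+)
A, B = 2.0, 0.5                     # illustrative values, A > |B| (stable case)
E = math.sqrt(A * A - B * B)        # positive-energy solution

# the first equation, A c- + B c+ = E c-, fixes the amplitude ratio:
c_minus = 1.0
c_plus = (E - A) * c_minus / B

# the second equation, B c- + A c+ = -E c+, is then satisfied automatically
residual = B * c_minus + A * c_plus + E * c_plus
print(E, residual)   # residual vanishes up to rounding
```

For $|B|>A$ the square root becomes imaginary, signalling an RPA instability of the reference state.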
Removal of the duplicate $2qp$ pairs (such as (\[order2qp\]) in the spherical case) is done by omitting states $\alpha$ with $m_\alpha < 0$, but including pairs $\alpha\bar{\beta}$ with $m_\alpha+m_\beta=\mu$ and $\alpha>\beta$ in some ordering (and omitting the Pauli-violating case $\alpha\bar{\alpha}$). The equivalent duplicates are then $\alpha\beta\leftrightarrow\bar{\beta}\bar{\alpha}$ and $\alpha\bar{\beta}\leftrightarrow\beta\bar{\alpha}$. The $2qp$ space for $\mu = 0$ splits into independent electric and magnetic subspaces with symmetric/antisymmetric combinations of pairs $\alpha\beta\leftrightarrow\beta\alpha$, respectively: $(\alpha\beta\pm\beta\alpha)/\sqrt{2}$; the diagonal pairs ($\alpha=\beta$) are then present only in electric transitions with even multipolarity \[revision 22.02.2018; papers from 2017 are already correct\].
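The selection rule and the duplicate-removal rules above can be sketched as a small enumeration routine. The state list and its encoding (levels stored only for $m>0$, half-integer $m$ encoded as $2m$, the time-reversed partner carrying $-m$) are a hypothetical illustration, not the actual code of this work:

```python
from itertools import product

# quasiparticle levels with m > 0; format (label, 2*m)
levels = [("a", 1), ("b", 1), ("c", 3), ("d", 3), ("e", 5)]

def two_qp_pairs(levels, two_mu):
    """Unique 2qp pairs for projection mu > 0 (encoded as 2*mu):
    'normal' pairs (alpha, beta) with m_alpha - m_beta = mu, and
    'conjugate' pairs (alpha, bar-beta) with m_alpha + m_beta = mu,
    restricted to alpha > beta (the Pauli-violating alpha = beta is skipped).
    Duplicates alpha beta <-> bar-beta bar-alpha are excluded automatically,
    because only states with m > 0 are stored."""
    normal = [(a, b) for a, b in product(levels, repeat=2)
              if a[1] - b[1] == two_mu]
    conjugate = [(a, (b[0], -b[1])) for i, a in enumerate(levels)
                 for b in levels[:i]          # list order implements alpha > beta
                 if a[1] + b[1] == two_mu]
    return normal, conjugate

normal, conjugate = two_qp_pairs(levels, 2)   # mu = 1
print(len(normal), len(conjugate))
```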
Coulomb integral {#sec_coul}
----------------
In the following text, I analyze the correct way to integrate the direct two-body Coulomb interaction. The discussion also deals with the accuracy of numerical integration in general, which is an important aspect of nuclear calculations due to the rapid increase of computational cost in reduced symmetry (axial and triaxial nuclei). No further physical questions are treated in this section.
The calculation of the Coulomb potential involves an integrable singularity ($1/r$) in the evaluation of discretized integrals in axial and cartesian coordinates. Even the spherical case contains a kink at $r_1 = r_2$, which prevents the accurate application of Gaussian quadrature. One possible solution employs the Talmi-Moshinsky transformation to center-of-mass coordinates [@Hassan1980], which shifts the singularity to $r=0$, where it can be integrated easily (it is cancelled by $r^2$ in the spherical Jacobian). However, this method is not suitable for DFT, since the calculation of the coefficients becomes infeasible for higher shells ($N\geq12$).
It turns out that Gaussian quadrature is not necessary, and very precise results can be obtained also with an equidistant lattice, as follows from the Euler-Maclaurin summation formula for a smooth function $f(x)$ [@Edwards1974] $$\label{EMformula}
\sum_{n=M}^{N} f(n) \sim \int_M^N f(x)\mathrm{d}x + \frac{f(M)+f(N)}{2}
+ \sum_{j=1}^{\nu} \frac{B_{2j}}{(2j)!}\big[f^{(2j-1)}(N)-f^{(2j-1)}(M)\big]$$ where $B_{2j}$ are Bernoulli numbers $$B_2 = \frac{1}{6},\ \ B_4 = -\frac{1}{30},\ \ B_6 = \frac{1}{42},\ \
B_8 = -\frac{1}{30},\ldots\quad
\frac{x}{1-\mathrm{e}^{-x}} = \sum_{n=0}^{\infty} \frac{B_n}{n!} x^n$$ The Euler-Maclaurin formula (further abbreviated as E-M) is an asymptotic series, which need not converge, and its error is comparable to the last included term (which is usually small, since the growth begins only in high-order terms, which are difficult to calculate anyway). When the integration grid is sufficiently large, the harmonic oscillator wavefunction (including its derivatives) is negligible on the boundaries, so the error of integration by simple summation rapidly vanishes, provided the oscillation wavelength $\lambda$ is sufficiently larger than the grid spacing $\Delta$. The Nyquist limit is $\lambda = 2\Delta$, while double precision accuracy can be reached already with $\lambda \geq 4\Delta$ for the harmonic oscillator basis. However, due to uncertainties arising from numerical differentiation and its use in the E-M corrections in the Coulomb integral (\[coul\_EM\_HF\]), it is advisable to shift the limit to $\lambda \geq 6\Delta$ in HF and $\lambda \geq 8\Delta$ in RPA. Together with the appropriate integration boundary, it gives $$\label{int_params}
\Delta \leq \frac{\pi b}{3\sqrt{2N}}\ \textrm{ for HF},\quad \Delta \leq \frac{\pi b}{4\sqrt{2N}}\ \textrm{ for RPA}, \quad r_\mathrm{max} \geq 1.3 b\sqrt{2N}$$ where $b=\sqrt{\tfrac{\hbar}{m\omega}}$ is the oscillator length and $N=2\nu_\mathrm{max}+l$ is the number of major shells. These choices correspond to $2.5N$ integration points in HF for spherical symmetry, or $3.3N$ in RPA.
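The wavelength criterion can be checked on a toy integrand, a Gaussian times a cosine of wavelength $\lambda=2\pi/k$, summed on an equidistant grid. This is a generic illustration; the grid values are arbitrary choices, not parameters from the text:

```python
import math

def grid_sum(k, delta, half_range=12.0):
    """Plain equidistant summation of exp(-x^2)*cos(k*x) over the real line."""
    n = int(half_range / delta)
    return delta * sum(math.exp(-(m * delta)**2) * math.cos(k * m * delta)
                       for m in range(-n, n + 1))

def exact(k):
    # analytic integral of exp(-x^2)*cos(k*x) over the real line
    return math.sqrt(math.pi) * math.exp(-k * k / 4.0)

delta = 0.5
k2 = 2 * math.pi / (2 * delta)       # lambda = 2*delta: at the Nyquist limit
k4 = 2 * math.pi / (4 * delta)       # lambda = 4*delta: safely resolved
err2 = abs(grid_sum(k2, delta) - exact(k2))
err4 = abs(grid_sum(k4, delta) - exact(k4))
print(err2, err4)   # the sum is accurate only once lambda comfortably exceeds 2*delta
```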
In fact, methods like Simpson's rule and Romberg integration take advantage of the cancellation of the boundary terms in (\[EMformula\]) by admixing sums with larger spacing ($2\Delta,\,4\Delta,$ etc.). Such an approach is not suitable here, due to the oscillatory character of the wavefunctions, which makes the wider-spaced sums inaccurate. It is much better to include the E-M corrections directly, if needed.
### Spherical symmetry {#sec_sph_coul}
The Coulomb interaction is usually taken into account by assuming a point charge for the proton. The numerical value of the interaction constant in nuclear units is $$\frac{e^2}{4\pi\epsilon_0} = \alpha\hbar c = \frac{197.32697\ \mathrm{MeV.fm}}{137.035999}
= 1.4399645\ \mathrm{MeV.fm}.$$ The spatial part of the interaction can be decomposed in spherical coordinates as [@Varshalovich1988 (5.17.21)] $$\frac{1}{|\vec{r}_1-\vec{r}_2|} = \sum_{lm}\frac{4\pi}{2l+1}Y_{lm}(\vartheta_1,\varphi_1)
Y_{lm}^*(\vartheta_2,\varphi_2)\cdot\bigg\{\begin{array}{ll}
r_1^l/r_2^{l+1} \ \ & \textrm{for } r_1\leq r_2 \\
r_2^l/r_1^{l+1} \ \ & \textrm{for } r_1\geq r_2 \end{array}$$ The value of the integrand is then finite for all $r_1,\,r_2$, and has a kink at $r_1=r_2$. To get an acceptable accuracy of the result, evaluation of the Coulomb integral on an equidistant grid needs a correction at $r_1=r_2$ coming from the Euler-Maclaurin (E-M) series (\[EMformula\]). Let’s suppose that the grid spacing is $\Delta$ and the kink is located at $r=n\Delta$. Then the E-M series has the form: $$\begin{aligned}
\int_0^{+\infty} f(r)\mathrm{d}r &= \Delta\bigg[\frac{f(0)}{2}+\sum_{m=1}^\infty f(m\Delta)\bigg]
+\frac{\Delta^2}{12}\big[f'(n\Delta^+)-f'(n\Delta^-)\big] \nonumber\\
&\quad{}-\frac{\Delta^4}{720}\big[f'''(n\Delta^+)-f'''(n\Delta^-)\big]
+\frac{\Delta^6}{30240}\big[f^{(5)}(n\Delta^+)-f^{(5)}(n\Delta^-)\big]-\ldots\end{aligned}$$ There is no correction at $r_1=0$ or $r_2=0$ due to the presence of only even powers of $r$ in the integrand. The first case will be explained in (\[r1zeroEM\]), and the second one is obvious.
The integral to be evaluated is $$\label{coul_int}
\int_0^\infty \rho_1^{L*}(r_1)r_1^2\mathrm{d}r_1 \int_0^\infty \rho_2^L(r_2)\mathrm{d}r_2
\cdot \bigg\{\begin{array}{ll} r_2^{L+2}/r_1^{L+1} \quad &\textrm{for }r_2 \leq r_1 \\
r_1^L/r_2^{L-1} \quad &\textrm{for }r_2 \geq r_1 \end{array}$$ where $\rho^L(r)$ is the component of multipolarity $L$ (in the sense $\rho(\vec{r}) = \rho^L(r)Y_{LM}(\vartheta,\varphi)$), with a generic power expansion around $r=0$ of the form $\rho^L(r) = ar^L + br^{L+2} + cr^{L+4} + \ldots$ Corrections are then applied to the diagonal terms as follows: $$\begin{aligned}
\int_0^\infty r_1^2\mathrm{d}r_1 &\int_0^\infty r_2^2\mathrm{d}r_2
\frac{\rho_1(\vec{r}_1)\rho_2(\vec{r}_2)}{|\vec{r}_1-\vec{r}_2|} = \frac{4\pi}{2L+1}
\sum_{n=1}^\infty n^2\Delta^3\rho_1^{L*}(n\Delta) \nonumber \\
&\!\!\!\!\!{}\times\bigg\{\sum_{m=1}^\infty \bigg[ \Delta^2\rho_2^L(m\Delta)\cdot\Big\{
\begin{array}{ll} m^{L+2}/n^{L+1} & \textrm{for }m\leq n \\
n^L/m^{L-1} & \textrm{for }m > n \end{array} \bigg]
-\frac{\Delta^2}{12}(2L+1)\rho_2(n\Delta) \nonumber\\
\label{coul_EM2}
& {}+\frac{\Delta^4}{720}(2L+1)\bigg[ \frac{L(L+1)}{(n\Delta)^2}\rho_2(n\Delta) + \frac{6}{n\Delta}\rho'_2(n\Delta) + 3\rho''_2(n\Delta)\bigg]\bigg\}\end{aligned}$$ The second E-M correction (last line of (\[coul\_EM2\])) contains derivatives and can be quantified by using the neighboring grid points as $$\label{coul_EM2b}
\frac{\Delta^2}{720}(2L+1)\big\{ \tfrac{L(L+1)}{n^2}\rho_2(n\Delta)
+3\big[\tfrac{n+1}{n}\rho_2((n\!+\!1)\Delta) + \tfrac{n-1}{n}\rho_2((n\!-\!1)\Delta)-2\rho_2(n\Delta)\big] \big\}$$
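The effect of the leading diagonal correction can be demonstrated on the $L=0$ self-interaction of a Gaussian charge density, whose exact value $\sqrt{2/\pi}$ is known analytically. The following is a minimal sketch; the density and grid are illustrative choices, not taken from the text:

```python
import math

def coulomb_L0(delta, r_max, corrected=True):
    """L=0 Coulomb self-energy
    E = (4*pi)^2 * double integral of r1^2 r2^2 rho(r1) rho(r2) / max(r1, r2)
    for rho(r) = exp(-r^2) / pi^(3/2), on an equidistant grid of spacing delta.
    The kink of the inner integrand at r2 = r1 gets the E-M correction
    -delta^2/12 * rho(r1) (the jump of the first derivative)."""
    rho = lambda r: math.exp(-r * r) / math.pi ** 1.5
    n_max = int(r_max / delta)
    total = 0.0
    for n in range(1, n_max + 1):
        r1 = n * delta
        inner = delta * sum((m * delta) ** 2 * rho(m * delta) / max(r1, m * delta)
                            for m in range(1, n_max + 1))
        if corrected:
            inner -= delta * delta / 12.0 * rho(r1)
        total += delta * r1 * r1 * rho(r1) * inner
    return (4.0 * math.pi) ** 2 * total

exact = math.sqrt(2.0 / math.pi)          # analytic self-energy of this Gaussian
err_naive = abs(coulomb_L0(0.1, 8.0, corrected=False) - exact)
err_corr = abs(coulomb_L0(0.1, 8.0, corrected=True) - exact)
print(err_naive, err_corr)
```

The single kink correction already gains several orders of magnitude in accuracy on the same grid.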
Let’s return to the question of the behavior of the integral (\[coul\_int\]) over $r_2$ in the limit $r_1\to0$. It can be separated into two parts $$\label{r1zeroEM}
r_1^L\int_0^\infty \frac{\rho_2^L(r_2)}{r_2^{L-1}}\mathrm{d}r_2 +
\frac{1}{r_1^{L+1}}\int_0^{r_1}\rho_2^L(r_2)\bigg(r_2^{L+2}-\frac{r_1^{2L+1}}{r_2^{L-1}}\bigg)\mathrm{d}r_2$$ The first integral is a constant with respect to $r_1$, while the second part leads to a polynomial of the form $ar_1^{L+2}+br_1^{L+4}+\cdots$, which, after multiplication by $\rho_1^L(r_1)$, gives zero correction in the subsequent integration over $r_1$.
For the case $L=0$ (used in Hartree-Fock), I will also give the third-order E-M correction, so the diagonal term in the summation (\[coul\_EM2\]) becomes $$\label{coul_EM3}
\Delta^2 n \rho - \frac{\Delta^2}{12}\rho
+ \frac{\Delta^4}{240}\bigg(\frac{2\rho'}{n\Delta}+\rho''\bigg)
- \frac{\Delta^6}{6048}\bigg(\frac{4\rho'''}{n\Delta}+\rho^{(4)}\bigg)$$ However, the approximations given previously correspond to $$\textrm{diag.(\ref{coul_EM2})+(\ref{coul_EM2b}), }L=0:\quad
\Delta^2 n \rho - \frac{\Delta^2}{12}\rho + \frac{\Delta^4}{240}\bigg(
\frac{2\rho'}{n\Delta} + \rho'' + \frac{\Delta\rho'''}{3n}
+ \frac{\Delta^2\rho^{(4)}}{12n} \bigg)$$ To correct the last terms of this series into the form of (\[coul\_EM3\]), it is necessary to subtract $31\Delta^6(4\rho'''/(n\Delta)+\rho^{(4)})/60480$. The derivatives can be estimated by
$$\begin{aligned}
2\Delta^3\rho'''(n\Delta) &= \rho((n\!+\!2)\Delta) - 2\rho((n\!+\!1)\Delta)
+ 2\rho((n\!-\!1)\Delta) - \rho((n\!-\!2)\Delta) + O(\Delta^5)\\
\Delta^4\rho^{(4)}(n\Delta) &= \rho((n\!+\!2)\Delta) \!-\! 4\rho((n\!+\!1)\Delta)
\!+\! 6\rho(n\Delta) \!-\! 4\rho((n\!-\!1)\Delta) \!+\! \rho((n\!-\!2)\Delta) \!+\! O(\Delta^6)\end{aligned}$$
\
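These stencils are straightforward to check on a smooth test function, e.g. $\sin$ (an arbitrary illustrative choice):

```python
import math

def d3(f, x, d):
    # 2 d^3 f''' = f(x+2d) - 2 f(x+d) + 2 f(x-d) - f(x-2d) + O(d^5)
    return (f(x + 2*d) - 2*f(x + d) + 2*f(x - d) - f(x - 2*d)) / (2 * d**3)

def d4(f, x, d):
    # d^4 f'''' = f(x+2d) - 4 f(x+d) + 6 f(x) - 4 f(x-d) + f(x-2d) + O(d^6)
    return (f(x + 2*d) - 4*f(x + d) + 6*f(x) - 4*f(x - d) + f(x - 2*d)) / d**4

x, d = 0.7, 0.01
err3 = abs(d3(math.sin, x, d) + math.cos(x))   # (sin)''' = -cos
err4 = abs(d4(math.sin, x, d) - math.sin(x))   # (sin)'''' = sin
print(err3, err4)
```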
The diagonal term ($r_1=r_2=n\Delta$) for $L=0$, together with the E-M corrections up to third order, then becomes
\[coul\_EM\_HF\] $$\begin{aligned}
\Delta^2 n\rho(n\Delta) &{}- \frac{\Delta^2}{12}\rho(n\Delta) \\
&{}+\frac{\Delta^2}{240}\bigg(\frac{n+1}{n}\rho((n\!+\!1)\Delta)
- 2\rho(n\Delta) + \frac{n-1}{n}\rho((n\!-\!1)\Delta)\bigg) \\
&{}-\frac{31\Delta^2}{60480}\bigg( \frac{n+2}{n}\rho((n\!+\!2)\Delta)
- 4\frac{n+1}{n}\rho((n\!+\!1)\Delta) + 6\rho(n\Delta) \nonumber\\
& \qquad\qquad{}- 4\frac{n-1}{n}\rho((n\!-\!1)\Delta) + \frac{n-2}{n}\rho((n\!-\!2)\Delta) \bigg)\end{aligned}$$
### Cartesian coordinates
Estimation of the Coulomb potential $$V(x_0,y_0,z_0) = \iiint_{-\infty}^{+\infty} \frac{\rho(x,y,z)}{\sqrt{
(x-x_0)^2+(y-y_0)^2+(z-z_0)^2}} \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$$ on an equidistant coordinate grid runs into a singularity at $\vec{r}=\vec{r}_0$, so the integral in its vicinity has to be evaluated analytically (in the following: $\vec{r}_0 = 0$). The border between the integrated and the summed regions leads to an E-M correction, which can be most easily estimated by the inverse procedure – cutting out the cube $C=({-}\Delta,\Delta)^3$ from the integral/sum $$\label{coul_3D}
\iiint_{-\infty}^{+\infty} \!\!\frac{\rho(x,y,z)}{\sqrt{
x^2+y^2+z^2}} \mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z =
\iiint_{-\infty}^{+\infty} \!f(x,y,z)\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z =
\sum_{j,k,l} f(j\Delta,k\Delta,l\Delta) \Delta^3$$ where the estimation of the integral by summation is accurate for functions vanishing at the integration limits (as discussed previously) – an assumption that holds for finite functions. In the case of the Coulomb singularity, the central cube is evaluated by an integral instead of a summation, using the E-M formula (\[EMformula\]) generalized stepwise to three dimensions:
$$\bigg[\frac{f(-\Delta)+f(\Delta)}{2}+f(0)\bigg]\Delta = \int_{-\Delta}^\Delta f(x)\mathrm{d}x
+ \frac{\Delta^2}{12}\big[f'(\Delta)-f'(-\Delta)\big] + O(\Delta^4)$$
$$\begin{aligned}
\Big[\tfrac{f(-\Delta,-\Delta)+f(-\Delta,\Delta)+f(\Delta,-\Delta)+f(\Delta,\Delta)}{4}
+ \tfrac{f(0,-\Delta)+f(0,\Delta)+f(-\Delta,0)+f(\Delta,0)}{2} + f(0,0)\Big]\Delta^2 = \\
= \iint_{-\Delta}^\Delta f(x,y)\mathrm{d}x\mathrm{d}y + \frac{\Delta^2}{12}\bigg\{
\int_{-\Delta}^\Delta \big[f'(\Delta,y)-f'(-\Delta,y)\big]\mathrm{d}y \\
{}+\int_{-\Delta}^\Delta \big[f'(x,\Delta)-f'(x,-\Delta)\big]\mathrm{d}x\bigg\}+O(\Delta^4)\end{aligned}$$
$$\begin{aligned}
\bigg\{\frac{1}{8}\sum_{s_1 s_2 s_3}^{\pm1} f(s_1\Delta,s_2\Delta,s_3\Delta)
+\frac{1}{4}\sum_{s_1 s_2}^{\pm1} \big[f(s_1\Delta,s_2\Delta,0)+f(s_1\Delta,0,s_2\Delta)
+f(0,s_1\Delta,s_2\Delta)\big] \nonumber\\
{}+\frac{1}{2}\sum_{s=\pm1}\big[f(s\Delta,0,0)+f(0,s\Delta,0)+f(0,0,s\Delta)\big]
+ f(0,0,0)\bigg\}\Delta^3 = \nonumber\\
\label{int_cube}
= \iiint_{-\Delta}^\Delta f(x,y,z)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z
+ \frac{\Delta^2}{12}\oint_{\partial C}\vec{\nabla}f\cdot\mathrm{d}\vec{S} + O(\Delta^4)\end{aligned}$$
\
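The one-dimensional identity above can be verified numerically; for a smooth test function (here $\cos$, an illustrative choice) the residual after subtracting the exact integral and the $\Delta^2/12$ boundary term is tiny and shrinks rapidly with $\Delta$:

```python
import math

def residual(delta):
    """Three-point sum over {-delta, 0, delta} minus the exact integral of cos
    and minus the first E-M boundary correction; only a higher-order
    remainder is left."""
    f = math.cos
    fp = lambda x: -math.sin(x)                      # f'
    lhs = ((f(-delta) + f(delta)) / 2.0 + f(0.0)) * delta
    integral = math.sin(delta) - math.sin(-delta)    # exact integral over (-delta, delta)
    em = delta**2 / 12.0 * (fp(delta) - fp(-delta))
    return lhs - integral - em

print(residual(0.2), residual(0.1))
```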
The double and triple integrals should now be estimated analytically. Let’s emphasize at this point that the aim of this somewhat cumbersome workaround is to obtain an effective value of $f_0 = f(0,0,0)$ to be plugged into the sum (\[coul\_3D\]) in place of the infinite value.
To calculate the integrals in the cube $C=({-}\Delta,\Delta)^3$, the density $\rho(\vec{r})$ will be approximated by a Taylor expansion, where only the even terms contribute to the integration: $$\begin{aligned}
\rho(\vec{r}) &= \rho_0+\rho_x\tfrac{x^2}{2}+\rho_y\tfrac{y^2}{2}+\rho_z\tfrac{z^2}{2}
+\rho_{xy}\tfrac{x^2 y^2}{4}+\rho_{xz}\tfrac{x^2 z^2}{4}+\rho_{yz}\tfrac{y^2 z^2}{4} \nonumber\\
\label{rho_tayl4}
&\qquad\ {}+\rho_{x4}\tfrac{x^4}{24}+\rho_{y4}\tfrac{y^4}{24}+\rho_{z4}\tfrac{z^4}{24} +\textrm{odd} + O(r^6)\end{aligned}$$ The following integrals will be needed; they can be derived using the hyperbolic sine, integration *per partes* with $f=(xf)'-xf'$, the substitution $\sqrt{x^2+a^2}=x+t$, and other tricks.
$$\begin{aligned}
\int\frac{\mathrm{d}x}{\sqrt{a^2+x^2}} &= \ln(x+\sqrt{a^2+x^2})-\ln a \\
\int\ln\frac{a+\sqrt{a^2+b^2+x^2}}{\sqrt{b^2+x^2}}\mathrm{d}x &=
x\ln\frac{a+\sqrt{a^2+b^2+x^2}}{\sqrt{b^2+x^2}}
+ a\ln\frac{x+\sqrt{a^2+b^2+x^2}}{\sqrt{a^2+b^2}} \nonumber\\
&\quad{}- b\,\mathrm{arctg}\frac{ax}{b\sqrt{a^2+b^2+x^2}} \\
\int x\,\mathrm{arctg}\frac{ab}{x\sqrt{a^2+b^2+x^2}}\mathrm{d}x &=
\frac{x^2}{2}\mathrm{arctg}\frac{ab}{x\sqrt{a^2+b^2+x^2}}
-\frac{a^2}{2}\mathrm{arctg}\frac{bx}{a\sqrt{a^2+b^2+x^2}} \nonumber\\
&\quad{}-\frac{b^2}{2}\mathrm{arctg}\frac{ax}{b\sqrt{a^2+b^2+x^2}}
+ab\ln\frac{x+\sqrt{a^2+b^2+x^2}}{\sqrt{a^2+b^2}}\end{aligned}$$
So the basic three-dimensional integral over cube $C$ in (\[int\_cube\]) becomes $$\iiint_{-\Delta}^\Delta \frac{\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z}{\sqrt{x^2+y^2+z^2}}
= \bigg(3\ln\frac{1+\sqrt{3}}{\sqrt{2}}-\frac{3}{2}\,\mathrm{arctg}\frac{1}{\sqrt{3}}\bigg)8\Delta^2 = \Delta^2(24\beta-2\pi)$$ where I defined a useful constant $$\beta = \ln\frac{1+\sqrt{3}}{\sqrt{2}} = 0.658478948$$ A more general evaluation of (\[int\_cube\]), assuming the Taylor expansion (\[rho\_tayl4\]), then leads to
$$\begin{aligned}
\int_C\frac{\rho(\vec{r})\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z}{\sqrt{x^2+y^2+z^2}}
&= 12\Delta^2\big(2\beta-\tfrac{\pi}{6}\big)\rho_0
+\tfrac{\Delta^4}{3}\big(\sqrt{3}+4\beta-\tfrac{\pi}{6}\big)(\rho_x+\rho_y+\rho_z) \nonumber\\
&\quad{}+\tfrac{\Delta^6}{5}\big(\sqrt{3}-2\beta+\tfrac{\pi}{6}\big)(\rho_{xy}\!+\!\rho_{xz}\!+\!\rho_{yz}) \nonumber\\
&\quad{}-\tfrac{\Delta^6}{90}\big(2\sqrt{3}-19\beta+7\tfrac{\pi}{6}\big)(\rho_{x4}\!+\!\rho_{y4}\!+\!\rho_{z4}) \\
\oint_{\partial C}\vec{\nabla}f(\vec{r})\cdot\mathrm{d}\vec{S} &=
-4\pi\rho_0 + 4\Delta^2\big(2\beta-\tfrac{\pi}{6}\big)(\rho_x\!+\!\rho_y\!+\!\rho_z) \nonumber\\
&\quad{}+2\Delta^4\big(\sqrt{3}-4\beta+3\tfrac{\pi}{6}\big)(\rho_{xy}\!+\!\rho_{xz}\!+\!\rho_{yz}) \nonumber\\
&\quad{}-\tfrac{\Delta^4}{3}\big(\sqrt{3}-12\beta+7\tfrac{\pi}{6}\big)(\rho_{x4}+\rho_{y4}+\rho_{z4})\end{aligned}$$
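The closed-form value of the basic cube integral, $\Delta^2(24\beta-2\pi)\approx9.5203\,\Delta^2$, can be cross-checked by a quick Monte Carlo estimate (the sample size and seed are arbitrary illustrative choices):

```python
import math
import random

beta = math.log((1 + math.sqrt(3)) / math.sqrt(2))
exact = 24 * beta - 2 * math.pi      # integral of 1/r over the cube (-1, 1)^3

random.seed(1)
n = 200_000
acc = 0.0
for _ in range(n):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    z = random.uniform(-1, 1)
    acc += 1.0 / math.sqrt(x*x + y*y + z*z)
mc = 8.0 * acc / n                    # volume of the cube is 8
print(mc, exact)                      # agreement within the MC noise
```

Monte Carlo converges here because $1/r$ (and even $1/r^2$) is integrable over the cube, so the estimator has finite variance.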
Derivatives can be estimated from the neighboring points, defining convenient symbols $\rho_1\cdots\rho_4$:
$$\begin{aligned}
\rho_1 &=
\sum_{s=\pm1}\big[\rho(s\Delta,0,0)+\rho(0,s\Delta,0)+\rho(0,0,s\Delta)\big]
- 6\rho(0,0,0) \nonumber\\
&= \Delta^2(\rho_x+\rho_y+\rho_z)
+ \tfrac{\Delta^4}{12}(\rho_{x4}+\rho_{y4}+\rho_{z4}) \\
\rho_2 &=
\sum_{s_1s_2}^{\pm1}\big[\rho(s_1\Delta,s_2\Delta,0)+\rho(s_1\Delta,0,s_2\Delta)
+\rho(0,s_1\Delta,s_2\Delta)\big] - 12\rho(0,0,0) \nonumber\\
&= 4\Delta^2(\rho_x+\rho_y+\rho_z)
+ \Delta^4(\rho_{xy}+\rho_{xz}+\rho_{yz})
+ \tfrac{\Delta^4}{3}(\rho_{x4}+\rho_{y4}+\rho_{z4})\end{aligned}$$
$$\begin{aligned}
\rho_3 &= \sum_{s_1s_2s_3}^{\pm1} \rho(s_1\Delta,s_2\Delta,s_3\Delta)-8\rho(0,0,0) \nonumber\\
&= 4\Delta^2(\rho_x+\rho_y+\rho_z) + 2\Delta^4(\rho_{xy}+\rho_{xz}+\rho_{yz})
+\tfrac{\Delta^4}{3}(\rho_{x4}+\rho_{y4}+\rho_{z4}) \\
\rho_4 &= \sum_{s=\pm1}\big[\rho(2s\Delta,0,0)+\rho(0,2s\Delta,0)+\rho(0,0,2s\Delta)\big]
- 6\rho(0,0,0) \nonumber\\
&= 4\Delta^2(\rho_x+\rho_y+\rho_z)
+ \tfrac{4}{3}\Delta^4(\rho_{x4}+\rho_{y4}+\rho_{z4})\end{aligned}$$
These relations can be easily inverted to get
$$\begin{aligned}
\Delta^2(\rho_x+\rho_y+\rho_z) &= \tfrac{4}{3}\rho_1 - \tfrac{1}{12}\rho_4 \\
\Delta^4(\rho_{xy}+\rho_{xz}+\rho_{yz}) &= \rho_2 - 4\rho_1 \\
\Delta^4(\rho_{x4}+\rho_{y4}+\rho_{z4}) &= \rho_4 - 4\rho_1\end{aligned}$$
The desired value of $f(0,0,0)$, according to (\[int\_cube\]), is then
$$\begin{aligned}
f_0\Delta &\approx \big(24\beta-\tfrac{7}{3}\pi
-\tfrac{1}{\sqrt{3}}-\tfrac{3}{\sqrt{2}}-3\big)\rho_0
+ \Delta^2\big(\tfrac{\sqrt{3}}{6}+2\beta-\tfrac{\pi}{9}
-\tfrac{1}{\sqrt{2}}-\tfrac{1}{2}\big)(\rho_x+\rho_y+\rho_z) \nonumber\\
&\quad{}+\Delta^4\big(\tfrac{17}{60}\sqrt{3}-\tfrac{16}{15}\beta+\tfrac{7}{60}\pi
-\tfrac{1}{4\sqrt{2}}\big)(\rho_{xy}+\rho_{xz}+\rho_{yz}) \nonumber\\
&\quad{}-\Delta^4\big(\tfrac{23}{360}\sqrt{3}-\tfrac{49}{90}\beta+\tfrac{49}{1080}\pi
+\tfrac{\sqrt{2}+1}{24}\big)(\rho_{x4}+\rho_{y4}+\rho_{z4}) \\
&\approx 2.774441\rho_0 + 0.049460(\rho_x+\rho_y+\rho_z)
- 0.021887(\rho_{xy}+\rho_{xz}+\rho_{yz}) \nonumber\\
&\quad{}+ 0.004719(\rho_{x4}+\rho_{y4}+\rho_{z4}) \\
&\approx 2.774441\rho_0 + 0.134621\rho_1 - 0.021887\rho_2 + 0.000597\rho_4\end{aligned}$$
By taking E-M corrections up to $\Delta^4$ in (\[int\_cube\]), the coefficients become
$$\begin{aligned}
f_0\Delta &\approx 2.864251\rho_0 + 0.044052(\rho_x+\rho_y+\rho_z)\Delta^2
+ 0.003330(\rho_{xy}+\rho_{xz}+\rho_{yz})\Delta^4 \nonumber\\
&\quad{}- 0.013328(\rho_{x4}+\rho_{y4}+\rho_{z4})\Delta^4 \\
&\approx 2.864251\rho_0 + 0.098728\rho_1 + 0.003330\rho_2 - 0.016999\rho_4\end{aligned}$$
As can be seen, taking higher orders of E-M corrections does not increase the order of convergence of the integral, due to the divergent nature of the integrand. Nevertheless, accurate values of the needed coefficients can be obtained by empirically evaluating the convergence of the Coulomb integral for various charge distributions. Such an approach gives
\[f0\_exact\] $$\begin{aligned}
f_0\Delta &= 2.8372974794806\rho_0 + 0.04443271312(\rho_x+\rho_y+\rho_z)\Delta^2
\nonumber\\
&\quad{}+ 0.01962487(\rho_{xy}+\rho_{xz}+\rho_{yz})\Delta^4 \nonumber\\
&\quad{}- 0.00825759(\rho_{x4}+\rho_{y4}+\rho_{z4})\Delta^4 + O(\Delta^6) \\
&\approx 2.83729748\rho_0 + 0.01377450\rho_1 + 0.01962487\rho_2 - 0.01196032\rho_4\end{aligned}$$
and the Coulomb integral then converges as $O(\Delta^8)$ – assuming that the charge density vanishes near the integration boundary, or the E-M corrections up to third order are included there.
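The real-space convolution with the corrected on-site coefficient can be sketched in Python (a minimal illustration, assuming a cubic grid; only the zeroth-order coefficient of (\[f0\_exact\]) is used, i.e. the derivative corrections of the density are neglected, and the function name is mine):

```python
import numpy as np

def coulomb_potential(rho, delta, c0=2.83729748):
    """Direct O(N^6) Coulomb convolution on a cubic grid.

    The singular self-interaction point 1/|r - r| is replaced by
    c0/delta, the leading empirical coefficient; the higher
    derivative corrections of the density are neglected here.
    """
    n = rho.shape[0]
    idx = np.arange(n)
    x, y, z = np.meshgrid(idx, idx, idx, indexing="ij")
    pot = np.zeros_like(rho, dtype=float)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                r = delta * np.sqrt((x - i)**2 + (y - j)**2 + (z - k)**2)
                inv_r = np.where(r > 0.0, 1.0 / np.where(r > 0.0, r, 1.0),
                                 c0 / delta)
                pot[i, j, k] = np.sum(rho * inv_r) * delta**3
    return pot
```

For a point charge the neighboring grid points reproduce the plain $1/r$ values, while the charged point itself receives the corrected on-site value $c_0/\Delta$.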
Since the calculation of the Coulomb potential is a convolution, it is natural to apply a Fourier transformation in the process. The convolution with $1/r$ is then replaced by a multiplication in the frequency domain by $4\pi/k^2$ (derived by taking the limit $\mu\to0$ in $\mathrm{e}^{-\mu r}/r$, whose Fourier transformation is $4\pi/(\mu^2+k^2)$). Again, there is a singularity at $k=0$. In fact, the whole procedure – the F.T. of the density, multiplication by $4\pi/k^2$ and the inverse F.T. – can be expressed as an integral $$\frac{4\pi}{(2\pi)^3}\iiint_{-\pi/\Delta}^{\pi/\Delta}
\frac{\mathrm{e}^{\mathrm{i}\vec{k}\cdot(\vec{r}_2-\vec{r}_1)}}{k^2}\mathrm{d}^3 k
= \frac{1}{2\pi\Delta}\iiint_{-1}^1
\frac{\cos(\pi n_x q_x)\cos(\pi n_y q_y)\cos(\pi n_z q_z)}{q_x^2+q_y^2+q_z^2}
\mathrm{d}^3 q$$ where $\vec{r}_2-\vec{r}_1=(n_x,n_y,n_z)\Delta$. This integral should be evaluated in the continuum limit, which corresponds to shifting the periodic boundary to infinity. To get $O((\mathrm{d}q)^7)$ convergence, it is necessary to include the E-M corrections up to third order on the boundary and to take the value of the central point as $$\begin{aligned}
\label{coul_FT0}
&\frac{\cos(\pi n_x q_x)\cos(\pi n_y q_y)\cos(\pi n_z q_z)}{q_x^2+q_y^2+q_z^2}\bigg|_{q=0}
= \frac{8.91363291758515}{(\mathrm{d}q)^2} - \frac{\pi^2}{6}(n_x^2+n_y^2+n_z^2) \nonumber\\
&\qquad\qquad{}+ 0.610299(\mathrm{d}q)^2 [3(n_x^2 n_y^2 + n_x^2 n_z^2 + n_y^2 n_z^2)
-(n_x^4 + n_y^4 + n_z^4)] + O((\mathrm{d}q)^4)\end{aligned}$$ The convolution array obtained by this method includes derivative corrections to all orders, in contrast to (\[f0\_exact\]), which includes only up to the fourth derivative. However, this Fourier-like array is probably not usable in practice due to the computational cost: all $N^3$ coefficients have to be calculated accurately, and their integration time grows rapidly for large $n$. At least it provides a comparison with the convolution coefficients obtained by the previous methods, see Table \[tab\_coul3D\].
$(\vec{r}_1-\vec{r}_2)/\Delta$ $\Delta/r$ E-M$\leq\!\!\Delta^2$ E-M$\leq\!\!\Delta^4$ exact $\Delta^4$ F.T.
-------------------------------- ------------ ----------------------- ----------------------- ------------------ --------
(0,0,0) $\infty$ 2.2329 2.3342 2.590914 2.4427
(1,0,0) 1.0000 1.1346 1.0987 1.013775 1.0517
(1,1,0) 0.7071 0.6852 0.7104 0.726732 0.7268
(1,1,1) 0.5774 0.5774 0.5774 0.577350 0.5851
(2,0,0) 0.5000 0.5006 0.4830 0.488040 0.4740
: Convolution coefficients for the integration of the Coulomb interaction on a cartesian grid according to the naive method, the Euler-Maclaurin estimation up to first and second order, the exact numerical estimate with up to the fourth derivative of $\rho$, and the central part of the Fourier array.[]{data-label="tab_coul3D"}
One can also use the Fourier method directly, by applying the direct and then the inverse fast Fourier transformation (FFT) to the density, which reduces the computational cost from $O(N^6)$ to $O((N\log N)^3)$. However, the potential then leaks across the periodic boundary (due to the discretized momentum), and it is impossible to apply the corrections beyond the first term in (\[coul\_FT0\]). Both difficulties may be resolved by placing proper compensating charges on the boundary of the coordinate grid (e.g., employing a multipolar expansion of the nuclear charge distribution, where the main contribution comes from the first few terms [@Dobaczewski1997]).
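A minimal Python sketch of the direct FFT route follows (here the singular $k=0$ mode is simply zeroed, which fixes the potential only up to an overall constant; the correction (\[coul\_FT0\]) and the compensating boundary charges are omitted, so the periodic-image error remains):

```python
import numpy as np

def coulomb_fft(rho, delta):
    """Coulomb potential of a periodic density via FFT.

    The density is multiplied by 4*pi/k^2 in the frequency domain;
    the singular k = 0 mode is zeroed here (fixing only an overall
    constant offset), instead of applying the corrections of the text.
    """
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=delta)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    green = np.where(k2 > 0.0, 4.0 * np.pi / np.where(k2 > 0.0, k2, 1.0), 0.0)
    return np.real(np.fft.ifftn(green * np.fft.fftn(rho)))
```

For a single point charge the resulting potential decreases monotonically away from the charge and respects the cubic symmetry of the grid, but it is contaminated by the periodic images.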
### Axial symmetry
In axial symmetry (using $m$-scheme and coordinates $\varrho=\sqrt{x^2+y^2}\,,\,z,\,\varphi$, see also section \[sec\_skyr\_ax\]), the direct Coulomb integral is
$$\begin{aligned}
\iint \frac{\rho_1^*(\varrho_1,z_1)\rho_2(\varrho_2,z_2)}{|\vec{r}_1-\vec{r}_2|}
&\exp(\mathrm{i}m_1\varphi_1-\mathrm{i}m_2\varphi_2)\mathrm{d}\vec{r}_1\mathrm{d}\vec{r}_2 = \Bigg| \begin{array}{c}
\mathrm{d}\vec{r}=\varrho\,\mathrm{d}\varrho\,\mathrm{d}z\,\mathrm{d}\varphi \\
\varphi = \varphi_1-\varphi_2 \\
\int \mathrm{e}^{\mathrm{i}(m_1-m_2)\varphi_2}\mathrm{d}\varphi_2 = 2\pi\delta_{m_1m_2}
\end{array} \Bigg| = \nonumber\\
&=2\pi\delta_{m_1m_2}
\int_0^\infty\rho_1^*(\varrho_1,z_1)\varrho_1\mathrm{d}\varrho_1\mathrm{d}z_1
\int_0^\infty\rho_2(\varrho_2,z_2)\varrho_2\mathrm{d}\varrho_2\mathrm{d}z_2 \nonumber\\
\label{coul_ax}
&\qquad\times\int_0^{2\pi}\frac{\exp(\mathrm{i}m\varphi)}{\sqrt{(z_1-z_2)^2+\varrho_1^2+\varrho_2^2
-2\varrho_1\varrho_2\cos\varphi}}\mathrm{d}\varphi\end{aligned}$$
\
where at least the first E-M correction should be taken into account at $\varrho=0$ (in contrast to the spherical case, where it vanishes), so the radial integral is evaluated as $$\label{axial_int}
\int_0^\infty f(\varrho)\varrho\,\mathrm{d}\varrho = \Big[
\tfrac{\Delta}{12}f(0) + 1\Delta f(1\Delta) + 2\Delta f(2\Delta) + 3\Delta f(3\Delta)
+\ldots \Big]\Delta$$
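In code, the corrected rule (\[axial\_int\]) is just a weighted sum; a Python sketch (assuming the grid extends far enough for the integrand to vanish at the boundary):

```python
import numpy as np

def radial_integral(f_samples, delta):
    """Euler-Maclaurin-corrected quadrature for
    int_0^inf f(rho) rho d rho on the grid rho = 0, Delta, 2*Delta, ...

    The integrand f*rho vanishes at rho = 0, but its derivative f(0)
    does not, so the first E-M correction contributes (Delta/12) f(0).
    """
    n = np.arange(len(f_samples))
    weights = n.astype(float)   # trapezoid-like weights n*Delta ...
    weights[0] = 1.0 / 12.0     # ... plus the Delta/12 boundary term
    return delta**2 * np.sum(weights * f_samples)
```

A Gaussian test integrand, $\int_0^\infty \mathrm{e}^{-\varrho^2}\varrho\,\mathrm{d}\varrho = 1/2$, already reaches $O(\Delta^4)$ accuracy with this single boundary correction.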
A straightforward evaluation of (\[coul\_ax\]) leads to an integral over $\varphi$ $$\begin{aligned}
\label{coul_ax_int}
\int_0^{2\pi} \frac{\exp(\mathrm{i}m\varphi)}{\sqrt{(z_1-z_2)^2
+\varrho_1^2+\varrho_2^2-2\varrho_1\varrho_2\cos\varphi}}
&= \frac{g_m\big(\tfrac{2\varrho_1\varrho_2}{(z_1-z_2)^2
+\varrho_1^2+\varrho_2^2}\big)}{\sqrt{(z_1-z_2)^2+\varrho_1^2+\varrho_2^2}} \\
\label{coul_ax_g}
\textrm{where }\ g_m(x) &= \int_{-\pi}^{\pi}
\frac{\cos(m\varphi)}{\sqrt{1-x\cos\varphi}} \mathrm{d}\varphi\end{aligned}$$ which cannot be expressed in a closed form, but there is a Taylor expansion $$\label{gm_taylor}
g_m(x) = -\mathrm{i}\oint\limits_{|z|=1} \frac{z^{m-1}\mathrm{d}z}{\sqrt{1-x(z+1/z)/2}}
= 2\pi\sum_{k=0}^{\infty} \frac{(4k+2m-1)!!}{k!(k+m)!}\bigg(\frac{x}{4}\bigg)^{m+2k}$$ The function $g_m(x)$ has a logarithmic singularity as $x\to1^{-}$: $$g_m(1-t) = (O(t)-\sqrt{2})\ln t + O(1)$$
It is usually suggested [@Dobaczewski2005] to reformulate the original integral by a Gaussian substitution, e.g. $$\label{gaus_subs}
\frac{1}{|\vec{r}_1-\vec{r}_2|} = \frac{2}{\sqrt{\pi}}\int_0^\infty
\exp[-(\vec{r}_1-\vec{r}_2)^2 t^2]\mathrm{d}t$$ The integral (\[coul\_ax\_int\]) is then replaced by
$$\begin{aligned}
\frac{2}{\sqrt{\pi}}&\int_0^{2\pi}\!\!\mathrm{d}\varphi\int_0^\infty\!\mathrm{d}t
\exp\{-[(z_1\!-\!z_2)^2\!+\!\varrho_1^2\!+\!\varrho_2^2\!
-\!2\varrho_1\varrho_2\cos\varphi]t^2
+\mathrm{i}m\varphi\}
= \Bigg|\!\! \begin{array}{c} u=\mathrm{e}^{\mathrm{i}\varphi} \\
\mathrm{d}u = \mathrm{i}\,\mathrm{e}^{\mathrm{i}\varphi}\mathrm{d}\varphi \\
2\cos\varphi = u + 1/u \end{array} \!\!\Bigg| = \nonumber\\
&= -\frac{2}{\sqrt{\pi}}\mathrm{i}\oint u^{m-1}\mathrm{d}u\int_0^\infty\mathrm{d}t
\exp\big\{{-}\big[(z_1-z_2)^2+\varrho_1^2+\varrho_2^2-\big(u+\tfrac{1}{u}\big)\big]t^2\big\} \nonumber \\
\label{coul_ax2}
&= \frac{2}{\sqrt{\pi}} 2\pi\int_0^\infty
\mathrm{e}^{-[(z_1-z_2)^2+\varrho_1^2+\varrho_2^2]t^2}
\sum_{n=-\infty}^{+\infty}I_n(2\rho_1\rho_2 t^2)u^n \nonumber\\
&= \frac{2}{\sqrt{\pi}} 2\pi\int_0^\infty
\mathrm{e}^{-[(z_1-z_2)^2+(\varrho_1-\varrho_2)^2]t^2}
\frac{I_m(2\rho_1\rho_2 t^2)}{\exp(2\rho_1\rho_2 t^2)}\mathrm{d}t\end{aligned}$$
\
where the modified Bessel function [@Abramowitz1972] was used, which has an asymptotic behavior of $\exp(x)/\sqrt{x}$ (therefore, the computer libraries give it as $I_m(x)/\exp(x)$). Laurent series of modified Bessel function $I_n(x)$ can be derived from normal Bessel function $J_n(x)$ by evaluating it at imaginary axis ($z=\mathrm{i}x$) $$\begin{aligned}
&\exp\bigg[\frac{z}{2}\bigg(t-\frac{1}{t}\bigg)\bigg]
= \sum_{n=-\infty}^{+\infty} J_n(z)\,t^n
\quad\Rightarrow\quad
\exp\bigg[\frac{x}{2}\bigg(\mathrm{i}t+\frac{1}{\mathrm{i}t}\bigg)\bigg] = \sum_{n=-\infty}^{+\infty} J_n(\mathrm{i}x)\,t^n \nonumber\\
&\qquad\qquad\qquad\Rightarrow\quad
\exp\bigg[\frac{x}{2}\bigg(u+\frac{1}{u}\bigg)\bigg] = \sum_{n=-\infty}^{+\infty} I_n(u)\,t^n\end{aligned}$$ where $$I_n(-x) = I_{-n}(x) = I_n(x) = (-\mathrm{i})^n J_n(\mathrm{i}x) =
\sum_{k=0}^\infty \frac{(x/2)^{n+2k}}{k!(n+k)!}$$
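The scaled combination $I_m(x)\,\mathrm{e}^{-x}$ can be illustrated directly from the series (a Python sketch valid for moderate $x$; production codes would call a library routine for the exponentially scaled Bessel function instead):

```python
from math import exp, factorial

def In_scaled(n, x, kmax=40):
    """Series evaluation of the scaled modified Bessel function
    I_n(x) * exp(-x), the overflow-safe form that numerical
    libraries typically return (truncated at kmax terms)."""
    s = sum((x / 2.0)**(n + 2 * k) / (factorial(k) * factorial(n + k))
            for k in range(kmax))
    return s * exp(-x)
```

The common factor $\mathrm{e}^{-x}$ preserves the recurrence $I_{n-1}(x)-I_{n+1}(x)=(2n/x)I_n(x)$, which provides a convenient internal consistency check.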
The reformulated integral (\[coul\_ax2\]) is then rescaled to fit the interval $x\in(0,1)$ of the Gauss-Legendre quadrature: $$\label{GL_subs}
t^2 = \frac{x^2}{1-x^2},\quad \mathrm{d}t = \frac{\mathrm{d}x}{(1-x^2)\sqrt{1-x^2}},\quad
t\in(0,\infty)\ \Rightarrow\ x\in(0,1)$$ leading to $$\frac{2}{\sqrt{\pi}} 2\pi\int_0^1
\exp\big\{{-}[(z_1-z_2)^2+(\varrho_1-\varrho_2)^2]\tfrac{x^2}{1-x^2}\big\}
\frac{I_m(2\rho_1\rho_2 \frac{x^2}{1-x^2})}{\exp(2\rho_1\rho_2 \frac{x^2}{1-x^2})}
\frac{\mathrm{d}x}{(1-x^2)\sqrt{1-x^2}}$$ As can be seen, the reformulation of the integral (\[coul\_ax\]) did not remove the singularity at $\varrho_1=\varrho_2,\,z_1=z_2$, and the result of the integration remains finite only due to the finite number of integration points of the subsequent Gauss-Legendre quadrature (e.g., a 20-point quadrature gives roughly a two-fold overestimation). A correct removal of this singularity (i.e., its analytical integration) has to take into account the finite grid spacing $\Delta$, as was demonstrated in the previous section for cartesian coordinates (and will be done for the axial case later in this section, see (\[coul\_ax\_0\])).
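The substitution (\[GL\_subs\]) can be sketched in Python (the point count and the Gaussian test integrand are illustrative only):

```python
import numpy as np

def integrate_halfline(g, npts=20):
    """Integrate g(t) over (0, inf) using t = x/sqrt(1 - x^2),
    which maps the half-line onto x in (0, 1), followed by
    Gauss-Legendre quadrature; the Jacobian is (1 - x^2)^(-3/2)."""
    nodes, weights = np.polynomial.legendre.leggauss(npts)
    x = 0.5 * (nodes + 1.0)          # map (-1, 1) -> (0, 1)
    w = 0.5 * weights
    t = x / np.sqrt(1.0 - x * x)
    return np.sum(w * g(t) / (1.0 - x * x)**1.5)
```

A smooth, rapidly decaying integrand such as $\mathrm{e}^{-t^2}$ is integrated accurately this way; the problem described in the text arises only when the transformed integrand inherits the Coulomb singularity.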
However, the representation (\[coul\_ax2\]) makes it possible to evaluate the Coulomb integral precisely in certain circumstances, namely, by assuming a finite charge distribution of the proton, here taken with $\sqrt{\langle r^2\rangle}=0.87\ \mathrm{fm}$ (which is larger than the usual grid spacing of 0.4–0.7 fm). I will assume a Gaussian distribution in the following treatment, instead of the usually employed exponential distribution, since the physical properties should not depend much on the type of the distribution [@Carroll2011]. $$\label{proton_gaus}
\rho(\vec{r}) = \bigg(\frac{a}{\pi}\bigg)^{\!3/2}\mathrm{e}^{-a(\vec{r}-\vec{r}_0)^2},
\qquad \textrm{where }\ a = \frac{3}{2\langle r^2\rangle}$$ The distribution (\[proton\_gaus\]) can be convolved directly with the Gaussian in (\[gaus\_subs\]). The convolution has to be done twice (two smeared protons are interacting); nevertheless, the commutativity and associativity of convolutions simplify the calculation to $$\bigg(\frac{a}{2\pi}\bigg)^{3/2} \int_{-\infty}^{+\infty}
\mathrm{e}^{-(\vec{r_1}-\vec{r})^2 t^2} \mathrm{e}^{-a(\vec{r}-\vec{r}_2)^2/2}
\mathrm{d}\vec{r} = \bigg(\frac{a}{a+2t^2}\bigg)^{3/2}
\exp\bigg[{-}\frac{at^2}{a+2t^2}(\vec{r}_1-\vec{r}_2)^2\bigg]$$ Integral (\[coul\_ax2\]) is then replaced by
$$\frac{2}{\sqrt{\pi}} 2\pi\int_0^\infty \bigg(\frac{a}{a+2t^2}\bigg)^{\!3/2}\!\!
\exp\bigg\{{-}[(z_1\!-\!z_2)^2+(\varrho_1\!-\!\varrho_2)^2]\frac{at^2}{a+2t^2}\bigg\}
\frac{I_m(2\rho_1\rho_2 \frac{at^2}{a+2t^2})}{\exp(2\rho_1\rho_2 \frac{at^2}{a+2t^2})}\mathrm{d}t$$
\
Application of the substitution (\[GL\_subs\]) gives $$\frac{a}{a+2t^2} = \frac{a(1-x^2)}{a+(2-a)x^2},\qquad
\frac{at^2}{a+2t^2} = \frac{ax^2}{a+(2-a)x^2},$$ leading to the integral
$$\frac{2}{\sqrt{\pi}} 2\pi\int_0^1 \Big(\tfrac{a}{a+(2-a)x^2}\Big)^{\!3/2}\!\!
\exp\Big\{{-}[(z_1\!-\!z_2)^2\!+\!(\varrho_1\!-\!\varrho_2)^2]\tfrac{ax^2}{a+(2-a)x^2}\Big\}
\frac{I_m\big(2\rho_1\rho_2 \frac{ax^2}{a+(2-a)x^2}\big)}{\exp\big(2\rho_1\rho_2 \frac{ax^2}{a+(2-a)x^2}\big)}\mathrm{d}x$$
\
which is finite and well defined for any $\varrho,\,z$.
It has to be noted that Skyrme functionals are usually fitted assuming point-like charges, and the Hartree-Fock calculation is done this way as well. The use of a smeared charge in RPA can therefore be considered a violation of self-consistency, and it is disabled in the presented calculations (its use barely changes the results; only a slight downshift (ca. 0.1 MeV) of the spurious state is observed).
Finally, it is also possible to employ an empirical procedure similar to (\[f0\_exact\]), which gives the following replacement of the divergent point in (\[coul\_ax\_int\]): $$\label{coul_ax_0}
\frac{g_m\Big(\tfrac{2\varrho_1\varrho_2}{(z_1-z_2)^2+\varrho_1^2+\varrho_2^2}\Big)}
{\sqrt{(z_1-z_2)^2+\varrho_1^2+\varrho_2^2}} \bigg|_{z_1=z_2,\,\varrho_1=\varrho_2}
\!\!\!\!= \frac{1}{\varrho} \bigg[2\ln\frac{\varrho}{\Delta} + 6.779948935
- 4\sum_{n=1}^m \frac{1}{2n-1} \bigg] + O(\Delta^2)$$ where $\Delta = \mathrm{d}z = \mathrm{d}\varrho$. For the point on the axis ($z_1=z_2$ and $\varrho_1=\varrho_2=0$, assuming $m=0$, otherwise the contribution is zero), the first term in (\[axial\_int\]) is replaced as $$\frac{\Delta}{12} \frac{\rho_2(0,z_2) g_0(0)}{\sqrt{0^2+0^2+0^2}}
\quad\mapsto\quad 2.1770180559\,\rho_2(0,z_1)$$ For all other points, the integral defining the function $g_m(x)$ (\[coul\_ax\_g\]) can be calculated as a simple sum over an equidistantly sampled integrand, which converges rapidly (due to the periodicity), or by the Taylor series (\[gm\_taylor\]) for small $x$ and $m>0$, where the direct integration runs into numerical problems (subtraction of large numbers to obtain a small one). It is also advisable to use extended precision (long double) internally during the calculation of $g_m(x)$, to obtain an accurate result in double precision.
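Both evaluation routes for $g_m(x)$ can be sketched in Python (double precision only; the sampling and truncation parameters are illustrative):

```python
import numpy as np
from math import pi, factorial

def g_m_direct(m, x, n=2000):
    """g_m(x) = int_{-pi}^{pi} cos(m*phi)/sqrt(1 - x*cos(phi)) d phi
    by an equidistant sum (trapezoid rule on a periodic integrand,
    which converges rapidly for x away from 1)."""
    phi = np.linspace(-pi, pi, n, endpoint=False)
    return (2.0 * pi / n) * np.sum(np.cos(m * phi)
                                   / np.sqrt(1.0 - x * np.cos(phi)))

def g_m_taylor(m, x, kmax=30):
    """Taylor series (gm_taylor), preferable for small x and m > 0."""
    def dfact(n):  # double factorial, with (-1)!! = 1
        r = 1
        while n > 1:
            r *= n
            n -= 2
        return r
    return 2.0 * pi * sum(dfact(4 * k + 2 * m - 1)
                          / (factorial(k) * factorial(k + m))
                          * (x / 4.0)**(m + 2 * k) for k in range(kmax))
```

For small $x$ the two routes agree to high accuracy, while close to $x=1$ only the direct sum (with a growing number of sample points) remains usable.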
Pairing interaction {#sec_pair}
-------------------
The short-range part of the nuclear interaction gives rise to a superfluid phase transition in open-shell nuclei. This interaction manifests itself in the even-odd staggering of nuclear masses and separation energies, and is therefore denoted as *pairing*. Pairing was implemented at the BCS level, so that the HF+BCS ground state is $$\label{BCS_gs}
|\mathrm{BCS}\rangle = \prod_{\beta}^{m_\beta>0}(u_\beta+v_\beta\hat{a}_\beta^+\hat{a}_{\bar{\beta}}^+)|0\rangle,
\qquad\textrm{where }u_\beta^2 + v_\beta^2 = 1.$$ Since the Skyrme interaction is assumed only in the $ph$ channel, the pairing interaction is added separately and acts only in the $pp$ channel, either as a “volume pairing” ($\hat{V}_\mathrm{pair}$) or as a “surface pairing” ($\hat{V}'_\mathrm{pair}$):
\[V\_pair\] $$\begin{aligned}
\label{V_pair1}
\hat{V}_\mathrm{pair} &= \sum_{q=p,n}\sum_{ij\in q}^{i<j} V_q\delta(\vec{r}_i-\vec{r}_j) \\
\label{V_pair2}
\hat{V}'_\mathrm{pair} &= \sum_{q=p,n}\sum_{ij\in q}^{i<j}
V_q\bigg(1-\frac{\rho(\vec{r}_i)}{\rho_0}\bigg)\delta(\vec{r}_i-\vec{r}_j)\end{aligned}$$
The matrix element between two many-body states (Slater determinants) differing by two wavefunctions is then: $$\label{pair_me}
\langle\alpha\beta|\hat{V}_\mathrm{pair}|\gamma\delta\rangle =
\iint \psi_\alpha^\dagger(\vec{r}_1)\psi_\beta^\dagger(\vec{r}_2)\hat{V}_\mathrm{pair}
\big[\psi_\gamma(\vec{r}_1)\psi_\delta(\vec{r}_2)
-\psi_\delta(\vec{r}_1)\psi_\gamma(\vec{r}_2)\big]\mathrm{d}\vec{r}_1\mathrm{d}\vec{r}_2$$
For further evaluation of the matrix elements, I will explicitly separate the spin part ($\chi$) of the wavefunction: $$\begin{aligned}
\psi_\alpha(\vec{r}) = \sum_s^{\pm1/2} \psi_{\alpha s}(\vec{r})\chi_s
&= R_\alpha(r)\sum_s^{\pm1/2}
C_{l_\alpha,m_\alpha-s,\frac{1}{2},s}^{j_\alpha,m_\alpha}
Y_{l_\alpha,m_\alpha-s}(\vartheta,\varphi)\chi_s, \\
&\qquad\qquad\textrm{where }\chi_{+1/2} = \binom{1}{0},\ \chi_{-1/2} = \binom{0}{1}
\nonumber\end{aligned}$$ In the pairing channel, the wavefunctions are coupled to pairs $(\alpha\beta)$ and $(\gamma\delta)$, and the $\delta$-interaction does not depend on spin, so it is useful to decompose the spin part of the two-body wavefunction into triplet and singlet components. The matrix element can then be decomposed schematically as $$\sum_{s_1 s_2}^{\pm1/2} f_{s_1 s_2}^* g_{s_1 s_2} = \sum_{s_1 s_2}
\Big(\sum_{JM}C_{\frac{1}{2}s_1\frac{1}{2}s_2}^{JM} f_{JM}^*\Big)
\Big(\sum_{J'M'}C_{\frac{1}{2}s_1\frac{1}{2}s_2}^{J'M'} g_{J'M'}^{\phantom{*}}\Big)
= \sum_{JM} f_{JM}^*g_{JM}^{\phantom{*}}$$ where $J,J'\in{0,1}$, and the symbols $f_{JM}$ and $g_{JM}^{\phantom{*}}$ were defined using orthogonality of Clebsch-Gordan coefficients: $$f_{JM} = \sum_{s_1 s_2}C_{\frac{1}{2}s_1\frac{1}{2}s_2}^{JM} f_{s_1 s_2},\qquad
f_{s_1 s_2} = \sum_{JM}C_{\frac{1}{2}s_1\frac{1}{2}s_2}^{JM} f_{JM}$$ $$f_{00} = \frac{f_{\uparrow\downarrow}-f_{\downarrow\uparrow}}{\sqrt{2}},\quad
f_{11} = f_{\uparrow\uparrow},\quad
f_{10} = \frac{f_{\uparrow\downarrow}+f_{\downarrow\uparrow}}{\sqrt{2}},\quad
f_{1,-1} = f_{\downarrow\downarrow}$$ Evaluation of the pairing matrix element of $\delta$-force (\[V\_pair1\]) (and similarly for (\[V\_pair2\])) leads to the cancellation of the triplet component due to antisymmetrization. $$\begin{aligned}
\langle\alpha\bar{\alpha}|\hat{V}_\mathrm{pair}|\beta\bar{\beta}\rangle
&= V_q\sum_{s_1s_2}\int\psi_{\alpha s_1}^*(\vec{r})\psi_{\bar{\alpha}s_2}^*(\vec{r})
\big[\psi_{\beta s_1}(\vec{r})\psi_{\bar{\beta}s_2}(\vec{r})-
\psi_{\bar{\beta}s_1}(\vec{r})\psi_{\beta s_2}(\vec{r})\big]\mathrm{d}^3 r
\nonumber\\
&= V_q\int(\psi_{\alpha\uparrow}^*\psi_{\bar{\alpha}\downarrow}^* -
\psi_{\alpha\downarrow}^*\psi_{\bar{\alpha}\uparrow}^*)
(\psi_{\beta\uparrow}\psi_{\bar{\beta}\downarrow}-\psi_{\beta\downarrow}\psi_{\bar{\beta}\uparrow})\,\mathrm{d}^3 r \quad (\alpha,\beta\in q)\end{aligned}$$ I will denote one of the parentheses as $\sqrt{2}\,\big[\psi_{\beta}\psi_{\bar{\beta}}\big]_{00}$ and define the pairing density $\kappa(\vec{r})$, using $\psi_{\bar{\beta}} = \mathrm{i}\sigma_y\psi_\beta^*$ and Bogoliubov transformation \[bogoliubov\] to quasiparticles: $$\begin{aligned}
\label{pair_dens}
\hat{\kappa}(\vec{r}) &= -\sqrt{2}\sum_{\beta>0}\big[\psi_\beta(\vec{r})\psi_{\bar{\beta}}(\vec{r})\big]_{00}
(\hat{a}_\beta^+\hat{a}_{\bar{\beta}}^+ +\hat{a}_{\bar{\beta}}\hat{a}_\beta) \\
\label{pair_dens_qp}
&= \sum_{\beta>0} \psi_\beta^\dagger(\vec{r})\psi_\beta(\vec{r})
\big[2 u_\beta v_\beta(1-\hat{\alpha}_\beta^+\hat{\alpha}_\beta-\hat{\alpha}_{\bar{\beta}}^+\hat{\alpha}_{\bar{\beta}})
+ (u_\beta^2-v_\beta^2)
(\hat{\alpha}_\beta^+\hat{\alpha}_{\bar{\beta}}^+ + \hat{\alpha}_{\bar{\beta}}\hat{\alpha}_\beta) \big]\end{aligned}$$ In spherical symmetry, pairing is applied only in the monopole part of the interaction, so the summation over $m_\beta$ leads to $$\label{summ_pair}
\sum_{m_\beta>0}\psi_\beta^\dagger(\vec{r})\psi_\beta(\vec{r}) = \frac{2j_\beta+1}{8\pi} R_\beta^2(r)$$
In the formalism of density functional theory, it is necessary to reformulate the two-body pairing interaction (\[pair\_me\]) into a functional of the pairing density (\[pair\_dens\]). This is done by comparing the expectation values of $\hat{V}_\mathrm{pair}$ and $\hat{\kappa}_q$ in the BCS ground state (\[BCS\_gs\]). $$\begin{aligned}
\langle\mathrm{BCS}|\hat{\kappa}_q(\vec{r})|\mathrm{BCS}\rangle &=
\sum_{\beta>0}^{\beta\in q} 2 u_\beta v_\beta
\psi_\beta^\dagger(\vec{r})\psi_\beta(\vec{r}) \stackrel{\mathrm{def}}{=} \kappa_q(\vec{r}) \\
\langle\mathrm{BCS}|\hat{V}_\mathrm{pair}|\mathrm{BCS}\rangle &=
\sum_{\beta>0}v_\beta^2
\langle\beta\bar{\beta}|\hat{V}_\mathrm{pair}|\beta\bar{\beta}\rangle
+ \sum_{\alpha,\beta>0}^{\alpha\neq\beta}u_\alpha v_\alpha u_\beta v_\beta
\langle\alpha\bar{\alpha}|\hat{V}_\mathrm{pair}|\beta\bar{\beta}\rangle \nonumber\\
&= \frac{1}{4}\sum_{q=p,n}V_q\int\kappa_q^2(\vec{r})\mathrm{d}\vec{r}
+ \sum_{\beta>0}v_\beta^4
\langle\beta\bar{\beta}|\hat{V}_\mathrm{pair}|\beta\bar{\beta}\rangle\end{aligned}$$ The last term corresponds to an interaction in the $ph$ channel and is therefore dropped (it is already included in the non-pairing part of the functional). The pairing part of the density functional is therefore
$$\begin{aligned}
\mathcal{H}_\mathrm{pair} &= \frac{1}{4}\sum_{q=p,n}V_q\int\kappa_q^2(\vec{r})\mathrm{d}\vec{r} \\
\mathcal{H}'_\mathrm{pair} &= \frac{1}{4}\sum_{q=p,n}
V_q\int\bigg(1-\frac{\rho(\vec{r})}{\rho_0}\bigg)\kappa_q^2(\vec{r})\mathrm{d}\vec{r}\end{aligned}$$
Reduced matrix element of the pairing density for RPA in the spherical symmetry can be derived by rewriting the second part of (\[pair\_dens\_qp\]). $$\begin{aligned}
\hat{\alpha}_\beta^+\hat{\alpha}_{\bar{\beta}}^+ + \hat{\alpha}_{\bar{\beta}}\hat{\alpha}_\beta &= (-1)^{l_\beta+j_\beta+m_\beta}
(\hat{\alpha}_\beta^+\hat{\alpha}_{-\beta}^+ - \hat{\alpha}_{\bar{\beta}}\hat{\alpha}_{\overline{-\beta}}) \\
&=
(-1)^{l_\beta}\sqrt{2j_\beta+1}\,C_{j_\beta,m_\beta,j_\beta,-m_\beta}^{0,0}
(-\hat{\alpha}_\beta^+\hat{\alpha}_{-\beta}^+ + \hat{\alpha}_{\bar{\beta}}\hat{\alpha}_{\overline{-\beta}})\end{aligned}$$ Comparison of this expression and (\[pair\_dens\_qp\])+(\[summ\_pair\]) with (\[rme\]) then leads to r.m.e. $$\kappa_{\beta,-\beta}^{J=0}(r) = \sqrt{\frac{2j_\beta+1}{4\pi}}\,(u_\beta^2-v_\beta^2)R_\beta^2(r)$$ which has to be further divided by $\sqrt{2}$ to provide correct treatment in the convention of omitted duplicate pairs (\[order2qp\]).
In fact, the $\delta$-interaction gives rise to a divergent pairing energy. This problem can be circumvented by using a finite-range pairing interaction, as is done in the Gogny force, or by applying a cutoff weight $f_\beta$ to the pairing density, as is usually done for Skyrme [@Bender2000]: $$\label{pair_dens+f}
\hat{\kappa}(\vec{r}) = \sum_{\beta>0} f_\beta\psi_\beta^\dagger(\vec{r})\psi_\beta(\vec{r})
\big[2 u_\beta v_\beta(1-\hat{\alpha}_\beta^+\hat{\alpha}_\beta-\hat{\alpha}_{\bar{\beta}}^+\hat{\alpha}_{\bar{\beta}})
+ (u_\beta^2-v_\beta^2)
(\hat{\alpha}_\beta^+\hat{\alpha}_{\bar{\beta}}^+ + \hat{\alpha}_{\bar{\beta}}\hat{\alpha}_\beta) \big]$$ $$f_\beta = \frac{1}{1+\exp[10(\epsilon_\beta-\lambda_q-\Delta E_q)/\Delta E_q]}$$ The cutoff weight is meant to damp the higher-lying levels, and the cutoff parameter $\Delta E_q$ (usually in the range of 5–9 MeV) is adjusted during the HF+BCS iterations, according to the actual level density, to yield $$2\sum_{\beta\in q}^{m_\beta>0} f_\beta = N_q + 1.65N_q^{2/3} \qquad\textrm{($N_q$ is the particle number).}$$ The pairing strengths $V_p,\,V_n$ (which are negative), obtained with this condition, are given in [@Reinhard1999] for SkM\*, SkT6, SLy4, SkI1, SkI3, SkI4, SkP, SkO, and in [@Guo2007] for SLy6.
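The adjustment of the cutoff parameter can be sketched as a bisection in Python (the level scheme, Fermi energy, and bracketing interval are illustrative; the factor 2 counts the time-reversal degeneracy):

```python
import numpy as np

def cutoff_weights(eps, lam, dE):
    """Smooth pairing-space cutoff f_beta from the text."""
    arg = np.clip(10.0 * (eps - lam - dE) / dE, -500.0, 500.0)
    return 1.0 / (1.0 + np.exp(arg))

def adjust_cutoff(eps, lam, N, lo=0.1, hi=50.0):
    """Bisect the cutoff parameter Delta E_q so that
    2*sum(f_beta) = N + 1.65*N**(2/3); single-particle energies eps
    and Fermi energy lam are assumed given (illustrative only)."""
    target = N + 1.65 * N**(2.0 / 3.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 2.0 * np.sum(cutoff_weights(eps, lam, mid)) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In an actual HF+BCS iteration this readjustment would be repeated together with the update of the single-particle energies and the Fermi energy.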
Center-of-mass correction of the kinetic energy {#sec_kin-cm}
-----------------------------------------------
A many-body wavefunction in the form of a Slater determinant does not guarantee that the center of mass is fixed at the center of the coordinate system. In fact, it has a certain distribution around the center, and the expectation value of the linear momentum fluctuates as well. In this way, the mean-field theory breaks the translational symmetry, which can be approximately restored by subtracting the center-of-mass kinetic energy from the total ground-state energy [@Reinhard2011]. $$\begin{aligned}
\mathcal{H}_\mathrm{c.m.}
&= -\frac{1}{2M}{\langle\mathrm{HF}|\hat{P}_\mathrm{tot}^2|\mathrm{HF}\rangle}_\mathrm{Slater} \nonumber\\
\label{H_cm}
&= \frac{-1}{2(Zm_p + Nm_n)}\bigg(
\sum_{i}{\langle\mathrm{HF}|\hat{p}_i^2|\mathrm{HF}\rangle}_\mathrm{Slater} +
\sum_{i\neq j} {\langle\mathrm{HF}|\hat{\vec{p}}_i\cdot\hat{\vec{p}}_j|\mathrm{HF}\rangle}_\mathrm{Slater} \bigg)\end{aligned}$$ The first term in (\[H\_cm\]) is similar to the single-particle kinetic energy and can be included by a rescaling of the nucleon mass (before or after the variation). The second term looks like a two-body interaction, for which the direct term vanishes in spherical symmetry (the operator $\hat{p}$ shifts the angular momentum $l$ by $\pm1$ and changes the parity), and only the exchange term contributes.
$$\begin{aligned}
\label{me_cm2a}
\sum_{j\neq k} \langle\mathrm{HF}|\hat{\vec{p}}_j\cdot\hat{\vec{p}}_k|\mathrm{HF}\rangle_\mathrm{Slater}
&= \hbar^2\sum_{\alpha\beta}v_\alpha^2 v_\beta^2
\langle\alpha\beta|\vec{\nabla}_1\cdot\vec{\nabla}_2|\beta\alpha\rangle \\
&= \hbar^2\sum_{\alpha\beta}v_\alpha^2 v_\beta^2\!\!\sum_\mu^{-1,0,1}\!\!(-1)^\mu
\langle\alpha|\nabla_\mu|\beta\rangle\langle\beta|\nabla_{-\mu}|\alpha\rangle\end{aligned}$$
Matrix element of the derivative operator is evaluated in spherical symmetry according to [@Varshalovich1988 (7.1.24)] and (\[Rpm\]): $$\label{deriv_me}
\langle\alpha|\nabla_\mu|\beta\rangle
= \frac{(-1)^{j_\beta+l_\beta-\frac{1}{2}}}{\sqrt{2l_\alpha+1}}
\bigg(\int R_\alpha(r)R_\beta^{(\pm)}(r)r^2\mathrm{d}r\bigg)
\begin{Bmatrix} j_\beta & j_\alpha & 1 \\ l_\alpha & l_\beta & \frac{1}{2} \end{Bmatrix}
C_{j_\beta,m_\beta,1,\mu}^{j_\alpha,m_\alpha+\mu}$$ and a similar expression is found for $\langle\beta|\nabla_{-\mu}|\alpha\rangle$ (with $R_\alpha^{(\mp)}$). In the following, I will assume that the selection rules on $j_\alpha,j_\beta,l_\alpha,l_\beta$ are satisfied. The Clebsch-Gordan coefficients are then eliminated by employing their symmetry [@Varshalovich1988 (8.4.10)] and orthogonality [@Varshalovich1988 (8.1.8)], and including the summation over $m_\beta$. $$\begin{aligned}
\sum_{\mu,m_\beta}(-1)^\mu C_{j_\beta,m_\beta,1,\mu}^{j_\alpha,m_\alpha+\mu}
C_{j_\alpha,m_\alpha,1,-\mu}^{j_\beta,m_\beta-\mu}
&= (-1)^{j_\alpha-j_\beta}\sqrt{\frac{2j_\beta+1}{2j_\alpha+1}}
\sum_{\mu,m_\beta} C_{j_\beta,m_\beta,1,\mu}^{j_\alpha,m_\alpha+\mu}
C_{j_\beta,m_\beta,1,\mu}^{j_\alpha,m_\alpha+\mu} \\
&= (-1)^{j_\alpha-j_\beta}\sqrt{\frac{2j_\beta+1}{2j_\alpha+1}}\end{aligned}$$ Summation over $m_\alpha$ then gives an additional factor of $(2j_\alpha+1)$. The second radial integral is modified by integration *per partes*, taking into account the definition of $R_\alpha^{(\pm)}$ (\[Rpm\]) and $l_\alpha = l_\beta\pm1$. $$\int R_\beta(r)R_\alpha^{(\mp)}(r)r^2\mathrm{d}r =
\sqrt{\frac{(2j_\alpha+1)(2l_\beta+1)}{(2j_\beta+1)(2l_\alpha+1)}}
\int R_\alpha(r)R_\beta^{(\pm)}(r)r^2\mathrm{d}r$$ Matrix element (\[me\_cm2a\]) is then $$\label{me_cm2b}
\sum_{m_\alpha m_\beta}
\langle\alpha\beta|\vec{\nabla}_1\cdot\vec{\nabla}_2|\beta\alpha\rangle
= -\frac{2j_\alpha+1}{2l_\alpha+1}
\bigg(\int R_\alpha(r) R_\beta^{(\pm)}(r)r^2\mathrm{d}r\bigg)^{\!2}
\begin{Bmatrix} j_\beta & j_\alpha & 1 \\ l_\alpha & l_\beta &\frac{1}{2} \end{Bmatrix}^2$$
Variation of $\psi_\alpha$ in the Hartree-Fock manner with general wavefunctions then gives a non-local term in the single-particle Hamiltonian (besides the common local terms, such as the single-particle kinetic term, Skyrme and direct Coulomb terms, collected in $\hat{h}_\mathrm{local}$) $$\varepsilon_\alpha R_\alpha(r) = \hat{h}_\mathrm{local}(r)R_\alpha(r)
+ const\cdot\sum_\beta v_\beta^2 R_\beta^{(\pm)}(r)
\int R_\beta^{(\pm)}(r')R_\alpha(r')r'^2\mathrm{d}r'$$ The numerical difficulties involved in the evaluation of the exchange integral can be avoided in the basis of the spherical harmonic oscillator. The integral (\[me\_cm2b\]) is then evaluated analytically in terms of the density matrix $D_{\nu\nu'}^{(j,l)}$ (given in large square brackets). $$\begin{aligned}
\sum_{\alpha\in(j,l,m)}\!\!\!v_\alpha^2 R_\alpha(r_1)R_\alpha(r_2) &=
\sum_{\nu,\nu'}\bigg[\sum_\alpha v_\alpha^2 U_{\nu\alpha}^{(j,l)} U_{\nu'\alpha}^{(j,l)}\bigg] R_{\nu l}(r_1) R_{\nu'l}(r_2) \nonumber\\
&= \sum_{\nu,\nu'} D_{\nu\nu'}^{(j,l)} R_{\nu l}(r_1) R_{\nu'l}(r_2)\end{aligned}$$ Products of wavefunctions shifted in $l$ by the differentiation are then evaluated using (\[Rpm\_sho\]).
$$\begin{aligned}
\!\sum_{\alpha\in(j,l,m)}\!\!\!v_\alpha^2 R_\alpha^{(+)}(r_1)R_\alpha^{(+)}(r_2) &=
\tfrac{1}{b^2}(2j+1)(l+1)(2l+3)\sum_{\nu,\nu'}
\sum_\alpha v_\alpha^2 U_{\nu\alpha}^{(j,l)} U_{\nu'\alpha}^{(j,l)}
\nonumber\\[-4pt] &\qquad\qquad\cdot
\big[\sqrt{\nu+l+3/2}\,R_{\nu,l+1}(r_1) + \sqrt{\nu}\,R_{\nu-1,l+1}(r_1)\big]
\nonumber\\ &\qquad\qquad\cdot
\big[\sqrt{\nu'+l+3/2}\,R_{\nu',l+1}(r_2) + \sqrt{\nu'}\,R_{\nu'-1,l+1}(r_2)\big]
\nonumber\\
&= \tfrac{1}{b^2}(2j+1)(l+1)(2l+3)\sum_{\nu,\nu'} D_{\nu\nu'}^{(j,l{+})}
R_{\nu,l+1}(r_1) R_{\nu',l+1}(r_2) \\
\!\sum_{\alpha\in(j,l,m)}\!\!\!v_\alpha^2 R_\alpha^{(-)}(r_1)R_\alpha^{(-)}(r_2) &=
\tfrac{1}{b^2}(2j+1)l(2l-1)\sum_{\nu,\nu'}
\sum_\alpha v_\alpha^2 U_{\nu\alpha}^{(j,l)} U_{\nu'\alpha}^{(j,l)}
\nonumber\\[-4pt] &\qquad\qquad\cdot
\big[\sqrt{\nu+l+1/2}\,R_{\nu,l-1}(r_1) + \sqrt{\nu+1}\,R_{\nu+1,l-1}(r_1)\big]
\nonumber\\ &\qquad\qquad\cdot
\big[\sqrt{\nu'+l+1/2}\,R_{\nu',l-1}(r_2) + \sqrt{\nu'+1}\,R_{\nu'+1,l-1}(r_2)\big]
\nonumber\\
&= \tfrac{1}{b^2}(2j+1)l(2l-1)\sum_{\nu,\nu'} D_{\nu\nu'}^{(j,l{-})}
R_{\nu,l-1}(r_1) R_{\nu',l-1}(r_2)\end{aligned}$$
\
where I defined modified density matrices $D_{\nu\nu'}^{(j,l{\pm})}$, which can be calculated easily from the standard density matrix $D_{\nu\nu'}^{(j,l)}$.
$$\begin{aligned}
D_{\nu\nu'}^{(j,l{+})} &= \sqrt{\nu+l+3/2}
\big(\sqrt{\nu'+l+3/2}\,D_{\nu\nu'}^{(j,l)}
+ \sqrt{\nu'+1}\,D_{\nu,\nu'+1}^{(j,l)}\big) \nonumber\\
&\quad+\sqrt{\nu+1} \big(\sqrt{\nu'+l+3/2}\,D_{\nu+1,\nu'}^{(j,l)}
+ \sqrt{\nu'+1}\,D_{\nu+1,\nu'+1}^{(j,l)}\big) \\
D_{\nu\nu'}^{(j,l{-})} &= \sqrt{\nu+l+1/2}
\big(\sqrt{\nu'+l+1/2}\,D_{\nu\nu'}^{(j,l)}
+ \sqrt{\nu'}\,D_{\nu,\nu'-1}^{(j,l)}\big) \nonumber\\
&\quad+\sqrt{\nu} \big(\sqrt{\nu'+l+1/2}\,D_{\nu-1,\nu'}^{(j,l)}
+ \sqrt{\nu'}\,D_{\nu-1,\nu'-1}^{(j,l)}\big)\end{aligned}$$
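The construction of $D^{(j,l+)}$ from $D^{(j,l)}$ is a small array operation; a Python sketch (out-of-range oscillator indices are simply dropped, and the function name is mine):

```python
import numpy as np

def dmat_plus(D, l):
    """Modified density matrix D^{(j,l+)} built from D^{(j,l)};
    nu, nu' run over the oscillator radial quantum numbers, and
    terms reaching beyond the truncated basis are dropped."""
    n = D.shape[0]
    Dp = np.zeros((n, n))
    for nu in range(n):
        for nup in range(n):
            a = np.sqrt(nu + l + 1.5)
            b = np.sqrt(nup + l + 1.5)
            s = a * b * D[nu, nup]
            if nup + 1 < n:
                s += a * np.sqrt(nup + 1) * D[nu, nup + 1]
            if nu + 1 < n:
                s += np.sqrt(nu + 1) * b * D[nu + 1, nup]
            if nu + 1 < n and nup + 1 < n:
                s += np.sqrt((nu + 1) * (nup + 1)) * D[nu + 1, nup + 1]
            Dp[nu, nup] = s
    return Dp
```

For a rank-1 (pure-state) density matrix $D_{\nu\nu'}=c_\nu c_{\nu'}$ the result factorizes again, $D^{(j,l+)}_{\nu\nu'}=v_\nu v_{\nu'}$ with $v_\nu=\sqrt{\nu+l+3/2}\,c_\nu+\sqrt{\nu+1}\,c_{\nu+1}$, which serves as a consistency check.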
The matrix element (\[me\_cm2b\]) can now be summed within the corresponding spaces $(j_\alpha,l_\alpha),\,(j_\beta,l_\beta)$, using the orthogonality of $R_{\nu l}(r)$, to get:
$$\begin{aligned}
\!\!\!\sum_{\alpha\beta}^{l_\alpha=l_\beta+1}\!\!\!\! \langle\alpha\beta|\vec{\nabla}_1\!\cdot\!\vec{\nabla}_2|\beta\alpha\rangle
&= -\frac{(2j_\alpha+1)(2j_\beta+1)l_\alpha}{b^2}
\begin{Bmatrix} j_\beta & \!j_\alpha\! & 1 \\
l_\alpha & \!l_\alpha-1\! & \frac{1}{2} \end{Bmatrix}^2
\sum_{\nu\nu'} D_{\nu\nu'}^{(j_\alpha,l_\alpha)} D_{\nu\nu'}^{(j_\beta,l_\beta{+})} \\
\!\!\!\sum_{\alpha\beta}^{l_\alpha=l_\beta-1}\!\!\!\! \langle\alpha\beta|\vec{\nabla}_1\!\cdot\!\vec{\nabla}_2|\beta\alpha\rangle
&= -\frac{(2j_\alpha+1)(2j_\beta+1)l_\beta}{b^2}
\begin{Bmatrix} j_\beta & \!j_\alpha\! & 1 \\
l_\beta-1 & \!l_\beta\! & \frac{1}{2} \end{Bmatrix}^2
\sum_{\nu\nu'} D_{\nu\nu'}^{(j_\alpha,l_\alpha)} D_{\nu\nu'}^{(j_\beta,l_\beta{-})}\end{aligned}$$
where $\sum_{\alpha\beta}$ also runs over $m$, while $\sum_{\nu\nu'}$ is understood for a fixed $m$, since $D_{\nu\nu'}$ is degenerate in $m$. Evaluation of the $6j$ symbol according to [@Varshalovich1988 tab. 9.1] gives $$\begin{Bmatrix} j' & j & 1 \\
l & l-1 & \frac{1}{2} \end{Bmatrix}^2 = \Bigg\{ \begin{array}{ll}
\frac{1}{2j(2j+1)^2(j+1)} & \textrm{for }j' = j = l-\frac{1}{2} \\[4pt]
\frac{1}{4jl} & \textrm{for }j' = j-1
\end{array}$$ leading to
$$\begin{aligned}
\sum_{\alpha\beta}^{j_\alpha=j_\beta}\! v_\alpha^2 v_\beta^2\langle\alpha\beta|\vec{\nabla}_1\!\cdot\!\vec{\nabla}_2|\beta\alpha\rangle
&= \frac{-(2j_\alpha+1)}{b^2(2l_\alpha+1)(2l_\beta+1)}
\sum_{\nu\nu'} D_{\nu\nu'}^{(j_\alpha,l_\alpha)} D_{\nu\nu'}^{(j_\beta,l_\beta{\pm})} \\
\!\!\!\sum_{\alpha\beta}^{j_\alpha=j_\beta\pm1}\!\!\!\! v_\alpha^2 v_\beta^2\langle\alpha\beta|\vec{\nabla}_1\!\cdot\!\vec{\nabla}_2|\beta\alpha\rangle
&= -\frac{(2j_\alpha+1)(2j_\beta+1)}{2b^2(j_\alpha+j_\beta+1)}
\sum_{\nu\nu'} D_{\nu\nu'}^{(j_\alpha,l_\alpha)} D_{\nu\nu'}^{(j_\beta,l_\beta{\pm})}\end{aligned}$$
$$\langle\nu,j,l|\hat{h}_\mathrm{c.m.ex}|\nu',j,l\rangle = \frac{\hbar^2}{Mb^2}
\sum_{\scriptstyle l'=l\pm1} \bigg[\frac{D_{\nu\nu'}^{(j,l'{\mp})}}{(2l+1)(2l'+1)}
+ \frac{(2j'+1)D_{\nu\nu'}^{(j\pm1,l'{\mp})}}{2(j+j'+1)}\bigg]$$
In open-shell nuclei, the center-of-mass term contributes also in the pairing channel, according to (\[deriv\_me\]): $$\begin{aligned}
\mathcal{H}_\mathrm{c.m.} &= \frac{\hbar^2}{2M} \frac{1}{4}\sum_{\alpha\beta}
u_\alpha^2 v_\alpha^2 u_\beta^2 v_\beta^2
{\langle\alpha\bar{\alpha}|2\vec{\nabla}_1\cdot\vec{\nabla}_2|\beta\bar{\beta}\rangle}_\mathrm{Slater} \nonumber\\
&= \frac{\hbar^2}{2M}\sum_{\alpha\beta}u_\alpha^2 v_\alpha^2 u_\beta^2 v_\beta^2
\sum_\mu \langle\bar{\alpha}|\nabla_{-\mu}|\bar{\beta}\rangle\langle\alpha|\nabla_\mu|\beta\rangle \nonumber\\
\label{cm_pair}
&= \frac{\hbar^2}{2M}\sum_{\alpha\beta}^{(j,l,\not m)}
u_\alpha^2 v_\alpha^2 u_\beta^2 v_\beta^2 \frac{2j_\alpha+1}{2l_\alpha+1}
\bigg(\int R_\alpha(r) R_\beta^{(\pm)}(r)r^2\mathrm{d}r\bigg)^{\!2}
\begin{Bmatrix} j_\beta & j_\alpha & 1 \\ l_\alpha & l_\beta &\frac{1}{2} \end{Bmatrix}^2\end{aligned}$$ where the $1/4$ in the first line is due to the summation over both positive and negative $m$, the exchange term is absorbed into the direct term by $$\psi_{\bar{\beta}}(\vec{r}_1)\psi_\beta(\vec{r}_2) =
-\psi_{-\beta}^{\phantom{|}}(\vec{r}_1)\psi_{\overline{-\beta}}(\vec{r}_2),$$ and the summation in (\[cm\_pair\]) does not run over $m$, since it was already included, as in (\[me\_cm2b\]).
It is also possible to include $\mathcal{H}_\mathrm{c.m.}$ in the residual interaction of RPA, which seems necessary for self-consistency when starting from a ground state calculated in the variation-after-projection (VAP) approach. RPA already restores the symmetry to a certain degree, limited mainly by the size of the model space, and it can be expected that the inclusion of $\mathcal{H}_\mathrm{c.m.}$ will make the separation of the spurious motion even better. The derivation of the residual interaction from the two-body part of $\mathcal{H}_\mathrm{c.m.}$ is a bit cumbersome, as it requires taking into account both direct and exchange terms, which in spherical symmetry require recoupling of the angular momenta. $$\hat{V}_\mathrm{res}^\mathrm{(c.m.)} = \frac{\hbar^2}{2M} \sum_{\alpha\beta\gamma\delta} \sum_\mu (-1)^\mu
\langle\bar{\alpha}|\nabla_{-\mu}|\beta\rangle \langle\bar{\gamma}|\nabla_\mu|\delta\rangle
:\! \hat{a}_{\bar{\alpha}}^+\hat{a}_\beta^{\phantom{|}}
\hat{a}_{\bar{\gamma}}^+\hat{a}_\delta^{\phantom{|}} \!:$$ The matrix element of the derivative operator is, according to (\[deriv\_me\]) and (\[t\_inv\]): $$\langle\bar{\alpha}|\nabla_\mu|\beta\rangle =
\frac{(-1)^{\mu+j_\beta+\frac{1}{2}}}{\sqrt{3}\,(2l_\alpha+1)}
\bigg(\int R_\alpha^{(0)}R_\beta^{(\pm)}r^2\mathrm{d}r\bigg)
\begin{Bmatrix} l_\alpha & l_\beta & 1 \\ j_\beta & j_\alpha & \frac{1}{2} \end{Bmatrix}
C_{j_\alpha m_\alpha j_\beta m_\beta}^{1,-\mu}
= \langle\bar{\beta}|\nabla_\mu|\alpha\rangle$$ The symmetry $(\alpha\leftrightarrow\beta)$ is then applied together with a transformation to quasiparticles (\[bogoliubov\]): $$\begin{aligned}
\hat{a}_{\bar{\alpha}}^+\hat{a}_\beta^{\phantom{|}} \mapsto
\tfrac{1}{2}\big(\hat{a}_{\bar{\alpha}}^+\hat{a}_\beta^{\phantom{|}} \!+\!
\hat{a}_{\bar{\beta}}^+\hat{a}_\alpha^{\phantom{|}}\big) &=
\tfrac{1}{2}\big[
\big(u_\alpha\hat{\alpha}_{\bar{\alpha}}^+ \!-\! v_\alpha\hat{\alpha}_\alpha^{\phantom{|}}\big)
\big(u_\beta\hat{\alpha}_\beta^{\phantom{|}} \!+\! v_\beta\hat{\alpha}_{\bar{\beta}}^+\big) \nonumber\\
&\qquad{}+\big(u_\beta\hat{\alpha}_{\bar{\beta}}^+ \!-\! v_\beta\hat{\alpha}_\beta^{\phantom{|}}\big)
\big(u_\alpha\hat{\alpha}_\beta^{\phantom{|}} \!+\! v_\alpha\hat{\alpha}_{\bar{\beta}}^+\big)
\big] \nonumber\\
&= \tfrac{1}{2}\big[u_{\alpha\beta}^{(p{-})}
\big(\hat{\alpha}_{\bar{\alpha}}^+\hat{\alpha}_\beta^{\phantom{|}}
+ \hat{\alpha}_{\bar{\beta}}^+\hat{\alpha}_\alpha^{\phantom{|}}\big)
+ u_{\alpha\beta}^{(-)}\big(\hat{\alpha}_{\bar{\alpha}}^+\hat{\alpha}_{\bar{\beta}}^+
+ \hat{\alpha}_\alpha^{\phantom{|}}\hat{\alpha}_\beta^{\phantom{|}}\big)\big]\end{aligned}$$ where, besides already defined pairing factors (\[u\_ab\]), I introduced a corresponding factor for the particle-particle channel: $$u_{\alpha\beta}^{(\pm)} = u_\alpha v_\beta \pm v_\alpha u_\beta, \qquad
u_{\alpha\beta}^{(p{\pm})} = u_\alpha u_\beta \mp v_\alpha v_\beta$$ RPA phonons are (\[phonon\_sph\]) $$\begin{aligned}
\hat{C}_\nu^+ &= \frac{1}{2}\sum_{\alpha\beta}
C_{j_\alpha m_\alpha j_\beta m_\beta}^{\lambda \mu}\Big(
c_{\alpha\beta}^{(\nu-)}\hat{\alpha}_{\alpha}^+\hat{\alpha}_{\beta}^+ +
c_{\alpha\beta}^{(\nu+)}\hat{\alpha}_{\bar{\alpha}}^{\phantom{*}}
\hat{\alpha}_{\bar{\beta}}^{\phantom{*}} \Big) \nonumber\\
&= \frac{1}{2}\sum_{\alpha\beta}(-1)^{l_\alpha + l_\beta + \lambda + \mu}
C_{j_\alpha m_\alpha j_\beta m_\beta}^{\lambda,-\mu}\Big(
c_{\alpha\beta}^{(\nu-)}\hat{\alpha}_{\bar{\alpha}}^+\hat{\alpha}_{\bar{\beta}}^+ +
c_{\alpha\beta}^{(\nu+)}\hat{\alpha}_{\alpha}^{\phantom{|}}
\hat{\alpha}_{\beta}^{\phantom{|}} \Big)\end{aligned}$$ Then, in the evaluation of the commutator $[\hat{V}_\mathrm{res}^\mathrm{(c.m.)},\hat{C}_\nu^+]$, three types of terms appear:
- direct (active only in E1) $$u_{\alpha\beta}^{(-)}\big[\hat{\alpha}_{\bar{\alpha}}^+\hat{\alpha}_{\bar{\beta}}^+
+ \hat{\alpha}_\alpha^{\phantom{|}}\hat{\alpha}_\beta^{\phantom{|}},\hat{C}_\nu^+\big]
= u_{\alpha\beta}^{(-)}C_{j_\alpha m_\alpha j_\beta m_\beta}^{\lambda \mu}\big(
{-}c_{\alpha\beta}^{(\nu-)} + c_{\alpha\beta}^{(\nu+)} \big) \quad
(\times 2\textrm{ for }(\alpha\beta\leftrightarrow\gamma\delta))$$
- exchange normal (contributing negatively to $B$ matrix in RPA eq. (\[fullRPA\_eq\])) $$\begin{aligned}
u_{\alpha\beta}^{(-)}u_{\gamma\delta}^{(-)}
&\big[-\hat{\alpha}_{\bar{\alpha}}^+\hat{\alpha}_{\bar{\gamma}}^+
\hspace{7pt}\underbracket[0.5pt]{\hspace{-8pt}\hat{\alpha}_{\bar{\beta}}^+\hat{\alpha}_{\bar{\delta}}^+\hspace{-4pt}}\hspace{2pt}
{}-{} \hat{\alpha}_\alpha^{\phantom{|}}\hat{\alpha}_\gamma^{\phantom{|}}
\hspace{7pt}\underbracket[0.5pt]{\hspace{-8pt}\hat{\alpha}_\beta^{\phantom{|}}\hat{\alpha}_\delta^{\phantom{|}}\hspace{-2pt}}\,,\hat{C}_\nu^+\big] = \\
&= u_{\alpha\beta}^{(-)}u_{\gamma\delta}^{(-)}C_{j_\beta m_\beta j_\delta m_\delta}^{\lambda\mu}
\Big(c_{\beta\delta}^{(\nu-)}\hat{\alpha}_\alpha^{\phantom{|}}\hat{\alpha}_\gamma^{\phantom{|}}
- c_{\beta\delta}^{(\nu+)}\hat{\alpha}_{\bar{\alpha}}^+\hat{\alpha}_{\bar{\gamma}}^+ \Big)
\qquad(\times 2)\end{aligned}$$ + a similar term coupled as $(\alpha\delta)(\beta\gamma)$
- exchange pairing (contributing to $A$ matrix) $$\begin{aligned}
u_{\alpha\beta}^{(p{-})}u_{\gamma\delta}^{(p{-})}
&\big[-\hat{\alpha}_{\bar{\alpha}}^+\hat{\alpha}_{\bar{\gamma}}^+
\hspace{7pt}\underbracket[0.5pt]{\hspace{-8pt}\hat{\alpha}_\beta^{\phantom{|}}\hat{\alpha}_\delta^{\phantom{|}}\hspace{-2pt}}
{}-{} \hat{\alpha}_\alpha^{\phantom{|}}\hat{\alpha}_\gamma^{\phantom{|}}
\hspace{7pt}\underbracket[0.5pt]{\hspace{-8pt}\hat{\alpha}_{\bar{\beta}}^+\hat{\alpha}_{\bar{\delta}}^+\hspace{-4pt}}\hspace{3pt},\hat{C}_\nu^+\big] = \\
&= u_{\alpha\beta}^{(p{-})}u_{\gamma\delta}^{(p{-})}C_{j_\beta m_\beta j_\delta m_\delta}^{\lambda\mu}
\Big(c_{\beta\delta}^{(\nu-)}\hat{\alpha}_{\bar{\alpha}}^+\hat{\alpha}_{\bar{\gamma}}^+ -
c_{\beta\delta}^{(\nu+)}\hat{\alpha}_\alpha^{\phantom{|}}\hat{\alpha}_\gamma^{\phantom{|}} \Big)
\qquad(\times 2)\end{aligned}$$ + a similar term coupled as $(\alpha\delta)(\beta\gamma)$
The two exchange terms can be combined to give the time-even and time-odd contributions to the residual interaction:
$$\begin{aligned}
-B:\ \quad u_{\alpha\beta}^{(-)}u_{\gamma\delta}^{(-)} &= u_\alpha v_\beta u_\gamma v_\delta
- u_\alpha v_\beta v_\gamma u_\delta - v_\alpha u_\beta u_\gamma v_\delta
+ v_\alpha u_\beta v_\gamma u_\delta \nonumber\\
A:\quad u_{\alpha\beta}^{(p{-})}u_{\gamma\delta}^{(p{-})} &= u_\alpha u_\beta u_\gamma u_\delta
+ u_\alpha u_\beta v_\gamma v_\delta + v_\alpha v_\beta u_\gamma u_\delta
+ v_\alpha v_\beta v_\gamma v_\delta \nonumber\\
V_\mathrm{even} = \frac{A+B}{2}:\quad &\frac{1}{2}\big(
u_{\alpha\gamma}^{(p{+})}u_{\beta\delta}^{(p{+})}
+ u_{\alpha\gamma}^{(+)}u_{\beta\delta}^{(+)}\big) \\
V_\mathrm{odd} = \frac{A-B}{2}:\quad &\frac{1}{2}\big(
u_{\alpha\gamma}^{(p{-})}u_{\beta\delta}^{(p{-})}
+ u_{\alpha\gamma}^{(-)}u_{\beta\delta}^{(-)}\big)\end{aligned}$$
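The recoupling identities for $V_\mathrm{even}=(A+B)/2$ and $V_\mathrm{odd}=(A-B)/2$ above can be checked numerically. A minimal sketch with random BCS amplitudes satisfying $u^2+v^2=1$ (purely illustrative; not part of the actual codes):

```python
import random

def check_recoupling_identity(trials=100):
    """Numerically verify V_even = (A+B)/2 and V_odd = (A-B)/2 against
    the recoupled pairing-factor combinations quoted in the text."""
    rng = random.Random(0)
    for _ in range(trials):
        # four hypothetical BCS levels alpha..delta with u^2 + v^2 = 1
        v = [rng.random() for _ in range(4)]
        u = [(1.0 - x * x) ** 0.5 for x in v]

        um  = lambda i, j: u[i] * v[j] - v[i] * u[j]   # u^{(-)}
        up  = lambda i, j: u[i] * v[j] + v[i] * u[j]   # u^{(+)}
        upm = lambda i, j: u[i] * u[j] + v[i] * v[j]   # u^{(p-)}
        upp = lambda i, j: u[i] * u[j] - v[i] * v[j]   # u^{(p+)}

        a, b, g, d = 0, 1, 2, 3
        A = upm(a, b) * upm(g, d)    # pairing-exchange factor (A matrix)
        B = -um(a, b) * um(g, d)     # normal-exchange factor (-B contribution)

        even = 0.5 * (upp(a, g) * upp(b, d) + up(a, g) * up(b, d))
        odd  = 0.5 * (upm(a, g) * upm(b, d) + um(a, g) * um(b, d))

        assert abs(even - 0.5 * (A + B)) < 1e-12
        assert abs(odd  - 0.5 * (A - B)) < 1e-12
    return True
```

The check confirms that the $(\alpha\beta)(\gamma\delta)\to(\alpha\gamma)(\beta\delta)$ recoupling of the pairing factors reproduces the combinations $A\pm B$ term by term.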
The direct term then contributes only to the time-odd part of $\hat{V}_\mathrm{res}$ in E1 $$\begin{aligned}
\label{V_pair-dir}
\!V_\mathrm{odd}:\ \frac{\hbar^2}{2M} \frac{1}{4}\sum_{\alpha\beta\gamma\delta}
&\frac{2u_{\alpha\beta}^{(-)}u_{\gamma\delta}^{(-)}(-1)^{j_\beta+j_\delta}}{3(2l_\alpha+1)(2l_\gamma+1)}
\bigg(\int R_\alpha^{(0)}R_\beta^{(\pm)}r^2\mathrm{d}r\bigg)
\bigg(\int R_\gamma^{(0)}R_\delta^{(\pm)}r^2\mathrm{d}r\bigg) \nonumber\\
& {\quad}\times
\begin{Bmatrix} l_\alpha & l_\beta & 1 \\ j_\beta & j_\alpha & \frac{1}{2} \end{Bmatrix}
\begin{Bmatrix} l_\gamma & l_\delta & 1 \\ j_\delta & j_\gamma & \frac{1}{2} \end{Bmatrix}\end{aligned}$$ and the exchange term contributes to both the time-even ($s={+}$) and time-odd ($s={-}$) parts
$$\begin{aligned}
\!\!V_\mathrm{even/odd}:\ &\frac{\hbar^2}{2M} \frac{1}{4}\sum_{\alpha\beta\gamma\delta}^{(\alpha\beta)(\gamma\delta)}
\bigg[\big(u_{\alpha\beta}^{(ps)}u_{\gamma\delta}^{(ps)}
+ u_{\alpha\beta}^{(s)}u_{\gamma\delta}^{(s)}\big)
\frac{(-1)^{l_\alpha+l_\beta+\lambda+j_\beta+j_\delta}
\delta_{l_\alpha l_\gamma^\pm} \delta_{l_\beta l_\delta^\pm}}{(2l_\alpha+1)(2l_\beta+1)}
\begin{Bmatrix} j_\alpha & j_\gamma & 1 \\ j_\delta & j_\beta & \lambda \end{Bmatrix}
\nonumber\\
&{\quad}\times\bigg(\int R_\alpha^{(0)}R_\gamma^{(\pm)}r^2\mathrm{d}r\bigg)
\bigg(\int R_\beta^{(0)}R_\delta^{(\pm)}r^2\mathrm{d}r\bigg)
\begin{Bmatrix} l_\alpha & l_\gamma & 1 \\ j_\gamma & j_\alpha & \frac{1}{2} \end{Bmatrix}
\begin{Bmatrix} l_\beta & l_\delta & 1 \\ j_\delta & j_\beta & \frac{1}{2} \end{Bmatrix}
\nonumber\\ \label{V_pair-ex}
&{}+\big(u_{\alpha\beta}^{(ps)}u_{\gamma\delta}^{(ps)}
\pm u_{\alpha\beta}^{(s)}u_{\gamma\delta}^{(s)}\big)
\frac{(-1)^{l_\alpha+l_\beta+j_\beta+j_\delta}
\delta_{l_\alpha l_\delta^\pm} \delta_{l_\beta l_\gamma^\pm}}{(2l_\alpha+1)(2l_\beta+1)}
\begin{Bmatrix} j_\alpha & j_\delta & 1 \\ j_\gamma & j_\beta & \lambda \end{Bmatrix} \nonumber\\
&{\quad}\times\bigg(\int R_\alpha^{(0)}R_\delta^{(\pm)}r^2\mathrm{d}r\bigg)
\bigg(\int R_\beta^{(0)}R_\gamma^{(\pm)}r^2\mathrm{d}r\bigg)
\begin{Bmatrix} l_\alpha & l_\delta & 1 \\ j_\delta & j_\alpha & \frac{1}{2} \end{Bmatrix}
\begin{Bmatrix} l_\beta & l_\gamma & 1 \\ j_\gamma & j_\beta & \frac{1}{2} \end{Bmatrix}
\bigg]\end{aligned}$$
where the corresponding substitutions (such as $\beta\leftrightarrow\gamma$) were made to arrange the pairs in the residual interaction $V_{pp'}$ as $p=(\alpha\beta)$ and $p'=(\gamma\delta)$, which are assumed to satisfy the selection rules for the given multipolarity (besides selection rules like $\delta_{l_\alpha l_\gamma^\pm}$ and $\delta_{l_\beta l_\delta^\pm}$, which follow from the cross matrix elements of $\nabla$). Duplicate pairs can now be safely removed according to (\[order2qp\]), since the matrix element (\[V\_pair-ex\]) is fully symmetrized.
The exchange kinetic c.m. term was implemented neither in axial HF nor in SRPA (where it would be too complicated). However, in both cases the direct term alone can be included in E1, providing an effect somewhat similar to the full approach of HF VAP + RPA with $\mathcal{H}_\mathrm{c.m.}$. This direct term is then expressed in terms of the current density, more precisely by its $L=0$ component (independent of angle; $\vec{Y}_{1\mu}^0 = \vec{e}_\mu/\sqrt{4\pi}$):
\[Hcm\_dir\] $$\begin{aligned}
\mathcal{H}_\mathrm{c.m.}^\mathrm{dir} &= -\frac{\hbar^2}{2M}
\bigg(\int \vec{j}(\vec{r})\,\mathrm{d}^3 r\bigg)\cdot
\bigg(\int \vec{j}(\vec{r})\,\mathrm{d}^3 r\bigg) \\
&= -\frac{\hbar^2}{2M} 4\pi\sum_\mu (-1)^\mu
\bigg(\int \vec{j}(\vec{r})\cdot\vec{Y}_{1,\mu}^0 \,\mathrm{d}^3 r\bigg)
\bigg(\int \vec{j}(\vec{r})\cdot\vec{Y}_{1,-\mu}^0 \,\mathrm{d}^3 r\bigg)\end{aligned}$$ In spherical symmetry, the reduced-matrix-element formula is $$V_\mathrm{odd}:\quad -\frac{\hbar^2}{2M} 8\pi \frac{1}{4}\sum_{\alpha\beta\gamma\delta}
\bigg(\int j_{\alpha\beta}^{10*}(r) r^2 \mathrm{d}r\bigg)
\bigg(\int j_{\gamma\delta}^{10}(r) r^2 \mathrm{d}r\bigg)$$ and in the axial symmetry: $$V_\mathrm{odd}:\quad -\frac{\hbar^2}{2M} 2 \frac{1}{4}\sum_{\alpha\beta\gamma\delta}
\bigg(\int \vec{j}_{\alpha\beta}^{\,\dagger}(\varrho,z) 2\pi\varrho\,\mathrm{d}\varrho\,\mathrm{d}z\bigg)
\cdot\bigg(\int \vec{j}_{\gamma\delta}(\varrho,z) 2\pi\varrho\,\mathrm{d}\varrho\,\mathrm{d}z\bigg)$$
The response for SRPA (see the large parentheses in (\[XY\_op\])) is an ordinary vector, not a position-dependent quantity.
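Since only the space-integrated current enters, the direct term yields a separable residual interaction. A minimal sketch (toy integrated currents and an arbitrary prefactor; the combinatorial factors of the formulas above are omitted):

```python
import numpy as np

def separable_cm_direct(j_int, prefactor=1.0):
    """Direct kinetic c.m. residual interaction as a separable matrix
    V[p, p'] = -prefactor * J_p . J_p', where J_p is the space-integrated
    transition current of 2qp pair p (a plain 3-vector, as noted in the
    text for the SRPA response); combinatorial factors are omitted."""
    J = np.asarray(j_int, dtype=float)   # shape (npairs, 3)
    return -prefactor * (J @ J.T)        # rank <= 3 by construction

# toy integrated currents for four hypothetical 2qp pairs (not real data)
J = np.array([[0.1, 0.0, 0.0],
              [0.0, 0.2, 0.1],
              [0.3, 0.1, 0.0],
              [0.0, 0.0, 0.4]])
V = separable_cm_direct(J)
```

The separability (rank at most 3, one per spatial component $\mu$) is what makes this term cheap to add even in SRPA.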
Numerical codes {#ch_num}
===============
The following computer programs dealing with the Skyrme functional were developed and used in the calculations:
- spherical HF in the SHO basis (`sph_hf`) – applicable only to closed-shell nuclei. The main parameters are the oscillator length $b=\sqrt{\hbar/m\omega}$ (\[SHO\]) and the basis size, given as the number of SHO major shells $N_\mathrm{HF}$ (understood as $N=2\nu_\mathrm{max}+l$, where $\nu$ is the radial quantum number).
- spherical full RPA (`sph_qrpa`) in the SHO basis or with wavefunctions given on an equidistant grid (provided by HF+BCS in Reinhard’s `haforpa`). The main input parameters are the multipolarity and parity of the transition (and the corresponding transition operator), and the number of major shells $N_\mathrm{RPA}$ (those with the lowest energy) passed from HF to RPA.
- spherical separable RPA (`sph_srpa`) – the same as before, but also taking the input operators which induce the separable form of the residual interaction.
- axial full RPA (`skyax_qrpa`), taking the single-particle HF+BCS basis from axial Hartree-Fock (`skyax_hfb`, provided by Paul-Gerhard Reinhard). Separable axial RPA (`skyax_me` and `sky_srpa`) was provided by Wolfgang Kleinig. These programs will be used only in the next chapter. They were used with a fixed grid spacing of 0.4 fm (the smallest allowed value).
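The basis-size bookkeeping used by these programs can be illustrated with a short sketch that enumerates the spherical-oscillator levels $(\nu,l,j)$ up to a given number of major shells, using the standard relation $N = 2\nu + l$ (the function names are hypothetical, not taken from the codes above):

```python
def sho_levels(n_major):
    """Spherical-oscillator levels (nu, l, 2j) with N = 2*nu + l <= n_major,
    i.e. all states included for a basis of n_major major shells."""
    levels = []
    for N in range(n_major + 1):
        for l in range(N % 2, N + 1, 2):     # l has the parity of N
            nu = (N - l) // 2
            for twoj in sorted({2 * l - 1, 2 * l + 1} - {-1}):
                levels.append((nu, l, twoj))
    return levels

def sp_states(n_major):
    """Total number of single-particle m-states, i.e. the sum of (2j + 1)."""
    return sum(twoj + 1 for _, _, twoj in sho_levels(n_major))
```

The count reproduces the familiar degeneracy $(N+1)(N+2)$ of major shell $N$, so the total grows cubically with the number of major shells, which is what drives the RPA matrix dimensions below.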
This chapter gives an analysis of various factors influencing the accuracy of the calculations in spherical symmetry for the SHO and grid-based codes. These calculations were done on a 2.5 GHz Intel i5 (Sandy Bridge) processor using a single thread (with vectorization in the matrix algorithms), for which the computation times are given.
Most of the calculations below were done with the SLy7 parametrization [@SLy6] of the Skyrme functional, which contains both the $\mathcal{J}^2$ (tensor) term and the center-of-mass correction. The proton and neutron masses are taken as equal, with $\hbar^2/2m = 20.73553\ \mathrm{MeV.fm^2}$. Calculations with a large spherical-harmonic-oscillator (SHO) basis were done for doubly magic nuclei, due to the absence of pairing in my Skyrme HF program. The parametrization SGII [@SGII], which includes the $\mathcal{J}^2$ term (and no c.m.c.; $\hbar^2/2m = 20.7525\ \mathrm{MeV.fm^2}$), was used for some calculations of magnetic transitions, because it was fitted to Gamow-Teller transitions (therefore, better agreement with M1 experiments is expected).
Strength functions $S_0(\mathrm{E})$ (\[sf\]) will be given only for one component $\lambda\mu$, so the results should be multiplied by $2\lambda+1$ to get the total strength, except for the plots of $\sigma_\gamma(\mathrm{E1})$ (\[cross\_sec\]), which already have the correct scaling.
Effects of the basis parameters
-------------------------------
As will be shown below, utilization of the SHO basis has certain advantages. First, it allows one to employ an approximate restoration of the translational symmetry in HF by subtraction of the center-of-mass kinetic energy before variation (section \[sec\_kin-cm\]) at almost no cost. Second, it allows one to push the E1 spurious state to almost zero energy and to reduce the center-of-mass contribution to the time-even transition density of the remaining states. This section gives an analysis aimed at the proper choice of the basis parameters, and at their relation to the kinetic center-of-mass correction and to the separation of the E1 spurious mode (i.e., the translational motion of the nucleus as a whole).
[|c|c|c|c|c|c|c|c|c|c|c|]{}
------------------------------------------------------------------------
SLy7, ground & $^{40}$Ca & & $^{48}$Ca & & $^{56}$Ni & & $^{132}$Sn & & $^{208}$Pb &\
state \[MeV\] & VBP & VAP & VBP & VAP & VBP & VAP & VBP & VAP & VBP & VAP\
------------------------------------------------------------------------
$T_\mathrm{s.p.}$ & 652.06 & 656.91 & 840.40 & 845.50 & 1016.53 & 1021.44 & 2461.93 & 2466.02 & 3881.66 & 3885.26\
$V_\textrm{coul-dir}$ & 79.66 & 79.92 & 78.53 & 78.71 & 143.25 & 143.54 & 359.65 & 359.83 & 826.81 & 827.04\
$V_\textrm{coul-ex}$ & -7.50 & -7.53 & -7.42 & -7.44 & -10.88 & -10.91 & -18.82 & -18.83 & -31.26 & -31.27\
$T_\mathrm{c.m.1}$ & -16.30 & -16.42 & -17.51 & -17.61 & -18.15 & -18.24 & -18.65 & -18.68 & -18.66 & -18.68\
$T_\mathrm{c.m.2}$ & 8.21 & 8.15 & 9.42 & 9.37 & 10.05 & 10.01 & 12.15 & 12.13 & 12.87 & 12.85\
$E_\mathrm{total}$ & -344.92 & -345.01 & -415.89 & -415.97 & -482.26 & -482.32 & -1102.85 & -1102.88 & -1636.84 & -1636.85\
------------------------------------------------------------------------
$E_\mathrm{exp}$ & & & & &\
The center-of-mass correction in Hartree-Fock can be applied either after the diagonalization, to correct only the total energy (variation before projection, VBP), or already in the HF Hamiltonian (variation after projection, VAP). A comparison of both approaches is shown in Table \[tab\_HFcm\], which lists the important contributions to the total energy. As can be seen, the effect of VBP/VAP on the total energy is below 0.1 MeV and decreases for heavier nuclei. In the further RPA calculations, when $\mathcal{H}_\mathrm{c.m.}$ is not explicitly mentioned, I will use the HF VBP approach with no $\mathcal{H}_\mathrm{c.m.}$ in the RPA residual interaction.
[|c|c|c|c|c|c|c|c|c|c|c|]{} $b_\mathrm{min}$ \[fm\] & &\
------------------------------------------------------------------------
$N_\mathrm{HF}$ & $^{40}$Ca & $^{48}$Ca & $^{56}$Ni & $^{132}$Sn & $^{208}$Pb & $^{40}$Ca & $^{48}$Ca & $^{56}$Ni & $^{132}$Sn & $^{208}$Pb\
30 & 1.577 & 1.603 & 1.614 & 1.770 & 1.933 & 1.577 & 1.603 & 1.614 & 1.771 & 1.933\
40 & 1.573 & 1.615 & 1.505 & 1.775 & 1.825 & 1.573 & 1.615 & 1.506 & 1.775 & 1.825\
60 & 1.550 & 1.535 & 1.502 & 1.686 & 1.808 & 1.550 & 1.538 & 1.504 & 1.686 & 1.808\
80 & 1.515 & 1.546 & 1.481 & 1.656 & 1.734 & 1.515 & 1.547 & 1.482 & 1.656 & 1.734\
100 & 1.469 & 1.515 & 1.467 & 1.638 & 1.697 & 1.471 & 1.516 & 1.468 & 1.639 & 1.697\
120 & 1.48 & 1.48 & 1.49 & 1.624 & 1.683 & 1.48 & 1.49 & 1.49 & 1.62 & 1.684\
![Nucleon densities for the optimal parameters $b$ for the given size of the basis as listed in table \[tab\_b-optim\].[]{data-label="fig_bmin_dens"}](logrho.pdf){width="\textwidth"}
The calculation with the SHO basis has one free parameter – the oscillator length $b$ (\[SHO\]) – which plays the role of the grid spacing in grid-based HF solvers. As a first estimate of $b$, I looked for the value which minimizes the ground-state energy for the given basis size (Table \[tab\_b-optim\], Fig. \[fig\_bmin\_dens\]). To exclude any possible bias due to a discrete integration grid, I employed the following integration parameters instead of (\[int\_params\]): $$\Delta_\mathrm{grid} = 0.05\ \mathrm{fm},\qquad r_\mathrm{max} = 1.4 b\sqrt{2N}$$ The ground-state energy converges rapidly with increasing basis. The upshift of the energy minimum in comparison to $N_\mathrm{HF}=120$ was from 1 keV (Ca) to 15 keV (Pb) for $N_\mathrm{HF}=30$, from 0.05 keV (Ca) to 2 keV (Pb) for $N_\mathrm{HF}=40$, and from 2 meV (Ca) to 60 meV (Pb) for $N_\mathrm{HF}=100$. Although such a large basis is certainly not needed for the evaluation of the ground-state energy, it becomes important in the subsequent RPA step, where it helps to separate the center-of-mass motion (in E1) and provides a sufficiently dense sampling of the continuum – with an energy step of ca. 5 MeV per major shell in the range 50–100 MeV of s.p. excitation energy (a grid-based calculation with $R_\mathrm{box} = 3\cdot1.16 A^{1/3}$ led to 10–12 MeV / major shell for calcium and 5 MeV / major shell for $^{208}$Pb).
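The search for the energy-minimizing $b$ amounts to a one-dimensional scan. A minimal sketch with a toy stand-in for the HF energy (the parabola is illustrative only, not real HF data):

```python
import numpy as np

def optimal_b(energy, b_grid):
    """Scan oscillator lengths b and return the one minimizing the
    ground-state energy; `energy` is a callable E(b) standing in for a
    full HF run at fixed basis size (a hypothetical placeholder here)."""
    energies = np.array([energy(b) for b in b_grid])
    return float(b_grid[int(np.argmin(energies))])

# toy stand-in energy: a shallow parabola around b = 1.7 fm (not HF data)
b_grid = np.arange(1.3, 2.1, 0.01)
b_best = optimal_b(lambda b: (b - 1.7) ** 2, b_grid)
```

In practice each evaluation of $E(b)$ is a full HF iteration cycle, so the scan step is chosen coarsely (cf. the 0.01-fm granularity of Table \[tab\_b-optim\]).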
![Nucleon densities and isovector E1 strength functions (smoothing $\Delta=1\ \mathrm{MeV}$) for $^{40}$Ca for various oscillator lengths with $N_\mathrm{HF}(p,n) = 175,150$.[]{data-label="fig_40Ca_b"}](Ca40_rpa-optim.pdf){width="\textwidth"}
In further calculations, the parameter $b$ is chosen close to the energy minimum at $N=120$, and the number of major shells is chosen separately for protons and neutrons in order to minimize the oscillations in the logarithmic plot of the ground-state proton and neutron densities (Fig. \[fig\_40Ca\_b\]), and to let the linear part of $\log_{10}\rho$ reach a certain reasonable level (-18 for Ca, -15.5 for Pb). It was found that this criterion also leads to consistent RPA results, i.e., the strength function does not depend much on the number of major shells passed to RPA (assuming $N_\mathrm{RPA}\geq40$). This fact is demonstrated for $^{40}$Ca in Fig. \[fig\_40Ca\_b\], where deviations in the RPA results are found when $b$ is shifted by $\pm0.1\ \mathrm{fm}$. As can be seen, the converged shape of the strength function depends somewhat on $b$ – this effect is probably a consequence of the particular discretization of the continuum (i.e., the nodal structure of the wavefunctions). The dependence of the shape and convergence of the strength function on $b$ is not so pronounced for heavier nuclei. The choice of $b$ and $N_\mathrm{HF}$ (see Table \[tab\_E1RPA\]) deduced in this way for $^{40,48}$Ca and $^{208}$Pb will also be used in the following sections.
[|r|r|r|c|c c|r r|c c|]{}
------------------------------------------------------------------------
$\!N_\mathrm{RPA}\!\!$ & $E_\mathrm{wf}\ $ & \# & $t$ & & &\
& & $2qp$ & & VBP & VAP & VBP & VAP & VBP & VAP\
$^{40}$Ca\
------------------------------------------------------------------------
20 & 26 & 260 & 0.15 & 2660 & 170 & 9.452 & 9.508 & $1.001 + 10^{-2.8}$ & $\!10^{-2.4} + 10^{-4.6}$\
40 & 103 & 560 & 0.22 & 1490 & 17.1 & 9.199 & 9.217 & $1.000 + 10^{-3.7}$ & $\!10^{-3.9} + 10^{-4.7}$\
60 & 230 & 860 & 0.39 & 532 & 1.28 & 8.773 & 8.623 & $1.000 + 10^{-5.2}$ & $\!10^{-5.3} + 10^{-5.5}$\
80 & 430 & 1160 & 0.70 & 82.5 & 0.002i & 7.998 & 7.123 & $1.000 + 10^{-8.0}$ & $\quad(-) + 10^{-8.0}$\
100 & 690 & 1460 & 1.17 & 21.5 & – & 7.473 & – & $1.000 + 10^{-9.1}$ & –\
------------------------------------------------------------------------
grid & 200 & 293 & 0.008 & 827 & 2.25 & 8.974 & 8.974 & 1.000 + $10^{-4.4}$ & $\!10^{-5.1}$ + $10^{-5.2}$\
$^{48}$Ca\
------------------------------------------------------------------------
20 & 21 & 283 & 0.19 & 2937 & 283 & 10.985 & 11.024 & $1.002 + 10^{-2.8}$ & $\!10^{-2.1} + 10^{-4.0}$\
40 & 95 & 613 & 0.28 & 1415 & 23.9 & 10.501 & 10.550 & $1.000 + 10^{-3.6}$ & $\!10^{-3.6} + 10^{-4.4}$\
60 & 220 & 943 & 0.50 & 438 & 1.42 & 10.134 & 10.012 & $1.000 + 10^{-5.3}$ & $\!10^{-5.0} + 10^{-5.6}$\
80 & 400 & 1273 & 0.90 & 88.4 & 0.020 & 9.581 & 9.130 & $1.000 + 10^{-7.9}$ & $\!10^{-7.5} + 10^{-8.1}$\
100 & 650 & 1603 & 1.52 & 26.2 & 0.012 & 9.393 & 8.747 & $1.000 + 10^{-9.1}$ & $\!10^{-9.5} + 10^{-9.0}$\
120 & 970 & 1933 & 2.43 & 5.38 & – & 9.165 & – & $\!1.000 + 10^{-10.6}\!\!\!$ & –\
------------------------------------------------------------------------
grid & 170 & 322 & 0.01 & 899 & 1.62 & 10.397 & 10.397 & 1.000 + $10^{-4.2}$ & $\!10^{-5.5}$ + $10^{-5.1}$\
$^{208}$Pb\
------------------------------------------------------------------------
20 & 18 & 743 & 0.45 & 2069 & 134 & 7.527 & 7.545 & $0.999 + 10^{-2.8}$ & $\!10^{-2.4} + 10^{-4.2}$\
40 & 94 & 1773 & 1.81 & 770 & 6.81 & 7.445 & 7.460 & $1.000 + 10^{-4.2}$ & $\!10^{-4.1} + 10^{-5.4}$\
60 & 220 & 2803 & 6.31 & 243 & 0.31 & 7.207 & 7.194 & $1.000 + 10^{-5.8}$ & $\!10^{-5.8} + 10^{-6.2}$\
80 & 420 & 3833 & 15.8 & 33.8 & 0.015 & 6.444 & 6.252 & $1.000 + 10^{-8.2}$ & $\!10^{-7.4} + 10^{-8.1}$\
100 & 690 & 4863 & 32.3 & 7.81 & 0.008i & 6.324 & 6.073 & $1.000 + 10^{-10.7}\!\!$ & $\quad(-) + 10^{-10.8}\!\!$\
120 & 1060 & 5893 & 57.6 & 0.955 & 0.023i & 6.316 & 6.059 & $1.000 + 10^{-11.7}\!\!$ & $\quad(-) + 10^{-11.8}\!\!$\
------------------------------------------------------------------------
grid & 60 & 873 & 0.31 & 1131 & 17.0 & 7.537 & 7.537 & $1.000 + 10^{-3.6}$ & $\!10^{-3.6}$ + $10^{-5.0}$\
\
------------------------------------------------------------------------
\* Collective state, which does not yet have the second-lowest energy due to the small basis.
Table \[tab\_E1RPA\] gives the results of the long-wave isoscalar electric dipole RPA calculation ($z_p=z_n=1$) for the nuclei $^{40,\,48}$Ca and $^{208}$Pb, where the whole strength should be accumulated in the spurious state close to zero energy. $\mathcal{H}_\mathrm{c.m.}$ was either included (VAP) or not included (VBP) in the self-consistent interaction. The calculation time is very similar for VBP and VAP when using the SHO basis. It was found that the good separation of the E1 spurious state obtained in the more physically appropriate combination of HF VAP and RPA with $\mathcal{H}_\mathrm{c.m.}$ can also be achieved by using HF VBP and E1 RPA including only the direct term $\mathcal{H}_\mathrm{c.m.}^\mathrm{dir}$ (\[Hcm\_dir\]) – this approach is suitable for SRPA (where the full $\mathcal{H}_\mathrm{c.m.}$ is very cumbersome to apply) and for axial nuclei (there, its application was not found to be very beneficial, apparently due to the low precision of the HF results). However, such a trick does not offer much advantage (besides shifting $E_\mathrm{spurious}$ closer to zero) over a simple elimination of the E1 spurious contribution by the proper effective charges ($z_p=N/A,\,z_n=-Z/A$) or by the cmc term in the E1 tor/com operators (\[cmc-generic\]).
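That the recoil effective charges $z_p=N/A,\,z_n=-Z/A$ decouple the long-wave E1 operator from the center-of-mass coordinate follows from $Zz_p+Nz_n=0$; a quick numerical check (illustrative sketch):

```python
def effective_charges(Z, N):
    """Recoil (effective) E1 charges z_p = N/A, z_n = -Z/A."""
    A = Z + N
    return N / A, -Z / A

# The long-wave E1 operator sum_i z_i r_i then has zero net coupling to
# the c.m. coordinate R = (1/A) sum_i r_i, since Z*z_p + N*z_n = 0.
Z, N = 82, 126                      # proton and neutron numbers of 208Pb
zp, zn = effective_charges(Z, N)
coupling = Z * zp + N * zn          # vanishes up to rounding
```

The cancellation is exact for any $Z$, $N$, which is why these charges remove the spurious E1 admixture independently of the basis.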
$\mathcal{H}_\mathrm{c.m.}^\mathrm{dir}$ was also used for the grid-based calculation starting with `haforpa` (given under the VAP column in Table \[tab\_E1RPA\]), because there the rigorous HF VAP approach leads to a crash of the full RPA calculation for calcium, while $^{208}$Pb succeeds only with a smaller basis (20+21 major shells), and even then the results are slightly worse than with the simple HF VBP + $\mathcal{H}_\mathrm{c.m.}^\mathrm{dir}$ approach.
![Strength functions for long-wave E1 and isoscalar toroidal/compression E1 transitions in $^{48}$Ca and $^{208}$Pb, giving also the effect of center-of-mass correction – either as $\mathcal{H}_{\mathrm{c.m.}}$ or as a correction in transition operator – “cmc” (\[E1vtccm\]). Strength of the toroidal transition was increased 2- or 3-times to get a reasonable scaling.[]{data-label="fig_vtccm"}](vtc_cm.pdf){width="\textwidth"}
Table \[tab\_E1RPA\] also shows a significant decrease, with increasing basis, of the energy of the first E1 state (after the spurious one), which has an isoscalar character. Low-lying states are an important component of the so-called pygmy resonance [@Savran2013], although in the case of lead, most of the strength is concentrated in the second-lowest state, whose downshift is not so dramatic (going as 7.913, 7.798, 7.697, 7.641, 7.637, 7.636 for the VAP approach with $N_\mathrm{RPA}=20\!\!-\!\!120$; or as 7.901, 7.788, 7.691, 7.636, 7.633, 7.633 for the VBP approach). An accurate determination of the energy of the low-lying pygmy mode is probably guaranteed only with continuum RPA [@Daoutidis2011] (although still on the one-phonon level, which underestimates the fragmentation).
The influence of the kinetic center-of-mass term $\mathcal{H}_{\mathrm{c.m.}}$ (only the direct term was used in the grid-based calculation) and of the cmc correction in the transition operators (\[E1vtccm\]) is depicted in Fig. \[fig\_vtccm\] for E1 transitions. A smaller basis ($N_\mathrm{RPA} = 40$) was used to demonstrate the effect. The effective charges were $z_p=N/A,\,z_n=-Z/A$ for long-wave E1 and $z_p=z_n=0.5,\,g_p=g_n=0.88\times0.7$ for toroidal/compression E1. It is clear that the VAP approach has some influence on the overall shape of the strength function, but it does not seem to be very important (Fig. \[fig\_vtccm\]a-c). The term $\mathcal{H}_\mathrm{c.m.}$ in RPA has one interesting property: it removes the isoscalar center-of-mass strength in the transition operators with time-even densities (see the exhaustion of the EWSR by the spurious state in Table \[tab\_E1RPA\]), but it has no such effect on the time-odd current. In fact, the strength of the spurious state for the toroidal/compression transition is almost 100 times larger than for VBP (with no $\mathcal{H}_\mathrm{c.m.}$ in RPA), and the same behavior is found for VBP+$\mathcal{H}_\mathrm{c.m.}^\mathrm{dir}$ (so the “cmc” in the transition operator must be included in such cases). The reason can be traced to the structure coefficients: the coefficients $c_p^{(\nu{+})}$ acquire a sign opposite to $c_p^{(\nu{-})}$, so only the time-even quantities are reduced. When $\mathcal{H}_\mathrm{c.m.}$ is omitted (VBP approach), the coefficients $c_p^{(\nu{+})}$ and $c_p^{(\nu{-})}$ have the same sign, and the opposite effect is observed: the spurious time-odd strength is reduced as $\sim1/E$, while the time-even strength keeps the sum rule.
Again, the disappearance of the isoscalar E1 EWSR contribution of the spurious state with VAP can be related to the cancellation of the mass constant (coming from the double commutator of the kinetic term and $rY_{1\mu}$) by $\mathcal{H}_{\mathrm{c.m.}}$, which was not included in the ground-state EWSR estimate, so the total relative EWSR of the E1 RPA states goes down to 0% in Table \[tab\_E1RPA\].
Finally, a comparison of the calculation times for the individual RPA procedures is given in Table \[tab\_cpu\]. The calculation of the matrix elements scales as $O(N^2)$. The matrix algorithms scale as $O(N^3)$ and consist of the following steps: the square root of the matrix $P$ (\[half\_RPA\]), matrix multiplication to calculate $C^TQC$ (\[CQC\]) and its diagonalization in two steps – a Householder transformation (bringing the matrix to tridiagonal form) followed by iterations gradually decreasing the off-diagonal elements – and finally, the conversion of the eigenvectors $\vec{R}_\nu$ to the structure constants $c_p^{(\nu\pm)}$.
[|c|r|r|r|]{}
------------------------------------------------------------------------
& $^{40}$Ca & $^{48}$Ca & $^{208}$Pb\
$N_\mathrm{RPA}$ & $100\ $ & $120\ $ & $120\ \ $\
$A_{pp'},B_{pp'}$ & 22 s & 41 s & 317 s\
$\sqrt{P}$ & 1 s & 1 s & 35 s\
$C^T QC$ & 9 s & 23 s & 944 s\
Householder & 13 s & 33 s & 1214 s\
3-diag. iter. & 6 s & 14 s & 365 s\
$c_p^{(\nu\pm)}$ & 8 s & 18 s & 557 s\
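The matrix pipeline above can be sketched with NumPy; `P` and `Q` below are small stand-in matrices, not the actual RPA matrices of (\[half\_RPA\]) and (\[CQC\]), and `numpy.linalg.eigh` replaces the explicit Householder and iteration steps:

```python
import numpy as np

def rpa_reduce(P, Q):
    """Reduce the generalized problem P Q R = w^2 R (P, Q symmetric,
    P positive definite) to a symmetric eigenproblem: C = P^{1/2},
    then diagonalize C^T Q C; eigh internally performs the Householder
    tridiagonalization and the iterative diagonalization."""
    p_eval, p_evec = np.linalg.eigh(P)
    C = p_evec @ np.diag(np.sqrt(p_eval)) @ p_evec.T   # square root of P
    w2, R = np.linalg.eigh(C.T @ Q @ C)                # shares w^2 with P Q
    return w2, C @ R                                   # eigenvectors of P Q

# small stand-in matrices (not from an actual Skyrme calculation)
P = np.array([[2.0, 0.3], [0.3, 1.0]])
Q = np.array([[1.5, 0.2], [0.2, 0.8]])
w2, vecs = rpa_reduce(P, Q)
```

The similarity transformation $C^{-1}(PQ)C = C^TQC$ guarantees that the symmetric problem has the same (real, positive) squared energies $\omega_\nu^2$ as the original non-symmetric one.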
Influence of tensor and spin terms {#sec_spin-tens}
----------------------------------
The full Skyrme functional (\[Skyrme\_DFT\]) also contains time-odd terms which are not active in the calculation of the ground state of an even-even nucleus. Especially the spin terms ($\tilde{b}_0,\,\tilde{b}_0',\,\tilde{b}_2,\,\tilde{b}_2',\,\tilde{b}_3,\,\tilde{b}_3'$) are difficult to estimate experimentally, since they are not coupled to the time-even terms by Galilean invariance [@Dobaczewski1995]. We can fix these terms by the condition that the functional is fully equivalent to the density-dependent two-body interaction (\[V\_skyrme\]). For this reason, the present work is restricted mainly to parametrizations which include the $\mathcal{J}^2$ (tensor) term (parameters $\tilde{b}_1,\,\tilde{b}_1'$, containing both time-even and time-odd parts) – SLy7 [@SLy6] and SGII [@SGII] – and which do not tweak individual parameters, as is done for $b_4'$ in SkI3 and SkI4 [@SkI3], although the tweaked functionals sometimes describe the M1 resonance better [@Vesely2009; @Nesterenko2010].
![Strength functions for E1, toroidal E1 (with natural charges and cmc) and M1 transitions in $^{48}$Ca and $^{208}$Pb, giving also the cases with omitted spin or tensor terms in the residual interaction, and the experimental data for $^{48}$Ca M1 [@Steffen1983] and $^{208}$Pb M1 [@Laszewski1988][]{data-label="fig_spin-tens"}](spin-tens.pdf){width="\textwidth"}
The importance of the spin and tensor terms is demonstrated in Fig. \[fig\_spin-tens\], which shows calculations with the spin or tensor terms omitted. In electric dipole transitions, the impact is clearly visible only in the higher-order term, represented here by the toroidal strength function, calculated with natural charges. Unfortunately, this quantity (being mostly isovector) is probably not measurable. However, magnetic dipole transitions are experimentally accessible and confirm the necessity of including the spin terms, as was also found previously [@VeselyPhD]. With regard to the accuracy of M1 for the given parametrizations, SLy7 appears to provide slightly better agreement with experiment, although the second peak in $^{208}$Pb is beyond the experimental range [@Laszewski1988]. On the other hand, the first peak according to SGII may be identified with the $1^+$ state at 5.8445 MeV, of isoscalar nature with $B(\mathrm{M1})\!\!\uparrow\,=1.0(4)\,\mu_N^2$ [@Muller1985], which is, however, too weak to explain the calculated $B(\mathrm{M1})\!\uparrow\,=5.7\,\mu_N^2$. The tensor term was found to have a minor influence in all cases.
Comparison of exact and long-wave E1 s.f.
-----------------------------------------
The transition probabilities and strength functions are usually evaluated with long-wave versions of the transition operators (\[tran\]). It is therefore instructive to give a comparison to the results obtained with the exact transition operators (\[exactM\]), which can be calculated easily in the case of full RPA.
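As a numerical aside (not part of the RPA codes themselves), recall that the long-wave approximation replaces the spherical Bessel function $j_\lambda(kr)$ in the exact operators by its leading term $(kr)^\lambda/(2\lambda+1)!!$, e.g. $j_1(x)\approx x/3$. A small Python sketch, with an illustrative photon energy of 15 MeV (GDR region) and a few radii, indicates how accurate this replacement is:

```python
import math

def j1(x):
    """Spherical Bessel function j_1(x) = sin(x)/x^2 - cos(x)/x."""
    return math.sin(x) / x**2 - math.cos(x) / x

# Illustrative photon momentum: k = E / (hbar c), hbar c ~ 197.33 MeV fm.
k = 15.0 / 197.33          # fm^-1, for a 15 MeV photon

for r in (2.0, 5.0, 7.0):  # fm, out to the surface of a heavy nucleus
    x = k * r
    # exact j_1 vs its long-wave leading term x/3
    print(f"r={r} fm:  j1={j1(x):.4f}  long-wave={x/3:.4f}")
```

Even at the nuclear surface ($kr\approx0.5$) the two agree to a few percent, which is why the amplitude reduction seen below must come from the bare mass rather than from the multipole expansion itself.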
![Comparison of electric dipole strength functions for $^{40,48}$Ca and $^{208}$Pb obtained with the usual long-wave and exact transition operators (with natural charges). The overall strength of the exact s.f. is found to be uniformly reduced due to the use of the bare mass.[]{data-label="fig_E1exact"}](E1exact.pdf){width="\textwidth"}
As can be seen in Fig. \[fig\_E1exact\] (calculated here with inclusion of the kinetic center-of-mass term), the “exact” strength function has a significantly reduced amplitude (by a factor of about 1.4). This fact can be explained by the inadequate use of the bare mass in the nuclear current (\[j\_nuc\]). The effective mass is smaller, but only in isovector transitions, as was mentioned also in the explanation of the EWSR (\[EWSR-wf\]), and this can be demonstrated for E1 compression transitions. These can be calculated either with the current-based transition operator $\hat{M}_\mathrm{com}$ or with the density-based $\hat{M}_\mathrm{com'}$, which are related by the continuity equation. The isoscalar transition ($z_p=z_n=1$) gives nearly equal results for both choices, while the isovector transition ($z_p=N/A,\,z_n=-Z/A$) gives a reduced strength for the current-based operator.
![Comparison of current- and density-based compression strength for $^{40,48}$Ca and $^{208}$Pb. Better agreement in scale is found for isoscalar s.f.[]{data-label="fig_E1vtcT0"}](E1vtcT01.pdf){width="\textwidth"}
A comparison with exact operators is also done for M1 and E2 transitions of $^{208}$Pb in Fig. \[fig\_E2M1exact\]. The quadrupole resonance was calculated with natural charges ($z_p=1,\,z_n=0$), so the resulting strength function is a superposition of both isoscalar and isovector components.
![Comparison of “exact” and long-wave E2 and M1 strength for $^{208}$Pb (with natural charges). Isoscalar transitions of E2 (two largest peaks) are not reduced, in contrast with the isovector residue. Green lines in a) show the position of the first two $2^+$ states [@A208] and the centroid and width of isoscalar giant quadrupole resonance [@Youngblood2004], same for M1 spin-flip resonance [@Laszewski1988].[]{data-label="fig_E2M1exact"}](E2M1exact.pdf){width="\textwidth"}
A closer look at the presented results also shows that the effective mass is not a constant but depends on the multipolarity and also on the energy (except for long-wave E1). This point was not studied further here, and the remaining sections present only results involving long-wave or toroidal/compression operators.
Full RPA versus SRPA
--------------------
A faster calculation of the strength function can be achieved with separable RPA [@Nesterenko2002; @Nesterenko2006], whose spherical formulation is described in appendix \[app\_SRPA\]; its main features are summarized also here. SRPA requires a set of input operators which provide generating fields for nuclear excitation, and the resulting responses give rise to a separable form of the residual interaction. The accuracy of the separable interaction grows with the number of input operators, which have to be chosen in a clever way so as to cover the most important aspects of the full interaction. The following time-even operators were utilized for electric transitions: $$\label{Q1234}
\begin{split}
\hat{Q}_1 &= \int \mathrm{d}^3r \, \hat{\rho}(\vec{r}) \, r^\lambda Y_{\lambda\mu},\quad
\hat{Q}_2 = \int \mathrm{d}^3r \, \hat{\rho}(\vec{r}) \, r^{\lambda+2} Y_{\lambda\mu}, \\
\hat{Q}_3 &= \int \mathrm{d}^3r \, \hat{\rho}(\vec{r}) \,
j_\lambda(0.9x_\lambda r/r_\mathrm{diff})\, Y_{\lambda\mu},\quad
\hat{Q}_4 = \int \mathrm{d}^3r \, \hat{\rho}(\vec{r}) \,
j_\lambda(1.2x_\lambda r/r_\mathrm{diff})\, Y_{\lambda\mu}
\end{split}$$ The operators were used in consecutive order, so, e.g., the 2-operator SRPA means that $\hat{Q}_1$ and $\hat{Q}_2$ were used. An accurate description of toroidal transitions also required additional spin-dependent operators $$\label{Q567}
\hat{Q}_{5,6} = \int\mathrm{d}^3 r\: [\vec{\nabla}\cdot\hat{\vec{\mathcal{J}}}(\vec{r})]
\cdot\Big\{\begin{array}{l} r Y_{1\mu} \\ r^3 Y_{1\mu} \end{array}, \quad
\hat{Q}_7 = \int\mathrm{d}^3 r\: \hat{\vec{\mathcal{J}}}(\vec{r})\cdot
\vec{\nabla}\times r^3 \vec{Y}_{1\mu}^1$$ and time-odd operators $$\label{P_add}
\hat{P}_{8} = \int\mathrm{d}^3 r\: [\vec{\nabla}\times\hat{\vec{s}}(\vec{r})]
\cdot\vec{\nabla}\times r^3\vec{Y}_{1\mu}^1, \quad
\hat{P}_{9,10} = \int\mathrm{d}^3 r\: \hat{\vec{j}}(\vec{r})
\cdot\vec{\nabla}\times \Big\{\begin{array}{l} r\vec{Y}_{1\mu}^1 \\
r^3 \vec{Y}_{1\mu}^1\end{array}$$ making use of familiar Skyrme currents (\[Jd\_op\]).
Separate operators were used for protons and neutrons. Moreover, time-conjugate counterparts were created as $$\hat{P}_{k} = \mathrm{i}[\hat{H},\hat{Q}_{k}]\qquad \textrm{or}\qquad
\hat{Q}_{k} = \mathrm{i}[\hat{H},\hat{P}_{k}]$$ so the total number of input operators, and thus the dimension of the separable interaction, is four times larger than the numbers given here.
![Comparison of E1 strength functions calculated by full RPA and SRPA with increasing number of input operators.[]{data-label="fig_E1srpa"}](E1srpa.pdf){width="\textwidth"}
Figure \[fig\_E1srpa\] shows the results for electric dipole strength functions. Toroidal and compression transitions were calculated with natural charges and the center-of-mass correction in the operators (\[cmc-generic\]). As can be seen, one operator already gives the correct position of the giant dipole resonance (GDR). The second operator corrects also the compression strength function, but many more operators (containing also spin) are needed for an accurate reproduction of the toroidal s.f. Even in that case, the calculation time for $^{208}$Pb is reduced from one hour (full RPA) to around 2 minutes (SRPA with a 4000-point strength function).
![Comparison of E2 and M1 strength functions calculated by full RPA and SRPA with increasing number of input operators.[]{data-label="fig_M1E2srpa"}](M1E2srpa.pdf){width="60.00000%"}
Other multipolarities are satisfactorily described with fewer operators. Figure \[fig\_M1E2srpa\] shows results for electric quadrupole (with natural charges; using $\hat{Q}_1,\,\hat{Q}_2,\,\hat{Q}_5$) and M1 transitions. Magnetic transitions were calculated with up to three operators: $$\label{P_magn}
\hat{P}_1 = \vec{\sigma}\cdot r^0\vec{Y}_{1\mu}^0,\quad
\hat{P}_2 = \vec{\sigma}\cdot r^2\vec{Y}_{1\mu}^0,\quad
\hat{P}_3 = \vec{l}\cdot r^2\vec{Y}_{1\mu}^0.$$ The $^{48}$Ca M1 s.f. experienced a large shift with 2 input operators, demonstrating that SRPA is prone to instabilities. Such occasional deviations show that a perfect description by means of SRPA cannot be guaranteed a priori, and this method remains mainly an interesting mathematical tool providing a first estimate of the strength function when full RPA is numerically not feasible.
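The role of the number of input operators can be illustrated by a generic low-rank argument: SRPA approximates the full residual interaction by a separable (low-rank) form, and the approximation error shrinks as the rank grows. The following Python toy uses a random symmetric matrix in place of the actual Skyrme kernel, so it is purely illustrative of the mechanism, not of the physics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric matrix standing in for the full residual-interaction
# kernel (the real SRPA kernel is built from Skyrme responses).
n = 200
A = rng.normal(size=(n, n))
V = (A + A.T) / 2

# Best rank-k approximation via truncated eigendecomposition, mimicking
# how adding input operators enlarges the separable space.
w, U = np.linalg.eigh(V)
order = np.argsort(-np.abs(w))        # strongest modes first

errs = []
for k in (1, 2, 4, 16):               # "number of input operators"
    Uk, wk = U[:, order[:k]], w[order[:k]]
    Vk = (Uk * wk) @ Uk.T             # rank-k separable approximation
    errs.append(np.linalg.norm(V - Vk) / np.linalg.norm(V))

print([round(e, 3) for e in errs])    # relative error decreases with rank
```

The error decreases monotonically with the rank, but, as in SRPA, a small basis can still misplace individual states if the discarded part of the kernel happens to matter for them.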
Physical results {#ch_results}
================
This chapter presents selected results, either published or submitted for publication, calculated mostly with full RPA in axial (cylindrical) symmetry, with the exception of the first part, which deals with the pygmy resonance in spherical nuclei. The programs described in this work were utilized in the following areas of research:
- toroidal nature of the low-energy (pygmy) E1 mode [@Repko2013; @Reinhard2014; @Nesterenko2015]
- vortical, toroidal and compression transitions by spherical SRPA in tin isotopes [@Kvasil2013] (not given here)
- low lying $2^+$ states in rare earths [@Nesterenko-lowE2] (not given here)
- monopole transitions in spherical nuclei [@Kvasil2015-ischia; @Kvasil2015] and in deformed $^{24}$Mg [@Nesterenko-Mg24] (not given here)
- description of axial full RPA with a sample calculation of $^{154}$Sm E1, E2, M1 [@Repko-istros]
- magnetic (M1) transitions in deformed $^{50}$Cr [@Pai2016]
Strength functions displayed in this chapter employ a double-folding procedure, which makes the Lorentz-smoothing parameter $\Delta$ energy-dependent. Formula (\[sf\]) is then replaced by $$\begin{aligned}
\label{sf_df}
S_n(\mathrm{E/M}\lambda\mu; E) &= \sum_\nu E^n
B(\mathrm{E/M}\lambda\mu,0\rightarrow\nu)\delta_{\Delta_\nu}(E_\nu-E) \\
\delta_{\Delta_\nu}(E_\nu-E) &= \frac{\Delta_\nu}{2\pi[(E_\nu-E)^2+(\Delta_\nu/2)^2]}, \\[-8pt]
&\qquad\qquad\qquad\qquad\textrm{where }\Delta_\nu = \begin{cases}
\Delta_0\quad\textrm{for }E_\nu < E_0 \\
\Delta_0+a(E_\nu-E_0)\quad\textrm{for }E_\nu > E_0 \end{cases} \nonumber\end{aligned}$$ and $E_0$ is the nucleon emission threshold (the smaller of the $p/n$ separation energies). Other parameters are chosen as $$\label{df_param}
\Delta_0 = 0.15\ \mathrm{MeV},\qquad a = 0.15$$
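The double-folding prescription above translates directly into a few lines of Python. The smoothing formula itself follows the text; the state energies, $B$-values and the 8 MeV threshold below are hypothetical placeholders:

```python
import numpy as np

def lorentz(E, E_nu, width):
    """Energy-dependent Lorentzian delta_{Delta_nu}(E_nu - E) from the text."""
    return width / (2 * np.pi * ((E_nu - E) ** 2 + (width / 2) ** 2))

def strength_function(E_grid, E_states, B_states, E0, delta0=0.15, a=0.15, n=0):
    """Double-folded S_n(E); E0 is the nucleon emission threshold.

    The width is delta0 below E0 and grows linearly above it, so discrete
    states in the continuum region are smoothed more strongly."""
    S = np.zeros_like(E_grid)
    for E_nu, B in zip(E_states, B_states):
        width = delta0 if E_nu < E0 else delta0 + a * (E_nu - E0)
        S += E_nu ** n * B * lorentz(E_grid, E_nu, width)   # moment factor E_nu^n
    return S

# Toy example: one state below and one above an assumed 8 MeV threshold.
E = np.linspace(0.0, 20.0, 2000)
S = strength_function(E, E_states=[6.0, 14.0], B_states=[1.0, 1.0], E0=8.0)
```

The state at 6 MeV keeps the narrow width $\Delta_0$, while the 14 MeV state is folded with $\Delta_0 + a\,(14-8) = 1.05$ MeV; the integrated strength is conserved up to the Lorentzian tails.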
Transition densities and currents (\[trans\_rho\],\[trans\_cur\]) are calculated either for one state or averaged over multiple states in a given energy interval. In this averaging, there is an ambiguity in the overall phase factors of the structure constants. For this reason, the transition currents are weighted by transition matrix elements [@Repko2013], which also somewhat suppresses the non-collective states, so that the resulting density better expresses the nature of the excitations in a given energy region.
\[cur\_avg\] $$\begin{aligned}
\delta\rho_q^\mathrm{(E\lambda)}(\vec{r}) &= \sum_{\nu\in(E_1,E_2)}
\langle[\hat{C}_\nu^{\phantom{|}},\hat{M}_{\lambda\mu}^\mathrm{E}]\rangle^*\,
\langle[\hat{C}_\nu^{\phantom{|}},\hat{\rho}_q(\vec{r})]\rangle \\
\delta\vec{j}_q^\mathrm{(\lambda)}(\vec{r}) &= \sum_{\nu\in(E_1,E_2)}
\langle[\hat{C}_\nu^{\phantom{|}},\hat{M}_{\lambda\mu}]\rangle^*\,
\langle[\hat{C}_\nu^{\phantom{|}},\hat{\vec{j}}_q(\vec{r})]\rangle\end{aligned}$$
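The point of this weighting is that the arbitrary overall phase of each RPA state cancels, since every state's density is multiplied by the complex conjugate of its own matrix element. A minimal numpy check (with random, non-physical stand-ins for the matrix elements and densities) makes this explicit:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_r = 5, 50

# Toy stand-ins for <[C_nu, M]> (matrix elements) and <[C_nu, rho(r)]>
# (state-by-state transition densities); the values are not physical.
M = rng.normal(size=n_states) + 1j * rng.normal(size=n_states)
rho = rng.normal(size=(n_states, n_r)) + 1j * rng.normal(size=(n_states, n_r))

def averaged_density(M, rho):
    # sum_nu  <[C_nu, M]>^*  <[C_nu, rho(r)]>,  as in the weighting formula
    return (np.conj(M)[:, None] * rho).sum(axis=0)

# Multiply every state by a random overall phase -- the RPA ambiguity:
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_states))
d0 = averaged_density(M, rho)
d1 = averaged_density(M * phases, rho * phases[:, None])
print(np.allclose(d0, d1))   # True: the phases cancel in the weighted sum
```

An unweighted sum of the densities alone would change with the phases; the conjugate weighting removes exactly this ambiguity.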
The multipolar $\mu$-components of the strength functions and other quantities (for given $\lambda$) are denoted also by $K^\pi$, and then they are understood as a sum of $\mu=\pm K$ components, while $\pi=(-1)^\lambda$ for electric transitions and $\pi=(-1)^{\lambda+1}$ for magnetic transitions.
Toroidal nature of low-energy E1 mode
-------------------------------------
Figure \[fig\_pygmy\_sf\] shows the E1 strength functions calculated with full RPA for spherical nuclei with a large SHO basis ($N_\mathrm{RPA}=80$ for $^{40}$Ca and $N_\mathrm{RPA}=100$ for $^{48}$Ca and $^{208}$Pb) and with all center-of-mass corrections (kinetic + operator). Double folding was done with (\[df\_param\]). The inset in Fig. \[fig\_pygmy\_sf\]g shows that the (one-phonon) RPA cannot reproduce the experimentally observed fragmentation in $^{208}$Pb [@Poltoratska2012].
![Double-folded strength functions of $^{40,48}$Ca, $^{208}$Pb for E1 transitions: GDR, and isoscalar toroidal and compression. Experimental data are given for photoabsorption cross section: $^{40}$Ca [@Ahrens1975], $^{48}$Ca [@OKeefe1987], $^{208}$Pb [@Veyssiere1970], for $(p,p')$-deduced $B(\mathrm{E1})$ in $^{208}$Pb (exp. 2) [@Poltoratska2012]; and for isoscalar E1 [@Youngblood2004]. The transitions were calculated with full RPA with Skyrme parametrization SLy7 and including $\mathcal{H}_\mathrm{c.m.}$ (with HF VAP). Isoscalar transition operators were taken with $z_p=z_n=0.5,\,g_p=g_n=0.88\times0.7$[]{data-label="fig_pygmy_sf"}](pygmy_sf.pdf){width="\textwidth"}
The giant dipole resonance (GDR) of heavy neutron-rich nuclei contains a low-energy branch around the nucleon emission threshold (Fig. \[fig\_pygmy\_sf\]g), which is called the “pygmy” mode and is mostly interpreted as an oscillation of the neutron skin against the proton-neutron core [@Savran2013]. To test this assumption, I calculated transition densities (Fig. \[fig\_Pb208rho\]) and transition currents (Fig. \[fig\_Pb\_cur\]) of $^{208}$Pb in the pygmy region (here chosen as 6-8.5 MeV). The weighting operator in (\[cur\_avg\]) was the long-wave (isovector) E1 operator, to avoid biasing the interpretation by “forcing” a certain type of motion through the operator choice.
![Transition densities of $^{208}$Pb weighted by long-wave E1 operator in given energy intervals. Calculated by full RPA, Skyrme SLy7, including $\mathcal{H}_\mathrm{c.m.}$ (with HF VAP).[]{data-label="fig_Pb208rho"}](Pb208_rho.pdf){width="\textwidth"}
![Transition currents of $^{208}$Pb weighted by long-wave E1 operator in given energy intervals.[]{data-label="fig_Pb_cur"}](Pb_cur.pdf){width="90.00000%"}
Although the transition densities appear to confirm the “pygmy” picture of an oscillating neutron skin, we should be careful with this interpretation, because the transition density is not sensitive to vortical motion, as follows from the continuity equation (the divergence of a curl is zero): $$-\mathrm{i}kc\delta\rho = -\partial_t\delta\rho = \vec{\nabla}\cdot\delta\vec{j}$$ And indeed, Figures \[fig\_Pb\_cur\]ab and \[fig\_Ca\_cur\] indicate a toroidal flow, together with the larger amplitude of the toroidal s.f. in comparison with the compression s.f. In fact, a motion reminiscent of a skin vibration appears in the high-energy region of the compression resonance (Fig. \[fig\_pygmy\_sf\]i, Fig. \[fig\_Pb\_cur\]ef).
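The statement that the transition density is blind to vortical flow can also be checked numerically: any current of the form $\delta\vec{j}=\vec{\nabla}\times\vec{A}$ has zero divergence and therefore drops out of the continuity equation. A finite-difference sketch, with an arbitrary smooth vector potential chosen only for illustration:

```python
import numpy as np

# Cubic grid and an arbitrary smooth vector potential A; the purely
# vortical current j = curl(A) then has zero divergence, so it produces
# no transition density via the continuity equation.
n = 32
ax = np.linspace(-1.0, 1.0, n)
h = ax[1] - ax[0]
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
g = np.exp(-(X**2 + Y**2 + Z**2))
A = (g * Z, np.zeros_like(X), g * X)

def curl(F, h):
    Fx, Fy, Fz = F
    return (np.gradient(Fz, h, axis=1) - np.gradient(Fy, h, axis=2),
            np.gradient(Fx, h, axis=2) - np.gradient(Fz, h, axis=0),
            np.gradient(Fy, h, axis=0) - np.gradient(Fx, h, axis=1))

def div(F, h):
    Fx, Fy, Fz = F
    return (np.gradient(Fx, h, axis=0) + np.gradient(Fy, h, axis=1)
            + np.gradient(Fz, h, axis=2))

j = curl(A, h)
print(np.abs(div(j, h)).max())   # vanishes up to rounding error
```

The current itself is clearly nonzero, yet its divergence vanishes on the grid, which is the numerical counterpart of a toroidal mode carrying strength without a corresponding transition-density signal.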
![Transition currents of $^{40}$Ca and $^{48}$Ca weighted by long-wave E1 operator in given energy intervals.[]{data-label="fig_Ca_cur"}](Ca_cur.pdf){width="70.00000%"}
We can therefore propose the following interpretation of the increase of the low-energy photoabsorption cross section with increasing neutron excess: the low-energy E1 states have a mostly isoscalar toroidal nature with some compression component, and the electromagnetic strength is related to the compensating center-of-mass motion of the protons in response to the compressional component of the neutron skin. The isoscalar nature of the low-lying E1 states was also experimentally confirmed in $^{40}$Ca [@Papakonstantinou2011] and $^{48}$Ca [@Derya2014]. Calcium isotopes were theoretically analyzed also with second RPA [@Gambacurta2011], which includes two-phonon configurations, increasing the strength and fragmentation of the low-lying E1 states; the characteristic neutron-skin vibration was not confirmed there.
The EWSR of the long-wave isovector E1 is exhausted by 0.35%, 0.24% and 1.08% in the selected intervals for $^{40,48}$Ca (Fig. \[fig\_Ca\_cur\]) and $^{208}$Pb (Fig. \[fig\_Pb\_cur\]ab), respectively.
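For reference, such fractions follow from dividing the energy-weighted strength in the interval by the classical TRK sum rule, $S_\mathrm{TRK}\approx14.8\,NZ/A$ e$^2$fm$^2$MeV (the textbook value, without enhancement factors). A sketch with hypothetical state energies and $B(\mathrm{E1})$ values, not the calculated ones:

```python
def trk_sum_rule(N, Z):
    """Classical TRK E1 sum rule, 9 hbar^2/(8 pi m) * NZ/A ~ 14.8 NZ/A e^2 fm^2 MeV."""
    A = N + Z
    return 14.8 * N * Z / A

def ewsr_fraction(energies, B_values, E_lo, E_hi, N, Z):
    """Fraction of the TRK sum exhausted by states inside [E_lo, E_hi] (MeV)."""
    s = sum(E * B for E, B in zip(energies, B_values) if E_lo <= E <= E_hi)
    return s / trk_sum_rule(N, Z)

# Hypothetical low-energy E1 states of a 208Pb-like nucleus (N=126, Z=82):
frac = ewsr_fraction([6.5, 7.2, 8.0], [0.5, 0.8, 0.4], 6.0, 8.5, N=126, Z=82)
print(f"{100 * frac:.2f} % of the TRK sum rule")
```

Fractions at the percent level, as quoted above, are typical for the pygmy region, since the TRK sum for a heavy nucleus amounts to several hundred e$^2$fm$^2$MeV.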
Deformed $^{154}$Sm: E0, E1, E2
-------------------------------
![Double-folded strength functions of $^{154}$Sm for electric monopole, dipole and quadrupole resonance, compared to the experimental data (isoscalar [@Youngblood2004], isovector [@Carlos1974]), and magnetic dipole transitions. Experimental distributions are given with respective errorbars, and individual states are given by vertical lines.[]{data-label="fig_Sm154sf"}](Sm154_sf_rev.pdf){width="\textwidth"}
The calculation of giant resonances was performed also for the deformed nucleus $^{154}$Sm, for which experimental data are available for isovector E1 (from photoabsorption) [@Carlos1974] as well as isoscalar E0 and E2 (from $\alpha$-scattering) [@Youngblood2004]. The parametrization SLy6 [@SLy6] was utilized, which was fitted without the tensor term (the tensor term was found to cause certain problems in axial E1 RPA – a too high spurious state and broken rotational symmetry for spherical nuclei – probably related to the Hartree-Fock stage). The equilibrium deformation $\beta=0.341$ was determined by HF. Strength functions of E0, E1, E2, as well as M1, are depicted in Fig. \[fig\_Sm154sf\]. They were calculated by full RPA and by SRPA with 5 input operators $$\begin{split}
\hat{Q}_1 &= r^\lambda Y_{\lambda\mu},\quad
\hat{Q}_2 = r^{\lambda+2} Y_{\lambda+2,\mu},\quad
\hat{Q}_3 = j_\lambda(0.6r\ldots)Y_{\lambda\mu},\\
&\quad\hat{Q}_4 = j_\lambda(0.9r\ldots)Y_{\lambda\mu},\quad
\hat{Q}_5 = j_\lambda(1.2r\ldots)Y_{\lambda\mu}
\end{split}$$ and the single-particle levels were taken up to 40 MeV. In the case of E2 ($\mu=0$), the second operator was replaced by $r^2Y_{00}$. The calculation time was around 24 hours for the most demanding full RPA cases (E1, $\mu=1$, 22570 $2qp$ pairs, using 24 GB of RAM and 8 threads on a 12-core 3.46 GHz Intel Xeon Westmere workstation, with the spurious state at 2.102 MeV; similarly for E2, $\mu=1$, 22558 $2qp$ pairs, with the spurious state at 0.962 MeV; later optimizations led to an over 2-fold speedup), while the SRPA calculation took 1 hour for E1 and 2 hours for E2 on a 2.5 GHz Intel i5 Sandy Bridge laptop (using one thread). However, the results of the full RPA calculation for E2 ($\mu=1$) could be reused to evaluate also M1 at almost no cost ($\lambda$ is not a good quantum number in the body-fixed system, so the multipolarity is determined only by the transition operator). Double-folding parameters were chosen as (\[df\_param\]); the isoscalar transitions used $z_p=z_n=1$, the isovector charges were $z_p=N/A,\,z_n=-Z/A$, and the magnetic transitions were calculated with natural charges. The first state of E2 ($\mu=1$) and M1 ($\mu=1$) is spurious and corresponds to the rotation of the nucleus.
The isoscalar giant quadrupole resonance (IS GQR) is clearly up-shifted in energy compared to the experimental data [@Youngblood2004]. This shift is caused by an effective mass smaller than 1 ($m^*/m\approx0.7$ for the SLy forces), while the GQR can be accurately reproduced with parametrizations having $m^*/m\approx1$ [@Nesterenko2006] (e.g. SkT6 [@SkT6]) – these forces, however, fail to describe the GDR. There was an attempt to resolve such tensions with a new parametrization SV-bas ($m^*/m\approx0.9$) [@SVbas], which, however, fails in the calculation of M1 transitions (due to a non-positive-definite matrix $P$). For this reason, and also due to the omission of the tensor term and due to the tuning of the $b_4'$ term – which is inconsistent with the two-body Skyrme interaction (\[V\_skyrme\]) – the SV-bas force was not used in the present work.
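The size of the up-shift can be estimated from the standard harmonic-oscillator formula $E_\mathrm{GQR}\approx\sqrt{2m/m^*}\,\hbar\omega_0$ with $\hbar\omega_0\approx41\,A^{-1/3}$ MeV. This is a schematic textbook estimate, not the RPA result itself, but it reproduces the trend:

```python
def gqr_energy(A, m_eff_ratio):
    """Schematic ISGQR energy: sqrt(2 m/m*) * hbar*omega0,
    with hbar*omega0 = 41 * A^(-1/3) MeV (textbook estimate)."""
    hbar_omega0 = 41.0 * A ** (-1.0 / 3.0)
    return (2.0 / m_eff_ratio) ** 0.5 * hbar_omega0

# Effective-mass ratios representative of SkT6-, SV-bas- and SLy-type forces:
for ratio in (1.0, 0.9, 0.7):
    print(f"m*/m = {ratio}:  E_GQR({154}) ~ {gqr_energy(154, ratio):.1f} MeV")
```

Going from $m^*/m=1$ to $m^*/m=0.7$ raises the estimate by roughly 2 MeV for $A=154$, which is the order of the up-shift visible in Fig. \[fig\_Sm154sf\].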
![Transition currents of $^{154}$Sm, evaluated in the area of giant quadrupole resonance, weighted by isoscalar E2 operator.[]{data-label="fig_Sm154E2"}](Sm154_E2T0_cur_rev.pdf){width="90.00000%"}
It is instructive to show the transition currents corresponding to the individual branches of the GQR (Fig. \[fig\_Sm154E2\]). The $K^\pi=0^+$ branch has the character of a $\beta$-vibration and the lowest energy, and it mixes with the E0 resonance. On the contrary, the $K^\pi=2^+$ branch has the highest energy and the character of a $\gamma$-vibration, and it is relatively pure, so that also separable RPA describes it accurately (Fig. \[fig\_Sm154sf\]c).
![Transition currents of $^{154}$Sm, evaluated in the area of scissor mode, weighted by the M1 operator.[]{data-label="fig_Sm154M1"}](Sm154_M1_cur.pdf){width="70.00000%"}
Finally, two more figures are given: for the scissor mode of the M1 resonance (Fig. \[fig\_Sm154M1\]) and for the low-energy mode of the E1 resonance (Fig. \[fig\_Sm154pygmy\]). The first case is not present in spherical nuclei, and the second case shows a clear difference between the two components ($\mu=0,1$) of the toroidal resonance caused by the broken spherical symmetry.
![Transition currents of $^{154}$Sm, evaluated in the area of “pygmy” mode, weighted by the isovector E1 operator.[]{data-label="fig_Sm154pygmy"}](Sm154_pygmy_cur_rev.pdf){width="80.00000%"}
Scissor and spin-flip parts of M1 resonance in $^{50}$Cr
--------------------------------------------------------
Recent experimental data on M1 transitions in the deformed nucleus $^{50}$Cr [@Pai2016] allow us to investigate the accuracy of theoretical predictions for this light deformed nucleus. The full quasiparticle RPA calculation utilized the Skyrme force SGII [@SGII], which was developed with the aim of reproducing Gamow-Teller transitions. The equilibrium deformation was $\beta=0.314$. Figure \[fig\_Cr50\_M11\] compares the M1 ($\mu=1$) transition probabilities, calculated by full RPA (s.p. basis up to 50 MeV), with the experiment. The scissor mode (see also Fig. \[fig\_Cr50\_scissor\]) is concentrated in one state, in agreement with the experiment, and the spin-flip mode is spread over multiple states of higher energy. The Skyrme calculation underestimates the energy of both modes, while the weak fragmentation of the calculated spin-flip mode is caused by the one-phonon nature of RPA.
![Comparison of experimental and theoretical M1 transitions with energy up to 10 MeV in $^{50}$Cr. Experimental values of B(M1)$\uparrow$ have an uncertainty of about 10% [@Pai2016]. RPA results are given for the component $\mu=\pm1$. Additional plots show the transition strengths calculated only with either c) the orbital, or d) the spin part of the M1 transition operator.[]{data-label="fig_Cr50_M11"}](Cr50_M11.pdf){width="70.00000%"}
![Transition currents (convective and magnetization) of the 2.82 MeV scissor mode.[]{data-label="fig_Cr50_scissor"}](Cr50_curall.pdf){width="70.00000%"}
The orbital motion of the states beyond 3 MeV is mostly isoscalar, except for the strongest spin-flip state at 7.65 MeV, as can be demonstrated by evaluating the electric quadrupole transition probabilities (Fig. \[fig\_Cr50\_E21\]). The $\mu=0$ branch of M1 transitions gives a minor contribution and becomes important only in the higher-energy region (9-12 MeV; see Fig. \[fig\_Cr50\_M1sf\]).
![Transitions $K^\pi=1^+$ in $^{50}$Cr evaluated with E2 operator ($\mu=\pm1$). Most of the states have isoscalar character, except isovector scissor mode at 2.82 MeV and spin-flip mode at 7.65 MeV with mixed character. Two strongest isoscalar transitions are truncated, giving their B(E2).[]{data-label="fig_Cr50_E21"}](Cr50_E21.pdf){width="70.00000%"}
![Total calculated $^{50}$Cr M1 strength given as a strength function. Orbital and spin part show constructive interference up to 7 MeV, and destructive interference beyond that.[]{data-label="fig_Cr50_M1sf"}](Cr50_M1_sf_rev.pdf){width="70.00000%"}
Summary
=======
Skyrme Random Phase Approximation was successfully implemented for spherical and axially symmetric nuclei, including the Coulomb and pairing interactions. The spherical formalism involved advanced angular-momentum-coupling techniques and made it possible to formulate the results in a convenient and rotationally invariant way in terms of reduced matrix elements of densities and currents, in contrast with previous formulations, which were either not manifestly invariant [@Reinhard1992] or too cumbersome [@Terasaki2005; @Colo2013].
The kinetic center-of-mass term was implemented in Hartree-Fock before variation, as well as in RPA, and led to an interesting behavior with respect to the removal of the spurious motion. Nevertheless, this term can be safely omitted due to its relatively small influence on the giant resonances. In the axial case, the logarithmic singularity of the Coulomb integral was correctly removed by a procedure inspired by the Cartesian case, which can be treated analytically (to a certain degree).
A large-basis calculation on the spherical closed-shell nuclei $^{40,48}$Ca and $^{208}$Pb was used to demonstrate the influence of the oscillator length and of the tensor ($\mathcal{J}^2$) and spin terms in the Skyrme functional, the accuracy of the long-wave transition operators, and the separable RPA method. The last chapter gives an analysis of selected physical topics, such as the isoscalar toroidal character of the low-energy (“pygmy”) E1 mode, electric multipole resonances in deformed $^{154}$Sm, and the scissor and spin-flip M1 transitions in $^{50}$Cr. All these calculations are restricted to one-phonon excitations with a discrete basis of s.p. states, so the fragmentation due to coupling with complex configurations and the escape width are simulated only by the Lorentz smoothing of the strength functions. A better description should include multi-phonon configurations; however, it is not clear whether such methods are physically valid for density functionals. For example, the zero-range interaction leads to a divergent correlation energy [@Moghrabi2010]. If we stay in the present one-phonon RPA framework, there is still the possibility of an extension to $\beta$-transitions.
The results of calculations with the methods described in this work were published in [@Repko2013; @Reinhard2014; @Nesterenko2015; @Kvasil2013; @Kvasil2015-ischia; @Kvasil2015] and submitted for publication in [@Nesterenko-lowE2; @Nesterenko-Mg24; @Repko-istros; @Pai2016].
Detailed derivation of Skyrme functional from two-body interaction {#app_skyr-dft}
==================================================================
Skyrme interaction (\[V\_skyrme\]) can be completely rewritten (including its exchange term) into a density functional (\[Skyrme\_DFT\]) in terms of generalized one-body densities and currents (\[Jd\_gs\]).
Indices $j,k,l$ will denote Cartesian coordinates, so the summations run over $\{x,y,z\}$. The index $s$ in $\psi_{\alpha s}(\vec{r})$ denotes the spin projection of a given wave function.
$$\begin{array}{ll}
\langle \vec{r}_1 s_1,\vec{r}_2 s_2|\alpha\beta\rangle = \psi_{\alpha s_1}(\vec{r}_1) \psi_{\beta s_2}(\vec{r}_2), \ \ & \displaystyle
\langle \alpha\beta| \delta(\vec{r}_1-\vec{r}_2)|\alpha\beta\rangle =
\int \psi_\alpha^\dagger(\vec{r})\psi_\alpha^{\phantom{|}}(\vec{r})
\psi_\beta^\dagger(\vec{r})\psi_\beta^{\phantom{|}}(\vec{r})\mathrm{d}^3 r \\
\langle \vec{r}_1 s_1,\vec{r}_2 s_2|\beta\alpha\rangle = \psi_{\beta s_1}(\vec{r}_1) \psi_{\alpha s_2}(\vec{r}_2), \ \ &
\langle \vec{r}_1 s_1,\vec{r}_2 s_2|\hat{P}_\sigma|\beta\alpha\rangle = \psi_{\beta s_2}(\vec{r}_1) \psi_{\alpha s_1}(\vec{r}_2)\phantom{\Big{|}}
\end{array}$$
\
The spin-exchange term $\hat{P}_\sigma=\frac{1}{2}(1+\vec{\sigma}_1\cdot\vec{\sigma}_2)$ (\[P\_sigma\]) can either act so as to reverse the effect of the HF exchange term (with the later setting $\vec{r}_1=\vec{r}_2$), or it is taken simply from its definition and introduces $\vec{\sigma}$ matrices into the integrals: $$\begin{aligned}
\langle \alpha\beta|\hat{P}_\sigma\delta(\vec{r}_1-\vec{r}_2)|\beta\alpha\rangle &=
\int \psi_\alpha^\dagger(\vec{r})\psi_\alpha^{\phantom{|}}(\vec{r})
\psi_\beta^\dagger(\vec{r})\psi_\beta^{\phantom{|}}(\vec{r})\mathrm{d}^3 r \\
\langle \alpha\beta|\hat{P}_\sigma\delta(\vec{r}_1-\vec{r}_2)|\alpha\beta\rangle &=
\frac{1}{2}\int \psi_\alpha^\dagger(\vec{r})\psi_\alpha^{\phantom{|}}(\vec{r})
\psi_\beta^\dagger(\vec{r})\psi_\beta^{\phantom{|}}(\vec{r})\mathrm{d}^3 r \\
&\quad {}+
\frac{1}{2}\int [\psi_\alpha^\dagger(\vec{r})\vec{\sigma}\psi_\alpha^{\phantom{|}}(\vec{r})]\cdot
[\psi_\beta^\dagger(\vec{r})\vec{\sigma}\psi_\beta^{\phantom{|}}(\vec{r})]\mathrm{d}^3 r \\
&= \langle\alpha\beta|\hat{P}_\sigma\delta(\vec{r}_1-\vec{r}_2)\hat{P}_\sigma|\beta\alpha\rangle = \langle\alpha\beta|\delta(\vec{r}_1-\vec{r}_2)|\beta\alpha\rangle\end{aligned}$$ The term with $t_0$ and $x_0$ can now be obtained easily: $$\label{t0-term}
\begin{split}
\sum_{\alpha\beta}\langle\alpha\beta|t_0(1+x_0\hat{P}_\sigma)\delta(\vec{r}_1-\vec{r}_2)|\alpha\beta\rangle &= \int \bigg[
t_0\Big(1+\frac{x_0}{2}\Big)\rho^2 + \frac{t_0 x_0}{2}\vec{s}^2 \bigg]\mathrm{d}^3 r \\
\sum_{\alpha\beta\in q}\langle\alpha\beta|t_0(1+x_0\hat{P}_\sigma)\delta(\vec{r}_1-\vec{r}_2)|\beta\alpha\rangle &= \int \bigg[
t_0\Big(\frac{1}{2}+x_0\Big)\rho_q^2 + \frac{t_0}{2}\vec{s}_q^2 \bigg]\mathrm{d}^3 r
\end{split}$$ Parts of the $t_1$ term:
$$\begin{aligned}
\langle\alpha\beta|(1+x_1\hat{P}_\sigma)&\delta(\vec{r}_1-\vec{r}_2)(\vec{\nabla}_1-\vec{\nabla}_2)^2|\alpha\beta\rangle = \\
&= \Big(1+\frac{x_1}{2}\Big)\int \psi_\alpha^\dagger \psi_\beta^\dagger
\big\{[\Delta\psi_\alpha^{\phantom{|}}]\psi_\beta^{\phantom{|}} -
2[\vec{\nabla}\psi_\alpha^{\phantom{|}}]\cdot[\vec{\nabla}\psi_\beta^{\phantom{|}}]
+ \psi_\alpha^{\phantom{|}}[\Delta\psi_\beta^{\phantom{|}}]\big\}
\mathrm{d}^3 r \\
&\ \ {}+ \frac{x_1}{2} \int \psi_\alpha^\dagger \psi_\beta^\dagger
\bigg\{[\vec{\sigma}\Delta\psi_\alpha^{\phantom{|}}]\cdot[\vec{\sigma}\psi_\beta^{\phantom{|}}]
-2\sum_{j,k}[\partial_j\sigma_k\psi_\alpha^{\phantom{|}}][\partial_j\sigma_k\psi_\beta^{\phantom{|}}]
+[\vec{\sigma}\psi_\alpha^{\phantom{|}}]\cdot[\vec{\sigma}\Delta\psi_\beta^{\phantom{|}}]\bigg\} \mathrm{d}^3 r \\
\langle\alpha\beta|(1+x_1\hat{P}_\sigma)&\delta(\vec{r}_1-\vec{r}_2)(\vec{\nabla}_1-\vec{\nabla}_2)^2|\beta\alpha\rangle = \\
=& \Big(\frac{1}{2}+x_1\Big)\int \psi_\alpha^\dagger \psi_\beta^\dagger
\big\{[\Delta\psi_\alpha^{\phantom{|}}]\psi_\beta^{\phantom{|}} -
2[\vec{\nabla}\psi_\alpha^{\phantom{|}}]\cdot[\vec{\nabla}\psi_\beta^{\phantom{|}}]
+ \psi_\alpha^{\phantom{|}}[\Delta\psi_\beta^{\phantom{|}}]\big\}
\mathrm{d}^3 r \phantom{\bigg{|}}\\
&\ \ {}+ \frac{1}{2} \int \psi_\alpha^\dagger \psi_\beta^\dagger
\bigg\{[\vec{\sigma}\Delta\psi_\alpha^{\phantom{|}}]\cdot[\vec{\sigma}\psi_\beta^{\phantom{|}}]
-2\sum_{j,k}[\partial_j\sigma_k\psi_\alpha^{\phantom{|}}][\partial_j\sigma_k\psi_\beta^{\phantom{|}}]
+[\vec{\sigma}\psi_\alpha^{\phantom{|}}]\cdot[\vec{\sigma}\Delta\psi_\beta^{\phantom{|}}]\bigg\} \mathrm{d}^3 r\end{aligned}$$
\
I prepare derivatives of the previously defined densities:
$$\begin{aligned}
\Delta\rho_q(\vec{r}) &= 2\tau_q(\vec{r}) + \sum_{\alpha\in q}\big\{
[\Delta\psi_\alpha^{\phantom{|}}(\vec{r})]^\dagger \psi_\alpha^{\phantom{|}}(\vec{r}) +
\psi_\alpha^\dagger(\vec{r}) [\Delta\psi_\alpha^{\phantom{|}}(\vec{r})]\big\} \\
\Delta\vec{s}_q(\vec{r}) &= 2\vec{T}_q(\vec{r}) + \sum_{\alpha\in q}\big\{
[\vec{\sigma}\Delta\psi_\alpha^{\phantom{|}}(\vec{r})]^\dagger \psi_\alpha^{\phantom{|}}(\vec{r}) +
\psi_\alpha^\dagger(\vec{r}) [\vec{\sigma}\Delta\psi_\alpha^{\phantom{|}}(\vec{r})]\big\} \\
\ [\vec{\nabla}\rho(\vec{r})]^2 - 4[\vec{j}(\vec{r})]^2 &=
2\sum_{\alpha\beta}\big\{
[\vec{\nabla}\psi_\alpha^{\phantom{|}}(\vec{r})]^\dagger \cdot
[\vec{\nabla}\psi_\beta^{\phantom{|}}(\vec{r})]^\dagger \psi_\alpha^{\phantom{|}}(\vec{r})\psi_\beta^{\phantom{|}}(\vec{r}) \\[-8pt]
& \qquad\qquad{}+
\psi_\alpha^\dagger(\vec{r})\psi_\beta^\dagger(\vec{r})
[\vec{\nabla}\psi_\alpha^{\phantom{|}}(\vec{r})]\cdot
[\vec{\nabla}\psi_\beta^{\phantom{|}}(\vec{r})] \big\} \\
\ [\partial_j s_k(\vec{r})]^2-4[\mathcal{J}_{jk}(\vec{r})]^2 &=
2\sum_{\alpha\beta}\big\{
[\partial_j\sigma_k\psi_\alpha^{\phantom{|}}(\vec{r})]^\dagger
[\partial_j\sigma_k\psi_\beta^{\phantom{|}}(\vec{r})]^\dagger \psi_\alpha^{\phantom{|}}(\vec{r})\psi_\beta^{\phantom{|}}(\vec{r}) \\[-8pt]
& \qquad\qquad {}+
\psi_\alpha^\dagger(\vec{r})\psi_\beta^\dagger(\vec{r})
[\partial_j\sigma_k\psi_\alpha^{\phantom{|}}(\vec{r})]
[\partial_j\sigma_k\psi_\beta^{\phantom{|}}(\vec{r})] \big\} \\
\int_V [2\rho\Delta\rho + 2(\vec{\nabla}\rho)^2] \mathrm{d}^3 r &=
\int_V \Delta(\rho^2) \mathrm{d}^3 r =
\oint_{\partial V} \vec{\nabla}(\rho^2)\cdot \mathrm{d}\vec{S} = 0\end{aligned}$$
\
The whole $t_1$ term:
$$\begin{split}
-\frac{1}{8}t_1 \sum_{\alpha\beta}&\,\langle\alpha\beta|(1+x_1\hat{P}_\sigma)[(\overleftarrow{\nabla}_1-\overleftarrow{\nabla}_2)^2\delta(\vec{r}_1-\vec{r}_2) + \delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)^2]|\alpha\beta\rangle = \\
&= \int
\bigg\{ {-}\frac{t_1(2+x_1)}{16}[2(\Delta\rho-2\tau)\rho-(\vec{\nabla}\rho)^2+4\vec{j}^2] \\
&\qquad\qquad{}-\frac{t_1 x_1}{16} \bigg[2(\Delta\vec{s}-2\vec{T})\cdot\vec{s}-\sum_{j,k}[(\partial_j s_k)^2-4(\mathcal{J}_{jk})^2]\bigg]
\bigg\}\mathrm{d}^3 r \\
&= \int
\bigg\{ \frac{t_1(2+x_1)}{16}[3(\vec{\nabla}\rho)^2+4\rho\tau-4\vec{j}^2]\\
&\qquad\qquad{}+
\frac{t_1 x_1}{16} \bigg[4\vec{s}\cdot\vec{T}+\!\!\sum_{j,k=x,y,z}\!\![3(\partial_j s_k)^2-4(\mathcal{J}_{jk})^2]\bigg]
\bigg\}\mathrm{d}^3 r \\
-\frac{1}{8}t_1 \sum_{\alpha\beta\in q}&\,\langle\alpha\beta|(1+x_1\hat{P}_\sigma)[(\overleftarrow{\nabla}_1-\overleftarrow{\nabla}_2)^2\delta(\vec{r}_1-\vec{r}_2) + \delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)^2]|\beta\alpha\rangle = \phantom{\Bigg{|}}\\
&= \int
\bigg\{ \frac{t_1(1+2x_1)}{16}[3(\vec{\nabla}\rho_q)^2+4\rho_q\tau_q-4\vec{j}_q^2]\\
&\qquad\qquad{}+
\frac{t_1}{16} \bigg[4\vec{s}_q\cdot\vec{T}_q+\sum_{j,k}[3(\partial_j s_{q;k})^2-4(\mathcal{J}_{q;jk})^2]\bigg]
\bigg\}\mathrm{d}^3 r
\end{split}$$
\
Parts of the $t_2$ term:
$$\begin{aligned}
\langle\alpha\beta|(1+x_2\hat{P}_\sigma)\overleftarrow{\nabla}_1\cdot
\delta(\vec{r}_1-\vec{r}_2)(\vec{\nabla}_1-\vec{\nabla}_2)|\alpha\beta\rangle =
\Big(1+\frac{x_2}{2}\Big)\int \psi_\beta^\dagger (\vec{\nabla}\psi_\alpha^\dagger)\cdot
[\psi_\beta^{\phantom{|}}(\vec{\nabla}\psi_\alpha^{\phantom{|}}) -
(\vec{\nabla}\psi_\beta^{\phantom{|}})\psi_\alpha^{\phantom{|}}]
\mathrm{d}^3 r \\
{}+ \frac{x_2}{2} \int \sum_{j} \bigg\{
[(\partial_j\psi_\alpha^\dagger)\vec{\sigma}(\partial_j\psi_\alpha^{\phantom{|}})]
\cdot[\psi_\beta^\dagger\vec{\sigma}\psi_\beta^{\phantom{|}}]
-[(\partial_j\vec{\sigma}\psi_\alpha^{\phantom{|}})^\dagger\psi_\alpha^{\phantom{|}}] \cdot
[\psi_\beta^\dagger(\partial_j\vec{\sigma}\psi_\beta^{\phantom{|}})]
\bigg\} \mathrm{d}^3 r \\
\langle\alpha\beta|(1+x_2\hat{P}_\sigma)\overleftarrow{\nabla}_1\cdot
\delta(\vec{r}_1-\vec{r}_2)(\vec{\nabla}_1-\vec{\nabla}_2)|\beta\alpha\rangle =
\Big(\frac{1}{2}+x_2\Big)\int \psi_\beta^\dagger (\vec{\nabla}\psi_\alpha^\dagger)\cdot
[(\vec{\nabla}\psi_\beta^{\phantom{|}})\psi_\alpha^{\phantom{|}} -
\psi_\beta^{\phantom{|}}(\vec{\nabla}\psi_\alpha^{\phantom{|}})]
\mathrm{d}^3 r \\
{}- \frac{1}{2} \int \sum_{j} \bigg\{
[(\partial_j\psi_\alpha^\dagger)\vec{\sigma}(\partial_j\psi_\alpha^{\phantom{|}})]
\cdot[\psi_\beta^\dagger\vec{\sigma}\psi_\beta^{\phantom{|}}]
-[(\partial_j\vec{\sigma}\psi_\alpha^{\phantom{|}})^\dagger\psi_\alpha^{\phantom{|}}] \cdot
[\psi_\beta^\dagger(\partial_j\vec{\sigma}\psi_\beta^{\phantom{|}})]
\bigg\} \mathrm{d}^3 r\end{aligned}$$
\
Useful derivatives:
$$\begin{aligned}
[\vec{\nabla}\rho(\vec{r})]^2 + 4[\vec{j}(\vec{r})]^2 &=
2\sum_{\alpha\beta}\big\{
\psi_\beta^\dagger(\vec{r})[\vec{\nabla}\psi_\alpha^{\phantom{|}}(\vec{r})]^\dagger \cdot
[\vec{\nabla}\psi_\beta^{\phantom{|}}(\vec{r})] \psi_\alpha^{\phantom{|}}(\vec{r})\\[-8pt]
&\qquad\qquad{}+
\psi_\alpha^\dagger(\vec{r})[\vec{\nabla}\psi_\beta^{\phantom{|}}(\vec{r})]^\dagger\cdot
[\vec{\nabla}\psi_\alpha^{\phantom{|}}(\vec{r})]\psi_\beta^{\phantom{|}}(\vec{r}) \big\} \\
\ [\partial_j s_k(\vec{r})]^2+4[\mathcal{J}_{jk}(\vec{r})]^2 &=
2\sum_{\alpha\beta}\big\{
[\partial_j\sigma_k\psi_\alpha^{\phantom{|}}(\vec{r})]^\dagger \psi_\alpha^{\phantom{|}}(\vec{r})
\psi_\beta^\dagger(\vec{r}) [\partial_j\sigma_k\psi_\beta^{\phantom{|}}(\vec{r})] \\[-8pt]
&\qquad\qquad{}+
\psi_\alpha^\dagger(\vec{r})[\partial_j\sigma_k\psi_\alpha^{\phantom{|}}(\vec{r})]
[\partial_j\sigma_k\psi_\beta^{\phantom{|}}(\vec{r})]^\dagger\psi_\beta^{\phantom{|}}(\vec{r})
\big\}\end{aligned}$$
\
The whole $t_2$ term:
$$\begin{split}
\frac{1}{4}t_2\sum_{\alpha\beta}&\,\langle\alpha\beta|(1+x_2\hat{P}_\sigma)(\overleftarrow{\nabla}_1-\overleftarrow{\nabla}_2)\cdot
\delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)|\alpha\beta\rangle =\\
&= \int \bigg\{
\frac{t_2(2+x_2)}{16}[4\rho\tau-(\vec{\nabla}\rho)^2-4\vec{j}^2] \\
&\qquad\qquad{}+
\frac{t_2 x_2}{16}\bigg[4\vec{s}\cdot\vec{T}-\!\!\sum_{j,k=x,y,z}\!\![(\partial_j s_k)^2+4(\mathcal{J}_{jk})^2]\bigg]\bigg\} \mathrm{d}^3 r \\
\frac{1}{4}t_2\sum_{\alpha\beta\in q}&\,\langle\alpha\beta|(1+x_2\hat{P}_\sigma)(\overleftarrow{\nabla}_1-\overleftarrow{\nabla}_2)\cdot\delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)|\beta\alpha\rangle = \phantom{\Bigg{|}}\\
&= \int \bigg\{
{-}\frac{t_2(1+2x_2)}{16}[4\rho_q\tau_q-(\vec{\nabla}\rho_q)^2-4\vec{j}_q^2] \\
&\qquad\qquad{}-
\frac{t_2}{16} \bigg[4\vec{s}_q\cdot\vec{T}_q-\!\!\sum_{j,k=x,y,z}\!\![(\partial_j s_{q;k})^2+4(\mathcal{J}_{q;jk})^2]\bigg]\bigg\} \mathrm{d}^3 r
\end{split}$$
\
The density-dependent $t_3$ term is a simple variation of the $t_0$ term:
$$\begin{split}
\frac{1}{6}t_3\sum_{\beta\gamma}\langle\beta\gamma|(1+x_3\hat{P}_\sigma)\delta(\vec{r}_1-\vec{r}_2)\rho^\alpha\Big(\frac{\vec{r_1}+\vec{r}_2}{2}\Big)|\beta\gamma\rangle &= \int \bigg[
\frac{t_3(2+x_3)}{12}\rho^{\alpha+2} + \frac{t_3 x_3}{12}\rho^\alpha\vec{s}^2
\bigg]\mathrm{d}^3 r \\
\frac{1}{6}t_3\sum_{\beta\gamma\in q}\langle\beta\gamma|(1+x_3\hat{P}_\sigma)\delta(\vec{r}_1-\vec{r}_2)\rho^\alpha\Big(\frac{\vec{r_1}+\vec{r}_2}{2}\Big)|\gamma\beta\rangle &= \int \bigg[
\frac{t_3(1+2x_3)}{12}\rho^\alpha\rho_q^2 + \frac{t_3}{12}\rho^\alpha\vec{s}_q^2
\bigg]\mathrm{d}^3 r
\end{split}$$
\
Parts of the $t_4$ term:
$$\begin{aligned}
\langle\alpha\beta|&\,(\vec{\sigma}_1+\vec{\sigma}_2)\cdot[\overleftarrow{\nabla}_1\times
\delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)]|\alpha\beta\rangle = \\
&= \sum_{ijk}\varepsilon_{ijk}\int \big[(\sigma_i\partial_j\psi_\alpha^{\phantom{|}})^\dagger \psi_\beta^\dagger +
(\partial_j\psi_\alpha^{\phantom{|}})^\dagger (\sigma_i\psi_\beta^{\phantom{|}})^\dagger\big]
\big[(\partial_k\psi_\alpha^{\phantom{|}})\psi_\beta^{\phantom{|}} -
\psi_\alpha^{\phantom{|}}(\partial_k\psi_\beta^{\phantom{|}})\big]
\mathrm{d}^3 r \\
\langle\alpha\beta|&\,(\vec{\sigma}_1+\vec{\sigma}_2)\cdot[\overleftarrow{\nabla}_1\times
\delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)]|\beta\alpha\rangle = \phantom{\bigg{|}}\\
&= \frac{1}{2}\sum_{ijk}\varepsilon_{ijk}\int \big[(\sigma_i\partial_j\psi_\alpha^{\phantom{|}})^\dagger \psi_\beta^\dagger +
(\partial_j\psi_\alpha^{\phantom{|}})^\dagger (\sigma_i\psi_\beta^{\phantom{|}})^\dagger\big]
\big[\psi_\alpha^{\phantom{|}}(\partial_k\psi_\beta^{\phantom{|}}) -
(\partial_k\psi_\alpha^{\phantom{|}})\psi_\beta^{\phantom{|}}\big]
\mathrm{d}^3 r \\
&\quad {}+
\frac{1}{2}\sum_{ijkn}\varepsilon_{ijk}\int \big[(\sigma_n\sigma_i\partial_j\psi_\alpha^{\phantom{|}})^\dagger (\sigma_n\psi_\beta^{\phantom{|}})^\dagger +
(\sigma_n\partial_j\psi_\alpha^{\phantom{|}})^\dagger (\sigma_n\sigma_i\psi_\beta^{\phantom{|}})^\dagger\big] \\[-8pt]
&\qquad\qquad\qquad\qquad{}\times
\big[\psi_\alpha^{\phantom{|}}(\partial_k\psi_\beta^{\phantom{|}}) -
(\partial_k\psi_\alpha^{\phantom{|}})\psi_\beta^{\phantom{|}}\big]
\mathrm{d}^3 r\end{aligned}$$
\
I will substitute the following relations into the exchange term: $$\sigma_i\sigma_n = \delta_{in}+\mathrm{i}\sum_p\varepsilon_{inp}\sigma_p
\qquad\textrm{and}\qquad
\sum_i\varepsilon_{ijk}\varepsilon_{inp} = \delta_{jn}\delta_{kp} - \delta_{jp}\delta_{kn}$$
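Both relations can be verified directly with explicit Pauli matrices; the following NumPy check (illustrative only, not part of the derivation) confirms them:

```python
import numpy as np

# Check of the two relations used in the exchange term:
#   sigma_i sigma_n = delta_in * 1 + i * sum_p eps_inp sigma_p
#   sum_i eps_ijk eps_inp = delta_jn delta_kp - delta_jp delta_kn
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])          # Pauli matrices sigma_x,y,z
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[k, j, i] = 1.0, -1.0     # even / odd permutations

I2, delta = np.eye(2), np.eye(3)
for i in range(3):
    for n in range(3):
        rhs_in = delta[i, n]*I2 + 1j*sum(eps[i, n, p]*sigma[p] for p in range(3))
        assert np.allclose(sigma[i] @ sigma[n], rhs_in)

lhs = np.einsum('ijk,inp->jknp', eps, eps)
rhs = np.einsum('jn,kp->jknp', delta, delta) - np.einsum('jp,kn->jknp', delta, delta)
assert np.allclose(lhs, rhs)
print("both identities hold")
```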
$$\begin{aligned}
\langle\alpha\beta|&\,(\vec{\sigma}_1+\vec{\sigma}_2)\cdot[\overleftarrow{\nabla}_1\times
\delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)]|\beta\alpha\rangle = \phantom{\bigg{|}}\\
&= \sum_{ijk}\varepsilon_{ijk}\int \big[(\sigma_i\partial_j\psi_\alpha^{\phantom{|}})^\dagger \psi_\beta^\dagger +
(\partial_j\psi_\alpha^{\phantom{|}})^\dagger (\sigma_i\psi_\beta^{\phantom{|}})^\dagger\big]
\big[\psi_\alpha^{\phantom{|}}(\partial_k\psi_\beta^{\phantom{|}}) -
(\partial_k\psi_\alpha^{\phantom{|}})\psi_\beta^{\phantom{|}}\big]
\mathrm{d}^3 r \\
&\quad {}+
\frac{\mathrm{i}}{2}\sum_{jk}\varepsilon_{ijk}\int
\big[(\partial_j\psi_\alpha^{\phantom{|}})^\dagger \psi_\beta^\dagger -
(\partial_j\psi_\alpha^{\phantom{|}})^\dagger \psi_\beta^\dagger\big]
(\sigma_{1k}\sigma_{2j}-\sigma_{1j}\sigma_{2k})
\big[\psi_\alpha^{\phantom{|}}(\partial_k\psi_\beta^{\phantom{|}}) -
(\partial_k\psi_\alpha^{\phantom{|}})\psi_\beta^{\phantom{|}}\big]
\mathrm{d}^3 r\end{aligned}$$
\
The first part is equal to $(-1)$ times the direct term, and the second part vanishes. Then I use the following relations:
$$\begin{aligned}
\sum_{ijk}\varepsilon_{ijk} (\partial_j\psi)^\dagger\sigma_i(\partial_k\psi)
&= \sum_{ijk}^{xyz}\varepsilon_{ijk} \big[ \partial_j(\psi^\dagger\sigma_i\partial_k\psi)
- (\psi^\dagger\sigma_i\partial_j\partial_k\psi)\big] =
\vec{\nabla}\cdot\big[\psi^\dagger(\vec{\nabla}\times\vec{\sigma})\psi\big] - 0\\
&= \sum_{ijk}\varepsilon_{ijk} \partial_k [(\sigma_i\partial_j\psi)^\dagger\psi] =
-\vec{\nabla}\cdot\big\{[(\vec{\nabla}\times\vec{\sigma})\psi]^\dagger\psi\big\}\\
\sum_{jk}\varepsilon_{ijk} (\partial_j\psi)^\dagger(\partial_k\psi)
&=
\sum_{jk}\varepsilon_{ijk} \big[ \partial_j(\psi^\dagger\partial_k\psi)
- (\psi^\dagger\partial_j\partial_k\psi)\big] =
\big[\vec{\nabla}\times(\psi^\dagger\vec{\nabla}\psi)\big]_i - 0 \\
&=
\sum_{jk}\varepsilon_{ijk} \partial_k[(\partial_j\psi)^\dagger\psi] =
-\big\{\vec{\nabla}\times[(\vec{\nabla}\psi)^\dagger\psi]\big\}_i\end{aligned}$$
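These derivative identities rely only on the product rule and the vanishing of $\varepsilon_{ijk}\partial_j\partial_k$. A symbolic check of the scalar prototype (with the spinor bilinears replaced by two generic functions $f$, $g$, which is sufficient for the derivative algebra) can be done in SymPy:

```python
import sympy as sp

# Scalar prototype of the curl relations:
#   sum_jk eps_ijk (d_j f)(d_k g) = [curl(f grad g)]_i = -[curl((grad f) g)]_i
x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)
g = sp.Function('g')(x, y, z)
Xs = (x, y, z)

def curl(v):
    return [sp.diff(v[2], Xs[1]) - sp.diff(v[1], Xs[2]),
            sp.diff(v[0], Xs[2]) - sp.diff(v[2], Xs[0]),
            sp.diff(v[1], Xs[0]) - sp.diff(v[0], Xs[1])]

lhs = [sum(sp.LeviCivita(i, j, k) * sp.diff(f, Xs[j]) * sp.diff(g, Xs[k])
           for j in range(3) for k in range(3)) for i in range(3)]
rhs1 = curl([f * sp.diff(g, Xi) for Xi in Xs])     # curl(f grad g)
rhs2 = curl([sp.diff(f, Xi) * g for Xi in Xs])     # curl((grad f) g)

for i in range(3):
    assert sp.simplify(lhs[i] - rhs1[i]) == 0
    assert sp.simplify(lhs[i] + rhs2[i]) == 0
print("curl identities verified")
```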
$$\begin{aligned}
\langle\alpha\beta|&(\vec{\sigma}_1+\vec{\sigma}_2)\cdot[\overleftarrow{\nabla}_1\times
\delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)]|\alpha\beta\rangle = \\
&\qquad=\int \Big\{
\vec{\nabla}\cdot\big[\psi_\alpha^\dagger(\vec{\nabla}\times\vec{\sigma})\psi_\alpha^{\phantom{|}}\big]\psi_\beta^\dagger\psi_\beta^{\phantom{|}}
+\big[\vec{\nabla}\times(\psi_\alpha^\dagger\vec{\nabla}\psi_\alpha^{\phantom{|}})\big]\cdot
(\psi_\beta^\dagger\vec{\sigma}\psi_\beta^{\phantom{|}}) \\[-8pt]
&\qquad\qquad\qquad{}-\sum_{ijk}\varepsilon_{ijk}\Big[
[(\partial_j\psi_\alpha^{\phantom{|}})^\dagger\sigma_i\psi_\alpha^{\phantom{|}}]
(\psi_\beta^\dagger\partial_k\psi_\beta^{\phantom{|}})
+[(\partial_j\psi_\alpha^{\phantom{|}})^\dagger\psi_\alpha^{\phantom{|}}]
(\psi_\beta^\dagger\sigma_i\partial_k\psi_\beta^{\phantom{|}})\Big]\Big\}\mathrm{d}^3 r \phantom{\bigg{|}}\end{aligned}$$
\
The remaining indexed terms (after addition of the $\overleftarrow{\nabla}_2$ term, i.e., symmetrization in $\alpha\leftrightarrow\beta$) can be obtained by subtracting the following two lines: $$\begin{aligned}
\langle\alpha|\hat{\mathcal{J}}_{ji}|\alpha\rangle
\partial_k(\psi_\beta^\dagger\psi_\beta^{\phantom{|}}) &=
\frac{\mathrm{i}}{2}
\big[(\partial_j\psi_\alpha^{\phantom{|}})^\dagger\sigma_i\psi_\alpha^{\phantom{|}}
-\psi_\alpha^\dagger\sigma_i(\partial_j\psi_\alpha^{\phantom{|}})\big]
\big[(\partial_k\psi_\beta^{\phantom{|}})^\dagger\psi_\beta^{\phantom{|}}
+\psi_\beta^\dagger(\partial_k\psi_\beta^{\phantom{|}})\big] \\
\partial_j(\psi_\alpha^\dagger\sigma_i\psi_\alpha^{\phantom{|}})
\langle\beta|\hat{j}_k|\beta\rangle &=
\frac{\mathrm{i}}{2}
\big[(\partial_j\psi_\alpha^{\phantom{|}})^\dagger\sigma_i\psi_\alpha^{\phantom{|}}
+\psi_\alpha^\dagger\sigma_i(\partial_j\psi_\alpha^{\phantom{|}})\big]
\big[(\partial_k\psi_\beta^{\phantom{|}})^\dagger\psi_\beta^{\phantom{|}}
-\psi_\beta^\dagger(\partial_k\psi_\beta^{\phantom{|}})\big]\end{aligned}$$ The whole $t_4$ term:
$$\begin{aligned}
\frac{\mathrm{i}}{4}t_4\sum_{\alpha\beta}\langle\alpha\beta|&\,(\vec{\sigma}_1+\vec{\sigma}_2)\cdot[(\overleftarrow{\nabla}_1-\overleftarrow{\nabla}_2)\times
\delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)]|\alpha\beta\rangle = \nonumber\\
&= \frac{t_4}{2}\int \big[ {-}\rho\vec{\nabla}\!\cdot\!\vec{\mathcal{J}}-\vec{s}\cdot(\vec{\nabla}\!\times\!\vec{j}) + \vec{\mathcal{J}}\cdot\vec{\nabla}\rho - \vec{j}\cdot(\vec{\nabla}\!\times\!\vec{s})
\big]\mathrm{d}^3 r = \\
&= t_4\int \big[ {-}\rho\vec{\nabla}\!\cdot\!\vec{\mathcal{J}}-\vec{s}\cdot(\vec{\nabla}\!\times\!\vec{j}) \big]\mathrm{d}^3 r \nonumber\\
\frac{\mathrm{i}}{4}t_4\sum_{\alpha\beta\in q}\langle\alpha\beta|&\,(\vec{\sigma}_1+\vec{\sigma}_2)\cdot[(\overleftarrow{\nabla}_1-\overleftarrow{\nabla}_2)\times
\delta(\vec{r}_1-\vec{r}_2)(\overrightarrow{\nabla}_1-\overrightarrow{\nabla}_2)]|\beta\alpha\rangle = \phantom{\Bigg{|}} \nonumber\\
&= \frac{t_4}{2}\int \big[ \rho_q\vec{\nabla}\!\cdot\!\vec{\mathcal{J}}_q+\vec{s}_q\cdot(\vec{\nabla}\!\times\!\vec{j}_q) - \vec{\mathcal{J}}_q\cdot\vec{\nabla}\rho_q + \vec{j}_q\cdot(\vec{\nabla}\!\times\!\vec{s}_q)
\big]\mathrm{d}^3 r \nonumber\\
\label{t4-term}
&= t_4\int \big[ \rho_q\vec{\nabla}\!\cdot\!\vec{\mathcal{J}}_q+\vec{s}_q\cdot(\vec{\nabla}\!\times\!\vec{j}_q) \big]\mathrm{d}^3 r\end{aligned}$$
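The integrations by parts used to halve the number of terms, $\int \vec{\mathcal{J}}\cdot\vec{\nabla}\rho\,\mathrm{d}^3 r = -\int \rho\,\vec{\nabla}\cdot\vec{\mathcal{J}}\,\mathrm{d}^3 r$ and $\int \vec{j}\cdot(\vec{\nabla}\times\vec{s})\,\mathrm{d}^3 r = \int \vec{s}\cdot(\vec{\nabla}\times\vec{j})\,\mathrm{d}^3 r$, hold for any localized fields. A numerical sketch (arbitrary Gaussian-damped test fields, not the physical densities):

```python
import numpy as np

# Check the two integrations by parts used in the t4 term, for fields that
# vanish at the box boundary.  The fields below are arbitrary test fields.
n, L = 48, 12.0
x = np.linspace(-L/2, L/2, n)
h = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
damp = np.exp(-(X**2 + Y**2 + Z**2) / 2)
zero = np.zeros_like(X)

rho = damp
Jv = [X*damp, zero, zero]        # stands in for the spin-orbit current
jv = [zero, zero, damp]          # stands in for the current density
sv = [zero, X*damp, zero]        # stands in for the spin density

d = lambda f, i: np.gradient(f, h, axis=i)
grad = lambda f: [d(f, 0), d(f, 1), d(f, 2)]
div = lambda v: d(v[0], 0) + d(v[1], 1) + d(v[2], 2)
curl = lambda v: [d(v[2], 1) - d(v[1], 2),
                  d(v[0], 2) - d(v[2], 0),
                  d(v[1], 0) - d(v[0], 1)]
dot = lambda a, b: sum(ai*bi for ai, bi in zip(a, b))
I = lambda f: np.sum(f) * h**3

a1, a2 = I(dot(Jv, grad(rho))), -I(rho * div(Jv))
b1, b2 = I(dot(jv, curl(sv))), I(dot(sv, curl(jv)))
print(a1, a2, b1, b2)   # a1 = a2 and b1 = b2 up to boundary terms
```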
Derivation of the matrix element of spin-orbital current {#app_Jab}
========================================================
The steps of the derivation are presented here in the form of references, (V. *number*), to the formulae from the book of Varshalovich [@Varshalovich1988]; the formulae are quoted after application of the necessary substitutions and other transformations.
The spherical components of the single-particle matrix element are $$\label{Jab_me}
\big[\langle\alpha|\hat{\vec{\mathcal{J}}}(\vec{r})|\beta\rangle\big]_M =
-\frac{\mathrm{i}}{2}
\big\{ \psi_\alpha^\dagger[(\vec{\nabla}\times\vec{\sigma})_M
\psi_\beta^{\phantom{\dagger}}] -
(-1)^M[(\vec{\nabla}\times\vec{\sigma})_{-M}
\psi_\alpha^{\phantom{\dagger}}]^\dagger\psi_\beta^{\phantom{\dagger}}
\big\}$$ $${[\vec{\nabla}\times\vec{\sigma}]}_M = -\mathrm{i}\sqrt{2}\sum_{\mu\nu}
C_{1\mu1\nu}^{1M} \sigma_{\nu} \nabla_{\mu}\qquad
(\sigma_\nu = 2\hat{s}_\nu)
\tag{V.~1.2.28}$$ $$\nabla_\mu \psi_\beta^{\phantom{+}} = \sum_{l_2=l_\beta\pm1} R_\beta^{(\pm)}
\frac{(-1)^{j_\beta+l_\beta-\frac{1}{2}}}{\sqrt{2l_2+1}}
\sum_K
\begin{Bmatrix} j_\beta & K & 1 \\ l_2 & l_\beta & \frac{1}{2}\end{Bmatrix}
C_{j_\beta,m_\beta,1,\mu}^{K,m_\beta+\mu}
\Omega_{K,m_\beta+\mu}^{l_2}
\tag{V.~7.1.24}$$ $$\sigma_\nu \Omega_{K,m_\beta+\mu}^{l_2} = \sum_{K'}
(-1)^{l_2+K'-\frac{1}{2}}
\sqrt{6(2K+1)}\,
\begin{Bmatrix} K & K' & 1 \\ \frac{1}{2} & \frac{1}{2} & l_2 \end{Bmatrix}
C_{K,m_\beta+\mu,1,\nu}^{K',m_\beta+\mu+\nu}
\Omega_{K',m_\beta+\mu+\nu}^{l_2}
\tag{V.~7.1.28}$$ $$\sum_{\mu\nu}
C_{1\mu1\nu}^{1M}
C_{K,m_\beta+\mu,1,\nu}^{K',m_\beta+M}
C_{j_\beta,m_\beta,1,\mu}^{K,m_\beta+\mu} =
\sqrt{3(2K+1)}\,
\begin{Bmatrix} 1 & 1 & 1 \\ K' & j_\beta & K \end{Bmatrix}
C_{1,M,j_\beta,m_\beta}^{K',m_\beta+M}
\tag{V.~8.7.12}$$ $$\sum_K (2K+1)
\begin{Bmatrix} j_\beta & 1 & K \\ l_2 & \frac{1}{2} & l_\beta \end{Bmatrix}
\begin{Bmatrix} l_2 & \frac{1}{2} & K \\ 1 & K' & \frac{1}{2} \end{Bmatrix}
\begin{Bmatrix} 1 & K' & K \\ j_\beta & 1 & 1 \end{Bmatrix} =
(-1)^{j_\beta+K'}
\begin{Bmatrix} j_\beta & K' & 1 \\ l_\beta & l_2 & 1 \\ \frac{1}{2} & \frac{1}{2} & 1 \end{Bmatrix}
\tag{V.~9.8.5}$$ Together: $${[\vec{\nabla}\times\vec{\sigma}]}_M \psi_\beta^{\phantom{+}} =
-6\mathrm{i} \sum_{l_2=l_\beta\pm1}
\frac{R_\beta^{(\pm)}}{\sqrt{2l_2+1}}
\sum_{K'}
\begin{Bmatrix} j_\beta & K' & 1 \\ l_\beta & l_2 & 1 \\ \frac{1}{2} & \frac{1}{2} & 1 \end{Bmatrix}
C_{1,M,j_\beta,m_\beta}^{K',m_\beta+M}
\Omega_{K',m_\beta+M}^{l_2}$$ $$\begin{aligned}
\frac{\Omega_{j_\alpha m_\alpha}^{l_\alpha\dagger} \Omega_{K',m_\beta+M}^{l_2}}{\sqrt{(2j_\alpha+1)(2l_\alpha+1)(2l_2+1)}} =
& \sum_L
\frac{(-1)^{j_\alpha+m_\alpha+K'+L+\frac{1}{2}}\sqrt{2K'+1}}{\sqrt{4\pi(2L+1)}}
\begin{Bmatrix} l_\alpha & l_2 & L \\ K' & j_\alpha & \frac{1}{2} \end{Bmatrix}
\nonumber\\
&C_{l_\alpha 0 l_2 0}^{L 0}
C_{j_\alpha,-m_\alpha,K',m_\beta+M}^{L,m_\beta-m_\alpha+M}
Y_{L,m_\beta-m_\alpha+M}
\tag{V.~7.2.40}\end{aligned}$$ $$\begin{aligned}
C_{j_\alpha,-m_\alpha,K',m_\beta+M}^{L,m_\beta-m_\alpha+M}
C_{1,M,j_\beta,m_\beta}^{K',m_\beta+M} &=
\sum_J (-1)^{j_\beta+K'} \sqrt{(2J+1)(2K'+1)}\,
\begin{Bmatrix} j_\beta & j_\alpha & J \\ L & 1 & K' \end{Bmatrix} \nonumber\\
&\quad\times C_{j_\beta,m_\beta,j_\alpha,-m_\alpha}^{J,m_\beta-m_\alpha}
C_{1,M,J,m_\beta-m_\alpha}^{L,m_\beta-m_\alpha+M} \tag{V.~8.7.35}\\
&= \sum_J (-1)^{K'+J+j_\alpha+M} \sqrt{(2L+1)(2K'+1)} \nonumber\\[-6pt]
&\quad\times\begin{Bmatrix} j_\beta & 1 & K' \\ L & j_\alpha & J \end{Bmatrix}
C_{j_\alpha,-m_\alpha,j_\beta,m_\beta}^{J,m_\beta-m_\alpha}
C_{L,m_\beta-m_\alpha+M,1,-M}^{J,m_\beta-m_\alpha} \nonumber\end{aligned}$$ Together with (\[sph\_vectors\]) it gives $$\begin{aligned}
\psi_\alpha^\dagger [(\vec{\nabla}\times\vec{\sigma}) \psi_\beta^{\phantom{\dagger}}]
= &
6\mathrm{i} \sum_{l_2=l_\beta\pm1}
\frac{R_\alpha^{(0)} R_\beta^{(\pm)}}{\sqrt{4\pi}}
\sum_{LJ} (-1)^{m_\alpha+L+J-\frac{1}{2}}\,
C_{l_\alpha 0 l_2 0}^{L 0}
C_{j_\alpha,-m_\alpha,j_\beta,m_\beta}^{J,m_\beta-m_\alpha}
\vec{Y}_{J,m_\beta-m_\alpha}^L \nonumber\\
\label{Jab_part1}
& \times \sum_{K'} (2K'+1)
\begin{Bmatrix} j_\beta & K' & 1 \\ l_\beta & l_2 & 1 \\ \frac{1}{2} & \frac{1}{2} & 1 \end{Bmatrix}
\begin{Bmatrix} \frac{1}{2} & l_2 & K' \\ L & j_\alpha & l_\alpha \end{Bmatrix}
\begin{Bmatrix} j_\beta & 1 & K' \\ L & j_\alpha & J \end{Bmatrix}\end{aligned}$$ Second term of (\[Jab\_me\]) is evaluated analogously: $$(-1)^M\big\{{[\vec{\nabla}\times\vec{\sigma}]}_{-M} \psi_\alpha^{\phantom{\dagger}}\big\}^\dagger =
6\mathrm{i}(-1)^M\!\! \sum_{l_1=l_\alpha\pm1}
\frac{R_\alpha^{(\pm)}}{\sqrt{2l_1+1}}
\sum_{K'}
\begin{Bmatrix} j_\alpha & K' & 1 \\ l_\alpha & l_1 & 1 \\ \frac{1}{2} & \frac{1}{2} & 1 \end{Bmatrix}
C_{1,-M,j_\alpha,m_\alpha}^{K',m_\alpha-M}
\Omega_{K',m_\alpha-M}^{l_1\dagger}$$ $$\begin{aligned}
\frac{\Omega_{K',m_\alpha-M}^{l_1\dagger} \Omega_{j_\beta m_\beta}^{l_\beta}}
{\sqrt{(2l_1+1)(2j_\beta+1)(2l_\beta+1)}} = &
\sum_L
\frac{(-1)^{K'+m_\alpha-M+j_\beta+L+\frac{1}{2}}\sqrt{2K'+1}}{\sqrt{4\pi(2L+1)}}
\begin{Bmatrix} l_1 & l_\beta & L \\ j_\beta & K' & \frac{1}{2} \end{Bmatrix}
\nonumber\\
& \times C_{l_1 0 l_\beta 0}^{L 0}
C_{K',M-m_\alpha,j_\beta,m_\beta}^{L,m_\beta-m_\alpha+M}
Y_{L,m_\beta-m_\alpha+M}
\tag{V.~7.2.40}\end{aligned}$$ $$\begin{aligned}
C_{K',M-m_\alpha,j_\beta,m_\beta}^{L,m_\beta-m_\alpha+M}
C_{j_\alpha,-m_\alpha,1,M}^{K',M-m_\alpha} &=
\sum_J
(-1)^{K'+j_\beta-L}
\sqrt{(2J+1)(2K'+1)}
\begin{Bmatrix} j_\alpha & j_\beta & J \\ L & 1 & K' \end{Bmatrix} \nonumber\\
&\quad\times C_{j_\alpha,-m_\alpha,j_\beta,m_\beta}^{J,m_\beta-m_\alpha}
C_{1,M,J,m_\beta-m_\alpha}^{L,m_\beta-m_\alpha+M} \tag{V.~8.7.35}\\
&= \sum_J (-1)^{K'+j_\beta+L+M+1}
\sqrt{(2L+1)(2K'+1)} \nonumber\\[-6pt]
&\quad\times\begin{Bmatrix} j_\alpha & 1 & K' \\ L & j_\beta & J \end{Bmatrix}
C_{j_\alpha,-m_\alpha,j_\beta,m_\beta}^{J,m_\beta-m_\alpha}
C_{L,m_\beta-m_\alpha+M,1,-M}^{J,m_\beta-m_\alpha}\nonumber\end{aligned}$$ Together: $$\begin{aligned}
[(\vec{\nabla}\times\vec{\sigma}) \psi_\alpha^{\phantom{\dagger}}]^\dagger \psi_\beta^{\phantom{|}} &=
6\mathrm{i} \sum_{l_1=l_\alpha\pm1}
\frac{R_\alpha^{(\pm)} R_\beta^{(0)}}{\sqrt{4\pi}}
\sum_{LJ}
(-1)^{m_\alpha-\frac{1}{2}}\,
C_{l_1 0 l_\beta 0}^{L 0}
C_{j_\alpha,-m_\alpha,j_\beta,m_\beta}^{J,m_\beta-m_\alpha}
\vec{Y}_{J,m_\beta-m_\alpha}^L \nonumber\\
\label{Jab_part2}
& \qquad\times \sum_{K'} (2K'+1)
\begin{Bmatrix} j_\alpha & K' & 1 \\ l_\alpha & l_1 & 1 \\ \frac{1}{2} & \frac{1}{2} & 1 \end{Bmatrix}
\begin{Bmatrix} \frac{1}{2} & l_1 & K' \\ L & j_\beta & l_\beta \end{Bmatrix}
\begin{Bmatrix} j_\alpha & 1 & K' \\ L & j_\beta & J \end{Bmatrix}\end{aligned}$$
The sums over $K'$ in (\[Jab\_part1\], \[Jab\_part2\]) can be evaluated after decomposition of the $9j$ symbol into $6j$ symbols (I take $g=1/2$ in the cited formula). $$\begin{aligned}
\begin{Bmatrix} j_\beta & K' & 1 \\ l_\beta & l_\beta\pm1 & 1 \\ \frac{1}{2} & \frac{1}{2} & 1 \end{Bmatrix}
\begin{Bmatrix} 1 & 1 & 1 \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{Bmatrix}
= &-\frac{1}{3}
\begin{Bmatrix} j_\beta & K' & 1 \\ \frac{1}{2} & \frac{1}{2} & l_\beta\pm1 \end{Bmatrix}
\begin{Bmatrix} l_\beta\pm1 & l_\beta & 1 \\ \frac{1}{2} & \frac{1}{2} & j_\beta \end{Bmatrix} \nonumber\\
&-\frac{(-1)^{K'+l_\beta-\frac{1}{2}}}{18}
\begin{Bmatrix} j_\beta & K' & 1 \\ l_\beta\pm1 & l_\beta & \frac{1}{2} \end{Bmatrix}
\tag{V.~10.9.9}\end{aligned}$$ I then evaluate some $6j$ symbols using tables 9.1 and 9.10 of [@Varshalovich1988] (the $6j$ symbols below are non-zero only for the indicated $j_\beta=l_\beta\pm\frac{1}{2}$).
$$\begin{Bmatrix} 1 & 1 & 1 \\ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{Bmatrix}
= -\frac{1}{3}, \quad
\begin{Bmatrix} l_\beta & \!l_\beta+1\!\! & \!1 \\ \frac{1}{2} & \frac{1}{2} & \!\!l_\beta+\frac{1}{2} \end{Bmatrix} = \frac{1}{\sqrt{6(l_\beta+1)}}, \quad
\begin{Bmatrix} l_\beta & l_\beta-1\!\! & \!1 \\ \frac{1}{2} & \frac{1}{2} & \!\!l_\beta-\frac{1}{2} \end{Bmatrix} = \frac{1}{\sqrt{6l_\beta}}$$
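These tabulated values can be cross-checked against an independent implementation; the sketch below uses `sympy.physics.wigner.wigner_6j` (illustrative only, with a few sample values of $l_\beta$):

```python
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import wigner_6j

half = Rational(1, 2)

# Fixed entry:  {1 1 1; 1/2 1/2 1/2} = -1/3
assert simplify(wigner_6j(1, 1, 1, half, half, half) + Rational(1, 3)) == 0

# The two l-dependent entries, for a few sample values of l_beta
for l in [1, 2, 3, 4]:
    up = wigner_6j(l, l + 1, 1, half, half, l + half)   # j_beta = l + 1/2
    dn = wigner_6j(l, l - 1, 1, half, half, l - half)   # j_beta = l - 1/2
    assert simplify(up - 1/sqrt(6*(l + 1))) == 0
    assert simplify(dn - 1/sqrt(6*l)) == 0
print("tabulated 6j values confirmed")
```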
\
$$\sum_{K'} (2K'+1)
\begin{Bmatrix} j_\beta & 1 & K' \\ \frac{1}{2} & l_\beta^\pm & \frac{1}{2} \end{Bmatrix}
\begin{Bmatrix} \frac{1}{2} & l_\beta^\pm & K' \\ L & j_\alpha & l_\alpha \end{Bmatrix}
\begin{Bmatrix} L & j_\alpha & K' \\ j_\beta & 1 & J \end{Bmatrix} = -
\begin{Bmatrix} j_\beta & j_\alpha & J \\ l_\beta^\pm & l_\alpha & L \\ \frac{1}{2} & \frac{1}{2} & 1 \end{Bmatrix}
\tag{V.~9.8.5}$$ $$\begin{aligned}
\sum_{K'} (-1)^{K'-\frac{1}{2}} (2K'+1)
\begin{Bmatrix} 1 & j_\beta & K' \\ \frac{1}{2} & l_\beta^\pm & l_\beta \end{Bmatrix}
\begin{Bmatrix} \frac{1}{2} & l_\beta^\pm & K' \\ L & j_\alpha & l_\alpha \end{Bmatrix}
\begin{Bmatrix} L & j_\alpha & K' \\ j_\beta & 1 & J \end{Bmatrix}& \nonumber\\
\displaystyle = (-1)^{j_\alpha+j_\beta+l_\alpha+L+J+1}
\begin{Bmatrix} l_\beta & l_\alpha & J \\ L & 1 & l_\beta^\pm \end{Bmatrix}
\begin{Bmatrix} l_\beta & l_\alpha & J \\ j_\alpha & j_\beta & \frac{1}{2} \end{Bmatrix}&
\tag{V.~9.8.6}\end{aligned}$$ The final result is then $$\langle\alpha|\vec{\mathcal{J}}_q(\vec{r})|\beta\rangle =
\sum_{LJ} \bigg[\sum_{ss'}^{0\pm,\pm0}
\mathcal{A}_{\alpha\beta LJ}^{\vec{J},ss'} R_\alpha^{(s)} R_\beta^{(s')} \bigg]
\frac{(-1)^{j_\alpha+j_\beta+L+m_\alpha-\frac{1}{2}}}{2\sqrt{4\pi}}\,
C_{j_\alpha,-m_\alpha,j_\beta,m_\beta}^{J,m_\beta-m_\alpha}
\vec{Y}_{J,m_\beta-m_\alpha}^L$$ where the coefficients $\mathcal{A}_{\alpha\beta LJ}^{\vec{J},ss'}$ are given in (\[spin-orb\_me\]).
Separable RPA in spherical symmetry {#app_SRPA}
=======================================
The computational demands of the full RPA grow rapidly as one enlarges the $2qp$ basis. Although in the spherical case the load is not so dramatic, I also present the separable RPA [@Nesterenko2002; @Nesterenko2006] as a more efficient calculation scheme, useful mainly for a quick evaluation of the strength functions. SRPA is outlined here following my master thesis [@RepkoMgr], where it was derived for general wavefunctions; here I assume spherical symmetry, and the final formulae are given in a fully rotationally invariant form.
The residual interaction is first approximated by a sum of separable terms $$\label{SRPA_hamilt}
\hat{H} = \hat{H}_0 + \hat{V}_\mathrm{res} =
\sum_\gamma \varepsilon_\gamma^{\phantom{|}}
\hat{\alpha}_\gamma^+\hat{\alpha}_\gamma^{\phantom{|}}
- \frac{1}{2}\sum_{qk,q'k'}^{\mu_{k'}=-\mu_k^{\phantom{|}}}
(-1)^{\mu_k}{:}\big[\kappa_{qk,q'k'}^{\phantom{|}}
\hat{X}_{qk}^{\phantom{+}}\hat{X}_{q'k'}
+ \eta_{qk,q'k'}^{\phantom{+}}
\hat{Y}_{qk}^{\phantom{+}}\hat{Y}_{q'k'}\big]{:}$$ where $k$ labels one of $K$ separable one-body operators $\hat{X}$ (time-even) and $\hat{Y}$ (time-odd). These operators will be obtained by means of linear response theory from the given input operators $\hat{Q}_{qk}$ (time-even) and $\hat{P}_{qk}$ (time-odd) acting on nucleons $q$. I choose a perturbed ground state: $$|q_{qk}^{\phantom{+}},p_{qk}^{\phantom{+}}\rangle = \prod_q\prod_{k=1}^K \mathrm{e}^{-\mathrm{i}q_{qk}\hat{P}_{qk}}\mathrm{e}^{-\mathrm{i}p_{qk}\hat{Q}_{qk}} |\textrm{HF+BCS}\rangle$$ This form was chosen on the basis of the Thouless theorem [@Ring1980], so that it remains a Slater state, and therefore it makes sense to use the mean-field density functional. The effective one-body part of (\[SRPA\_hamilt\]) in the perturbed ground state becomes $$\hat{h} = \hat{H}_0 + \mathrm{i}\sum_{\tilde{q}\tilde{k}}
\sum_{qk,q'k'}^{\mu_{k'}=-\mu_k^{\phantom{|}}}(-1)^{\mu_k}\Big(
q_{\tilde{q}\tilde{k}} \kappa_{qk,q'k'}^{\phantom{+}} \hat{X}_{qk}^{\phantom{+}}\langle[\hat{X}_{q'k'}^{\phantom{+}},\hat{P}_{\tilde{q}\tilde{k}}]\rangle +
p_{\tilde{q}\tilde{k}} \eta_{qk,q'k'}^{\phantom{+}} \hat{Y}_{qk}^{\phantom{+}}\langle[\hat{Y}_{q'k'}^{\phantom{+}},\hat{Q}_{\tilde{q}\tilde{k}}]\rangle \Big)$$ I define a basis of operators $\hat{X}_{qk}$, $\hat{Y}_{qk}$ (to get a simple result) by setting
\[str\_mtrx\] $$\begin{aligned}
\kappa_{q'k',qk}^{-1} &= \mathrm{i}(-1)^{\mu_k}
\langle[\hat{X}_{q'k'},\hat{P}_{qk}]\rangle =
\sum_{\alpha\beta\in q}^{\alpha\geq\beta} \frac{2\mathrm{i}}{2\lambda_k+1}
X^*_{q'k';\alpha\beta} P_{qk;\alpha\beta} \\
\eta_{q'k',qk}^{-1} &= \mathrm{i}(-1)^{\mu_k}
\langle[\hat{Y}_{q'k'},\hat{Q}_{qk}]\rangle =
\sum_{\alpha\beta\in q}^{\alpha\geq\beta} \frac{2\mathrm{i}}{2\lambda_k+1}
Y^*_{q'k';\alpha\beta} Q_{qk;\alpha\beta}\end{aligned}$$
where I used (\[comm\]) and $\gamma_T^A(-1)^{l_\alpha+l_\beta+\lambda}A_{\alpha\beta} = A_{\alpha\beta}^*$ (\[rme\_hermit\]). The effective one-body Hamiltonian, together with the hermiticity conditions, is then $$\label{SRPA_effH}
\hat{h} = \hat{H}_0 + \sum_{qk}(q_{qk}\hat{X}_{qk} + p_{qk}\hat{Y}_{qk}),\quad
q_{qk} = (-1)^{\mu_k} q_{q\bar{k}},\quad p_{qk} = (-1)^{\mu_k} p_{q\bar{k}}$$ where $\bar{k}$ labels an operator with the opposite projection, $\mu_{q\bar{k}} = -\mu_{qk}$. By equating (\[SRPA\_effH\]) with the Skyrme effective Hamiltonian in the perturbed ground state, I obtain
$$\begin{aligned}
\hat{X}_{qk} &= \mathrm{i}\sum_{dd'}^\textrm{even}\int\mathrm{d}^3 r
\frac{\delta^2\mathcal{H}}{\delta J_d \delta J_{d'}}
\langle[\hat{P}_{qk},\hat{J}_d(\vec{r})]\rangle \hat{J}_{d'}(\vec{r}) \\
\hat{Y}_{qk} &= \mathrm{i}\sum_{dd'}^\textrm{odd}\int\mathrm{d}^3 r
\frac{\delta^2\mathcal{H}}{\delta J_d \delta J_{d'}}
\langle[\hat{Q}_{qk},\hat{J}_d(\vec{r})]\rangle \hat{J}_{d'}(\vec{r})\end{aligned}$$
In terms of reduced matrix elements:
\[XY\_op\] $$\begin{aligned}
X_{qk;\gamma\delta} & =
\frac{-2\mathrm{i}}{2\lambda+1}\sum_{dd'}^\textrm{even}\int\mathrm{d}^3 r
\bigg(\frac{\delta^2\mathcal{H}}{\delta J_d \delta J_{d'}}
\sum_{\alpha\beta\in q}^{\alpha\geq\beta} P_{qk;\alpha\beta} J_{d;\alpha\beta}^*(r)
\bigg) J_{d';\gamma\delta}(r) \\
Y_{qk;\gamma\delta} & =
\frac{-2\mathrm{i}}{2\lambda+1}\sum_{dd'}^\textrm{odd}\int\mathrm{d}^3 r
\bigg(\frac{\delta^2\mathcal{H}}{\delta J_d \delta J_{d'}}
\sum_{\alpha\beta\in q}^{\alpha\geq\beta} Q_{qk;\alpha\beta} J_{d;\alpha\beta}^*(r)
\bigg) J_{d';\gamma\delta}(r)\end{aligned}$$
where the large parentheses indicate the *responses* of the operators $\hat{Q}_{qk}$ and $\hat{P}_{qk}$; they need to be calculated only once (this is one of the numerical advantages of SRPA, the second one being the reduction of the matrix dimension, see (\[SRPA\_eq3\])). It should be emphasized that the index $q$ in $\hat{X}_{qk}$ and $\hat{Y}_{qk}$ denotes the *origin* of these operators (i.e., it labels the corresponding generating operators $\hat{Q}_{qk},\hat{P}_{qk}$), and not the type of nucleons on which they act, as it does in the operators $\hat{J}_{d;q},\hat{Q}_{qk},\hat{P}_{qk}$; the operators $\hat{X}_{qk},\hat{Y}_{qk}$ act on both protons and neutrons.
To better approximate the residual interaction, the input operators $\hat{Q}_{qk},\hat{P}_{qk}$ come in pairs, where only one of them is given a priori and the second one is defined by the relations $$\hat{P}_{qk} = \mathrm{i}[\hat{H},\hat{Q}_{qk}] \qquad \textrm{or} \qquad
\hat{Q}_{qk} = \mathrm{i}[\hat{H},\hat{P}_{qk}],$$ which in terms of reduced matrix elements turn into $$\label{second-QP}
P_{qk;\alpha\beta} = \mathrm{i}\varepsilon_{\alpha\beta}
Q_{qk;\alpha\beta} - Y_{qk;\alpha\beta} \qquad\textrm{or}\qquad
Q_{qk;\alpha\beta} = \mathrm{i}\varepsilon_{\alpha\beta}
P_{qk;\alpha\beta} - X_{qk;\alpha\beta}.$$ The calculation of the matrix elements then proceeds as $$Q\rightarrow Y\rightarrow P\rightarrow X \qquad\textrm{or}\qquad
P\rightarrow X\rightarrow Q\rightarrow Y$$
Evaluation of the RPA equation (\[RPA\_eq\]) using the separable Hamiltonian (\[SRPA\_hamilt\]) leads to
\[SRPA\_eq1\] $$\begin{aligned}
(\varepsilon_{\alpha\beta}-E_\nu)c_{\alpha\beta}^{(\nu-)} & =
\!\!\sum_{qk,q'k'}\!\! \frac{(-1)^{l_\beta+\mu_k}}{\sqrt{2\lambda+1}}
\big(\kappa_{qk,q'k'}^{\phantom{+}}\langle[\hat{C}_\nu^+,\hat{X}_{q'k'}^{\phantom{+}}]\rangle X_{qk;\alpha\beta}
+ \eta_{qk,q'k'}^{\phantom{+}}\langle[\hat{C}_\nu^+,\hat{Y}_{q'k'}^{\phantom{+}}]\rangle Y_{qk;\alpha\beta}\big) \\
% second equation
(\varepsilon_{\alpha\beta}+E_\nu)c_{\alpha\beta}^{(\nu+)} & =
\!\!\sum_{qk,q'k'}\!\!\frac{(-1)^{l_\beta+\mu_k}}{\sqrt{2\lambda+1}}
\big(\kappa_{qk,q'k'}^{\phantom{+}}\langle[\hat{C}_\nu^+,\hat{X}_{q'k'}^{\phantom{+}}]\rangle X_{qk;\alpha\beta}
- \eta_{qk,q'k'}^{\phantom{+}}\langle[\hat{C}_\nu^+,\hat{Y}_{q'k'}^{\phantom{+}}]\rangle Y_{qk;\alpha\beta} \big)\end{aligned}$$
\
To reduce the number of equations, I introduce coefficients $\bar{q}_{qk}^\nu,\bar{p}_{qk}^\nu$, whose notation was inspired by the correspondence $[\hat{H},\hat{C}_\nu^+] \leftrightarrow q_k[\hat{V},\hat{P}_k]$
\[comm-CXY\] $$\begin{aligned}
\sum_{qk}\kappa_{q'k',qk}^{-1}\bar{q}_{qk}^\nu & =
(-1)^{\mu_{k'}}\langle[\hat{C}_\nu^+,\hat{X}_{q'k'}^{\phantom{+}}]\rangle =
\frac{(-1)^{l_\alpha+\lambda}}{\sqrt{2\lambda_\nu+1}}
\sum_{\alpha>\beta}
(c_{\alpha\beta}^{(\nu-)}+c_{\alpha\beta}^{(\nu+)})X_{q'k';\alpha\beta}^{\phantom{+}} \\
\sum_{qk}\eta_{q'k',qk}^{-1}\bar{p}_{qk}^\nu & =
(-1)^{\mu_{k'}}\langle[\hat{C}_\nu^+,\hat{Y}_{q'k'}^{\phantom{+}}]\rangle =
\frac{(-1)^{l_\alpha+\lambda+1}}{\sqrt{2\lambda+1}}
\sum_{\alpha>\beta}
(c_{\alpha\beta}^{(\nu-)}-c_{\alpha\beta}^{(\nu+)})Y_{q'k';\alpha\beta}^{\phantom{+}}\end{aligned}$$
Equations (\[SRPA\_eq1\]) then become
\[SRPA\_eq2\] $$\begin{aligned}
(\varepsilon_{\alpha\beta}-E_\nu)c_{\alpha\beta}^{(\nu-)} & =
\sum_{qk,q'k'} \frac{(-1)^{l_\beta}}{\sqrt{2\lambda+1}}
\big(X_{qk;\alpha\beta}\bar{q}_{qk}^\nu
+ Y_{qk;\alpha\beta}\bar{p}_{qk}^\nu\big) \\
(\varepsilon_{\alpha\beta}+E_\nu)c_{\alpha\beta}^{(\nu+)} & =
\sum_{qk,q'k'}\frac{(-1)^{l_\beta}}{\sqrt{2\lambda+1}}
\big(X_{qk;\alpha\beta}\bar{q}_{qk}^\nu
- Y_{qk;\alpha\beta}\bar{p}_{qk}^\nu\big)\end{aligned}$$
After elimination of $c_{\alpha\beta}^{(\nu\pm)}$ from (\[comm-CXY\]) and (\[SRPA\_eq2\]), I am left with a matrix equation $$\label{SRPA_eq3}
D\vec{R} =
\begin{pmatrix} F^{(XX)}-\kappa^{-1} & F^{(XY)} \\ F^{(YX)} & F^{(YY)}-\eta^{-1} \end{pmatrix} \binom{\bar{q}^\nu}{\bar{p}^\nu} = \binom{0}{0}$$ where I defined the matrix $D$, the vector $\vec{R}$, and the matrices $F$ as $$\label{F_me}
\begin{array}{ll}
\displaystyle
F_{q'k',qk}^{(XX)} = \frac{1}{2\lambda+1}
\sum_{\alpha\geq\beta}
\frac{2\varepsilon_{\alpha\beta}X^*_{q'k';\alpha\beta}X_{qk;\alpha\beta}}{\varepsilon_{\alpha\beta}^2-E_\nu^2}, &
\displaystyle
F_{q'k',qk}^{(XY)} = \frac{1}{2\lambda+1}
\sum_{\alpha\geq\beta}
\frac{2E_\nu X^*_{q'k';\alpha\beta}Y_{qk;\alpha\beta}}{\varepsilon_{\alpha\beta}^2-E_\nu^2}, \\
\displaystyle
F_{q'k',qk}^{(YX)} = \frac{1}{2\lambda+1}
\sum_{\alpha\geq\beta}
\frac{2E_\nu Y^*_{q'k';\alpha\beta}X_{qk;\alpha\beta}}{\varepsilon_{\alpha\beta}^2-E_\nu^2}, \phantom{\Bigg|} &
\displaystyle
F_{q'k',qk}^{(YY)} = \frac{1}{2\lambda+1}
\sum_{\alpha\geq\beta}
\frac{2\varepsilon_{\alpha\beta}Y^*_{q'k';\alpha\beta}Y_{qk;\alpha\beta}}{\varepsilon_{\alpha\beta}^2-E_\nu^2}
\end{array}$$ The reduced matrix elements $X_{qk;\alpha\beta}$ and $Y_{qk;\alpha\beta}$ are either real or imaginary, depending on $\hat{M}_{\lambda\mu}^{\mathrm{E/M}}$ (see also (\[EM\_sel\_rules\])), and $\bar{q}^\nu$ and $\bar{p}^\nu$ are chosen such that $c_{\alpha\beta}^{(\nu\pm)}$ remains real $$\label{XY-cc}
X_{qk;\alpha\beta}^* = \gamma_T^M X_{qk;\alpha\beta},\quad
Y_{qk;\alpha\beta}^* = -\gamma_T^M Y_{qk;\alpha\beta},\quad
\bar{q}_{qk}^* = \gamma_T^M\bar{q}_{qk},\quad
\bar{p}_{qk}^* = -\gamma_T^M\bar{p}_{qk}$$
The matrices $D$ and $F$ and the vector $\vec{R}$ are not constant but depend on the chosen RPA state $\nu$ (or, equivalently, on its energy $E_\nu$); nevertheless, I omit the index $\nu$ to avoid clutter. The SRPA equations are therefore not a usual eigenvalue problem, since the number of their solutions can be much higher than the matrix dimension (the number of solutions is equal to the number of $\alpha\beta$ pairs). Moreover, during the calculation of the strength function the matrix $D$ becomes a continuous function of energy, $D(E)$, so keeping the index $\nu$ would cause confusion.
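That the secular equation $\det D(E) = 0$ has more solutions than the matrix dimension is easy to see in a schematic model. Below, a single time-even separable operator makes $D(E)$ a scalar dispersion function, yet one root appears between each pair of neighbouring $2qp$ poles and one above the last pole. All numbers are invented for illustration; `kappa` is chosen negative here so that the root above the poles exists.

```python
import numpy as np

# Toy illustration: with one separable (time-even) operator the SRPA
# "matrix" is the scalar  D(E) = F(E) - 1/kappa,  yet det D(E) = 0 has as
# many positive roots as there are 2qp pairs.  Schematic numbers only.
eps = np.array([1.0, 2.5, 3.2, 4.7, 6.1])   # 2qp energies eps_ab
X2 = np.array([0.8, 0.3, 1.1, 0.5, 0.9])    # |X_ab|^2 (schematic)
kappa = -0.7

def detD(E):
    return np.sum(2*eps*X2 / (eps**2 - E**2)) - 1.0/kappa

def bisect(f, a, b, it=200):
    # simple bisection; f changes sign on (a, b)
    for _ in range(it):
        m = 0.5*(a + b)
        a, b = (m, b) if f(a)*f(m) > 0 else (a, m)
    return 0.5*(a + b)

# One root between consecutive poles, one collective root above the last pole
brackets = list(zip(eps[:-1], eps[1:])) + [(eps[-1], 50.0)]
roots = [bisect(detD, lo + 1e-9, hi - 1e-9) for lo, hi in brackets]
print(np.round(roots, 4))
assert len(roots) == len(eps)
assert all(abs(detD(r)) < 1e-6 for r in roots)
```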
The normalization condition $\sum_{\alpha\geq\beta} (|c_{\alpha\beta}^{(\nu-)}|^2-|c_{\alpha\beta}^{(\nu+)}|^2) = 1$ (\[RPA\_norm\]) becomes [@Kvasil1998] $$\label{R-norm}
\vec{R}^\dagger \frac{\partial D}{\partial E_\nu} \vec{R} = 1$$
The transition probability is obtained by combining (\[trans\_me\]) with (\[SRPA\_eq2\])
$$\begin{aligned}
\langle[\hat{C}_\nu,\hat{M}_{\lambda\mu}^\mathrm{E}]\rangle &=
\frac{-1}{2\lambda+1}\sum_{\alpha\geq\beta}
\frac{M_{\lambda;\alpha\beta}^\mathrm{E}}{\varepsilon_{\alpha\beta}^2-E_\nu^2}
\Big[ \sum_{qk}\big( 2\varepsilon_{\alpha\beta}
X_{qk;\alpha\beta} \,\bar{q}_{qk}^\nu +
2\hbar\omega_\nu Y_{qk;\alpha\beta}\,\bar{p}_{qk}^\nu
\big) \Big]^* = \vec{R}^\dagger \vec{A} \\
\langle[\hat{C}_\nu,\hat{M}_{\lambda\mu}^\mathrm{M}]\rangle &=
\frac{-1}{2\lambda+1}\sum_{\alpha\geq\beta}
\frac{M_{\lambda;\alpha\beta}^\mathrm{M}}{\varepsilon_{\alpha\beta}^2-E_\nu^2}
\Big[ \sum_{qk}\big( 2\hbar\omega_\nu X_{qk;\alpha\beta}\,\bar{q}_{qk}^\nu
+ 2\varepsilon_{\alpha\beta}
Y_{qk;\alpha\beta} \,\bar{p}_{qk}^\nu
\big) \Big]^* = \vec{R}^\dagger \vec{A}\end{aligned}$$
where I defined the energy-dependent vector $\vec{A}$
$$\begin{aligned}
A_{qk}^{(X)} &= \frac{-2}{2\lambda+1}\sum_{\alpha\geq\beta} \frac{M_{\lambda;\alpha\beta}X_{qk;\alpha\beta}^*}{\varepsilon_{\alpha\beta}^2-E_\nu^2}\times
\Big\{\!\!\begin{array}{ll} \varepsilon_{\alpha\beta} & (\mathrm{E}\lambda) \\
E_\nu & (\mathrm{M}\lambda) \end{array} \\
A_{qk}^{(Y)} &= \frac{-2}{2\lambda+1}\sum_{\alpha\geq\beta} \frac{M_{\lambda;\alpha\beta}Y_{qk;\alpha\beta}^*}{\varepsilon_{\alpha\beta}^2-E_\nu^2}\times
\Big\{\!\!\begin{array}{ll} E_\nu & (\mathrm{E}\lambda) \\
\varepsilon_{\alpha\beta}\! & (\mathrm{M}\lambda) \end{array}\end{aligned}$$
The reduced transition probability is then $$\label{BE_0}
B(\textrm{E/M}\lambda\mu;0\rightarrow\nu) = |\langle\nu|\hat{M}_{\lambda\mu}|\textrm{RPA}\rangle|^2 = \vec{A}^\dagger \vec{R}\vec{R}^\dagger\vec{A}$$ I will evaluate the matrix $\vec{R}\vec{R}^\dagger$ using (\[R-norm\]). I will use the singularity of the matrix $D$ ($\det D(E_\nu) = 0$) and expand its determinant along the $j$-th row into algebraic supplements (cofactors) $d_{jk}$ ($D^{(jk)}$ denotes the submatrix of $D$ with the $j$-th row and $k$-th column omitted) $$0 = \det D = \sum_k (-1)^{j+k}D_{jk}\det D^{(jk)} = \sum_k D_{jk} d_{jk} =
\sum_k (D_{jk} + D_{j'k})d_{jk} = \sum_k D_{j'k}d_{jk}$$ where the penultimate equality follows from the invariance of $\det D$ under addition of the $j'$-th row ($j'\neq j$) to the $j$-th row. The previous equation says that $d_{jk}$ is a solution of the equation $D\vec{R} = 0$, where the vector $\vec{R}$ is built from the $d_{jk}$ using $k$ as the vector index and any fixed $j$. The vector components $R_k$ are therefore proportional to $d_{jk}$: $$\frac{R_k}{R_{k'}} = \frac{d_{jk}}{d_{jk'}}$$ The matrix $D$ is Hermitian (\[F\_me\], \[XY-cc\]), and so is the matrix of its algebraic supplements $$d_{jk}^* = (-1)^{j+k}(\det D^{(jk)})^* = (-1)^{j+k}\det D^{(jk)\dagger} = (-1)^{j+k}\det D^{(kj)} = d_{kj}$$ and the derivative of $\det D$ can be calculated by the chain rule applied to its matrix elements $$\frac{\partial\det D}{\partial E_\nu} = \sum_{ij}\frac{\partial\det D}{\partial D_{ij}} \frac{\partial D_{ij}}{\partial E_\nu} = \sum_{ij} d_{ij}\frac{\partial D_{ij}}{\partial E_\nu}$$ The normalization condition (\[R-norm\]) can now be written as [@Kvasil1998] $$\begin{aligned}
1 &= \sum_{kk'} R_k^* \frac{\partial D_{kk'}}{\partial E_\nu} R_{k'} =
R_i^* R_j^{\phantom{*}} \sum_{kk'} \frac{R_k^*}{R_i^*} \frac{\partial D_{kk'}}{\partial E_\nu}\frac{R_{k'}}{R_j} \\
&= R_i^* R_j^{\phantom{*}} \sum_{kk'} \frac{d_{jk}^*}{d_{ji}^*} \frac{\partial D_{kk'}}{\partial E_\nu}\frac{d_{kk'}}{d_{kj}} =
\frac{R_i^* R_j^{\phantom{*}}}{d_{ij}} \frac{\partial\det D}{\partial E_\nu}\end{aligned}$$ Reduced transition probability (\[BE\_0\]) is then $$B(\lambda\mu;0\rightarrow\nu) = |\langle\nu|\hat{M}_{\lambda\mu}|\textrm{RPA}\rangle|^2 = \sum_{ij}A_i R_i^* R_j A_j^* =
\sum_{ij}\frac{A_i d_{ij} A_j^*}{\frac{\partial\det D}{\partial E_\nu}} = -\frac{\det B}{\ \frac{\partial\det D}{\partial E_\nu}\ }$$ where the expanded matrix $B$ was defined by $$\sum_{ij} A_i d_{ij} A_j^* = -\det \begin{pmatrix} D_{ij} & A_i \\ A_j^* & 0 \end{pmatrix} = -\det B$$ The star at $A_j$ means complex conjugation of matrix elements of $\hat{X}_{qk}$ and $\hat{Y}_{qk}$ only (matrix elements of $M_\lambda$ are real), but not the complex conjugation of $E_\nu$ that becomes complex during the evaluation of the strength function.
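The bordered-determinant identity above is easy to check numerically. The following sketch (illustrative only — the Hermitian matrix $D$ and vector $\vec{A}$ are arbitrary test data, not SRPA quantities) verifies $\sum_{ij} A_i d_{ij} A_j^* = -\det B$ with brute-force cofactors:

```python
# Check: sum_{ij} A_i d_{ij} A_j^* = -det [[D, A], [A^dagger, 0]],
# where d_{ij} = (-1)^(i+j) det D^{(ij)} are the algebraic supplements
# (cofactors) of D.

def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 0:
        return 1.0
    if n == 1:
        return M[0][0]
    return sum((-1) ** k * M[0][k] * det([row[:k] + row[k + 1:] for row in M[1:]])
               for k in range(n))

def cofactor(M, i, j):
    """Algebraic supplement d_{ij} of the entry M[i][j]."""
    minor = [row[:j] + row[j + 1:] for r, row in enumerate(M) if r != i]
    return (-1) ** (i + j) * det(minor)

D = [[2.0,        1.0 + 1.0j, 0.5],
     [1.0 - 1.0j, 3.0,        2.0j],
     [0.5,        -2.0j,      1.0]]     # Hermitian, like the SRPA matrix D
A = [1.0 + 0.5j, -0.3, 2.0j]

lhs = sum(A[i] * cofactor(D, i, j) * A[j].conjugate()
          for i in range(3) for j in range(3))

bordered = [D[i] + [A[i]] for i in range(3)] + [[a.conjugate() for a in A] + [0.0]]
rhs = -det(bordered)

assert abs(lhs - rhs) < 1e-9
```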
Strength function of $n$-th order $$\begin{aligned}
\label{sf_def}
S_n(\lambda\mu;E) &= \sum_\nu E_\nu^n B(\lambda\mu;0\rightarrow\nu)\delta_\Delta(E-E_\nu), \\[-8pt]
&\qquad\qquad\qquad\qquad\textrm{where}\
\delta_\Delta(E-E_\nu) = \frac{\Delta/2\pi}{(E-E_\nu)^2+(\Delta/2)^2} \nonumber\end{aligned}$$ can be evaluated directly from the determinants of $B$ and $D$, employing their complex-analytic properties with respect to the parameter $E_\nu$. Let us define a function $f(z)$ that vanishes at infinity (for $n\leq 2$) $$f(z) = -z^n\frac{\det B(z)}{\det D(z)},\quad
\mathop{\mathrm{Res}}_{\ z=E_\nu} f(z) = -E_\nu^n\frac{\det B(E_\nu)}{\ \frac{\partial\det D}{\partial E_\nu}\ } = E_\nu^n B(\lambda\mu;0\rightarrow\nu)$$ Lorentz smoothing can be obtained directly by shifting the energy by an imaginary constant $$f\Big(x+\mathrm{i}\frac{\Delta}{2}\Big) =
\sum_j \frac{1}{x-x_j+\mathrm{i}\Delta/2}\!\mathop{\mathrm{Res}}_{\ z=x_j}\! f(z) =
\sum_j \frac{x-x_j-\mathrm{i}\Delta/2}{(x-x_j)^2+(\Delta/2)^2}\!\mathop{\mathrm{Res}}_{\ z=x_j}\! f(z)$$ $$-\frac{1}{\pi}\Im\Big[f\Big(x+\mathrm{i}\frac{\Delta}{2}\Big)\Big] =
\sum_j \delta_\Delta(x-x_j)\!\mathop{\mathrm{Res}}_{\ z=x_j}\! f(z)$$ Besides the poles in $E_\nu$, the function $f(z)$ contains poles also in $\pm\varepsilon_{\alpha\beta},-E_\nu$. Negative poles will be neglected, due to their small contribution for positive $E$. The contribution of positive poles ($+\varepsilon_{\alpha\beta}$) is evaluated and removed using $$\begin{aligned}
\mathop{\mathrm{lim}}_{\ z\rightarrow\varepsilon_{\alpha\beta}}\!(z-\varepsilon_{\alpha\beta})^2 A_{qk}^{(X)}(z) A_{q'k'}^{(Y)*}(z)
&= \frac{|M_{\lambda;\alpha\beta}|^2}{(2\lambda+1)^2} \frac{4\varepsilon_{\alpha\beta}\varepsilon_{\alpha\beta}X_{qk;\alpha\beta}^* Y_{q'k';\alpha\beta}}{(\varepsilon_{\alpha\beta}+\varepsilon_{\alpha\beta})^2} \\
&= \frac{|M_{\lambda;\alpha\beta}|^2}{2\lambda+1}
\mathop{\mathrm{lim}}_{\ z\rightarrow\varepsilon_{\alpha\beta}}\!(\varepsilon_{\alpha\beta}-z)F_{qk,q'k'}^{(XY)}(z)\end{aligned}$$ $$-\!\!\mathop{\mathrm{Res}}_{\ z=\varepsilon_{\alpha\beta}}\! f(z) =
\!\!\mathop{\mathrm{lim}}_{\ z\rightarrow\varepsilon_{\alpha\beta}}\!\!(z-\varepsilon_{\alpha\beta})\frac{\det B(z)}{\det D(z)} =
\frac{|M_{\lambda;\alpha\beta}|^2}{2\lambda+1}
\!\mathop{\mathrm{lim}}_{\ z\rightarrow\varepsilon_{\alpha\beta}}\!\sum_{ij}\frac{D_{ij}(z)d_{ij}(z)}{\det D(z)}
= \frac{|M_{\lambda;\alpha\beta}|^2}{2\lambda+1}$$ The final strength function (for $n\in\{0,1,2\}$) is then $$\label{SRPA_sf}
S_n(\lambda\mu;E) = \frac{1}{\pi}\Im\bigg[z^n\frac{\det B(z)}{\det D(z)}\bigg]_{z=E+\mathrm{i}\frac{\Delta}{2}} + \sum_{\alpha\geq\beta} \varepsilon_{\alpha\beta}^n
\frac{|M_{\lambda;\alpha\beta}|^2}{2\lambda+1}
\delta_\Delta(E-\varepsilon_{\alpha\beta})$$
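The imaginary-shift trick that produces the Lorentz smoothing can be illustrated on a toy rational function (a sketch with made-up poles and residues, not SRPA output):

```python
import math

# Poles and residues of a toy rational function (stand-ins for the RPA
# energies E_nu and the weights E_nu^n B(lambda mu; 0 -> nu)).
poles    = [1.0, 2.5, 4.0]
residues = [0.7, 1.2, 0.4]
Delta    = 0.3                      # smoothing width

def f(z):
    # f has simple poles at x_j with residues r_j
    return sum(r / (z - x) for x, r in zip(poles, residues))

def delta_lorentz(x, x0):
    # delta_Delta from the definition of the strength function
    return (Delta / (2 * math.pi)) / ((x - x0) ** 2 + (Delta / 2) ** 2)

# -(1/pi) Im f(x + i*Delta/2) equals the Lorentzian-weighted sum of residues
for x in [0.8, 2.0, 3.7]:
    smoothed = -f(complex(x, Delta / 2)).imag / math.pi
    direct   = sum(r * delta_lorentz(x, x0) for x0, r in zip(poles, residues))
    assert abs(smoothed - direct) < 1e-12
```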
-------- ---------------------------------------------------------------------------------------------
$2qp$ two-quasiparticle pairs (like $\hat{\alpha}\hat{\alpha}$ or $\hat{\alpha}^+\hat{\alpha}^+$)
BCS Bardeen-Cooper-Schrieffer (theory of pairing)
BHF Brückner-Hartree-Fock
cmc center-of-mass correction
DFT density functional theory
E-M Euler-Maclaurin (formula, summation, correction)
EWSR energy-weighted sum rule
GDR giant dipole resonance (E1)
GQR giant quadrupole resonance (E2)
GMR giant monopole resonance (E0)
HF Hartree-Fock
HFB Hartree-Fock-Bogoliubov
QRPA quasiparticle random phase approximation
r.m.e. reduced matrix elements
RPA random phase approximation
s.f. strength function
SHO spherical harmonic oscillator
s.p. single-particle/one-body (basis, matrix elements)
SRPA separable random phase approximation
VAP variation after projection
VBP variation before projection
w.f. wavefunction
-------- ---------------------------------------------------------------------------------------------
The symbol $\Delta$ is used for two different purposes, which may cause confusion: either as an energy-smoothing parameter (in units of MeV), or as a grid spacing (lattice parameter) in units of fm. There are also some other possible collisions, such as $\alpha,\,b,\,\delta,\,J,\,j,\,Q,\,P,\,p$, but all of them should be clear from the context or from the attributes (index, hat).
---
abstract: 'We describe a new implementation of the elementary transcendental functions exp, sin, cos, log and atan for variable precision up to approximately 4096 bits. Compared to the MPFR library, we achieve a maximum speedup ranging from a factor 3 for cos to 30 for atan. Our implementation uses table-based argument reduction together with rectangular splitting to evaluate Taylor series. We collect denominators to reduce the number of divisions in the Taylor series, and avoid overhead by doing all multiprecision arithmetic using the mpn layer of the GMP library. Our implementation provides rigorous error bounds.'
author:
- 'Fredrik Johansson[^1] [^2]'
bibliography:
- 'references.bib'
title: 'Efficient implementation of elementary functions in the medium-precision range'
---
Introduction
============
Considerable effort has been made to optimize computation of the elementary transcendental functions in IEEE 754 double precision arithmetic (53 bits) subject to various constraints [@dedinechin:inria-00071446; @daramy2003cr; @dukhan2014methods; @harrison1999computation; @metalibm]. Higher precision is indispensable for computer algebra and is becoming increasingly important in scientific applications [@bailey2012high]. Many libraries have been developed for arbitrary-precision arithmetic. The de facto standard is arguably MPFR [@Fousse2007], which guarantees correct rounding to any requested number of bits.
Unfortunately, there is a large performance gap between double precision and arbitrary-precision libraries. Some authors have helped bridge this gap by developing fast implementations targeting a fixed precision, such as 106, 113 or 212 bits [@thall2006extended; @6081400; @hida2007library]. However, these implementations generally do not provide rigorous error bounds (a promising approach to remedy this situation is [@metalibm]), and performance optimization in the range of several hundred bits still appears to be lacking.
The asymptotic difficulty of computing elementary functions is well understood. From several thousand bits and up, the bit-burst algorithm or the arithmetic-geometric mean algorithm coupled with Newton iteration effectively reduce the problem to integer multiplication, which has quasilinear complexity [@brent1976complexity; @mca]. Although such high precision has uses, most applications beyond double precision only require modest extra precision, say a few hundred bits or rarely a few thousand bits.
In this “medium-precision” range beyond double precision and up to a few thousand bits, i.e. up to perhaps a hundred words on a 32-bit or 64-bit computer, there are two principal hurdles in the way of efficiency. First, the cost of $(n \times n)$-word multiplication or division grows quadratically with $n$, or almost quadratically if Karatsuba multiplication is used, so rather than “reducing everything to multiplication” (in the words of [@steelreduce]), we want to do as little multiplying as possible. Secondly, since multiprecision arithmetic currently has to be done in software, every arithmetic operation potentially involves overhead for function calls, temporary memory allocation, and case distinctions based on signs and sizes of inputs; we want to avoid as much of this bookkeeping as possible.
In this work, we consider the five elementary functions exp, sin, cos, log, atan of a real variable, to which all other real and complex elementary functions can be delegated via algebraic transformations. Our algorithm for all five functions follows the well-known strategy of argument reduction based on functional equations and lookup tables as described in section \[sect:argred\], followed by evaluation of Taylor series. To keep overhead at a minimum, all arithmetic uses the low-level mpn layer of the GMP library [@gmp], as outlined in section \[sect:fixed\].
We use lookup tables in arguably the simplest possible way, storing values of the function itself on a regularly spaced grid. At high precision, a good space-time tradeoff is achieved by using bipartite tables. Several authors have studied the problem of constructing optimal designs for elementary functions in resource-constrained settings, where it is important to minimize not only the size of the tables but also the numerical error and the complexity of circuitry to implement the arithmetic operations [@de2005multipartite], [@schulte1999approximating], [@stine1999symmetric]. We ignore such design parameters since guard bits and code size are cheap in our setting.
While implementations in double precision often use minimax or Chebyshev polynomial approximations, which require somewhat fewer terms than Taylor series for equivalent accuracy, Taylor series are superior at high precision since the evaluation can be done faster. Smith’s rectangular splitting algorithm [@Smith1989] allows evaluating a degree-$N$ truncated Taylor series of suitable type using $O(\sqrt{N})$ $(n \times n)$-word multiplications whereas evaluating a degree-$N$ minimax polynomial using Horner’s rule requires $O(N)$ such multiplications. The main contribution of the paper, described in section \[sect:taylor\], is an improved version of Smith’s rectangular splitting algorithm for evaluating Taylor series, in which we use fixed-point arithmetic efficiently and avoid most divisions. Section \[sect:toplevel\] describes the global algorithm including error analysis.
Our implementation of the elementary functions is part of version 2.4.0 of the open source arbitrary-precision interval software Arb [@Johansson:2014:ACL:2576802.2576828]. The source code can be retrieved from [@fjarbsource].
Since the goal is to do interval arithmetic, we compute a rigorous bound for the numerical error. Unlike MPFR, our code does not output a correctly rounded floating-point value. This is more of a difference in the interface than an inherent limitation of the algorithm, and only accounts for a small difference in performance (as explained in Section \[sect:toplevel\]).
Our benchmark results in section \[sect:bench\] show a significant speedup compared to the current version (3.1.2) of MPFR. MPFR uses several different algorithms depending on the precision and function [@mpfralg], including Smith’s algorithm in some cases. The large improvement is in part due to our use of lookup tables (which MPFR does not use) and in part due to the optimized Taylor series evaluation and elimination of general overhead. Our different elementary functions also have similar performance to each other. Indeed, the algorithm is nearly the same for all functions, which simplifies the software design and aids proving correctness.
While our implementation allows variable precision up to a few thousand bits, it is competitive in the low end of the range with the QD library [@hida2007library] which only targets 106 or 212 bits. QD uses a combination of lookup tables, argument reduction, Taylor series, and Newton iteration for inverse functions.
Fixed-point arithmetic {#sect:fixed}
======================
We base our multiprecision arithmetic on the GMP library [@gmp] (or the fork MPIR [@mpir]), which is widely available and optimized for common CPU architectures. We use the mpn layer of GMP, since the mpz layer has unnecessary overhead. On the mpn level, a multiprecision integer is an array of limbs (words). We assume that a limb is either $B = 32$ or $B = 64$ bits, holding a value between $0$ and $2^B-1$. We represent a real number in fixed-point format with $Bn$-bit precision using $n$ fractional limbs and zero or more integral limbs. An $n$-limb array thus represents a value in the range $[0,1-\text{ulp}]$, and an $(n+1)$-limb array represents a value in the range $[0,2^B-\text{ulp}]$ where $\text{ulp} = 2^{-Bn}$.
An advantage of fixed-point over floating-point arithmetic is that we can add numbers without any rounding or shift adjustments. The most important GMP functions are shown in Table \[tab:fixedpoint\], where $X, Y, Z$ denote fixed-point numbers with the same number of limbs and $c$ denotes a single-limb unsigned integer. Since the first five functions return carry-out or borrow, we can also use them when $X$ has one more limb than $Y$.
---------------- --------------------------------------
`mpn_add_n` $X \gets X + Y$ (or $X \gets Y + Z$)
`mpn_sub_n` $X \gets X - Y$ (or $X \gets Y - Z$)
`mpn_mul_1` $X \gets Y \times c$
`mpn_addmul_1` $X \gets X + Y \times c$
`mpn_submul_1` $X \gets X - Y \times c$
`mpn_mul_n` $X \gets Y \times Z$
`mpn_sqr` $X \gets Y \times Y$
`mpn_divrem_1` $X \gets Y / c$
---------------- --------------------------------------
: Fixed-point operations using GMP.
\[tab:fixedpoint\]
The first five GMP functions in Table \[tab:fixedpoint\] are usually implemented in assembly code, and we therefore try to push the work onto those primitives. Note that multiplying two $n$-limb fixed-point numbers involves computing the full $2n$-limb product and throwing away the $n$ least significant limbs. We can often avoid explicitly copying the high limbs by simply moving the pointer into the array.
The mpn representation does not admit negative numbers. However, we can store negative numbers implicitly using two’s complement representation as long as we only add and subtract fixed-point numbers with the same number of limbs. We must then take care to ensure that the value is positive before multiplying or dividing.
We compute bounds for all errors when doing fixed-point arithmetic. For example, if $X$ and $Y$ are fixed-point numbers with respective errors $\varepsilon_1$, $\varepsilon_2$, then their sum has error bounded by $|\varepsilon_1| + |\varepsilon_2|$, and their product, rounded to a fixed-point number using a single truncation, has error bounded by $$|Y| |\varepsilon_1| + |X| |\varepsilon_2|
+ |\varepsilon_1 \varepsilon_2| + (1~\text{ulp}).$$ If $c$ is an exact integer, then the product $X \times c$ has error bounded by $|\varepsilon_1||c|$, and the quotient $X/c$, rounded to a fixed-point number using a single truncation, has error bounded by $|\varepsilon_1|/|c| + (1~\text{ulp})$. Similar bounds are used for other operations that arise in the implementation.
In parts of the code, we use a single-limb variable to track a running error bound measured in ulps, instead of determining a formula that bounds the cumulative error in advance. This is convenient, and cheap compared to the actual work done in the multiprecision arithmetic operations.
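The scheme above can be sketched in a few lines (illustrative only — Python big integers stand in for GMP limb arrays, and the helper names are invented for this example, not the mpn-based implementation):

```python
from fractions import Fraction

# Model: an n-limb fixed-point value X represents the real number X / 2**(B*n).
B, n = 64, 4
SCALE = 1 << (B * n)               # one ulp is 1/SCALE

def to_fixed(x):                   # truncate: error in [0, 1) ulp
    return int(x * SCALE)

def fixed_mul(X, Y):               # full product, then drop the low n limbs
    return (X * Y) >> (B * n)      # truncation adds at most 1 ulp of error

def fixed_div_1(X, c):             # division by a single-"limb" integer c
    return X // c                  # at most 1 ulp of error

x, y = Fraction(3, 7), Fraction(1, 3)
X, Y = to_fixed(x), to_fixed(y)
got = Fraction(fixed_mul(X, Y), SCALE)
# error bound from the text: |y| e1 + |x| e2 + e1*e2 + 1 ulp, with e1, e2 < 1 ulp
bound = Fraction(1, SCALE) * (y + x + 1) + Fraction(1, SCALE ** 2)
assert abs(x * y - got) <= bound
```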
Argument reduction {#sect:argred}
==================
The standard method to evaluate elementary functions begins with one or several argument reductions to restrict the input to a small standard domain. The function is then computed on the standard domain, typically using a polynomial approximation such as a truncated Taylor series, and the argument reduction steps are inverted to recover the function value [@mca], [@muller2006elementary].
As an example, consider the exponential function $\exp(x)$. Setting $m = \lfloor x / \log(2) \rfloor$ and $t = x - m \log(2)$, we reduce the problem to computing $\exp(x) = \exp(t) 2^m$ where $t$ lies in the standard domain $[0, \log(2))$. Writing $\exp(t) = [\exp(t/2^r)]^{2^r}$, we can further reduce the argument to the range $[0, 2^{-r})$ at the expense of $r$ squarings, thereby improving the rate of convergence of the Taylor series. Analogously, we can reduce to the intervals $[0,\pi/4)$ for sin and cos, $[0,1)$ for atan, and $[1,2)$ for log, and follow up with $r$ further transformations to reduce the argument to an interval of width $2^{-r}$.
This strategy does not require precomputations (except perhaps for the constants $\pi$ and $\log(2)$), and is commonly used in arbitrary-precision libraries such as MPFR [@mpfralg].
The argument reduction steps can be accelerated using lookup tables. If we precompute $\exp(i/2^r)$ for $i = 0 \ldots 2^r-1$, we can write $\exp(x) = \exp(x-i/2^r) \exp(i/2^r)$ where $i = \lfloor 2^r x \rfloor$. This achieves $r$ halvings worth of argument reduction for the cost of just a single multiplication. To save space, we can use a bipartite (or multipartite) table, e.g. writing $\exp(x) = \exp(x-i/2^r-j/2^{2r}) \exp(i/2^r) \exp(j/2^{2r})$.
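A floating-point sketch of this table-based reduction (double precision stands in for multiprecision, and the table size $r = 8$ is chosen only for illustration):

```python
import math

r = 8
table = [math.exp(i / 2 ** r) for i in range(2 ** r)]   # exp(i/2^r), i < 2^r

def exp01(x):
    """exp on [0, 1): one table lookup gives r halvings worth of reduction."""
    i = int(x * 2 ** r)            # i = floor(2^r x)
    w = x - i / 2 ** r             # reduced argument, 0 <= w < 2^-r
    s, term = 1.0, 1.0             # short Taylor series for exp(w);
    for k in range(1, 6):          # few terms suffice since w < 2^-8
        term *= w / k
        s += term
    return table[i] * s

for x in [0.0, 0.3, 0.73, 0.999]:
    assert abs(exp01(x) - math.exp(x)) < 1e-13
```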
This recipe works for all elementary functions. We use the following formulas, in which $x \in [0, 1)$, $q = 2^r$, $i = \lfloor 2^r x \rfloor$, $t = i/q$, $w = x-i/q$, $w_1 = (qx-i)/(i+q)$, and $w_2 = (qx-i)/(ix+q)$: $$\begin{aligned}
\exp(x) & = \exp(t) \exp(w) \\
\sin(x) & = \sin(t) \cos(w) + \cos(t) \sin(w) \\
\cos(x) & = \cos(t) \cos(w) - \sin(t) \sin(w) \\
\log(1+x) & = \log(1+t) + \log(1+w_1) \\
{\ensuremath{\operatorname{atan}}}(x) & = {\ensuremath{\operatorname{atan}}}(t) + {\ensuremath{\operatorname{atan}}}(w_2)\end{aligned}$$
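The five reduction formulas above can be checked directly in double precision (a sanity check, not part of the implementation; $r = 5$ is illustrative):

```python
import math

# q = 2^r, t = i/q, w = x - i/q, w1 = (qx-i)/(i+q), w2 = (qx-i)/(ix+q)
r = 5
q = 2 ** r
for x in [0.12, 0.5, 0.97]:
    i = int(q * x)
    t = i / q
    w = x - i / q
    w1 = (q * x - i) / (i + q)
    w2 = (q * x - i) / (i * x + q)
    assert abs(math.exp(x) - math.exp(t) * math.exp(w)) < 1e-12
    assert abs(math.sin(x) - (math.sin(t) * math.cos(w) + math.cos(t) * math.sin(w))) < 1e-12
    assert abs(math.cos(x) - (math.cos(t) * math.cos(w) - math.sin(t) * math.sin(w))) < 1e-12
    assert abs(math.log1p(x) - (math.log1p(t) + math.log1p(w1))) < 1e-12
    assert abs(math.atan(x) - (math.atan(t) + math.atan(w2))) < 1e-12
```

The log identity works because $(1+t)(1+w_1) = 1+x$, and the atan identity is the subtraction formula $\operatorname{atan}(x) - \operatorname{atan}(t) = \operatorname{atan}((x-t)/(1+tx))$.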
The sine and cosine are best computed simultaneously. The argument reduction formula for the logarithm is cheaper than for the other functions, since it requires $(n \times 1)$-word operations and no $(n \times n)$-word multiplications or divisions. The advantage of using lookup tables is greater for log and atan than for exp, sin and cos, since the “argument-halving” formulas for log and atan involve square roots.
If we want $p$-bit precision and chain together $m$ lookup tables worth $r$ halvings each, the total amount of space is $m p 2^r$ bits, and the number of terms in the Taylor series that we have to sum is of the order $p / (r m)$. Taking $r$ between 4 and 10 and $m$ between 1 and 3 gives a good space-time tradeoff. At lower precision, a smaller $m$ is better.
Function Precision $m$ $r$ Entries Size (KiB)
---------- ------------ ----- ----- --------- ------------
exp $\le 512$ 1 8 178 11.125
exp $\le 4608$ 2 5 23+32 30.9375
sin $\le 512$ 1 8 203 12.6875
sin $\le 4608$ 2 5 26+32 32.625
cos $\le 512$ 1 8 203 12.6875
cos $\le 4608$ 2 5 26+32 32.625
log $\le 512$ 2 7 128+128 16
log $\le 4608$ 2 5 32+32 36
atan $\le 512$ 1 8 256 16
atan $\le 4608$ 2 5 32+32 36
Total 236.6875
: Size of lookup tables.
\[tab:tablesize\]
Our implementation uses the table parameters shown in Table \[tab:tablesize\]. For each function, we use a fast table up to 512 bits and a more economical table from 513 to 4608 bits, supporting function evaluation at precisions just beyond 4096 bits plus guard bits. Some of the tables have less than $2^r$ entries since they end near $\log(2)$ or $\pi/4$. A few more kilobytes are used to store precomputed values of $\pi/4$, $\log(2)$, and coefficients of Taylor series.
The parameters in Table \[tab:tablesize\] were chosen based on experiment to give good performance at all precisions while keeping the total size (less than 256 KiB) insignificant compared to the overall space requirements of most applications and small enough to fit in a typical L2 cache. For simplicity, our code uses static precomputed tables, which are tested against MPFR to verify that all entries are correctly rounded.
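As a cross-check of the arithmetic behind Table \[tab:tablesize\] (a throwaway sketch; each entry is stored at the full precision of its tier, so a table occupies $\text{entries} \times \text{prec} / 8 / 1024$ KiB):

```python
# Rows restate Table [tab:tablesize]: (function, prec in bits, entries, KiB).
rows = [
    ("exp",  512,  178,       11.125),
    ("exp",  4608, 23 + 32,   30.9375),
    ("sin",  512,  203,       12.6875),
    ("sin",  4608, 26 + 32,   32.625),
    ("cos",  512,  203,       12.6875),
    ("cos",  4608, 26 + 32,   32.625),
    ("log",  512,  128 + 128, 16.0),
    ("log",  4608, 32 + 32,   36.0),
    ("atan", 512,  256,       16.0),
    ("atan", 4608, 32 + 32,   36.0),
]
for name, prec, entries, kib in rows:
    assert entries * prec / 8 / 1024 == kib   # exact in binary floating point
assert sum(kib for _, _, _, kib in rows) == 236.6875
```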
The restriction to 4096-bit and lower precision is done since lookup tables give diminishing returns at higher precision compared to asymptotically fast algorithms that avoid precomputations entirely. In a software implementation, there is no practical upper limit to the size of lookup tables that can be used. One could gain efficiency by using auxiliary code to dynamically generate tables that are optimal for a given application.
Taylor series evaluation {#sect:taylor}
========================
After argument reduction, we need to evaluate a truncated Taylor series, where we are given a fixed-point argument $0 \le X \ll 1$ and the number of terms $N$ to add. In this section, we present an algorithm that solves the problem efficiently, with a bound for the rounding error. The initial argument reduction restricts the possible range of $N$, which simplifies the analysis. Indeed, for an internal precision of $p \le 4608$ bits and the parameters of Table \[tab:tablesize\], $N < 300$ always suffices.
We use a version of Smith’s algorithm to avoid expensive multiplications [@Smith1989]. The method is best explained by an example. To evaluate $${\ensuremath{\operatorname{atan}}}(x) \approx x \sum_{k=0}^{N-1} \frac{(-1)^k t^k}{2k+1}, \quad t = x^2$$ with $N = 16$, we pick the splitting parameter $m = \sqrt{N} = 4$ and write ${\ensuremath{\operatorname{atan}}}(x) /x \approx$ $$\begingroup
\renewcommand*{\arraystretch}{1.3}
\begin{matrix}
& [1 & - & \tfrac{1}{3} t & + & \tfrac{1}{5} t^2 & - & \tfrac{1}{7} t^3] & \, \\
+ & [\tfrac{1}{9} & - & \tfrac{1}{11} t & + & \tfrac{1}{13} t^2 & - & \tfrac{1}{15} t^3] & t^4 \\
+ & [\tfrac{1}{17} & - & \tfrac{1}{19} t & + & \tfrac{1}{21} t^2 & - & \tfrac{1}{23} t^3] & t^8 \\
+ & [\tfrac{1}{25} & - & \tfrac{1}{27} t & + & \tfrac{1}{29} t^2 & - & \tfrac{1}{31} t^3] & t^{12}.
\end{matrix}
\endgroup$$ Since the powers $t^2, \ldots, t^m$ can be recycled for each row, we only need $2 \sqrt{N}$ full $(n \times n)$-limb multiplications, plus $O(N)$ “scalar” operations, i.e. additions and $(n \times 1)$-limb divisions. This “rectangular” splitting arrangement of the terms is actually a transposition of Smith’s “modular” algorithm, and appears to be superior since Horner’s rule can be used for the outer polynomial evaluation with respect to $t^m$ (see [@mca]).
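The scheme can be sketched in floating point (the real code works in fixed point with the $u_k/v_k$ tables described below; here plain division stands in for the scalar operations):

```python
import math

def atan_rect(x, N):
    """Sum N terms of the atan Taylor series by rectangular splitting:
    the powers t^1..t^m are computed once, and each row costs one big
    multiplication (the Horner step in t^m); all other work is scalar."""
    t = x * x
    m = max(1, math.isqrt(N))                    # splitting parameter ~ sqrt(N)
    T = [1.0]
    for _ in range(m):
        T.append(T[-1] * t)                      # t^0 .. t^m
    S = 0.0
    for row in reversed(range((N + m - 1) // m)):
        inner = sum((-1) ** k * T[k - row * m] / (2 * k + 1)
                    for k in range(row * m, min(row * m + m, N)))
        S = S * T[m] + inner                     # Horner step in t^m
    return x * S

assert abs(atan_rect(0.1, 16) - math.atan(0.1)) < 1e-14
```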
A drawback of Smith’s algorithm is that an $(n \times 1)$ division has high overhead compared to an $(n \times 1)$ multiplication, or even an $(n \times n)$ multiplication if $n$ is very small. In [@Johansson2014], a different rectangular splitting algorithm was proposed that uses $(n \times O(\sqrt{N}))$-limb multiplications instead of scalar divisions, and also works in the more general setting of holonomic functions. Initial experiments done by the author suggest that the method of [@Johansson2014] can be more efficient at modest precision. However, we found that another variation turns out to be superior for the Taylor series of the elementary functions, namely to simply collect several consecutive denominators in a single word, replacing most $(n \times 1)$-word divisions by cheaper $(n \times 1)$-word multiplications.
We precompute tables of integers $u_k, v_k < 2^B$ such that $1/(2k+1) = u_k / v_k$ and $v_k $ is the least common multiple of $2i-1$ for several consecutive $i$ near $k$. To generate the table, we iterate upwards from $k = 0$, picking the longest possible sequence of terms on a common denominator without overflowing a limb, starting a new subsequence from each point where overflow occurs. This does not necessarily give the least possible number of distinct denominators, but it is close to optimal (on average, $v_k$ is 28 bits wide on a 32-bit system and 61 bits wide on a 64-bit system for $k < 300$). The $k$ such that $v_k \ne v_{k+1}$ are $$12, 18, 24, 29, \ldots, 226, 229, \ldots (\text{32-bit})$$ and $$23, 35, 46, 56, \ldots, 225, 232, \ldots (\text{64-bit}).$$ In the supported range, we need at most one division every three terms (32-bit) or every seven terms (64-bit), and less than this for very small $N$.
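The greedy construction of the denominator table can be sketched as follows (illustrative; it reproduces the split points quoted above):

```python
from math import gcd

def split_points(B, kmax=40):
    """Greedy grouping of the atan denominators 2k+1: extend the running
    lcm until it would exceed the limb bound 2^B - 1, then start a new
    group. Returns the indices j where v_j != v_{j+1}."""
    limit = 2 ** B - 1
    splits, v = [], 1
    for k in range(kmax):
        d = 2 * k + 1
        lcm = v * d // gcd(v, d)
        if lcm > limit:
            splits.append(k - 1)   # group ended at k-1: v_{k-1} != v_k
            v = d                  # start a new group at denominator d
        else:
            v = lcm
    return splits

assert split_points(32)[:4] == [12, 18, 24, 29]
assert split_points(64)[:2] == [23, 35]
```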
We compute the sum backwards. Suppose that the current partial sum is ${S / v_{k+1}}$. To add $u_k / v_k$ when $v_k \ne v_{k+1}$, we first change denominators by computing $S \gets (S \times v_k) / v_{k+1}$. This requires one $((n+1) \times 1)$ multiplication and one $((n+2) \times 1)$ division. A complication arises if $S$ is a two’s-complement negative value when we change denominators; in this case, however, we can just “add and subtract 1”, i.e. compute $$((S + v_{k+1}) \times v_k) / v_{k+1} - v_k$$ which costs only two extra single-limb additions.
**Algorithm \[alg:atan\]** (atan Taylor series with rectangular splitting and collected denominators)

*Input:* $0 \le X \le 2^{-4}$ as an $n$-limb fixed-point number, $2 < N < 300$.
*Output:* $S \approx \sum_{k=0}^{N-1} \tfrac{(-1)^k}{2k+1} X^{2k+1}$ as an $n$-limb fixed-point number with $\le$ 2 ulp error.

1. $m \gets 2 \lceil \sqrt{N}/2\rceil$
2. $T_1 \gets X \times X + \varepsilon$ \[lst:line:xsquare1\]
3. $T_2 \gets T_1 \times T_1 + \varepsilon$
4. For even $k = 4, 6, \ldots, m$: $T_{k-1} \gets T_{k/2} \times T_{k/2-1} + \varepsilon$; $T_k \gets T_{k/2} \times T_{k/2} + \varepsilon$
5. $S \gets 0$
6. For $k = N-1$ down to $0$:
   - If $v_k \ne v_{k+1}$ (change of denominator):
     - If $S$ is negative: $S \gets S + v_{k+1}$ \[lst:line:spluse\]
     - $S \gets S \times v_k$ \[lst:line:suse1\]
     - $S \gets S / v_{k+1} + \varepsilon$
     - If $S$ was negative: $S \gets S - v_k$ \[lst:line:sminusd\]
   - If $k \bmod m = 0$: $S \gets S + (-1)^k u_k$ \[lst:line:splusc1\]; then, if $k > 0$: $S \gets S \times T_m + \varepsilon$ \[lst:line:suse2\]
   - Else: $S \gets S + (-1)^k u_k \times T_{k \bmod m}$ \[lst:line:splusc2\]
7. $S \gets S / v_0 + \varepsilon$ \[lst:line:suse3\]
8. $S \gets S \times X + \varepsilon$ \[lst:line:finalmul\]
9. Return $S$
Pseudocode for our implementation of the atan Taylor series is shown in Algorithm \[alg:atan\]. All uppercase variables denote fixed-point numbers, and all lowercase variables denote integers. We write $+ \varepsilon$ to signify a fixed-point operation that adds up to 1 ulp of rounding error. All other operations are exact.
Algorithm \[alg:atan\] can be shown to be correct by a short exhaustive computation. We execute the algorithm symbolically for all allowed values of $N$. In each step, we determine an upper bound for the possible value of each fixed-point variable as well as its error, proving that no overflow is possible (note that $S$ may wraparound on lines \[lst:line:splusc1\] and \[lst:line:splusc2\] since we use two’s complement arithmetic for negative values, and part of the proof is to verify that $0 \le |S| \le 2^B - \text{ulp}$ necessarily holds before executing lines \[lst:line:suse1\], \[lst:line:suse2\], \[lst:line:suse3\]). The computation proves that the error is bounded by 2 ulp at the end.
It is not hard to see heuristically why the 2 ulp bound holds. Since the sum is kept multiplied by a denominator which is close to a full limb, we always have close to a full limb worth of guard bits. Moreover, each multiplication by a power of $X$ removes most of the accumulated error since $X \ll 1$. At the same time, the numerators and denominators are never so close to $2^B - 1$ that overflow is possible. We stress that the proof depends on the particular content of the tables $u$ and $v$.
Code to generate coefficients and prove correctness of Algorithm \[alg:atan\] (and its variants for the other functions) is included in the source repository [@fjarbsource] in the form of a Python script [verify\_taylor.py]{}.
Making small changes to Algorithm \[alg:atan\] allows us to compute log, exp, sin and cos. For log, we write $\log(1+x) = 2 \operatorname{atanh}(x/(x+2))$, since the Taylor series for atanh has half as many nonzero terms. To sum $S = \sum_{k=0}^{N-1} X^{2k+1} / (2k+1)$, we simply replace the subtractions with additions in Algorithm 1 and skip lines \[lst:line:spluse\] and \[lst:line:sminusd\].
For the exp series $S = \sum_{k=0}^{N-1} X^k / k!$, we use different tables $u$ and $v$. For $k! < 2^B - 1$, $u_k / v_k = 1/k!$ and for larger $k$, $u_k / v_k$ equals $1/k!$ times the product of all $v_i$ with $i < k$ and distinct from $v_k$. The $k$ such that $v_k \ne v_{k+1}$ are $$12, 19, 26, \ldots, 264, 267, \ldots (\text{32-bit})$$ and $$20, 33, 45, \ldots, 266, 273, \ldots (\text{64-bit}).$$ Algorithm \[alg:atan\] is modified by skipping line \[lst:line:suse1\] (in the next line, the division has one less limb). The remaining changes are that line \[lst:line:finalmul\] is removed, line \[lst:line:xsquare1\] becomes $T_1 \gets X$, and the output has $n + 1$ limbs instead of $n$ limbs.
For the sine and cosine $S_1 = \sum_{k=0}^{N-1} (-1)^k X^{2k+1} / (2k+1)!$ and $S_2 = \sum_{k=0}^{N-1} (-1)^k X^{2k} / (2k)!,$ we use the same $u_k, v_k$ as for exp, and skip line \[lst:line:suse1\]. As in the atan series, the table of powers starts with the square of $X$, and we multiply the sine by $X$ in the end. The alternating signs are handled the same way as for atan, except that line \[lst:line:sminusd\] becomes ${S \gets S - 1}$. To compute sin and cos simultaneously, we execute the main loop of the algorithm twice: once for the sine (odd-index coefficients) and once for the cosine (even-index coefficients), recycling the table $T$.
When computing sin and cos above circa 300 bits and exp above circa 800 bits, we optimize by just evaluating the Taylor series for sin or sinh, after which we use $\cos(x) = \sqrt{1 - [\sin(x)]^2}$ or $\exp(x) = \sinh(x) + \sqrt{1 + [\sinh(x)]^2}$. This removes half of the Taylor series terms, but only saves time at high precision due to the square root. The cosine is computed from the sine and not vice versa to avoid the ill-conditioning of the square root near 0.
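A double-precision check of these square-root reconstructions (they are valid on the reduced domains, where $\sin(x) \ge 0$):

```python
import math

for x in [0.1, 0.4, 0.7]:
    s = math.sin(x)
    assert abs(math.cos(x) - math.sqrt(1.0 - s * s)) < 1e-15
    sh = math.sinh(x)
    # exp(x) = sinh(x) + cosh(x) = sinh(x) + sqrt(1 + sinh(x)^2)
    assert abs(math.exp(x) - (sh + math.sqrt(1.0 + sh * sh))) < 1e-14
```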
Top-level algorithm and error bounds {#sect:toplevel}
====================================
Our input to an elementary function $f$ is an arbitrary-precision floating-point number $x$ and a precision $p \ge 2$. We output a pair of floating-point numbers $(y,z)$ such that $f(x) \in [y-z, y+z]$. The intermediate calculations use fixed-point arithmetic. Naturally, floating-point manipulations are used for extremely large or small input or output. For example, the evaluation of $\exp(x) = \exp(t) 2^m$, where $m$ is chosen so that $t = x - m \log(2) \in [0, \log(2))$, uses fixed-point arithmetic to approximate $\exp(t) \in [1,2)$. The final output is scaled by $2^m$ after converting it to floating-point form.
Algorithm \[alg:atantop\] gives pseudocode for ${\ensuremath{\operatorname{atan}}}(x)$, with minor simplifications compared to the actual implementation. In reality, the quantities $(y, z)$ are not returned exactly as printed; upon returning, $y$ is rounded to a $p$-bit floating-point number and the rounding error of this operation is added to $z$ which itself is rounded up to a low-precision floating-point number.
The variables $X, Y$ are fixed-point numbers and $Z$ is an error bound measured in ulps. We write $+\varepsilon$ to indicate that a result is truncated to an $n$-limb fixed-point number, adding at most $1~\text{ulp} = 2^{-Bn}$ error where $B = 32$ or $64$.
After taking care of special cases, $|x|$ or $1/|x|$ is rounded to a fixed-point number $0 \le X < 1$. Up to two argument transformations are then applied to $X$. The first ensures $0 \le X < 2^{-r_1}$ and the second ensures $0 \le X < 2^{-r_1-r_2}$. After line \[lst:line:lastxred\], we have (if $|x| < 1$) $$|{\ensuremath{\operatorname{atan}}}(x)| = {\ensuremath{\operatorname{atan}}}\!\left(\frac{p_1}{2^{r_1}}\right) + {\ensuremath{\operatorname{atan}}}\!\left(\frac{p_2}{2^{r_1+r_2}}\right) + {\ensuremath{\operatorname{atan}}}(X) + \delta$$ or (if $|x| > 1$) $$|{\ensuremath{\operatorname{atan}}}(x)| = \frac{\pi}{2} - {\ensuremath{\operatorname{atan}}}\!\left(\frac{p_1}{2^{r_1}}\right) - {\ensuremath{\operatorname{atan}}}\!\left(\frac{p_2}{2^{r_1+r_2}}\right) - {\ensuremath{\operatorname{atan}}}(X) + \delta$$ for some $|\delta| \le Z$. The bound on $\delta$ is easily proved by repeated application of the fact that $|{\ensuremath{\operatorname{atan}}}(t+\varepsilon)-{\ensuremath{\operatorname{atan}}}(t)| \le |\varepsilon|$ for all $t, \varepsilon \in \mathbb{R}$.
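A sketch of one table-based reduction step in Python (double precision in place of fixed point; the names are illustrative):

```python
import math

def atan_one_reduction(x, r=4):
    # One reduction step: with p = floor(2^r * x),
    #   atan(x) = atan(p / 2^r) + atan((2^r*x - p) / (2^r + p*x)),
    # and the remaining argument is < 2^-r.
    assert 0.0 <= x < 1.0
    p = math.floor((1 << r) * x)
    x_next = ((1 << r) * x - p) / ((1 << r) + p * x)
    return p, x_next
```

In the real algorithm, $\operatorname{atan}(p/2^r)$ is read from a precomputed table; applying the step twice gives the two-level reduction described above.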
The value of ${\ensuremath{\operatorname{atan}}}(X)$ is approximated using a Taylor series. By counting leading zero bits in $X$, we find the optimal integer $r$ with $r_1 + r_2 \le r \le Bn$ such that $X < 2^{-r}$ (we could take $r = r_1 + r_2$, but choosing $r$ optimally is better when $x$ is tiny). The tail of the Taylor series satisfies $$\left|{\ensuremath{\operatorname{atan}}}(X) - \sum_{k=0}^{N-1} \frac{(-1)^k}{2k+1} X^{2k+1}\right| \le X^{2N+1},$$ and we choose $N$ such that $X^{2N+1} < 2^{-r(2N+1)} \le 2^{-w}$ where $w$ is the working precision in bits.
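As a sketch (Python; `atan_series_terms` is an illustrative helper, not the library's code), the choice of $N$ amounts to a ceiling division:

```python
def atan_series_terms(r, w):
    # Smallest N >= 1 with r*(2N + 1) >= w, i.e. N = ceil((w - r) / (2r)):
    # then the truncated tail satisfies X^(2N+1) < 2^(-r(2N+1)) <= 2^-w.
    return max(1, -(-(w - r) // (2 * r)))    # ceiling division via floor
```

For example, with $w = 128$ bits and $X < 2^{-8}$, eight terms suffice, while a tiny argument with $X < 2^{-200}$ needs only one term.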
Values of ${\ensuremath{\operatorname{atan}}}(p_1 2^{-r_1})$, ${\ensuremath{\operatorname{atan}}}(p_2 2^{-r_1-r_2})$ and $\pi/2$ are finally read from tables with at most 1 ulp error each, and all terms are added. The output error bound $z$ is the sum of the Taylor series truncation error bound and the bounds for all fixed-point rounding errors. It is clear that $z \le 10 \times 2^{-w}$, and the choice of $w$ implies that $y$ is accurate to $p$ bits. The working precision has to be increased for small input, but the algorithm never slows down significantly since very small input requires only a few terms of the Taylor series.
**Input:** $x \not\in \{0, \pm \infty, \text{NaN}\}$ with sign $\sigma$ and exponent $e$ such that $2^{e-1} \le \sigma x < 2^e$, and a precision $p \ge 2$.
**Output:** a pair $(y,z)$ such that ${\ensuremath{\operatorname{atan}}}(x) \in [y-z, y+z]$.
Return $(x, \pm 2^{3e})$ for small $x$ \[${\ensuremath{\operatorname{atan}}}(x) = x + O(x^3)$\].
Return $(\sigma \pi/2, 2^{1-e})$ for large $x$ \[${\ensuremath{\operatorname{atan}}}(x) = \pm \pi/2 + O(1/x)$\].
Return $(\sigma \pi/4, 0)$ if $|x| = 1$.
$w \gets p - \min(0, e) + 4$.
Return an enclosure for ${\ensuremath{\operatorname{atan}}}(x)$ using a fallback algorithm if $w$ exceeds the range covered by the tables.
$n \gets \lceil w / B \rceil$.
$X \gets |x| + \varepsilon$, $Z \gets 1$ (if $|x| < 1$), or $X \gets 1 / |x| + \varepsilon$, $Z \gets 1$ (if $|x| > 1$).
**If** $w \le 512$ **then** $(r_1, r_2) \gets (8, 0)$ **else** $(r_1, r_2) \gets (5, 5)$.
$p_1 \gets \lfloor 2^{r_1} X \rfloor$; $X \gets (2^{r_1} X - p_1)/(2^{r_1} + p_1 X) + \varepsilon$, $Z \gets Z + 1$.
$p_2 \gets \lfloor 2^{r_2} X \rfloor$; $X \gets (2^{r_1+r_2} X - p_2)/(2^{r_1+r_2} + p_2 X) + \varepsilon$, $Z \gets Z + 1$. \[lst:line:lastxred\]
Compute $r_1 + r_2 \le r \le B n$ such that $0 \le X < 2^{-r}$.
$N \gets \lceil (w - r) / (2r) \rceil$.
Either $Y \gets \sum_{k=0}^{N-1} \tfrac{(-1)^k}{2k+1} X^{2k+1} + 3 \varepsilon$, $Z \gets Z + 3$ (direct evaluation), or $Y \gets \sum_{k=0}^{N-1} \tfrac{(-1)^k}{2k+1} X^{2k+1} + 2 \varepsilon$, $Z \gets Z + 2$ (call Algorithm \[alg:atan\]).
$Y \gets Y + ({\ensuremath{\operatorname{atan}}}(p_1 2^{-r_1}) + \varepsilon)$, $Z \gets Z + 1$.
$Y \gets Y + ({\ensuremath{\operatorname{atan}}}(p_2 2^{-r_1-r_2}) + \varepsilon)$, $Z \gets Z + 1$.
$Y \gets (\pi/2 + \varepsilon) - Y$, $Z \gets Z + 1$ (if $|x| > 1$).
Return $(\sigma Y,\, 2^{-r(2N+1)} + Z 2^{-Bn})$.
The code for exp, log, sin and cos implements the respective argument reduction formulas analogously. We do not reproduce the calculations here due to space constraints. The reader may refer to the source code [@fjarbsource] for details.
Our software [@Johansson:2014:ACL:2576802.2576828] chooses guard bits to achieve $p$-bit relative accuracy with at most 1-2 ulp error in general, but does not guarantee correct rounding, and allows the output to have less accuracy in special cases. In particular, sin and cos are computed to an absolute (not relative) tolerance of $2^{-p}$ for large input, and thus lose accuracy near the roots. These are reasonable compromises for variable-precision interval arithmetic, where we only require a correct enclosure of the result and have the option to restart with higher precision if the output is unsatisfactory.
Correct rounding (or any other strict precision policy) can be achieved with Ziv’s strategy: if the output interval $[y-z,y+z]$ does not allow determining the correctly rounded $p$-bit floating-point approximation, the computation is restarted with more guard bits. Instead of starting with, say, 4 guard bits to compensate for internal rounding error in the algorithm, we might start with $4 + 10$ guard bits for a $2^{-10}$ probability of having to restart. On average, this only results in a slight increase in running time, although worst cases necessarily become much slower.
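A minimal sketch of Ziv's loop in Python (double precision standing in for arbitrary precision; `round_to_p_bits` is a toy rounding helper of ours, not part of any library):

```python
import math

def round_to_p_bits(v, p):
    # Toy stand-in for rounding v to p significant bits (round-to-nearest).
    if v == 0.0:
        return 0.0
    m, e = math.frexp(v)                 # v = m * 2^e with 0.5 <= |m| < 1
    return round(m * 2 ** p) * 2.0 ** (e - p)

def ziv_round(f_interval, x, p, guard=14):
    # Ziv's strategy: recompute with more guard bits until the enclosure
    # [y - z, y + z] pins down a unique rounded p-bit result.
    while True:
        y, z = f_interval(x, p + guard)
        lo = round_to_p_bits(y - z, p)
        hi = round_to_p_bits(y + z, p)
        if lo == hi:
            return lo
        guard *= 2                       # rare: restart with more guard bits
```

Here `guard=14` mirrors the $4 + 10$ guard bits of the example in the text.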
Benchmarks {#sect:bench}
==========
Table \[tab:benchmarktime\] shows benchmark results obtained on an Intel i7-2600S CPU running x86\_64 Linux. Our code is built against MPIR 2.6.0. All measurements were obtained by evaluating the function in a loop running for at least 0.1 s and taking the best average time out of three such runs.
The input to each function is a floating-point number close to $\sqrt{2}+1$, which is representative for our implementation since it involves the slowest argument reduction path in all functions for moderate input (for input larger than about $2^{64}$, exp, sin and cos become marginally slower since higher precision has to be used for accurate division by $\log(2)$ or $\pi/4$).
We include timings for the double-precision functions provided by the default libm installed on the same system (EGLIBC 2.15). Table \[tab:benchmarkratio\] shows the speedup compared to MPFR 3.1.2 at each level of precision.
Bits exp sin cos log atan
------ ------- ------- ------- ------- --------
53 0.045 0.056 0.058 0.061 0.072
32 0.26 0.35 0.35 0.21 0.20
53 0.27 0.39 0.38 0.26 0.30
64 0.33 0.47 0.47 0.30 0.34
128 0.48 0.59 0.59 0.42 0.47
256 0.83 1.05 1.08 0.66 0.73
512 2.06 2.88 2.76 1.69 2.20
1024 6.79 7.92 7.84 5.84 6.97
2048 22.70 25.50 25.60 22.80 25.90
4096 82.90 97.00 98.00 99.00 104.00
: Timings of our implementation in microseconds. Top row: time of libm.
\[tab:benchmarktime\]
Bits exp sin cos log atan
------ ----- ----- ----- ------ ------
32 7.9 8.2 3.6 11.8 29.7
53 9.1 8.2 3.9 10.9 25.9
64 7.6 6.9 3.2 9.3 23.7
128 6.9 6.9 3.6 10.4 30.6
256 5.6 5.4 2.9 10.7 31.3
512 3.7 3.2 2.1 6.9 14.5
1024 2.7 2.2 1.8 3.6 8.8
2048 1.9 1.6 1.4 2.0 4.9
4096 1.7 1.5 1.3 1.3 3.1
: Speedup vs MPFR 3.1.2.
\[tab:benchmarkratio\]
exp sin cos log atan
------------- ------ ------ ------ ------- -------
MPFR 5.76 7.29 3.42 8.01 21.30
libquadmath 4.51 4.71 4.57 5.39 4.32
QD (dd) 0.73 0.69 0.69 0.82 1.08
Our work 0.65 0.81 0.79 0.61 0.68
MPFR 7.87 9.23 5.06 12.60 33.00
QD (qd) 6.09 5.77 5.76 20.10 24.90
Our work 1.29 1.49 1.49 1.26 1.23
: Top rows: timings in microseconds for quadruple (113-bit) precision, except QD which gives 106-bit precision. Bottom rows: timings for quad-double (212-bit) precision. Measured on an Intel T4400 CPU.
\[tab:benchmarkquad\]
Table \[tab:benchmarkquad\] provides a comparison at IEEE 754 quadruple (113-bit) precision against MPFR and the libquadmath included with GCC 4.6.4. We include timings for the comparable double-double (“dd”, 106-bit) functions provided by version 2.3.15 of the QD library [@hida2007library]. Table \[tab:benchmarkquad\] also compares performance at quad-double (“qd”, 212-bit) precision against MPFR and QD. The timings in Table \[tab:benchmarkquad\] were obtained on a slower CPU than those in Table \[tab:benchmarktime\], because the GCC version installed on the faster system was too old to include libquadmath.
At low precision, a function evaluation with our implementation takes less than half a microsecond, and we come within an order of magnitude of the default libm at 53-bit precision. Our implementation holds up well around 100-200 bits of precision, even compared to a library specifically designed for this range (QD).
Our implementation is consistently faster than MPFR. The smallest speedup is achieved for the cos function, as the argument reduction without table lookup is relatively efficient and MPFR does not have to evaluate the Taylor series for both sin and cos. The speedup is largest for atan, since MPFR only implements the bit-burst algorithm for this function, which is ideal only for very high precision. Beyond 4096 bits, the asymptotically fast algorithms implemented in MPFR start to become competitive for all functions, making the idea of using larger lookup tables to cover even higher precision somewhat less attractive.
Differences in accuracy should be considered when benchmarking numerical software. The default libm, libquadmath, and QD do not provide error bounds. MPFR provides the strongest guarantees (correct rounding). Our implementation provides rigorous error bounds, but allows the output to be less precise than correctly rounded. The 20% worse speed at 64-bit precision compared to 53-bit precision gives an indication of the overhead that would be introduced by providing correct rounding (at higher precision, this factor would be smaller).
Future improvements
===================
Our work helps reduce the performance gap between double and multiple precision. Nonetheless, our approach is not optimal at precisions as low as 1-2 limbs, where rectangular splitting has no advantage over evaluating minimax polynomials with Horner’s rule, as is generally done in libraries targeting a fixed precision.
At very low precision, GMP functions are likely inferior to inlined double-double and quad-double arithmetic or similar, especially if the floating-point operations are vectorized. Interesting alternatives designed to exploit hardware parallelism include the carry-save library used for double-precision elementary functions with correct rounding in CR-LIBM [@daramy2003cr; @defour2003], the recent SIMD-based multiprecision code [@van2014modular], and implementations targeting GPUs [@thall2006extended]. We encourage further comparison of these options.
Other improvements are possible at higher precision. We do not need to compute every term to a precision of $n$ limbs in Algorithm \[alg:atan\] as the contribution of term $k$ to the final sum is small when $k$ is large. The precision should rather be changed progressively. Moreover, instead of computing an $(n \times n)$-limb fixed-point product by multiplying exactly and throwing away the low $n$ limbs, we could compute an approximation of the high part in about half the time (unfortunately, GMP does not currently provide such a function).
Our implementation of the elementary functions outputs a guaranteed error bound whose proof of correctness depends on a complete error analysis done by hand, aided by some exhaustive computations.
To rule out superficial bugs, we have tested the code by comparing millions of random values against MPFR. We also test the code against itself for millions of random inputs by comparing the output at different levels of precision or at different points connected by a functional equation. Random inputs are generated non-uniformly to increase the chance of hitting corner cases. The functions are also tested indirectly through their use in many higher transcendental functions.
Nevertheless, since testing cannot completely rule out human error, a formally verified implementation would be desirable. We believe that such a proof is feasible. The square root function in GMP is implemented at a similar level of abstraction, and it has been proved correct formally using Coq [@bertot2002proof].
Acknowledgments {#acknowledgments .unnumbered}
===============
This research was partially funded by ERC Starting Grant ANTICS 278537. The author thanks the anonymous referees for valuable feedback.
[^1]: INRIA Bordeaux
[^2]: [email protected]
---
abstract: |
From the results of a comprehensive asteroid population evolution model, we conclude that the YORP-induced rotational fission hypothesis can be consistent with the observed population statistics of small asteroids in the main belt including binaries and contact binaries. The foundation of this model is the asteroid rotation model of @Marzari:2011dx, which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis, described in detail within, and the binary evolution model of @Jacobson:2011eq [@Jacobson:2011hp]. The asteroid population evolution model is highly constrained by these and other previous works, and therefore it has only two significant free parameters: the ratio of low to high mass ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect.
We successfully reproduce characteristic statistics of the small asteroid population: the binary fraction, the fast binary fraction, the steady-state mass ratio fraction and the contact binary fraction. We find that in order for the model to best match observations, rotational fission produces high mass ratio ($> 0.2$) binary components with four to eight times the frequency of low mass ratio ($< 0.2$) components, where the mass ratio is the mass of the secondary component divided by the mass of the primary component. This is consistent with the post-rotational fission binary system mass ratio being drawn from either a flat or a positive and shallow distribution, since the high mass ratio bin is four times the size of the low mass ratio bin; this is in contrast to the observed steady-state binary mass ratio, which has a negative and steep distribution. This can be understood in the context of the BYORP-tidal equilibrium hypothesis, which predicts that low mass ratio binaries survive for a significantly longer period of time than high mass ratio systems. We also find that the mean of the log-normal BYORP coefficient distribution $\mu_B \gtrsim 10^{-2}$, which is consistent with estimates from shape modeling [@McMahon:2012ti].
address:
- 'Department of Astrophysical and Planetary Sciences, University of Colorado, Boulder, CO 80309-0391, USA'
- 'Laboratoire Lagrange, Observatoire de la C[ô]{}te d’Azur, Boulevard de l’Observatoire, 06304 Nice Cedex 4, France'
- 'Bayerisches Geoinstitut, Universität Bayreuth, D-95444 Bayreuth, Germany'
- 'Dipartimento di Fisica, Universit[à]{} di Padova, 35131 Padova, Italy '
- 'IFAC-CNR, 50019 Sesto Fiorentino, Firenze, Italy'
- 'Department of Aerospace and Engineering Sciences, University of Colorado, Boulder, CO 80309-0429 USA'
author:
- 'Seth A. Jacobson'
- Francesco Marzari
- Alessandro Rossi
- 'Daniel J. Scheeres'
bibliography:
- 'biblio.bib'
title: 'Matching asteroid population characteristics with a model constructed from the YORP-induced rotational fission hypothesis'
---
Introduction {#sec:introduction}
============
The YORP-induced rotational fission hypothesis predicts that the Yarkovsky-O’Keefe-Radzievksii-Paddack (YORP) effect can rotationally accelerate rubble pile asteroids until internal stresses within the body due to centrifugal accelerations surpass the gravitational attractions holding the rubble pile elements in their current configurations. Subsequently, according to the hypothesis, these asteroids rotationally fission into mutually orbiting components that can dynamically evolve into the observed binary populations [@Bottke:2006en; @Scheeres:2007io; @Walsh:2008gk; @Jacobson:2011eq]. This hypothesis has been constructed on two pillars: the theoretical conclusion that light imparts a meaningful torque on small asteroids, which has been named the YORP effect [@Rubincam:2000fg], and the observations that the majority of binary asteroid systems have rapidly rotating primaries and small semi-major axes relative to the radius of the primary. This configuration has a high angular momentum content, which is consistent only with formation from rotational fission [@Margot:2002fe].
The hypothesis has also undergone observational and theoretical experiments. Rotational fission predicts a relationship between the angular momentum content of the fissioned asteroid system and the mass ratio between its components [@Scheeres:2007io]. In the asteroid pair population, @Pravec:2010kt discovered that the spin rates of the larger members and the mass ratio of each observed asteroid pair had the predicted relationship. This confirmed that asteroid pairs are the result of rotational fission. @Jacobson:2011eq tested the connection between rotational fission and the observed binary population by numerically modeling the post-rotational fission process. With only the inclusion of gravitational dynamics and mutual body tides, they were able to create the most commonly observed asteroid systems (e.g. asteroid pairs, binaries, contact binaries, etc.). After including the binary YORP (BYORP) effect, all the observed binary systems are hypothesized to be natural final states after these processes (as reviewed in @Jacobson:2014hp).
Often asteroid evolution occurs too quickly (on Solar System timescales) and too infrequently (on human timescales) to be observed [*in situ*]{}. However, as larger telescopes are aimed at smaller asteroid systems, the possibility of capturing rotational fission events as they occur is growing [@Marzari:2011dx] (and the first such systems may have already been observed, e.g. @Jewitt:2010bs and @Jewitt:2014fe). In the meantime, these timescales present a challenge for direct confirmation of rotational fission and subsequent binary evolution, but the proposed asteroid evolution makes specific predictions for the relative abundances of each final state, so a detailed asteroid population evolution model that reproduces the observed sub-populations is a strong consistency test of the YORP-induced rotational fission hypothesis. We present such an asteroid population evolution model that allows us to see if the proposed evolutionary mechanisms are sufficient to create the observed sub-populations and, perhaps more importantly, create them in the right proportions to one another.
The asteroid population evolution model is a development of the model presented in @Marzari:2011dx, which studied the rotational evolution of the Main Belt asteroid (MBA) population including both the YORP effect and collisions. This model was already an improvement and continuation of earlier projects by @Rossi:2009kz and @Scheeres:2004bd, which studied the near-Earth asteroid population. Similar to @Marzari:2011dx, we use a Monte Carlo approach to simulate the evolution of $2 \times 10^6$ asteroid systems for $4.5 \times 10^9$ years. The spin state of each asteroid evolves constantly due to the YORP effect and collisions as in @Marzari:2011dx (summarized in Section \[sec:singleasteroidevolution\]). Similar to @Jacobson:2014bi, when the rotation rate of an asteroid exceeds a specified spin limit, the asteroid rotationally fissions and can form a binary system. The survival and lifetimes of these binary systems are determined from a separate set of calculations based on the results of @Jacobson:2011eq [b].
Both the single and binary evolution schemes are built from well-developed theories in the literature. Therefore, there are very few free parameters built into the model that have not been significantly constrained elsewhere. For instance, the intrinsic probability of collision for Main Belt asteroids $\left< P_i \right> = 2.7 \times 10^{-18}$ yr$^{-1}$ km$^{-2}$, the fundamental parameter determining the frequency of collisions in the model, has been established by the efforts of a series of authors to at least the order of uncertainty inherent in other parts of the asteroid population evolution model [@Farinella:1992im; @BottkeJr:1994kr]. Similarly, the binary evolution model utilizes the evolutionary flowchart and derived probabilities given in @Jacobson:2011eq [b].
The binary evolution model does contain two free input parameters that are not well constrained by either observation or current theory. The first parameter is the initial mass ratio fraction $F_i$, which is the ratio of high mass ratio to low mass ratio binary systems created from rotational fission events. This parameter is determined from the interior structure of the rotationally fissioning asteroid and the mechanics of the fission event itself, neither of which are currently observed or modeled accurately enough to generate this number. The initial mass ratio fraction is distinct from the observed mass ratio fraction $F_q$, which reflects the evolutionary differences between high and low mass ratio systems.
The second parameter is the mean of the logarithmic normal distribution of the BYORP coefficient $\mu_B$. It is used to determine the strength of the BYORP effect, which determines the bound lifetimes for most binary systems. The basic shape and width of the distribution are determined from the equilibrium occupied by the synchronous binary asteroid population. Only a single estimate of a BYORP coefficient has been published, and the shape model used may not have had the necessary accuracy [@McMahon:2010jy]; the effect itself has yet to be measured directly. These two parameters are the knobs that control the output of the asteroid population evolution model.
After evolving the population for the age of the Solar System, which is longer than needed for the sub-populations to reach a relative steady-state equilibrium for most choices of $\mu_B$, we can compare the model to the observed main asteroid belt. There are four particular observables that we can compare with our model: the binary fraction $F_B$, which is the total number of binaries over the total number of asteroid systems; the fast-rotating binary fraction $F_F$, which is a more specific comparison of the number of binaries with rapidly rotating primaries to the number of rapidly rotating asteroids; the steady-state (i.e. observed) mass ratio fraction $F_q$, which is defined similarly to the initial mass ratio fraction $F_i$ above; and the contact binary fraction $F_C$, which is the number of contact binaries over the total number of asteroid systems. From these comparisons, we construct a simple log-likelihood model to assess which model parameters, $F_i$ and $\mu_B$, are the most likely to match the model population to the observations. Lastly, we discuss the best fit models and their implications for future observations and tests.
Single Asteroid Evolution {#sec:singleasteroidevolution}
=========================
Each asteroid within the asteroid population evolution model is individually evolved. Similar to @Marzari:2011dx, the asteroid population evolution model utilizes the intrinsic probability for impact $\left< P_i \right>$ and a projectile size frequency distribution to determine the collision history of each model asteroid. Between collisions, single asteroids undergo rotational evolution driven by the YORP effect, which modifies both the spin rate and obliquity of the asteroid. Rotational acceleration can lead to rotational fission if it occurs before the next collision event. The specific conditions for triggering rotational fission and the process itself are parameterized using well-developed models [@Scheeres:2007io; @Jacobson:2011eq].
Each asteroid system is characterized by a number of fixed and evolving parameters. These parameters change if the system undergoes rotational fission and evolves into a binary asteroid system. All systems are assigned a fixed semi-major axis $a_\odot$ and eccentricity $e_\odot$ from a Main Belt asteroid orbital element distribution. Both the YORP effect and collisions evolve the spin rate $\omega$ and the obliquity $\epsilon$ of each asteroid. The initial spin rate is drawn from a Maxwellian distribution with $\sigma = 1.99$ rev day$^{-1}$, corresponding to a mean period of $7.56$ hr, which is consistent with @Fulchignoni:1995um and @Donnison:1999iv. @Rossi:2009kz demonstrated for models similar to the asteroid population evolution model that the steady-state spin rate distribution is independent of the initial spin rate distribution. We draw the initial obliquity of each asteroid from a flat distribution. The relative change in obliquity is used by the model to update the YORP coefficient; however, the absolute obliquity is not currently used by the model. Thus the rotational evolution output is insensitive to the initial obliquity distribution, but it is a feature of the model that could be utilized in the future to compare input and output obliquity distributions.
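A sketch of this draw in Python (assuming, as the quoted mean period implies, that $\sigma$ is in revolutions per day; `draw_initial_spin` is an illustrative name):

```python
import math
import random

def draw_initial_spin(sigma=1.99):
    # Maxwellian draw: magnitude of a 3-vector of independent N(0, sigma)
    # components, returned in rev/day.
    vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
    return math.sqrt(vx * vx + vy * vy + vz * vz)
```

The Maxwellian mean is $2\sigma\sqrt{2/\pi} \approx 3.18$ rev day$^{-1}$, i.e. a period of about 7.56 hr, matching the quoted value.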
For the purpose of calculating the critical spin limit, each asteroid is assigned a shape from an ellipsoidal semi-axis ratio distribution reported from laboratory experiments by @Giblin:1998io. From largest to smallest the tri-axial semi-axes are $a$, $b$, and $c$. Axis ratios are drawn from normal distributions such that for $b/a$, the mean $\mu = 0.6$ with a standard deviation $\sigma = 0.18$ and for $c/a$, $\mu = 0.4$ and $\sigma = 0.05$. This shape distribution is in agreement with Hayabusa observations of boulders on 243 Itokawa and photometry of small, fast-rotating asteroids [@Michikami:2010cr] and the mean lightcurve amplitude of small asteroids with diameters between 0.2 and 10 km [@Pravec:2000dr]. The added realism of using a shape distribution rather than assuming sphericity results in a reduced critical spin limit.
The most important parameter for determining the evolution of an individual asteroid system is its mean radius $R$. Both the collisional and rotational evolution depend strongly on the size of the asteroid, thus the two effects are not of comparable strengths at all radii. We expect radiative torques to be inconsequential for large asteroids but dominant at smaller sizes, transitioning at some critical radius $R_c$. This critical radius is estimated to be $R_c \approx 6$ km from both analytical arguments and numerical experiments [@Jacobson:2014bi]. In Section \[sec:rotationalspinlimits\], we discuss a transition between “monolithic” and “rubble pile” interior structures that is inferred to occur at a radius of $R \approx 125$ m. Therefore the asteroid population evolution model focuses on asteroids with radii between $R = 200$ m, just above this transition, and $20$ km, since asteroids with radii $R \gtrsim 20$ km are collision dominated and have YORP effect timescales on the order of the age of the Solar System or longer [@Jacobson:2014bi]. Within this range, the asteroid population evolution model includes a sample of 2 million asteroids which are drawn from the size frequency distribution derived from the results of the Sloan Digital Sky Survey [@Ivezic:2001ct]. The range of asteroids included in the asteroid population evolution model is different from the range of possible projectile asteroids used to model collisions. These projectiles range in radius from $0.05$ m to $20$ km.
Asteroid system destruction, whether through a catastrophic collision, rotational bursting, or destruction of a binary, is a mass transfer from one size asteroid (the progenitor in the case of a binary) into two or more smaller bodies. Each asteroid in the asteroid population evolution model resides in a logarithmic diameter bin and the model tracks this mass flow from larger bins into smaller bins after each destructive event. Diameter bins are created so that the upper diameter of a bin is $D_i = D_m D_w ^i$, where $D_m$ is the minimum diameter and $D_w = 1.25992$ is the bin width. After a destructive event the asteroid within the asteroid population evolution model is replaced with another asteroid from the original diameter bin. This replacement is motivated by the constant flux of material into the original bin from even larger bins, and in this way, the asteroid population evolution model maintains a steady-state size frequency distribution. Therefore it does not feature a full feedback size-frequency distribution. The output of the asteroid population evolution model includes destruction statistics that we published in @Jacobson:2014bi to generate a new size frequency distribution. The asteroid population evolution model is utilized here to test the YORP-induced rotational fission hypothesis, so rather than focus on changes to the size-frequency distribution, we focus on the abundances of distinguishable sub-populations taking into account both collisional and rotational evolution.
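The binning rule can be sketched as follows (Python; the minimum diameter $D_m = 0.4$ km is only an illustrative choice matching the model's 200 m minimum radius, and the function name is ours):

```python
import math

def diameter_bin(D, D_min=0.4, D_w=1.25992):
    # Bin i has upper edge D_min * D_w**i; return the smallest i whose
    # upper edge reaches D (diameters below D_min fall in bin 0).
    # Note D_w = 1.25992 ~ 2^(1/3): successive bin edges double in volume,
    # and hence in mass at fixed density.
    return max(0, math.ceil(math.log(D / D_min) / math.log(D_w)))
```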
Collisional Evolution {#sec:collisionalevolution}
---------------------
The collisional evolution of each asteroid follows a similar protocol as @Marzari:2011dx. The population of potential impactors is derived from the Sloan Digital Sky Survey size frequency distribution of asteroids [@Ivezic:2001ct] distributed over logarithmic size bins from $0.05$ m to $20$ km. Using Poisson statistics, the number of collisions and their timing is computed for each asteroid with projectiles from each size bin using the intrinsic probability of collision for the Main Belt $\left< P_i \right> = 2.7 \times 10^{-18}$ km$^{-2}$ yr$^{-1}$ [@Farinella:1992im]. Each collision is assigned an impact velocity of $5.5$ km s$^{-1}$ [@BottkeJr:1994kr] and a random geometry within the limits of the Main Belt orbital distribution, in order to determine from these parameters the change in spin rate due to each collision.
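A sketch of this sampling for a single projectile size bin (Python; the rate expression $\left< P_i \right>(R + r)^2 N_p$ follows the usual intrinsic-probability convention, and all numbers in the example are purely illustrative):

```python
import random

def collision_times(n_projectiles, r_target_km, r_proj_km,
                    P_i=2.7e-18, t_max=4.5e9):
    # Expected collision rate (yr^-1) with one projectile size bin:
    # <P_i> (km^-2 yr^-1) times the combined radius squared times the
    # number of projectiles in the bin.
    rate = P_i * (r_target_km + r_proj_km) ** 2 * n_projectiles
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)    # Poisson process: exponential gaps
        if t > t_max:
            return times
        times.append(t)
```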
Using this method, we have created a list of collisions and their properties for each asteroid in our simulated population. Between each collision in the list, each asteroid rotationally evolves according to the YORP effect. At the time of a collision the rotational evolution is stopped and the collision is evaluated. First, the collision is classified as either a cratering or a catastrophic collision depending on the energy of the event.
If the collision is too energetic to be a cratering event, then the original asteroid is shattered and a new object is created with the same size but a new initial spin state and YORP coefficient. Shattering collisions are defined as those that deliver specific kinetic energy greater than the critical specific energy of the target, which is defined as the energy per unit target mass delivered by the collision required for catastrophic disruption (i.e. such that one-half the mass of the target body escapes).
Cratering collisions do not appreciably change the mass or size of the target asteroid, but they can change the angular momentum of the asteroid. The angular momentum of the projectile, the target and the geometry of the collision determine the new angular momentum of the cratered asteroid. This new angular momentum vector is used to update both the spin rate and the obliquity. The model neglects the angular momentum removed by fragments. This assumption is acceptable for the frequent low energy impacts but introduces a small error for high energy impacts that do not catastrophically disrupt the asteroid, which are infrequent. Sub-catastrophic impacts create a random walk in spin rate if there is no significant YORP effect rotational acceleration.
YORP Evolution {#sec:yorpevolution}
--------------
The YORP effect changes the spin rate $\dot{\omega}$ as [@Scheeres:2007kv]: $$\dot{\omega} = \frac{Y}{2 \pi \rho R^2} \left( \frac{F_\odot }{a_\odot^2 \sqrt{1 - e_\odot^2}} \right)$$ where $F_\odot = 10^{14}$ kg km s$^{-2}$ is the solar radiation constant and $Y$ is a non-dimensional YORP coefficient assigned to each object from a Gaussian distribution with a mean of $0$ and a standard deviation of $0.0125$, which was found to successfully reproduce the spin rate distributions of both the near-Earth and main belt asteroid populations [@Rossi:2009kz; @Marzari:2011dx]. In @Rossi:2009kz, the results were found to be insensitive, at the level of the model uncertainty, to the particular distribution used. This distribution is consistent with the measured values for 1862 Apollo (1932 HA), $Y = 0.022$ [@Kaasalainen:2007hq], and 54509 YORP (2005 PH$_5$), $Y = 0.005$ [@Taylor:2007kp; @Lowry:2007by]. The model does not distinguish the tangential YORP effect [@Golubov:2012kt; @Golubov:2014hf], which may both bias the sense of rotation (towards prograde) and increase the acceleration when the asteroid is rotating slowly. The first effect cannot be captured in the model since it does not track the sense of rotation, but the second effect is already empirically included, since the utilized YORP coefficient distribution successfully reproduces the asteroid spin rate distribution.
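The spin-up timescale implied by this expression can be checked with a short script; the bulk density of $2.5$ g cm$^{-3}$ is an assumption for illustration, and units follow the convention above ($F_\odot$ in kg km s$^{-2}$, lengths in km):

```python
import math

F_SUN = 1.0e14            # solar radiation constant [kg km s^-2]
AU_KM = 1.495978707e8     # km per AU
RHO = 2.5e12              # bulk density [kg km^-3] (2.5 g cm^-3, an assumption)

def yorp_spin_accel(Y, R_km, a_au, e):
    """Spin-rate acceleration d(omega)/dt [rad s^-2] from the equation above."""
    a_km = a_au * AU_KM
    return Y / (2.0 * math.pi * RHO * R_km ** 2) * (
        F_SUN / (a_km ** 2 * math.sqrt(1.0 - e ** 2)))

# Time for a 1 km radius asteroid at 2.5 AU (Y = 0.0125) to reach the
# 2.3 h critical spin period starting from rest:
omega_crit = 2.0 * math.pi / (2.3 * 3600.0)      # rad s^-1
wdot = yorp_spin_accel(0.0125, 1.0, 2.5, 0.05)
t_yorp_myr = omega_crit / wdot / 3.156e13        # s -> Myr
```

For these assumed parameters the spin-up time comes out at a few tens of Myr, of the order usually quoted for km-scale main belt asteroids.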
Using semi-major axis drift as a proxy for obliquity evolution, @Bottke:2015jg hypothesized that obliquity must be preserved through multiple rotational fission events or that the rotational fission timescale must be suppressed, which they accomplished via a stochastic YORP effect. This model effectively includes the effect of stochastic YORP on rotation rate evolution, since the YORP coefficient is re-drawn after each rotational fission event, after significant collisions, and whenever the obliquity changes by more than $0.2$ rad due to either collisions or YORP evolution itself. For smaller changes in the obliquity, the YORP coefficient evolves according to $Y' = Y \left( 3 \cos^2 \epsilon - 1 \right) / 2$, as in @Nesvorny:2008by. The model only tracks this relative obliquity evolution due to the YORP effect and is not a full obliquity evolution model. How the obliquity evolves after a rotational fission occurs is not clear, and the role of binary formation and evolution on obliquity has not been fully explored.
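A compact helper capturing this rule might look as follows; the redraw threshold and Gaussian width follow the values quoted above, while combining the redraw and the $Y'$ update in one function is a simplification of the model's bookkeeping:

```python
import math
import random

OBLIQUITY_REDRAW = 0.2     # rad; larger changes trigger a fresh draw
YORP_SIGMA = 0.0125        # standard deviation of the YORP coefficient draw

def update_yorp_coefficient(Y, d_obliquity, obliquity, rng=random):
    """Evolve Y for a small obliquity change, redraw it for a large one.

    Redrawing after large obliquity changes (and, in the full model, after
    fission events and significant collisions) is what makes the YORP
    treatment effectively stochastic.
    """
    if abs(d_obliquity) > OBLIQUITY_REDRAW:
        return rng.gauss(0.0, YORP_SIGMA)                  # fresh draw
    return Y * (3.0 * math.cos(obliquity) ** 2 - 1.0) / 2.0
```

Note that at zero obliquity the factor $(3\cos^2\epsilon - 1)/2$ equals 1 and $Y$ is unchanged, while near $\epsilon = \pi/2$ the factor is $-1/2$ and the torque reverses sign.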
If the YORP coefficient $Y > 0$, then the spin rate is accelerating and, if uninterrupted by collisions, will eventually reach the spin limit. If the YORP coefficient $Y < 0$, then the spin rate is decelerating. Eventually, if uninterrupted by collisions, the angular momentum of the asteroid becomes so low that even the smallest projectiles can deliver impulsive torques of the same order of magnitude as the angular momentum of the target body. Since this model cannot assess the evolution of this state, i.e. we only model the effects of projectiles $0.05$ m and larger, an artificial lower spin barrier at a period of 10$^5$ hr is enforced, and at this very slow rotation rate the YORP torque switches direction. This assumption could underestimate the YORP evolution timescale, but by less than a couple of thousand years even for the largest asteroids in the model.
Spin Limits {#sec:rotationalspinlimits}
-----------
Almost all asteroids larger than approximately 200 m in diameter obey a critical disruption spin limit of about 2.3 hours [@Pravec:2000dr]. Below this size, there are a couple of hypotheses for why the barrier can be broken, including enhanced strength due to cohesive forces [@Scheeres:2012tj; @Holsapple:2007eg] and the possibility that these super-critical asteroids are the monolithic remnants of rubble pile progenitors that have undergone multiple YORP-induced rotational fissions [@Pravec:2007ki]. To limit the complexity of the model, we only consider asteroids with radii $R \geq 0.2$ km.
The critical disruption spin limit is a direct consequence of the YORP-induced rotational fission hypothesis. As an asteroid is rotationally accelerated due to either a continuous YORP torque or a sudden collisional torque, the centrifugal accelerations increase on each component of a rubble pile asteroid. These accelerations counter the gravitational accelerations holding the body together. @Scheeres:2009dx showed that for every partitioning of the body in two along rubble pile component boundaries, there is a specific rotation rate at which the centrifugal accelerations will exceed the mutual gravity and the two sections will no longer rest against each other but enter into orbit. As the body rotationally accelerates it will reach the slowest of these rotation rates first, and it will be along this partitioning that the body rotationally fissions. The smaller of the two sections is now the secondary, and the remainder is the primary, both in orbit about each other. This simple story of rotational fission is complicated, but also reaffirmed, when the asteroid’s shape is allowed to evolve. Some numerical models predict surface shedding, implying a very low initial mass ratio fraction [@Walsh:2008gk; @Hirabayashi:2015jd], while others predict internal failure, consistent with a high initial mass ratio fraction [@Sanchez:2012hz]. Because of this uncertainty, the initial mass ratio fraction is a fundamental parameter of the asteroid population evolution model. Once the initial mass ratio of a particular asteroid has been chosen, the model utilizes the simple approximation that all rubble piles rotationally disrupt at the critical disruption spin limit modified to account for the ellipsoidal shape of the asteroid.
We also consider collision-induced rotational fission, which requires that the combined angular momentum from both precursor bodies and the cratering impact geometry exceeds the critical angular momentum necessary for the body to gravitationally hold itself together against centrifugal accelerations. This is similar to the YORP-induced rotational fission hypothesis described above with three exceptions. Firstly, the collision may significantly change the internal component distribution itself. Secondly, the torque is delivered impulsively. These first two differences are not significant since we are not modeling the internal component distribution nor are we resolving the rotational fission event itself. Thirdly, the new system angular momentum may exceed the critical angular momentum by a measurable amount. Even though an asteroid that undergoes collision-induced rotational fission may be rotationally accelerated past the critical disruption rotation rate, for the purposes of the asteroid population evolution model these events will be treated the same as the YORP-induced rotational fission, which occurs at the critical disruption rotation rate. Consequences of ignoring the excess include overestimating the binary creation rate at the expense of the asteroid pair creation rate.
Outcomes of Rotational Fission {#sec:outcomesofrotationalfission}
------------------------------
![Evolutionary tracks for a small asteroid after it has undergone rotational fission according to the theory in @Jacobson:2011eq and @Jacobson:2011hp. Each evolutionary step is indicated by an arrow. Most of this diagram is a cycle, since the end states are single asteroids: re-shaped asteroids, contact binaries or each member of asteroid pairs. Collisions can destroy synchronous binaries in equilibrium.[]{data-label="fig:AsteroidFlowChart"}](Figure1.pdf){width="\columnwidth"}
If the critical spin rate is reached, then the asteroid population evolution model simulates a rotational fission event for that asteroid. This can happen when a collision brings the asteroid above the rotational breakup limit or when the rotational breakup period is reached due to YORP acceleration. @Pravec:2010kt observationally showed that these types of events are the progenitors of the observed asteroid pair population. @Jacobson:2011eq numerically showed that rotationally fissioned asteroid systems can evolve into a number of different outcomes, as shown in Figure \[fig:AsteroidFlowChart\], but the chaotic nature of the system allows for only a probabilistic determination of the outcome. A binary system formed via rotational fission can temporarily occupy a number of evolutionary morphologies before settling into three enduring states: single, binary and pair. None of these categories are truly permanent since single asteroids can undergo rotational fission forming binaries and pairs, binaries can be disrupted forming pairs or collide to make re-shaped asteroids (i.e. singles), and asteroid pairs, which are really single asteroids, can be rotationally fissioned.
The mass ratio, which is the mass of the secondary divided by the mass of the primary, determines the energy available to the post-fission binary system [@Scheeres:2009dc]. If the mass ratio $q > 0.2$, then the system has a negative free energy and so is bound. These binaries cannot form asteroid pairs without an external force or torque such as the YORP effect. In contrast, systems with mass ratios $q < 0.2$ are unbound systems with positive free energy, and so can immediately disrupt to form asteroid pairs [@Pravec:2010kt]. Because of this fundamental difference, high mass ratio ($q > 0.2$) and low mass ratio ($q < 0.2$) binary systems evolve differently within the model.
@Jacobson:2011eq determined that the mass ratio is not necessarily a fixed quantity and may change via a process termed secondary fission. During secondary fission, the secondary undergoes rotational fission similar to that which formed the binary system in the first place, with the exception that the rotational torque is provided by spin-orbit coupling rather than the YORP effect. This process has been observed numerically only in low mass ratio systems, and since it reduces the mass ratio, no binary system can evolve across the $q \sim 0.2$ threshold between high and low mass ratio systems.
Mass Ratio Fraction {#sec:massratiofraction}
-------------------
Before describing the possible outcomes and their likelihoods for both high and low mass ratio systems, the relative number of high to low mass ratio systems must be determined. The initial mass ratio of a binary system after rotational fission is determined by the internal component (i.e. rubble pile element) distribution of the parent asteroid before rotational fission [@Scheeres:2007io], so it is the distribution of internal structures amongst an ensemble of asteroids that will determine the initial distribution of binary mass ratios. The direct determination of the distribution of mass ratios after rotational fission would perhaps require the gentle and complete disassembly of a number of asteroids into their component pieces, measuring their masses, shapes and relative locations. However, an approximate understanding of this distribution may soon be available via detailed numerical modeling using discrete element methods [@Walsh:2008gk; @Walsh:2012jt; @Sanchez:2011kw; @Sanchez:2012hz].
![Two histograms of the same observed binary distribution as a function of mass ratio. The solid histogram shows the number of binaries in bins of width $0.1$ in mass ratio. The dashed histogram simply outlines the number of binaries in the low mass ratio ($0 < q < 0.2$) and the high mass ratio ($0.2 < q < 1$) populations, of which there are $127$ and $16$ observed binary systems, respectively. The observed binaries are the 143 characterized binaries with small primary diameters $\lesssim15$ km according to the September 18, 2015 binary asteroid parameter release from http://www.asu.cas.cz/~asteroid/binastdata.htm as compiled by methods and assumptions described in @Pravec:2006bc and updated in @Pravec:2015uc.[]{data-label="fig:BinaryMassRatioHist"}](Figure2.pdf){width="\columnwidth"}
Until then, we can constrain the initial mass ratio fraction $F_i$ that is input in the asteroid population evolution model by comparing the observed steady-state mass ratio fraction to the steady-state fraction output by the model $F_q$. The steady-state distribution reflects a balance between creation and destruction of binary systems as a function of mass ratio. The mass ratio fraction $F$ is defined as the number of high mass ratio systems divided by the number of low mass ratio systems. The mass ratio fraction is a function of time as high and low mass ratio systems are created and destroyed. The initial mass ratio fraction $F_i$ reflects the distribution of possible internal component distributions of parent asteroids. This initial distribution then evolves into the observed steady-state mass ratio fraction $F_q$ due to the differences between binary creation and destruction timescales in high and low mass ratio systems. The initial mass ratio fraction $F_i$ is an input into the asteroid population evolution model, and the steady-state mass ratio fraction $F_q$ is one of the observable outputs.
This evolution in mass ratio fraction is due only to the creation and destruction of specific binary systems and not due to the possible evolution in mass ratio of those systems, since high mass ratio systems were not observed in numerical models to transform into low mass ratio binaries and vice versa [@Jacobson:2011eq]. As discussed above, binary systems cannot cross the mass ratio $q \sim 0.2$ boundary between the two regimes via secondary fission.
Thus, the simplest approximation within each mass ratio regime is to assume that the members are selected from a flat distribution. As is shown in Figure \[fig:BinaryMassRatioHist\], this description is imperfect but is an appropriate assumption, since the asteroid population evolution model is only being used to determine the steady-state mass ratio fraction $F_q$ and not the detailed steady-state mass ratio distribution. In the future, a treatment that includes a more advanced binary evolution model with a more detailed dependence on mass ratio will also need to explore more complex initial mass ratio distributions.
The range of initial mass ratio fractions $F_i$ to be tested in the asteroid population evolution model is motivated by the observed population as shown in Figure \[fig:BinaryMassRatioHist\]. The observed steady-state mass ratio fraction is $F_q \sim 0.2$, but low mass ratio systems face much steeper odds of surviving as binary systems ($8\%$), as discussed in Section \[sec:instantaneousbinaryevolution\]. To examine a broad range of initial conditions and their outcomes, $F_i$ is varied over $32$, $16$, $8$, $4$, $2$, and $1$. Every time a binary system is created via rotational fission in the asteroid population evolution model, the binary is assigned to either the low or high mass ratio regime, such that a fraction $\left( 1 + F_i \right)^{-1}$ of systems are low mass ratio and $1 - \left( 1 + F_i \right)^{-1}$ are high mass ratio. This is the first knob in the model as described in Section \[sec:introduction\]; the other knob is the BYORP coefficient distribution.
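The regime assignment follows directly from this probability; a one-line sketch (function and variable names are illustrative, not from the authors' code) is:

```python
import random

def assign_mass_ratio_regime(F_i, rng=random):
    """Return 'low' with probability 1/(1 + F_i), else 'high'.

    F_i is the initial mass ratio fraction (number of high mass ratio
    systems divided by number of low mass ratio systems), so the low
    mass ratio share of newly formed binaries is 1/(1 + F_i).
    """
    return 'low' if rng.random() < 1.0 / (1.0 + F_i) else 'high'
```

For example, with $F_i = 4$, one in five newly formed binaries is assigned to the low mass ratio regime.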
Binary Asteroid Evolution {#sec:binaryasteroidevolution}
=========================
After a rotational fission event, a binary system is formed that undergoes complex dynamics immediately after formation [@Jacobson:2011eq]. If they stabilize, then non-gravitational and tidal torques control the fate of the system [e.g. @vanFlandern:1979tf; @Cuk:2005hb]. Because this evolution is complex and following it for each binary would be computationally expensive, the asteroid population evolution model does not evolve binary systems individually. Instead, a lifetime for each system is drawn from a distribution, which has been determined from a separate Monte Carlo model of binary asteroid evolution as described later in this section. After formation each binary system is placed randomly in a mass ratio bin according to the probabilities established by the initial mass ratio fraction $F_i$: low ($q < 0.2$) or high ($q > 0.2$). These mass ratio bins determine the “instantaneous” survival of the binary system. If the binary survives, then the binary’s “long-term” evolutionary path is drawn, which is also dependent on the assigned mass ratio bin. Each evolutionary path is associated with a binary lifetime distribution. The drawn lifetime is then scaled by the heliocentric orbit of the system and the absolute size of the system (radius of the primary). The heliocentric semi-major axis and eccentricity remain the same as the rotationally fissioned progenitor.
Each binary system has four permanent parameters: the heliocentric semi-major axis and eccentricity, the mass ratio and the binary lifetime. The evolved parameter is not the spin rate as in the single asteroid case, but rather the age of the binary. The final outcome of the evolutionary path is also recorded, so that when the binary lifetime is over, the system is replaced with a new asteroid the same size as the progenitor but labeled as either an asteroid pair or a re-shaped asteroid. This evolution may be interrupted by a collision, and this is discussed in Section \[sec:binariesandcollision\].
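One way to sketch this bookkeeping is a small record type holding the four permanent parameters, the single evolved parameter (age), and the recorded final outcome; all names here are illustrative, not taken from the authors' code:

```python
from dataclasses import dataclass

@dataclass
class Binary:
    # Permanent parameters, fixed at formation:
    a_helio_au: float      # heliocentric semi-major axis [AU]
    e_helio: float         # heliocentric eccentricity
    mass_ratio: float      # q = M_secondary / M_primary
    lifetime_yr: float     # drawn from the evolutionary-path distribution
    outcome: str           # 'pair' or 're-shaped' recorded at creation
    # Evolved parameter:
    age_yr: float = 0.0

    def expired(self) -> bool:
        """True once the drawn lifetime has elapsed (absent a collision)."""
        return self.age_yr >= self.lifetime_yr
```

When `expired()` becomes true, the system is replaced by a single asteroid of the progenitor's size, labeled according to `outcome`.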
The evolution of a binary asteroid system from rotational fission to a long-term stable outcome is deterministic, but the evolution is chaotic and only weakly a function of the shape of each body and the mass ratio within each of two distinct dynamical regimes: low and high mass ratio [@Jacobson:2011eq]. The initial evolution of the spin and orbit states of the system is controlled by dynamical coupling between the spin and orbit through non-Keplerian gravity terms and solar gravitational perturbations. This dynamical evolution is quick, often finishing in tens of years [@Jacobson:2011eq]. Due to its chaotic and swift nature, this evolution occurs “instantaneously” and probabilistically within the model. If the rotational fission event results in the creation of a re-shaped asteroid or an asteroid pair, then these objects are returned to the asteroid population evolution model as single asteroids sharing the same heliocentric orbit properties as their progenitors. If the system settles into a stable (i.e. long-lasting) binary state, then the binary evolves according to “long-term” binary evolution.
According to the binary evolution model described in @Jacobson:2011hp, the longevity of a binary system is primarily determined by the strength of the BYORP effect [@Cuk:2005hb; @McMahon:2010jy]. The BYORP effect may permanently stabilize some binaries in a tidal-BYORP equilibrium and expand the orbits of others. Low mass ratio binaries that evolve into a tidal-BYORP equilibrium exist until a collision occurs that is capable of disrupting the mutual orbit or catastrophically destroys one of the binary members. For other stable binary systems, after creation each is assigned a lifetime that is drawn from a distribution determined by Monte Carlo modeling of binary asteroids as explained in Section \[sec:binarylifetimedistributions\]. During this evolution, binary destruction via collision is possible as discussed in Section \[sec:binariesandcollision\]. At the end of a binary system’s lifetime, the binary disrupts forming a re-shaped asteroid, if the BYORP effect is contractive, or an asteroid pair, if the BYORP effect is expansive. Here, we assert that the BYORP effect can expand the mutual orbit to the Hill sphere creating an as-yet-unobserved population of asteroid pairs [@Jacobson:2015wu]. However, it is possible that solar perturbations or libration growth due to the adiabatic invariant relationship between libration and mean motion de-synchronize the synchronous binary member, which is undergoing the BYORP effect [@Jacobson:2014hp]. In the case that both binary members become asynchronous at a wide semi-major axis, the binary mutual orbit can no longer significantly evolve due to tides and the BYORP effect and the secondary is unlikely to be re-captured into synchronicity. In the model, we treat this scenario identically to the formation of an asteroid pair, since the primary spin state evolves according to the YORP effect with negligible influence from the secondary because of the wide orbit.
“Instantaneous” Binary Evolution {#sec:instantaneousbinaryevolution}
--------------------------------
Each system that rotationally fissioned undergoes binary evolution. Within the Monte Carlo asteroid evolution program, there are two stages for binary evolution: “instantaneous” and “long-term.” This distinction is made between processes that occur immediately after rotational fission and last less than $10^5$ years, and those that take significantly more than $10^5$ years. This timescale was chosen since it is a tenth of the YORP timescale for an asteroid with 200 m radius at 2.5 AU, and so it is effectively the time resolution of the code. “Instantaneous” evolution is described below and “long-term” evolution in Section \[sec:longtermbinaryevolution\].
An example of an instantaneous process is tidal synchronization, which has been estimated from first principles to take between 10$^3$ and 10$^5$ years for representative binaries [@Goldreich:2009ii]. While the YORP effect can delay tidal synchronization [@Jacobson:2014jw], tides typically dominate the spin evolution for newly created binary systems with semi-major axes less than 16 primary radii [see Figure 1 of @Jacobson:2014hp], which is the maximum distance obtained by simulated post-fission binaries [@Jacobson:2011eq]. Due to spin-orbit coupling, the timescale for tidal synchronization may be lengthened since spin locking cannot occur above a specific eccentricity given the shape of the secondary [@Naidu:2015gp]. Understanding the details of tidal evolution is an ongoing focus of research; for instance, if the singly synchronous binary asteroids occupy a tidal-BYORP equilibrium [@Jacobson:2011hp] as 1996 FG$_3$ does [@Scheirich:2015ez], then tidal timescales are much shorter than those estimated purely from theory [@Fang:2012fw]. Furthermore, the first-order classical constant tidal parameter ratio $k/Q$ theories are likely not correct and, for instance, they may not depend on the mechanical rigidity as assumed by many [@Goldreich:2009ii; @Taylor:2011bj] but instead on an effective viscosity [@Efroimsky:2015ia] or surface properties including surface motion and potential lofting [@Fahnestock:2009en; @Harris:2009ea]. In the asteroid population evolution model, consistently mis-estimating the length of “instantaneous” processes is effectively a bias on the determined mean of the log-normal BYORP coefficient distribution $\mu_B$.
During “instantaneous” evolution, the mass ratio of the newly formed binary systems is chosen randomly according to the initial mass ratio fraction $F_i$ distribution. If the mass ratio of a system is chosen to be high, then that system evolves along the high mass ratio evolutionary track as shown along the top branch of Figure \[fig:AsteroidFlowChart\]. Mutual body tides lead to synchronization of the spins to the orbit period and circularization of the orbit. Tidal synchronization of each component of a high mass ratio binary occurs simultaneously, since they are of nearly equal size. For “rubble pile” tidal parameters, these systems typically synchronize in less than $10^5$ years [@Goldreich:2009ii; @Jacobson:2011hp], and so this process is considered an “instantaneous” process in the asteroid population evolution model. This assumption may be violated for high mass ratio systems larger than $5$ km with mass ratios $0.2 < q \lesssim 0.3$, which may take more than a million years to synchronize [@Jacobson:2011eq]. Since high mass ratio systems have negative free energy, none of these systems can disrupt endogenously and all systems emerge as doubly synchronous binaries. Once synchronous, the BYORP effect will expand or contract the mutual orbit. Since this process can last for many millions of years, further evolution of high mass ratio binary systems is a long-term evolutionary process.
If the mass ratio of a system is determined to be low, then that system evolves along the low mass ratio evolutionary track as shown along the bottom branch of Figure \[fig:AsteroidFlowChart\]. In @Jacobson:2011eq, this track is shown to immediately branch into four possible states; however, all modeled chaotic ternary systems formed via secondary fission return to the chaotic binary state via escape of a member or an impact between two of the members, so this branch is not shown in Figure \[fig:AsteroidFlowChart\]. Escape from low mass ratio systems is possible because they have positive free energy [@Scheeres:2009dc], and @Jacobson:2011eq found numerically that $\sim 67\%$ of low mass ratio binaries do disrupt and form asteroid pairs, as observed by @Pravec:2010kt. Furthermore, @Jacobson:2011eq found that collisions between the two members occur in another $\sim 25\%$ of these systems, forming re-shaped asteroids, and that only $\sim8\%$ of low mass ratio binaries survive for more than $10^3$ years.
Typically, the secondary of these binaries synchronizes due to mutual body tidal dissipation in less than $10^5$ years [@Goldreich:2009ii; @Jacobson:2011hp], and so these binaries become singly synchronous systems within the “instantaneous” period of the asteroid population evolution model. The model stochastically assigns an outcome to each rotationally fissioned low mass ratio system according to the probabilities reported above creating members of asteroid pairs, re-shaped asteroids, and singly synchronous binary systems. Further evolution of singly synchronous binary systems due to the BYORP effect and tides is a long-term evolutionary process since the relevant timescales typically exceed a million years.
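The stochastic outcome assignment for low mass ratio systems can be sketched with the branching probabilities quoted above; the outcome labels are illustrative:

```python
import random

# Branching probabilities for post-fission low mass ratio systems,
# following the numerical results of Jacobson & Scheeres quoted above.
LOW_Q_OUTCOMES = [('asteroid pair', 0.67),
                  ('re-shaped asteroid', 0.25),
                  ('singly synchronous binary', 0.08)]

def low_q_outcome(rng=random):
    """Draw one 'instantaneous' outcome for a low mass ratio system."""
    x = rng.random()
    cumulative = 0.0
    for outcome, p in LOW_Q_OUTCOMES:
        cumulative += p
        if x < cumulative:
            return outcome
    return LOW_Q_OUTCOMES[-1][0]   # guard against floating-point round-off
```

Only the $\sim8\%$ of systems drawn as singly synchronous binaries proceed to the "long-term" evolution stage.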
All resultant asteroid systems from both mass ratio regimes are propagated forward using the asteroid population evolution model with all of the asteroids that did not undergo rotational fission. Members of asteroid pairs and re-shaped asteroids are subject to the YORP effect and collisions exactly as single asteroid systems that did not undergo rotational fission. They are assigned new rotation rates from the original rotation rate distribution. These systems are now single asteroid systems, having completed one rotational fission life cycle. They can eventually rotationally fission again if they are accelerated to the appropriate rotational break-up speed for their size regime.
“Long-term” Binary Evolution {#sec:longtermbinaryevolution}
----------------------------
Binary systems that have survived “instantaneous” evolution are treated differently than single systems in the asteroid population evolution model. These systems are still subject to collisions as discussed in Section \[sec:binariesandcollision\], and they are also subject to the YORP effect, but not in the same way as single asteroids, since the internal (i.e. spin and orbit state) evolution of binary systems is complicated by their mutual non-Keplerian gravity fields and mutual body tides. Torques within binary systems such as the YORP effect and tides are generally much smaller than the BYORP effect [@Jacobson:2014hp], with the exception of those binaries that enter the tidal-BYORP equilibrium [@Jacobson:2011hp], and so despite this complexity of multiple operating torques, the lifetime of a binary can be estimated solely from the BYORP effect evolution of the system.
The BYORP effect is an averaged torque on the orbit of synchronous satellites due to asymmetric emitted thermal radiation [@Cuk:2005hb; @McMahon:2010by]. The effect acts independently on each body, so that if both bodies are synchronous as in doubly synchronous binaries, then there is a BYORP torque on each, but for singly synchronous systems, the BYORP effect only acts on the synchronous secondary. The direction of the BYORP torques is the fundamental parameter for determining the final evolutionary state of the system [@Cuk:2007gr; @Jacobson:2011eq]. The BYORP effect eventually destroys all doubly synchronous and half of all singly synchronous binary systems, as shown in Figure \[fig:AsteroidFlowChart\]. The only exception to BYORP destruction are the singly synchronous systems which occupy an equilibrium between tides and the BYORP effect and are predicted to survive indefinitely unless there is exogenous interference such as a collision [@Jacobson:2011hp].
| $q$  | Direction | Aligned | Likelihood given $q$ | $\mu_\tau$ ($\mu_B = -1$) | $\mu_\tau$ ($\mu_B = -2$) | $\mu_\tau$ ($\mu_B = -3$) | $\mu_\tau$ ($\mu_B = -4$) | $\mu_\tau$ ($\mu_B = -5$) | $\mu_\tau$ ($\mu_B = -6$) | $\sigma_\tau$ |
|------|-----------|---------|----------------------|------|------|------|------|------|------|------|
| Low  | Out | -   | 0.5  | 4.88 | 5.88 | 6.88 | 7.88 | 8.88 | 9.88 | 0.71 |
| Low  | In  | -   | 0.5  | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | - |
| High | Out | No  | 0.25 | 4.95 | 5.95 | 6.95 | 7.95 | 8.95 | 9.95 | 0.76 |
| High | Out | Yes | 0.25 | 4.61 | 5.61 | 6.61 | 7.61 | 8.61 | 9.61 | 0.55 |
| High | In  | No  | 0.25 | 4.42 | 5.42 | 6.42 | 7.42 | 8.42 | 9.42 | 0.75 |
| High | In  | Yes | 0.25 | 4.09 | 5.09 | 6.09 | 7.09 | 8.09 | 9.09 | 0.55 |

  : Lifetime distribution parameters $\mu_\tau$ and $\sigma_\tau$ for each binary evolutionary track, for BYORP coefficient distribution means $\mu_B = -1$ through $-6$.[]{data-label="tab:binarylifetimes"}
The asteroid population evolution model does not calculate the specific mutual orbit evolution of each binary system due to computational constraints. Instead, each binary is assigned an evolutionary path determined by the system mass ratio and direction of the BYORP torque(s) in the system. There are six distinct evolutionary paths as shown in Table \[tab:binarylifetimes\]: low mass ratio stable equilibrium with tides (contractive BYORP), low mass ratio expansive, high mass ratio expansive anti-aligned, high mass ratio expansive aligned, high mass ratio contractive anti-aligned, and high mass ratio contractive aligned. Within each mass ratio regime, there is an equal likelihood to follow a specific track since there is nominally the same chance for a positive as negative BYORP coefficient and the BYORP coefficient of each body is independent of the other [@Cuk:2005hb; @McMahon:2010jy]. For instance, $25\%$ of high mass ratio systems evolve along the expansive track with aligned BYORP coefficients, since there is a $50\%$ chance that the primary will have a positive BYORP coefficient and a $50\%$ chance that the secondary will also have a positive BYORP coefficient. Once the evolutionary track has been established for a binary system, it continues down that track for the rest of its lifetime.
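A sketch of the track assignment, mirroring the likelihoods in Table \[tab:binarylifetimes\], flips an independent fair coin for the sign of each synchronous member's BYORP coefficient; names and return values are illustrative:

```python
import random

def draw_byorp_track(q, rng=random):
    """Draw an evolutionary track for a stable post-fission binary.

    Each synchronous member's BYORP coefficient is positive (expansive)
    or negative (contractive) with equal probability. In low mass ratio
    (singly synchronous) systems only the secondary is torqued, so the
    'Aligned' entry is undefined (None).
    """
    direction = 'out' if rng.random() < 0.5 else 'in'
    if q < 0.2:
        # 'in' leads to the tidal-BYORP equilibrium; 'out' to a pair.
        return ('low', direction, None)
    aligned = rng.random() < 0.5          # do the two signs agree?
    return ('high', direction, 'yes' if aligned else 'no')
```

Each of the four high mass ratio tracks is drawn $25\%$ of the time, and each low mass ratio track $50\%$ of the time, matching the likelihood column of the table.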
The lifetime of a binary system is determined principally by the BYORP effect. After synchronization of both members, tides may damp eccentricity from the system but do not strongly evolve the semi-major axis. If only the secondary is synchronized, then tides are still important for contractive systems (i.e. the tidal-BYORP equilibrium) and while tides assist BYORP in expanding systems, tides are a strong function of semi-major axis and soon become much weaker than the BYORP effect. There are also possible interruptions by exogenous processes (e.g. collisions, see Section \[sec:binariesandcollision\]). The rate of expansion or contraction is determined primarily by the heliocentric orbit, absolute size of the system, and the BYORP coefficient. @McMahon:2010jy showed that to first order in eccentricity, the semi-major axis $a$ measured in primary radii $R_p$ evolves as: $$\dot{a} = \frac{3 B_c}{2 \pi \omega_d \rho } \left( \frac{ a^{3/2} \sqrt{1+q}}{R_p^2 q} \right) \left( \frac{(2/3) F_\odot}{a_\odot^2 \sqrt{1 - e_\odot^2}} \right)$$ where $B_c = B_p + B_s q^{2/3}$ is the combined BYORP coefficient. The mass ratio $q^{2/3}$ factor is a direct result of the BYORP effect evolutionary equations [@McMahon:2010jy]. For doubly synchronous systems, there is a BYORP coefficient for the primary $B_p$ and the secondary $B_s$, but for singly synchronous systems, there is only a BYORP torque on the secondary so the BYORP coefficient for the primary $B_p = 0$. The BYORP coefficient is scaleless and depends solely on the shape of the synchronous member.
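As a consistency check on this expression, integrating $\dot{a} \propto a^{3/2}$ at fixed $B_c$ gives an expansion time dominated by the initial separation, $t \approx 2 / (k \sqrt{a_0})$ with $k = \dot{a} / a^{3/2}$. A short script with rough 66391 (1999 KW$_4$)-like parameters (all numbers below are assumptions for illustration, not fitted values) recovers a timescale of the same order as the $\sim5.4 \times 10^{4}$ yr Hill-expansion estimate of @McMahon:2010jy:

```python
import math

F_SUN = 1.0e14            # solar radiation constant [kg km s^-2]
AU_KM = 1.495978707e8     # km per AU

def byorp_drift(a_rp, q, B_c, R_p_km, rho, omega_d, a_helio_au, e_helio):
    """da/dt with a in primary radii and t in seconds, per the equation above."""
    a_km = a_helio_au * AU_KM
    return (3.0 * B_c / (2.0 * math.pi * omega_d * rho)
            * a_rp ** 1.5 * math.sqrt(1.0 + q) / (R_p_km ** 2 * q)
            * (2.0 / 3.0) * F_SUN / (a_km ** 2 * math.sqrt(1.0 - e_helio ** 2)))

# Rough 1999 KW4-like numbers (assumed, for illustration):
adot = byorp_drift(a_rp=3.9, q=0.054, B_c=2e-2 * 0.054 ** (2.0 / 3.0),
                   R_p_km=0.66, rho=2.0e12,
                   omega_d=2.0 * math.pi / (2.3 * 3600.0),
                   a_helio_au=0.642, e_helio=0.688)

# Since adot ~ a^(3/2), the expansion time is dominated by the initial
# separation: t ~ 2 / (k * sqrt(a0)), with k = adot / a^(3/2).
k = adot / 3.9 ** 1.5
t_expand_yr = 2.0 / (k * math.sqrt(3.9)) / 3.156e7   # s -> yr
```

With these assumed inputs the expansion time comes out of order $10^5$ yr, agreeing in order of magnitude with the published estimate.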
### BYORP coefficient distributions
![A probability density histogram of $\upsilon$ of the observed singly synchronous population (bins are of width $0.5$). The dashed line is the probability density function of a central normal distribution fit to the data where $\sigma_\upsilon = 0.68$. Data is from @Jacobson:2011hp.[]{data-label="fig:BYORPDistribution"}](Figure3.pdf){width="\columnwidth"}
BYORP coefficients are determined solely by the shape of the asteroid, but determining the appropriate distribution of plausible BYORP coefficients is challenging. The effect is closely related to the detected Yarkovsky and YORP effects [@Chesley:2003bk; @Taylor:2007kp; @Lowry:2007by], so the BYORP effect rests on strong theoretical support despite the lack of a direct observation of BYORP-driven evolution. Because the BYORP effect has never been directly measured, a BYORP coefficient distribution cannot be derived from observation. A detection may be precluded by the BYORP-tidal equilibrium hypothesis [@Jacobson:2011hp] and the possibly rapid destruction of doubly synchronous binary systems [@Cuk:2007gr]. Furthermore, there are very few well-resolved asteroid shapes, particularly of binary asteroid members. In the only published BYORP prediction to date, @McMahon:2010jy estimated that $B_s = 2 \times 10^{-2}$ for the secondary of the 66391 (1999 KW$_4$) system using a vertex-and-facet shape model from @Ostro:2006dq. This shape model is an order $8$ spherical harmonic representation with an average $26$ m facet edge length (corresponding to $7^\circ$ angular resolution). Using this BYORP coefficient and the observed parameters of 66391, @McMahon:2010jy determined a Hill radius expansion timescale of $\sim5.4 \times 10^{4}$ years. This expansion is very rapid compared to the typical $\sim10^6$ year YORP timescales of possible progenitors, assuming formation from YORP-induced rotational fission [@Rubincam:2000fg; @Vokrouhlicky:2002cq; @Capek:2004bl]. Nominally, half of all synchronous binary asteroids are expected to expand due to the BYORP effect, and 66391 may be a member of this population, but given the difference between those two timescales it is very unlikely that such a system would be observed as a binary rather than as an asteroid pair.
This estimated BYORP coefficient also contradicts the BYORP-tidal equilibrium hypothesis of @Jacobson:2011hp, which states that the observable singly synchronous binary asteroids occupy an equilibrium between a contractive BYORP torque and the expansive mutual body tidal torque; this hypothesis requires a negative BYORP coefficient. Further study by @McMahon:2012ty [pers. comm.], using results scaled from an analysis of 25143 Itokawa (1998 SF$_{36}$), concluded that the shape of 66391 would need to be known to a mean facet edge length of $8$ m (an angular resolution of $2.2^\circ$) in order to model the BYORP coefficient accurately enough to rule out significant changes, including sign changes. For the related YORP effect, @Statler:2009fw concluded that spherical harmonic fits of order $\leq 10$ produce expected errors of order $100\%$, and that for errors under $10\%$ the harmonic order of the fit must be at least $20$. Furthermore, @Statler:2009fw showed that a crater half the object’s radius can produce errors of several tens of percent; the observations of the secondary of 66391 did not uniformly cover the surface, a significant portion of the southern hemisphere is systematically less accurate than the $7^\circ$ angular resolution of the rest of the model, and features such as craters may have gone unobserved [@Ostro:2006dq]. Alarmingly, @Rozitis:2012fq conclude that the related YORP effect is very sensitive to surface roughness due to thermal-infrared beaming, and that accurate YORP (and perhaps BYORP) coefficient estimation from shape models may require $1$ cm resolution.
@Pravec:2010tc determined that the direct detection of the BYORP effect and measurement of the BYORP coefficient would require multi-decade observations of small (semi-major axes of $<10$ primary radii and secondary radii $<1$ km) binaries. Furthermore, this analysis did not include mutual body tides, which @Jacobson:2011hp predicted would create a stable equilibrium and halt mutual orbit evolution. @Scheirich:2015ez conclude that this is true at least for 175706 (1996 FG$_3$). Only the less numerous doubly synchronous systems lack mutual body tides capable of creating the stable equilibrium. 69230 Hermes (1937 UB) is the smallest doubly synchronous system in both absolute size and heliocentric orbit, and is therefore the likeliest system for a direct detection of BYORP-driven orbit evolution.
While the hypothesized BYORP-tidal equilibrium prevents the direct measurement of the BYORP coefficients of singly synchronous binaries, it may be used to determine the relative distribution of BYORP coefficients. @Jacobson:2011hp showed that for each system the balance between the BYORP and tidal torques determines, degenerately, the value of the product of the BYORP coefficient $B$ and the tidal parameter $Q/k_p$ (the tidal quality number divided by the tidal Love number of the primary): $$\frac{BQ}{k_p} = \frac{2 \pi \omega_d^2 \rho R_p^2 q^{4/3} }{F_\odot a^7} a_\odot^2 \sqrt{1-e_\odot^2} = 2557 R_p\text{ km}^{-1}$$ where the last equality is the fit to the singly synchronous binary data.
Since the BYORP coefficient is not a function of radius $R_p$, dividing the data by a $Q/k_p = 2557 R_p$ km$^{-1}$ model yields values whose scatter reflects the distribution of BYORP coefficients $B$. While this trick does not determine the absolute magnitude of the BYORP coefficient, it does provide information about the dispersion of the BYORP coefficient distribution. Figure \[fig:BYORPDistribution\] shows each system’s normalized log BYORP coefficient $\upsilon$, fit by a simple normal distribution with mean $\mu_\upsilon = 0$ and standard deviation $\sigma_\upsilon = 0.68$. The observed distribution has a slight negative skew and a positive kurtosis compared to the normal distribution. While the normalization of the singly synchronous data removed information about the absolute values of the BYORP coefficients, the standard deviation of the absolute coefficients is the same as that of the normalized coefficients, so $\sigma_B = \sigma_\upsilon = 0.68$, where $\sigma_B$ is the standard deviation of $y$ and the absolute BYORP coefficients are $B = 10^y$.
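The normalization-and-fit step can be sketched as follows. The synthetic data in the test stand in for the Pravec et al. system list, which we do not reproduce here; the fit is a simple maximum-likelihood normal fit.

```python
import math

def normalized_log_coefficients(R_p_km, BQ_over_kp):
    """Divide each system's degenerate B*Q/k_p value by the fitted
    Q/k_p = 2557 * R_p (km^-1) model and take log10; the residuals
    trace the relative distribution of BYORP coefficients."""
    return [math.log10(v / (2557.0 * r)) for r, v in zip(R_p_km, BQ_over_kp)]

def fit_normal(samples):
    """Maximum-likelihood mean and standard deviation of a normal fit."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((s - mu) ** 2 for s in samples) / n)
    return mu, sigma
```

Feeding in synthetic systems built with $y \sim \mathcal{N}(0, 0.68)$ recovers $\sigma_\upsilon \approx 0.68$, mirroring the fit shown in Figure \[fig:BYORPDistribution\].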
The mean $\mu_B$ of the distribution of $y$ is difficult to determine. The absolute magnitude of the BYORP coefficient estimated by @McMahon:2010jy suggests a value for the mean of the distribution near $\mu_B = - 2$. This value is appropriate for the radar shape model of the secondary of 66391 rotated $180^\circ$ about either the radial axis or the body axis orthogonal to the along-track direction; however, as discussed above, this estimate may not be accurate due to deficiencies of the shape model.
Since we cannot otherwise constrain the BYORP coefficient distribution, six different distributions are tested in the asteroid population evolution model: $\mu_B = -1$, $-2$, $-3$, $-4$, $-5$, and $-6$. This is the second free parameter in the model; the other is the initial mass ratio fraction described in Section \[sec:massratiofraction\]. These BYORP coefficient distributions are used to generate the binary lifetime distributions that are then assigned to each binary system in the asteroid population evolution model. Each BYORP coefficient distribution is tested independently, and the entire asteroid population is evolved with draws from the chosen distribution for the entirety of the run.
### Binary lifetime distributions {#sec:binarylifetimedistributions}
The BYORP lifetime $\tau$ is determined by the evolution of the mutual orbit from a tidally synchronized semi-major axis to single member end states either re-shaped asteroids (e.g. contact binaries) or asteroid pairs. This evolution can be described as the evolution from an interior semi-major axis $a_\text{interior}$ to an exterior semi-major axis $a_\text{exterior}$ or vice versa: $$\begin{aligned}
\tau =& 10^x R_p^2 a_\odot^2 \sqrt{1 - e_\odot^2} \\
x = & \log_{10} \left[\frac{4 \pi \omega_d \rho q}{3 F_\odot B_c \sqrt{1 + q}} \left( \frac{1}{a_{interior}^{1/2}} - \frac{1}{a_{exterior}^{1/2}} \right) \right]
\label{eqn:lifetime}\end{aligned}$$ where $F_\odot = 4.5 \times 10^{-5}$ g cm$^{-1}$ s$^{-2}$ is the solar constant at a $1$ AU circular orbit. The BYORP lifetime $\tau$ is thus determined by the primary radius $R_p$, the heliocentric semi-major axis $a_\odot$ and eccentricity $e_\odot$, and $x$, where $x$ is the logarithm of all the other system parameter dependencies. Rather than generating the necessary parameters to determine $x$ for each system within the asteroid population evolution model, a million systems were generated outside the model for each evolutionary path, and the distribution of $x$ was determined. Log-normal distributions were fit to these generated distributions of $x$, with means $\mu_\tau$ and standard deviations $\sigma_\tau$. Each distribution depends on the BYORP coefficients of the synchronous members and on the particular evolutionary track. For each of the million systems, the BYORP coefficients are drawn from the distribution with the prescribed $\mu_B$ for that run. Distributions of $x$ are shown in Figure 7; if $R_p$ is in km and $a_\odot$ is in AU, then $\tau$ is in years.
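Equation \[eqn:lifetime\] can be evaluated directly. The sketch below computes $x$ in CGS units and then shifts it by the factor that converts the $R_p^2$ prefactor from cm$^2$ to km$^2$ and seconds to years; the unit bookkeeping and the choice $\omega_d = \sqrt{4\pi G\rho/3}$ are our assumptions, not taken from the text.

```python
import math

G = 6.674e-8       # cm^3 g^-1 s^-2
F_SUN = 4.5e-5     # g cm^-1 s^-2 at 1 AU
# cm^2 * s -> km^2 * yr bookkeeping for the prefactor of tau:
UNIT_SHIFT = (1.0e5) ** 2 / 3.156e7

def lifetime_exponent(q, B_c, a_interior, a_exterior, rho=2.0):
    """x such that tau = 10**x * R_p**2 * a_sun**2 * sqrt(1 - e_sun**2)
    is in years, with R_p in km and a_sun in AU."""
    omega_d = math.sqrt(4.0 * math.pi * G * rho / 3.0)
    x_cgs = math.log10(4.0 * math.pi * omega_d * rho * q
                       / (3.0 * F_SUN * abs(B_c) * math.sqrt(1.0 + q))
                       * (1.0 / math.sqrt(a_interior)
                          - 1.0 / math.sqrt(a_exterior)))
    return x_cgs + math.log10(UNIT_SHIFT)

def byorp_lifetime_yr(R_p_km, a_sun_au, e_sun, q, B_c, a_interior, a_exterior):
    """BYORP lifetime in years for the given track endpoints."""
    x = lifetime_exponent(q, B_c, a_interior, a_exterior)
    return 10.0 ** x * R_p_km ** 2 * a_sun_au ** 2 * math.sqrt(1.0 - e_sun ** 2)
```

With rough 1999 KW$_4$-like inputs the lifetime comes out at a few $\times 10^4$ years, the same order as the expansion timescale quoted earlier, and it lengthens as the BYORP coefficient shrinks or the heliocentric orbit widens.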
Each evolutionary pathway is defined by the sign of the BYORP coefficient for each synchronous member and the mass ratio of the system. As mentioned earlier, the only evolutionary track that does not self-destruct is the BYORP contracting singly synchronous track. These systems may contract or expand to some degree in semi-major axis, but the BYORP-tidal equilibrium hypothesis predicts that these systems reach a stable semi-major axis. The interior $a_\text{interior}$ and exterior $a_\text{exterior}$ semi-major axes are given below for each of the evolutionary tracks in Table \[tab:binarylifetimes\].
For high mass ratio doubly synchronous systems, the initial semi-major axis is always the tidally synchronized semi-major axis with angular momentum equivalent to that of the rotationally fissioned system at the time of fission. Tidal dissipation removes energy from the system, but angular momentum is conserved until the system is synchronized and the BYORP effect evolves the system. This semi-major axis can be either the interior or the exterior semi-major axis depending on the sign of the BYORP coefficient. The initial semi-major axis for doubly synchronous systems $a_\text{d}$ is derived in \[ref:derivationofinitialsemimajoraxesinbinaryevolution\]. It is well approximated by a power law series expansion as a sole function of mass ratio $q$, measured in primary radii $R_p$: $$a_\text{d} = 0.344 + \frac{0.00406}{q^3} + \frac{0.0132}{q^2} + \frac{0.815}{q} + 1.23 q$$ For contracting high mass ratio systems, the interior semi-major axis $a_\text{interior}$ is contact between the two bodies: $$a_\text{c} = 1 + q^{1/3}$$ For both singly and doubly synchronous expanding systems, the exterior semi-major axis $a_\text{exterior}$ is the Hill radius $a_\text{Hill}$, which can be approximated in primary radii $R_p$ as: $$a_\text{Hill} = q_\odot \left( \frac{4 \pi \rho}{9 M_\odot} \right)^{1/3}$$ where $\rho = 2$ g cm$^{-3}$ is the density of the primary, $M_\odot = 1.99 \times 10^{33}$ g is the mass of the Sun, and $q_\odot$ is the heliocentric perihelion of the barycenter of the system. Asteroids at the outer edge of the Main Belt in circular orbits ($q_\odot = 3.28$ AU) have the largest Hill radii, $a_\text{Hill} = 549$ primary radii, and those at the inner edge in highly eccentric orbits with perihelia just exterior to the Earth ($q_\odot = 1$ AU) have the smallest, $a_\text{Hill} = 168$ primary radii; both are very large compared to the interior semi-major axes $a_\text{interior}$.
Since the BYORP lifetime is proportional to the difference between the inverse square roots of the interior and exterior semi-major axes, this factor of three difference in exterior semi-major axis translates into at most a $10\%$ difference in BYORP lifetime if one extreme were chosen rather than the other. To simplify the calculations, we use a single perihelion $q_\odot = 2.25$ AU, very close to the mean and median of the Main Belt asteroid distribution; this corresponds to a Hill radius $a_\text{Hill} = 377$ primary radii.
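The characteristic semi-major axes above are cheap to evaluate. The sketch below uses the density and solar mass quoted in the text, with the contact separation of two spheres, $(R_p + R_s)/R_p = 1 + q^{1/3}$, taken from the appendix derivation:

```python
import math

RHO = 2.0            # primary density, g cm^-3
M_SUN = 1.99e33      # solar mass, g
AU_CM = 1.496e13     # 1 AU in cm

def a_doubly(q):
    """Initial (tidally synchronized) semi-major axis of a doubly
    synchronous system, in primary radii."""
    return 0.344 + 0.00406 / q ** 3 + 0.0132 / q ** 2 + 0.815 / q + 1.23 * q

def a_contact(q):
    """Contact separation of two spheres, (R_p + R_s)/R_p = 1 + q**(1/3)."""
    return 1.0 + q ** (1.0 / 3.0)

def a_hill(q_sun_au):
    """Hill radius in primary radii for a system with barycentric
    perihelion q_sun (AU)."""
    return q_sun_au * AU_CM * (4.0 * math.pi * RHO / (9.0 * M_SUN)) ** (1.0 / 3.0)
```

This reproduces the quoted extremes: $a_\text{Hill} \approx 549$ primary radii at $q_\odot = 3.28$ AU, $\approx 168$ at $1$ AU, and $\approx 377$ at the adopted $2.25$ AU.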
Binaries and Collisions {#sec:binariesandcollision}
-----------------------
If a binary participates in a catastrophic shattering collision, then the binary is always destroyed. This is determined by the same condition as for a single asteroid: a comparison of the imparted specific kinetic energy and the critical impact specific energy. Unlike single asteroids, however, binary systems can also be destroyed by cratering collisions. While these collisions by definition deliver less energy than the critical impact energy, they can deliver enough energy to disrupt the binary. A simple condition for this disruption is a comparison of the change in momentum per unit mass delivered to the system ($\Delta V$) with the escape velocity from the primary: if the former exceeds the latter, then the system disrupts.
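The disruption condition for cratering impacts can be sketched as below. Setting the momentum transfer efficiency to unity (no ejecta momentum enhancement) is our simplification.

```python
import math

G = 6.674e-8  # cm^3 g^-1 s^-2

def cratering_disrupts_binary(m_impactor_g, v_impact_cms, R_p_cm, rho=2.0):
    """True if the delta-V delivered by a (sub-catastrophic) impactor
    exceeds the escape velocity from the primary."""
    M_p = 4.0 / 3.0 * math.pi * rho * R_p_cm ** 3
    delta_v = m_impactor_g * v_impact_cms / M_p   # momentum conservation, beta = 1
    v_esc = math.sqrt(2.0 * G * M_p / R_p_cm)
    return delta_v > v_esc
```

For a 1 km diameter primary ($v_\text{esc} \approx 0.5$ m s$^{-1}$), an impactor a few tens of meters across arriving at 5 km s$^{-1}$ is already enough to disperse the binary even though it falls far short of shattering the primary.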
Contact Binaries
----------------
Contact binaries are formed from the merging of BYORP-contracting high mass ratio binary systems. These systems persist until either they undergo a rotational fission event or are subject to a catastrophic collision. This is probably too optimistic a scenario, since the binary system crosses an instability before contact [@Scheeres:2009dc]. This instability causes the two components to begin to circulate and the orbit to evolve, but in simulations these systems still collide, and do so gently [@Jacobson:2011eq]. These gentle collisions may be enough to reshape the new combined mass into a non-bifurcated shape that would not be easily identifiable as a contact binary. The subjectivity of the contact binary label adds some uncertainty to the population statistics.
Results of the asteroid population evolution model {#sec:resultsoftheasteroidpopulationevolutionmodel}
==================================================
The asteroid population evolution model produces a spin period distribution as a function of diameter similar to the observed population. This is not surprising, since the spin limit constraints were designed to reproduce the observed population and the model has been used successfully for this purpose in the past [@Marzari:2011dx]. The model has two input parameters, the initial mass ratio fraction $F_i$ and the mean of the log-normal distribution of BYORP coefficients $\mu_B$, and these inputs were permuted so that each combination produced a full set of model outputs. We discuss each observable quantity output from the model and how that observable depends on the model free parameters $F_i$ and $\mu_B$. Combining all of the observables, we assemble a log-likelihood metric that determines the best fit parameters. Since the computational cost of running the asteroid population evolution model is high and we utilize a population of $2 \times 10^6$ asteroids, there is only small variance when a particular set of input parameters is run a second time. We use a Monte Carlo method to propagate the observed uncertainties to the comparison tests. From the model, we identify a region where the free parameters are well fit to the data; this is discussed in detail in Section \[sec:discussionoftheasteroidpopulationevolutionmodel\].
Steady-State Binary Fraction {#ref:steadystatebinaryfraction}
----------------------------
{width="8.9cm"}
The asteroid population evolution model traces the evolution of a population with diameters from $200$ m to $20$ km. However, observations typically do not reach such small sizes. To replicate them, we consider only asteroids with diameters in this range, whose lower limit corresponds to an absolute magnitude $H \sim 21$ for typical asteroid albedos.
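The diameter cut maps to absolute magnitude through the standard conversion $D = (1329/\sqrt{p_V})\,10^{-H/5}$ km; the sketch below adopts the $p = 0.18$ albedo used elsewhere in the text.

```python
import math

def diameter_km(H, albedo=0.18):
    """Standard absolute-magnitude-to-diameter conversion (km)."""
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-H / 5.0)
```

For $H = 21$ this gives roughly $0.2$ km, consistent with the $200$ m lower limit of the modeled population.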
In Figure \[fig:BinaryFractionPlot\], the steady-state binary fraction is shown as a function of both the initial mass ratio fraction and the log-normal BYORP coefficient distribution mean from the asteroid population evolution model. The difference between the asteroid population evolution model and the observations is shown as a heat map behind the model fractions (white indicates a close match).
Radar and photometric lightcurve observations supply independent and robust statistics on the near-Earth asteroid (NEA) binary fraction, which we use as a proxy for the small Main Belt asteroid population (we discuss possible differences below). Using radar observations, @Margot:2002fe reported that about $16\%$ of radar-observed NEAs larger than $200$ m are binary systems. Updated radar statistics agree well with the better determined value of about $17\%$: $31$ binary systems out of $180$ asteroid systems with absolute magnitudes $H < 21$, corresponding to approximate diameters $D \gtrsim 250$ m for a $p = 0.18$ albedo asteroid [@Taylor:2012vp]. Photometric lightcurve analyses report a binary detection rate of $15 \pm 4\%$ for NEAs with diameters $D \gtrsim 300$ m and inferred mass ratios $q > 0.006$ [@Pravec:2006bc]. This agrees with an initial assessment by @Pravec:1999wt that $17\%$ of near-Earth asteroid systems are binary. The near-Earth asteroid population is significantly easier to observe than similar sized Main Belt asteroids, but for the sizes observed ($D \lesssim 10$ km) rotational fission is expected to be the dominant formation mechanism. For small diameter MBA systems ($D \lesssim 10$ km), @Pravec:2006vc determine that there is a similar binary fraction in the inner Main Belt, and this is supported by the results of the Binary Asteroid Photometric survey [@Pravec:2006bc; @Pravec:2012fa]. Tidal disruption of binary asteroids in the near-Earth asteroid population may lower the binary fraction in that population relative to the Main Belt [@Fang:2012go].
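The quoted uncertainties follow from simple Poisson ($\sqrt{N}$) counting statistics on the binary count, e.g.:

```python
import math

def binary_fraction(n_binary, n_total):
    """Observed fraction with a Poisson (sqrt-N) uncertainty on the
    binary count."""
    return n_binary / n_total, math.sqrt(n_binary) / n_total

# Radar sample of Taylor et al. (2012): 31 binaries out of 180 systems,
# giving roughly 17% with a ~3% counting uncertainty.
f_radar, sigma_radar = binary_fraction(31, 180)
```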
Long binary lifetimes (small BYORP coefficients) naturally correspond to a high binary fraction. A low initial mass ratio fraction also yields a higher binary fraction, due to the more likely creation of long-lasting synchronous binary systems. If we combine the photometric [@Pravec:2006bc] and radar [@Taylor:2012vp] survey results and assume Poisson statistics for calculating the uncertainty, then the observed steady-state binary fraction is $16 \pm 6\%$. The best parameter fits occur when the log-normal BYORP coefficient distribution mean is large, either $10^{-1}$ or $10^{-2}$, and the initial mass ratio fraction is greater than 8.
However, comparing the Main Belt asteroid binary fraction to the near-Earth asteroid binary fraction may be misleading. If an asteroid’s average distance from the Sun increases, then the rotational fission timescale increases as well, since the YORP timescale increases. Since the rate of fission decreases, the creation of binary asteroids slows, all other factors held constant. When considering the steady-state population, though, we must consider destruction as well as binary formation. If BYORP-driven evolution is the dominant destructive route, then it scales with heliocentric distance in exactly the same way as the YORP effect. This is the primary reason why it may be acceptable to use the near-Earth asteroid binary fraction as a proxy for the Main Belt binary fraction. Furthermore, the YORP and BYORP timescales increase roughly by a factor of 10 from the near-Earth to the Main Belt asteroid population, and this is the same factor by which the non-BYORP destructive timescales increase: about 10 Myr for dynamical scattering into the Sun for near-Earth asteroids and about 100 Myr for collisional disruption for Main Belt asteroids. From these considerations, we conclude that using the near-Earth asteroid binary fraction as a proxy is acceptable.
Fast Binary Fraction {#ref:fastbinaryfraction}
--------------------
{width="8.9cm"}
@Pravec:2006bc made a specific subpopulation observation: amongst fast-rotating binaries (spin periods between $2.2$ and $2.8$ hours) with diameters larger than $0.3$ km, the binary fraction is $66 \pm 12\%$. The asteroid population evolution model tracks the spin rate of single asteroids, but since it does not evolve the system parameters of binaries, we rely on the binary evolution model result that all low mass ratio systems, and no high mass ratio systems, have rapidly rotating primaries [@Jacobson:2011eq].
The fast rotating binary fraction as a function of the free parameters is shown in Figure \[fig:LargeFastBinaryFractionPlot\]. As with the overall binary fraction, a large initial mass ratio fraction produces a small fast rotating binary fraction. Unlike the overall binary fraction, the fast rotating binary fraction does not depend significantly on binary lifetimes, since only low mass ratio systems have rapidly rotating primaries. There is a band around an initial mass ratio fraction of $8$ that produces the smallest difference between the model and observation; however, this constraint is softer than the overall binary fraction since the nearby bins have similar values.
Steady-State Mass Ratio Fraction {#ref:steadystatemassratiofraction}
--------------------------------
{width="8.9cm"}
The steady-state mass ratio fraction is the evolved initial mass ratio fraction, where the mass ratio fraction is the number of high mass ratio binaries divided by the number of low mass ratio binaries. It is shown as a function of the free parameters in Figure \[fig:MassRatioFractionPlot\]. Increasing the initial mass ratio fraction does increase the steady-state mass ratio fraction, but that increase is mitigated when high mass ratio systems do not survive as long as low mass ratio systems. Also, as the log-normal BYORP coefficient distribution mean decreases and binary lifetimes increase, the steady-state mass ratio fraction increases, since the high mass ratio binaries live longer relative to the low mass ratio synchronous systems, which are in a long-term equilibrium.
The binary asteroid catalogue provided by Pravec et al. provides the best statistics regarding the steady-state mass ratio fraction. This ratio is shown in Figure \[fig:BinaryMassRatioHist\] and is $0.11 \pm 0.08$ using Poisson statistics [@Pravec:2015uc]. In Figure \[fig:MassRatioFractionPlot\], the absolute difference between the model and the observation is shown as shading. The best fits form a diagonal band from long binary lifetimes and small initial mass ratio fractions to short binary lifetimes and high initial mass ratio fractions; this is a sensible trade-off in parameters to arrive at similar values for the steady-state mass ratio fraction.
Contact Binary Fraction
-----------------------
{width="8.9cm"}
In Figure \[fig:ContactFractionPlot\], we show the model contact binary fraction as a function of the free parameters. Contact binaries are formed from the destruction of inward evolving high mass ratio binaries, so when high mass ratio binaries are created often (large initial mass ratio fraction) and when they are destroyed frequently (large log-normal BYORP coefficient distribution mean), the contact binary fraction is high.
Only radar imaging can conclusively determine whether a system is a contact binary, but even then the identification is often subjective. @Taylor:2012vp provide the most recent estimate of $15 \pm 7\%$ using Poisson statistics. This number is perhaps more likely to be an underestimate relative to the asteroid population evolution model definition of a contact binary, because contact binary formation involves the low velocity collision of two asteroids, and the collision geometry and internal structure may dictate whether a collapsing high mass ratio system is observable as a contact binary. In Figure \[fig:ContactFractionPlot\], the absolute difference between the model and observations is shown. If the model is over-counting contact binaries because it always creates them at the end of the collapsing high mass ratio evolutionary track, then the band of best fits would contract somewhat toward the upper right-hand corner and come into better agreement with the initial mass ratio fractions that the other observable constraints impose.
Best Fit Parameters
-------------------
{width="8.9cm"}
We can combine these observables into a single log-likelihood estimator for determining the best fit free parameters. The log-likelihood metric we use is a summation over the difference between the model output fraction $F_j$ for each observable $j$ and an observed fraction $F_{obs}$, which is drawn from a normal distribution with mean $\mu_j$ and standard deviation $\sigma_j$ in accordance with the values in the previous sections:
$$\mathcal{L} = A \sum_j \frac{ \left( F_j - F_{obs} \right)^2}{2 \sigma_j^2}$$
A normalization $A$ is applied to make the best fit model have a value of 1. The larger the normalized log-likelihood, the less likely that set of parameters is. Using Monte Carlo techniques, the uncertainty of the log-likelihood estimator can be determined. It is important to note that due to computational constraints the simulations are single runs, so there is unaccounted-for uncertainty; however, a few cases were run more than once and were consistent, with only small changes to the reported values. The log-likelihood metric is shown in Figure \[fig:LikelihoodThingPlot\].
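The metric and its normalization can be sketched as follows. The observed means and standard deviations in the test are the four values quoted in the preceding subsections; treating the smallest grid value as the normalization assumes no parameter set matches the observations exactly.

```python
def log_likelihood_metric(model_fracs, obs_fracs, obs_sigmas):
    """Chi-square-style metric summed over the observables; larger
    values mean a worse fit."""
    return sum((f - fo) ** 2 / (2.0 * s ** 2)
               for f, fo, s in zip(model_fracs, obs_fracs, obs_sigmas))

def normalize_grid(raw_metrics):
    """Scale so the best fit parameter set has a value of 1
    (assumes no parameter set matches the observed draws exactly)."""
    best = min(raw_metrics.values())
    return {params: m / best for params, m in raw_metrics.items()}
```

In the full calculation the observed fractions are redrawn from their normal distributions many times, Monte Carlo style, to propagate the observational uncertainty into the metric.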
Discussion of the asteroid population evolution model {#sec:discussionoftheasteroidpopulationevolutionmodel}
-----------------------------------------------------
The asteroid population evolution model identifies a region in the phase space of the two free parameters in which the correct values are most likely to lie. The log-normal BYORP coefficient distribution mean is likely to be greater than $-3$, which implies binary lifetimes of less than $10^6$ years for systems that do not end up in the tidal-BYORP equilibrium. This is similar to the formation and destruction cycle initially proposed by @Cuk:2007gr, with the exception of the low mass ratio singly synchronous binaries, which we presume are captured in a tidal-BYORP equilibrium. These short binary lifetimes are consistent with the understanding that the tight asynchronous population (e.g. 2004 DC) are newly formed binary systems that have yet to tidally relax. However, @Naidu:2015gp demonstrate the possibility that simple tidal theory could dramatically underestimate the tidal locking timescale due to the spin-orbit coupling of a secondary’s aspherical shape. In this case, the model would be incorrectly assuming that tidal synchronization can occur within the “instantaneous” binary evolution time.
The best fit initial mass ratio fraction is $8$, but this is not statistically distinguishable from $4$. The mass ratio fraction is defined as the frequency of high mass ratio systems ($0.2$ to $1.0$) over the frequency of low mass ratio systems ($0.0$ to $0.2$); note that the high mass ratio range is four times the extent of the low mass ratio range, so the best fit initial mass ratio fraction is consistent with asteroids fissioning nearly in half at least as frequently as, and up to twice as frequently as, fissioning into two very unequal pieces. The high mortality rate of low mass ratio systems in the “instantaneous” phase of binary formation is offset by the longevity of the synchronous low mass ratio binary population. This is consistent with the hypothesis that asteroids are more likely to rotationally fission along interior planes and “necks” [@Sanchez:2012hz; @Holsapple:2009tq] than from small events at the surface that accumulate in orbit into a larger satellite [@Walsh:2008gk].
For these best fit parameters, the asteroid population evolution model provides some predictions regarding the Main Belt asteroid population. The asteroid pair population is predicted to be about $2\%$ of the total population; that is, within the last $2$ Myr, $2\%$ of the population was a member of a binary system that disrupted. These are mostly small asteroids, and the fraction drops to less than $1\%$ for asteroids larger than a kilometer in diameter.
Conclusions {#sec:conclusions}
===========
The YORP-induced rotational fission hypothesis predicts that the YORP effect rotationally accelerates asteroids until they fission and that this is the primary formation mechanism of binary asteroids. We examine this hypothesis by modeling the Main Belt asteroid population between the sizes of 200 m and 20 km. Our asteroid population evolution model rotationally evolves two million asteroids over 4.5 billion years according to the YORP effect and collisions. Collisions can destroy both single and binary asteroids, and cratering collisions can also modify the YORP coefficient. When these asteroids are rotationally accelerated to a rotational spin limit, they undergo rotational fission. The outcome of each individual rotational fission event is drawn from statistical distributions determined from the mutual orbit evolution model of @Jacobson:2011eq. There are two important free parameters in the model: the initial binary mass ratio fraction $F_i$, which is the ratio of high to low mass ratio binaries created after a rotational fission event, and the strength of the BYORP effect $\mu_B$, which determines binary lifetimes. Many binaries are “instantaneously” destroyed due to strong gravitational torques from spin-orbit coupling. These form asteroid pairs, re-shaped asteroids if the mass ratio is low, and contact binaries if the mass ratio is high. Those that survive evolve according to “long-term” effects such as the BYORP effect and tides.
The asteroid population evolution model utilizes a simplified form of the binary model described in @Jacobson:2011eq. For instance, it ignores the formation of triple systems and wide asynchronous binaries. Furthermore, the model utilizes evolutionary equations accurate only to first order in eccentricity. The model asserts that singly synchronous binary systems are in a tidal-BYORP equilibrium. This has the effect of scaling the strength of tidal evolution with the log-normal BYORP coefficient distribution mean, which is a free parameter of the model. If this assertion is incorrect, it is likely because the BYORP coefficient is significantly weaker than expected. If it is significantly weaker, then a much lower initial binary mass ratio fraction would be needed to explain the relative abundance of low mass ratio (typically singly synchronous) binaries. However, this is unlikely to be the case, since the theory behind the BYORP effect is robust, especially given the observation of 1996 FG$_3$ within the tidal-BYORP equilibrium [@Scheirich:2015ez].
The model also assumes an asteroid size distribution determined from collision evolution models that do not incorporate the YORP effect; however, the YORP effect is expected to significantly deplete the population of small ($D \lesssim 10$ km) asteroids relative to the collisional equilibrium size distribution [@Jacobson:2014bi]. The inclusion of this effect may decrease the number of catastrophic and cratering events amongst the asteroid population. This would allow steadier YORP effect evolution and probably lead to shorter periods between rotational fission events. However, the YORP coefficient distribution used here already creates a spin rate distribution that matches those of the near-Earth and Main Belt asteroid populations [@Rossi:2009kz; @Marzari:2011dx]. Furthermore, the output of the asteroid population evolution model is compared to observables that are not absolute quantities but relative comparisons of sub-populations within the asteroid population, so the effect of a change in the absolute number of asteroids in a particular size bin may not be significant.
We compare four outcomes from the model to observables: the steady-state binary fraction, the fast binary fraction, the binary mass ratio fraction, and the contact binary fraction. We find that the asteroid population evolution model can match each observable individually, typically over a swath of parameter space. When all of the observables are combined using a likelihood parameter, the model best fits all of the observables in only one location, so we determine that the best fit parameters are $F_i = 4$ or $8$ and $\mu_B = 10^{-1}$ or $10^{-2}$. These best fit parameters are not very precise, but they are a unique global solution, since each of the four observables carves out unique and generally orthogonal constraints on the parameter space. Moreover, the best fit strengths of the BYORP effect match that predicted from a shape model. Thus, we conclude that the YORP-induced rotational fission hypothesis can explain these four observables within a sophisticated asteroid population synthesis model.
Derivation of tidally synchronous semi-major axis for doubly synchronous binary evolution {#ref:derivationofinitialsemimajoraxesinbinaryevolution}
=========================================================================================
Asteroids undergo rotational fission at some critical disruption rotation rate; this has been shown with analytic theory, observations of asteroid pairs, and computational numerics [@Scheeres:2007io; @Pravec:2010kt; @Sanchez:2012hz]. This disruption rate and the shape of the asteroid at fission determine the angular momentum of the system during the “instantaneous” binary evolution stage identified in Section \[sec:instantaneousbinaryevolution\]. In the doubly synchronous case, both bodies become synchronous with the orbit rate on similar short timescales but in the singly synchronous case, only the secondary is synchronized on a short timescale and the primary remains rotating at near the initial rate. During this stage, energy is removed from the system via mutual body tidal dissipation but the angular momentum of the system is conserved. By making some idealized approximations, the conservation of angular momentum is used to derive a tidal synchronization semi-major axis for the doubly synchronous systems $a_d$.
The angular momentum of an idealized binary system approximating each body as a constant density sphere is $$H = I_p \omega_{p} + I_s \omega_{s} + m a^2 \Omega$$ where $I_n = 2 M_n R_n^2 / 5$ are the moments of inertia, $R_n$ are their radii, $M_n = 4 \pi \rho R_n^3 / 3$ are their masses, $m = M_p q / ( 1+q )$ is the reduced mass, $a$ is the distance between each body’s center of mass, and $\Omega$ is the rotation rate about the system barycenter. Additionally, the mass ratio is defined as $q = M_s / M_p = R_s^3 / R_p^3$ and the critical disruption rate for a specific mass ratio as $\omega_q = \sqrt{ (1+q)/(1+q^{1/3})^3}$.
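The definitions above can be exercised in a small numerical sketch (not from the paper). The density, radius, and separation values below are hypothetical, and the spin rates are given in units of the critical rate, so $H$ comes out in those same arbitrary units:

```python
# Illustrative sketch with assumed sample values: total angular momentum of the
# idealized two-sphere binary, H = I_p*w_p + I_s*w_s + m*a^2*Omega.
import math

def angular_momentum(rho, R_p, q, w_p, w_s, Omega, a):
    """H for two constant-density spheres (symbols as defined in the text)."""
    R_s = q ** (1.0 / 3.0) * R_p                  # from R_s^3 = q * R_p^3
    M_p = 4.0 * math.pi * rho * R_p ** 3 / 3.0
    M_s = 4.0 * math.pi * rho * R_s ** 3 / 3.0
    I_p = 2.0 * M_p * R_p ** 2 / 5.0              # sphere moment of inertia
    I_s = 2.0 * M_s * R_s ** 2 / 5.0
    m = M_p * q / (1.0 + q)                       # reduced mass
    return I_p * w_p + I_s * w_s + m * a ** 2 * Omega

# contact configuration at the (normalized) critical disruption rate, q = 1
rho, R_p, q = 2000.0, 500.0, 1.0                  # hypothetical kg/m^3 and m
w_q = math.sqrt((1 + q) / (1 + q ** (1.0 / 3.0)) ** 3)
a0 = R_p * (1 + q ** (1.0 / 3.0))                 # touching spheres
print(angular_momentum(rho, R_p, q, w_q, w_q, w_q, a0))
```

For $q = 1$ and all three rates equal to one, the expression reduces analytically to $2.8\,M_p R_p^2$, which makes a convenient consistency check on the helper.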
In the idealized system described above, the initial angular momentum at the moment of rotational fission is a function of the mass ratio, the density and the primary radius. Before entering into orbit, the two idealized components are initially separated only by their radii $a = R_p + R_s = R_p ( 1 + q^{1/3} )$. All three rotation rates in the system are equivalent to the critical disruption rate for a specific mass ratio $\omega_{p} = \omega_{s} = \Omega = \omega_q $. Therefore, the initial angular momentum of the system is $$\begin{aligned}
H_i = & \frac{4 \pi \rho \omega_d R_p^5}{15} \sqrt{\frac{1+q}{\left(1 +q^{1/3}\right)^3}} \times \\
& \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \left( \frac{2 - 2 q^{1/3} + 2 q^{2/3} + 5 q + 5 q^{4/3} + 2 q^{5/3} - 2 q^2 + 2 q^{7/3}}{1 - q^{1/3} + q^{2/3}} \right) \nonumber\end{aligned}$$ Doubly synchronous systems dissipate energy until all three rotation rates of the system are equivalent to the Keplerian orbit rate $\omega_{p} = \omega_{s} = \Omega = \omega_{d} \sqrt{(1 +q )/ a_d^3 } $ where $a_d = a / R_p$ is the doubly synchronous synchronization semi-major axis normalized by the primary radius. The synchronization angular momentum for a doubly synchronous system is: $$\begin{aligned}
H_d & = \frac{4 \pi \rho \omega_d R_p^5}{15} \times \\
& \left( \frac{ \left( 1 +q \right) \left( 2 + 2 \left( q + q^{5/3} + q^{8/3} \right) + 5 q a_d^2 \right)}{ \left( a_d \left( 1 + q^{1/3} \right) \left( 1 - q^{1/3} + q^{2/3} \right) \right)^{3/2} } \right) \nonumber\end{aligned}$$ Since angular momentum is conserved, $H_i = H_d$ and we obtain the synchronization semi-major axis $a_d$. If we assume $a_d > 0$ and $ 0 \leq q \leq 1$, we can approximate the solution using a power series: $$a_d = 0.344 + \frac{0.00406}{q^3} + \frac{0.01322}{q^2} + \frac{0.815}{q} + 1.23 q$$ is the initial tidally doubly synchronous semi-major axis measured in primary radii $R_p$.
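The conservation condition $H_i = H_d$ can be checked numerically. In the sketch below (illustrative, not from the paper) the common prefactor $4\pi\rho\omega_d R_p^5/15$ cancels and is dropped, and the outer root beyond the minimum of $H_d(a)$ is taken as the physical solution — an assumption of this sketch:

```python
# Solve H_i = H_d for the outer root a_d by bisection and compare with the
# quoted power-series approximation (sketch, prefactor omitted).
import math

def h_initial(q):
    """Normalized initial angular momentum H_i at contact."""
    c = q ** (1.0 / 3.0)
    poly = 2 - 2*c + 2*c**2 + 5*q + 5*q*c + 2*q*c**2 - 2*q**2 + 2*q**2*c
    return math.sqrt((1 + q) / (1 + c) ** 3) * poly / (1 - c + c ** 2)

def h_sync(q, a):
    """Normalized doubly synchronous angular momentum H_d at separation a."""
    A = (1 + q) * (2 + 2 * (q + q ** (5.0/3.0) + q ** (8.0/3.0)))
    B = 5 * q * (1 + q)
    # uses (1 + q^{1/3})(1 - q^{1/3} + q^{2/3}) = 1 + q
    return (A + B * a ** 2) / ((1 + q) * a) ** 1.5

def a_d_numeric(q):
    """Outer solution of H_i(q) = H_d(q, a), bracketed from the minimum of H_d."""
    A = (1 + q) * (2 + 2 * (q + q ** (5.0/3.0) + q ** (8.0/3.0)))
    B = 5 * q * (1 + q)
    lo, hi = math.sqrt(3 * A / B), 1e6      # H_d is increasing on [lo, inf)
    target = h_initial(q)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if h_sync(q, mid) > target else (mid, hi)
    return 0.5 * (lo + hi)

def a_d_series(q):
    return 0.344 + 0.00406/q**3 + 0.01322/q**2 + 0.815/q + 1.23*q

for q in (0.1, 0.5, 1.0):
    print(q, a_d_numeric(q), a_d_series(q))
```

For $q$ near unity the two agree closely (e.g. both give $a_d \approx 2.41$ at $q=1$); at small $q$ the series, being an approximation, drifts a few percent from the bisection root.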
---
abstract: 'The number of spanning trees in a class of directed circulant graphs with generators depending linearly on the number of vertices $\beta n$, and in the $n$-th and $(n-1)$-th power graphs of the $\beta n$-cycle are evaluated as a product of $\lceil\beta/2\rceil-1$ terms.'
author:
- Justine Louis
bibliography:
- 'bibliographyDic.bib'
date: $10$th July $2015$
nocite: '[@*]'
title: 'Spanning trees in directed circulant graphs and cycle power graphs[^1]'
---
Introduction
============
In this paper we study the number of spanning trees in a class of directed and undirected circulant graphs. Let $1\leqslant\gamma_1\leqslant\cdots\leqslant\gamma_d\leqslant\lfloor n/2\rfloor$ be positive integers. A circulant directed graph, or circulant digraph, on $n$ vertices generated by $\gamma_1,\ldots,\gamma_d$ is the directed graph on $n$ vertices labelled $0,1,\ldots,n-1$ such that for each vertex $v\in\mathbb{Z}/n\mathbb{Z}$ there is an oriented edge connecting $v$ to $v+\gamma_m$ mod $n$ for all $m\in\{1,\ldots,d\}$. We will denote such graphs by $\overrightarrow{C}^{\gamma_1,\ldots,\gamma_d}_n$. Similarly, a circulant graph on $n$ vertices generated by $\gamma_1,\ldots,\gamma_d$, denoted by $C^{\gamma_1,\ldots,\gamma_d}_n$, is the undirected graph on $n$ vertices labelled $0,1,\ldots,n-1$ such that each vertex $v\in\mathbb{Z}/n\mathbb{Z}$ is connected to $v\pm\gamma_m$ mod $n$ for all $m\in\{1,\ldots,d\}$. Circulant graphs and digraphs are used as models in network theory. In this context, they are called multi-loop networks, or double-loop networks when they are $2$-generated, see for example [@MR1846929; @MR1973148]. The number of spanning trees measures the reliability of a network.\
The evaluation of the number of spanning trees in circulant graphs and digraphs has been widely studied, where both exact and asymptotic results have been obtained as the number of vertices grows, see [@MR2565193; @MR2574828; @louis2015asymptotics; @louis2015formula; @MR2445039] and references therein. In [@MR2261780; @MR2320194], the authors showed that the number of spanning trees in such graphs satisfies linear recurrence relations. Yong, Zhang and Golin developed a technique in [@MR2445039] to evaluate the number of spanning trees in a particular class of double-loop networks $\overrightarrow{C}^{p,\gamma n+p}_{\beta n}$. In the first section of this work, we derive a closed formula for these graphs, and more generally for $d$-generated circulant digraphs with generators depending linearly on the number of vertices, that is $\overrightarrow{C}^{p,\gamma_1n+p,\ldots,\gamma_{d-1}n+p}_{\beta n}$ where $p,\gamma_1,\ldots,\gamma_{d-1},\beta,n$ are positive integers. This partially answers an open question posed in [@MR2565193] by simplifying the formula given in [@MR2565193 Corollary $1$].\
In the second section we calculate the number of spanning trees in the $n$-th and $(n-1)$-th power graphs of the $\beta n$-cycle, which are the circulant graphs generated by the first $n$, respectively $n-1$, consecutive integers, denoted by $\boldsymbol{C}^n_{\beta n}$ and $\boldsymbol{C}^{n-1}_{\beta n}$ respectively, where $\beta\in\mathbb{N}_{\geqslant2}$. As a consequence, their asymptotic behaviour is derived. Cycle power graphs appear, for example, in graph colouring problems, see [@MR2587027; @MR1720404].\
The results obtained here are derived from the matrix tree theorem (see [@MR2339282; @MR1271140]) which provides a closed formula of a product of $\beta n-1$ terms for a graph on $\beta n$ vertices. Our formulas are a product of $\lceil\beta/2\rceil-1$ terms and are therefore interesting when $n$ is large. In both cases, the symmetry of the graphs is reflected in the formulas which are expressed in terms of eigenvalues of subgraphs of the original graph. This fact was already observed in [@louis2015formula].
**Acknowledgements:** The author thanks Anders Karlsson for reading the manuscript and useful discussions.
Spanning trees in directed circulant graphs
===========================================
Let $G$ be a directed graph and $V(G)$ its vertex set. A spanning arborescence converging to $v\in V(G)$ is an oriented subgraph of $G$ such that the out-degree of all vertices except $v$ equals one, and the out-degree of $v$ is zero. We define the combinatorial Laplacian of a directed graph $G$ as an operator acting on the space of functions defined on $V(G)$, by $$\label{Delta-}
\Delta^-_Gf(x)=\sum_ {y:\ x\rightarrow y}(f(x)-f(y))$$ where the sum is over all vertices $y$ such that there is an oriented edge from $x$ to $y$. Equivalently, the combinatorial Laplacian can be defined as a matrix by $\Delta^-_G=D^--A$, where $D^-$ is the out-degree matrix and $A$ is the adjacency matrix such that $(A)_{ij}$ is the number of directed edges from $i$ to $j$. Let $\tau^-(G,v)$ denote the number of arborescences converging to $v$. The Tutte matrix tree theorem (see [@MR2339282]) states that for all $v\in V(G)$, $$\tau^-(G,v)=\det\Delta^-_{G,v}$$ where $\det\Delta^-_{G,v}$ is the $v$-th cofactor of the Laplacian $\Delta^-_G$ obtained by deleting the row and column of $\Delta^-_G$ corresponding to the vertex $v$. For a regular directed graph $G$, we define the number of spanning trees in $G$, $\tau(G)$, by the sum over all vertices $v\in V(G)$ of the number of arborescences converging to $v$, that is $$\tau(G)=\sum_{v\in V(G)}\tau^-(G,v).$$ Notice that we could have defined the number of spanning trees by the sum over all vertices $v\in V(G)$ of the number of spanning arborescences diverging from $v$.\
By symmetry, all cofactors of the Laplacian of a directed circulant graph are equal, and each equals the product of the non-zero eigenvalues of the Laplacian divided by the number of vertices. Therefore we have that $$\tau(G)=\prod_{k=1}^{\lvert V(G)\rvert-1}\lambda_k$$ where $\lambda_k$, $k=1,\ldots,\lvert V(G)\rvert-1$, denote the non-zero eigenvalues of the Laplacian of $G$. The non-zero eigenvalues of the Laplacian of the directed circulant graph $\overrightarrow{C}^{\gamma_1,\ldots,\gamma_d}_n$ are given by (see [@MR1271140 Proposition $3.5$]) $$\lambda_k=d-\sum_{m=1}^de^{2\pi i\gamma_mk/n},\quad k=1,\ldots,n-1.$$ This can also be derived by noticing that the eigenvectors are given by the characters $\chi_k(x)=e^{2\pi ikx/n}$, $k=0,1,\ldots,n-1$, and then applying the Laplacian (\[Delta-\]) to them.\
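These two routes to $\tau(G)$ — the eigenvalue product and the Tutte matrix tree theorem — can be compared directly on a small example (an illustrative sketch, not from the paper; the graph $\overrightarrow{C}^{1,2}_5$ is just a convenient test case):

```python
# Compare the eigenvalue product with n times a cofactor of the Laplacian.
import cmath

def tau_eigen(n, gens):
    """tau(G) as the product of the nonzero Laplacian eigenvalues."""
    d = len(gens)
    prod = 1.0 + 0j
    for k in range(1, n):
        prod *= d - sum(cmath.exp(2j * cmath.pi * g * k / n) for g in gens)
    return prod.real

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    m, sign, res = len(M), 1, 1.0
    for i in range(m):
        p = max(range(i, m), key=lambda r: abs(M[r][i]))
        if p != i:
            M[i], M[p] = M[p], M[i]
            sign = -sign
        res *= M[i][i]
        for r in range(i + 1, m):
            f = M[r][i] / M[i][i]
            for c in range(i, m):
                M[r][c] -= f * M[i][c]
    return sign * res

def tau_cofactor(n, gens):
    """n times one cofactor of the Laplacian D^- - A (Tutte matrix tree theorem)."""
    L = [[0.0] * n for _ in range(n)]
    for v in range(n):
        L[v][v] += float(len(gens))
        for g in gens:
            L[v][(v + g) % n] -= 1.0
    return n * det([row[1:] for row in L[1:]])

# both routes give 55 for the digraph on 5 vertices with generators 1 and 2
print(round(tau_eigen(5, (1, 2))), round(tau_cofactor(5, (1, 2))))
```

For the directed cycle ($d=1$) the same code reproduces $\tau(\overrightarrow{C}^1_n)=n$, matching the remark that every vertex of the cycle receives exactly one converging arborescence.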
In this section, we establish a formula for the number of spanning trees in directed circulant graphs $\overrightarrow{C}^\Gamma_{\beta n}$ generated by $\Gamma=\{p,\gamma_1n+p,\ldots,\gamma_{d-1}n+p\}$ and in the particular case of two generators $\overrightarrow{C}^{p,\gamma n+p}_{\beta n}$. Figure \[directedgraphs\] illustrates a $2$-generated and a $3$-generated directed circulant graph. We denote by $\mu_k=d-1-\sum_{m=1}^{d-1}e^{2\pi i\gamma_mk/\beta}$, $k=1,\ldots,\beta-1$, the non-zero eigenvalues of the Laplacian on the directed circulant graph $\overrightarrow{C}^{\gamma_1,\ldots,\gamma_{d-1}}_\beta$ and by $\eta_k=2(d-1)-2\sum_{m=1}^{d-1}\cos(2\pi\gamma_mk/\beta)$, $k=1,\ldots,\beta-1$, the non-zero eigenvalues of the Laplacian on the circulant graph $C^{\gamma_1,\ldots,\gamma_{d-1}}_\beta$. Let $A$ be a statement and $\delta_A$ be defined by $$\delta_A=\left\{\begin{array}{rl}1&\textnormal{if }A\textnormal{ is satisfied}\\0&\textnormal{otherwise}\end{array}.\right.$$
\[dicd\] Let $1\leqslant\gamma_1\leqslant\cdots\leqslant\gamma_{d-1}\leqslant\beta$ and $p$, $n$ be positive integers. For all even $n\in\mathbb{N}_{\geqslant2}$ such that $(p,n)=1$, the number of spanning trees in the directed circulant graph $\overrightarrow{C}^{\Gamma}_{\beta n}$, where $\Gamma=\{p,\gamma_1n+p,\ldots,\gamma_{d-1}n+p\}$, is given by $$\begin{aligned}
\tau(\overrightarrow{C}^{\Gamma}_{\beta n})&=nd^{\beta n-1}\Big(1-\delta_{\beta\textnormal{ even}}\frac{(-1)^p}{d^n}(1+\sum_{m=1}^{d-1}(-1)^{\gamma_m})^n\Big)\\
&\times\prod_{k=1}^{\lceil\beta/2\rceil-1}\left(1-2\Big|1-\frac{\mu_k}{d}\Big|^n\cos\left(\frac{2\pi pk}{\beta}+n\operatorname{Arctg}\left(\frac{\sum_{m=1}^{d-1}\sin(2\pi\gamma_mk/\beta)}{d-\eta_k/2}\right)\right)+\Big|1-\frac{\mu_k}{d}\Big|^{2n}\right)\end{aligned}$$ and for odd $n\in\mathbb{N}_{\geqslant1}$, $$\begin{aligned}
&\tau(\overrightarrow{C}^{\Gamma}_{\beta n})=nd^{\beta n-1}\Big(1-\delta_{\beta\textnormal{ even}}\frac{(-1)^p}{d^n}(1+\sum_{m=1}^{d-1}(-1)^{\gamma_m})^n\Big)\\
&\times\prod_{k=1}^{\lceil\beta/2\rceil-1}\left(1-2\textnormal{sgn}(d-\eta_k/2)\Big|1-\frac{\mu_k}{d}\Big|^n\cos\left(\frac{2\pi pk}{\beta}+n\operatorname{Arctg}\left(\frac{\sum_{m=1}^{d-1}\sin(2\pi\gamma_mk/\beta)}{d-\eta_k/2}\right)\right)+\Big|1-\frac{\mu_k}{d}\Big|^{2n}\right)\end{aligned}$$ where $\lceil x\rceil$ is the smallest integer greater than or equal to $x$, $\lvert.\rvert$ denotes the modulus and we set $\textnormal{sgn}(0)=1$. The number of spanning trees in $\overrightarrow{C}^{\Gamma}_{\beta n}$ is zero if either $(p,n)\neq1$, or $(p,n)=1$ and $\beta$, $p$, $\gamma_m$, $m=1,\ldots,d-1$, are all even.
From the Tutte matrix tree theorem, the number of spanning trees in $\overrightarrow{C}^\Gamma_{\beta n}$ is given by $$\tau(\overrightarrow{C}^{\Gamma}_{\beta n})=\prod_{k=1}^{\beta n-1}(d-e^{2\pi ipk/(\beta n)}-\sum_{m=1}^{d-1}e^{2\pi i(\gamma_mn+p)k/(\beta n)}).$$ By splitting the product over $k=1,\ldots,\beta n-1$ into two products, when $k$ is a multiple of $\beta$, that is $k=l\beta$ with $l=1,\ldots,n-1$, and over non-multiples of $\beta$, that is, $k=k'+l'\beta$ with $k'=1,\ldots,\beta-1$ and $l'=0,1,\ldots,n-1$, we have $$\label{tau}
\tau(\overrightarrow{C}^{\Gamma}_{\beta n})=\prod_{l=1}^{n-1}(d-de^{2\pi ipl/n})\prod_{k=1}^{\beta-1}\prod_{l'=0}^{n-1}(d-(1+\sum_{m=1}^{d-1}e^{2\pi i\gamma_mk/\beta})e^{2\pi ipk/(\beta n)}e^{2\pi ipl'/n}).$$ We have that $$\prod_{l=1}^{n-1}(d-de^{2\pi ipl/n})=d^{n-1}\prod_{l=1}^{n-1}(1-e^{2\pi ipl/n})=nd^{n-1}\delta_{(p,n)=1}.$$ This equality comes from the fact that $\prod_{l=1}^{n-1}(1-e^{2\pi ipl/n})$ is the number of spanning trees of the directed graph $\overrightarrow{C}^p_n$, which is isomorphic to the directed cycle on $n$ vertices if $(p,n)=1$, and is not connected if $(p,n)\neq1$. Therefore the product is equal to $n\delta_{(p,n)=1}$.\
Hence, if $(p,n)\neq1$, we have $$\tau(\overrightarrow{C}^{\Gamma}_{\beta n})=0.$$ Let $p$ be relatively prime to $n$. Using that the complex numbers $e^{2\pi il/n}$, $l=0,1,\ldots,n-1$, are the $n$-th roots of unity, we have for all $x$, $$\label{unityroots}
\prod_{l=0}^{n-1}(x-e^{2\pi ilp/n})=x^n-1$$ since $(p,n)=1$. Equivalently, we have $$\prod_{l=0}^{n-1}(1-xe^{2\pi ilp/n})=1-x^n.$$ Using this identity in (\[tau\]) to evaluate the product over $l'$, we obtain $$\label{betaproduct}
\tau(\overrightarrow{C}^{\Gamma}_{\beta n})=nd^{\beta n-1}\prod_{k=1}^{\beta-1}(1-\frac{1}{d^n}(1+\sum_{m=1}^{d-1}e^{2\pi i\gamma_mk/\beta})^ne^{2\pi ipk/\beta}).$$ For odd $\beta$ we write the product over $k$, $k=1,\ldots,\beta-1$, as a product from $1$ to $(\beta-1)/2$, and for even $\beta$ we write it as a product from $1$ to $\beta/2-1$ and add the $k=\beta/2$ factor which is given by $1-(-1)^p(1+\sum_{m=1}^{d-1}(-1)^{\gamma_m})^n/d^n$. Writing the above expression in terms of $\mu_k=d-1-\sum_{m=1}^{d-1}e^{2\pi i\gamma_mk/\beta}$, we obtain $$\begin{aligned}
\tau(\overrightarrow{C}^{\Gamma}_{\beta n})&=nd^{\beta n-1}\Big(1-\delta_{\beta\textnormal{ even}}\frac{(-1)^p}{d^n}(1+\sum_{m=1}^{d-1}(-1)^{\gamma_m})^n\Big)\nonumber\\
&\times\prod_{k=1}^{\lceil\beta/2\rceil-1}(1-(1-\mu_k/d)^ne^{2\pi ipk/\beta})(1-(1-\mu_k^\ast/d)^ne^{-2\pi ipk/\beta})\nonumber\\
&=nd^{\beta n-1}\Big(1-\delta_{\beta\textnormal{ even}}\frac{(-1)^p}{d^n}(1+\sum_{m=1}^{d-1}(-1)^{\gamma_m})^n\Big)\nonumber\\
&\times\prod_{k=1}^{\lceil\beta/2\rceil-1}(1-2\lvert1-\mu_k/d\rvert^n\cos(2\pi pk/\beta+n\phi_k)+\lvert1-\mu_k/d\rvert^{2n})
\label{tau2}\end{aligned}$$ where $\phi_k$ is the phase of the complex number $1-\mu_k/d$ such that $1-\mu_k/d=\lvert1-\mu_k/d\rvert e^{i\phi_k}$. We have $$\lvert1-\mu_k/d\rvert=\frac{1}{d}\Big((d-\eta_k/2)^2+\Big(\sum_{m=1}^{d-1}\sin(2\pi\gamma_mk/\beta)\Big)^2\Big)^{1/2}$$ and $$\cos{\phi_k}=\frac{d-\eta_k/2}{\lvert d-\mu_k\rvert},\quad\sin{\phi_k}=\frac{\sum_{m=1}^{d-1}\sin(2\pi\gamma_mk/\beta)}{\lvert d-\mu_k\rvert}.$$ Therefore for $k$ such that $d-\eta_k/2\neq0$, the phase is given by $$\label{phik}
\phi_k=\operatorname{Arctg}\left(\frac{\sum_{m=1}^{d-1}\sin(2\pi\gamma_mk/\beta)}{d-\eta_k/2}\right)+\epsilon\pi$$ where $\epsilon=0$ if $\textnormal{sgn}(d-\eta_k/2)=1$ and $\epsilon\in\{-1,1\}$ if $\textnormal{sgn}(d-\eta_k/2)=-1$. For $k$ such that $d-\eta_k/2=0$, we take the limit as $d-\eta_k/2\rightarrow0$ in (\[phik\]), with $\epsilon=0$. The theorem follows by putting equation (\[phik\]) into equation (\[tau2\]).\
When $\beta$, $p$ and $\gamma_m$, $m=1,\ldots,d-1$ are all even, the directed circulant graph $\overrightarrow{C}^{\Gamma}_{\beta n}$ is not connected and therefore the number of spanning trees is zero; this is reflected in the formula.
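The roots-of-unity identity invoked in the proof, in its equivalent form $\prod_{l=0}^{n-1}(1-xe^{2\pi ilp/n})=1-x^n$ for $(p,n)=1$, is easy to confirm numerically (sketch, not from the paper):

```python
# Numerical check of prod_{l=0}^{n-1} (1 - x e^{2 pi i l p / n}) = 1 - x^n
# whenever gcd(p, n) = 1, on a small sample grid of x, p, n.
import cmath, math

def lhs(x, p, n):
    prod = 1.0 + 0j
    for l in range(n):
        prod *= 1 - x * cmath.exp(2j * cmath.pi * l * p / n)
    return prod

for (p, n) in ((1, 5), (3, 7), (5, 8)):
    assert math.gcd(p, n) == 1
    for x in (0.3, 1.7, -0.9):
        assert abs(lhs(x, p, n) - (1 - x ** n)) < 1e-9
print("identity verified on the sampled grid")
```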
In the following theorem we state the particular case of two-generated directed circulant graphs.
\[d=2\] Let $1\leqslant\gamma\leqslant\beta$ and $p$, $n$ be positive integers. For odd $\beta$ and all $n\in\mathbb{N}_{\geqslant1}$ such that $(p,n)=1$, the number of spanning trees in the directed circulant graph $\overrightarrow{C}^{p,\gamma n+p}_{\beta n}$ is given by $$\tau(\overrightarrow{C}^{p,\gamma n+p}_{\beta n})=n2^{\beta n-1}\prod_{k=1}^{(\beta-1)/2}\Big(1-2\cos(2\pi(p+\gamma n/2)k/\beta)\cos^n(\pi\gamma k/\beta)+\cos^{2n}(\pi\gamma k/\beta)\Big)$$ and for even $\beta$, if $\gamma$ or $p$ is odd, then $$\tau(\overrightarrow{C}^{p,\gamma n+p}_{\beta n})=n2^{\beta n-1+\delta_{\gamma\textnormal{ even}}}\prod_{k=1}^{\beta/2-1}\Big(1-2\cos(2\pi(p+\gamma n/2)k/\beta)\cos^n(\pi\gamma k/\beta)+\cos^{2n}(\pi\gamma k/\beta)\Big).$$ The number of spanning trees in $\overrightarrow{C}^{p,\gamma n+p}_{\beta n}$ is zero if either $(p,n)\neq1$, or $(p,n)=1$ and $\beta$, $p$ and $\gamma$ are all even.
From equation (\[betaproduct\]) it follows $$\tau(\overrightarrow{C}^{p,\gamma n+p}_{\beta n})=n2^{\beta n-1}\prod_{k=1}^{\beta-1}(1-e^{2\pi i(p+\gamma n/2)k/\beta}\cos^n(\pi\gamma k/\beta)).$$ For odd $\beta$, we have $$\begin{aligned}
\tau(\overrightarrow{C}^{p,\gamma n+p}_{\beta n})&=n2^{\beta n-1}\prod_{k=1}^{(\beta-1)/2}(1-e^{2\pi i(p+\gamma n/2)k/\beta}\cos^n(\pi\gamma k/\beta))\\
&\qquad\qquad\quad\qquad\times(1-e^{-2\pi i(p+\gamma n/2)k/\beta}\cos^n(\pi\gamma k/\beta))\\
&=n2^{\beta n-1}\prod_{k=1}^{(\beta-1)/2}(1-2\cos(2\pi(p+\gamma n/2)k/\beta)\cos^n(\pi\gamma k/\beta)+\cos^{2n}(\pi\gamma k/\beta)).\end{aligned}$$ For even $\beta$, the factor $k=\beta/2$ is added: $$1-e^{\pi i(p+\gamma n/2)}\cos^n(\pi\gamma/2)=\left\{\begin{array}{rl}0&\textnormal{if }p\textnormal{ and }\gamma\textnormal{ are even}\\1&\textnormal{if }\gamma\textnormal{ is odd}\\2&\textnormal{otherwise}\end{array}.\right.$$ For even $\beta$, $p$ and $\gamma$, the graph $\overrightarrow{C}^{p,\gamma n+p}_{\beta n}$ is not connected and therefore the number of spanning trees is zero. Therefore if $p$ or $\gamma$ is odd, we have $$\begin{aligned}
\tau(\overrightarrow{C}^{p,\gamma n+p}_{\beta n})&=n2^{\beta n-1+\delta_{\gamma\textnormal{ even}}}\prod_{k=1}^{\beta/2-1}(1-e^{2\pi i(p+\gamma n/2)k/\beta}\cos^n(\pi\gamma k/\beta))\\
&\quad\qquad\qquad\qquad\qquad\times(1-e^{-2\pi i(p+\gamma n/2)k/\beta}\cos^n(\pi\gamma k/\beta))\\
&=n2^{\beta n-1+\delta_{\gamma\textnormal{ even}}}\prod_{k=1}^{\beta/2-1}(1-2\cos(2\pi(p+\gamma n/2)k/\beta)\cos^n(\pi\gamma k/\beta)+\cos^{2n}(\pi\gamma k/\beta)).\end{aligned}$$
Consider the case when $p=\beta=3$ and $\gamma=2$. It follows from Theorem \[d=2\] that $\tau(\overrightarrow{C}^{3,2n+3}_{3n})=0$ if $n$ is a multiple of $3$, otherwise, $$\begin{aligned}
\tau(\overrightarrow{C}^{3,2n+3}_{3n})&=n2^{3n-1}(1-2\cos(2\pi n/3)\cos^n(2\pi/3)+\cos^{2n}(2\pi/3))\\
&=n(2^{3n-1}-2^{2n}\cos(\pi n/3)+2^{n-1})\end{aligned}$$ as stated in [@MR2445039 Example $4$.(iii)]. As another example, consider the case when $p=2$, $\gamma=5$ and $\beta=6$. From Theorem \[d=2\], for even $n$, $\tau(\overrightarrow{C}^{2,5n+2}_{6n})=0$, and for odd $n$, $$\begin{aligned}
\tau(\overrightarrow{C}^{2,5n+2}_{6n})&=n2^{6n-1}(1-2\cos(2\pi(2+5n/2)/6)\cos^n(5\pi/6)+\cos^{2n}(5\pi/6))\\
&\quad\times(1-2\cos(4\pi(2+5n/2)/6)\cos^n(10\pi/6)+\cos^{2n}(10\pi/6))\\
&=\frac{n}{2}(2^{3n}+2^{2n}3^{n/2}\cos(\pi n/6)-2^{2n}3^{(n+1)/2}\sin(\pi n/6)+6^n)\\
&\quad\times(2^{3n}-2^{2n-1}3^{n/2}\cos(\pi n/3)+2^{n-1}3^{(n+1)/2}\sin(\pi n/3)+2^n).\end{aligned}$$
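The closed form of Theorem \[d=2\] can be cross-checked against the raw eigenvalue product from the matrix tree theorem. The sketch below (illustrative, not from the paper) does this for the odd-$\beta$ case on a few small parameter choices:

```python
# Cross-check of the odd-beta closed form against the eigenvalue product.
import cmath, math

def tau_bruteforce(p, gamma, beta, n):
    """Product of nonzero Laplacian eigenvalues of the digraph on beta*n vertices."""
    N = beta * n
    prod = 1.0 + 0j
    for k in range(1, N):
        prod *= (2 - cmath.exp(2j * cmath.pi * p * k / N)
                   - cmath.exp(2j * cmath.pi * (gamma * n + p) * k / N))
    return prod.real

def tau_closed(p, gamma, beta, n):
    """Closed form for odd beta (requires gcd(p, n) = 1)."""
    prod = 1.0
    for k in range(1, (beta - 1) // 2 + 1):
        c = math.cos(math.pi * gamma * k / beta)
        prod *= (1 - 2 * math.cos(2 * math.pi * (p + gamma * n / 2) * k / beta)
                       * c ** n
                   + c ** (2 * n))
    return n * 2 ** (beta * n - 1) * prod

for (p, gamma, beta, n) in ((1, 1, 3, 2), (1, 2, 5, 3), (2, 1, 3, 3)):
    assert math.gcd(p, n) == 1
    b, c = tau_bruteforce(p, gamma, beta, n), tau_closed(p, gamma, beta, n)
    print((p, gamma, beta, n), round(b), round(c))  # (1,1,3,2) gives 84 twice
```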
Spanning trees in cycle power graphs
====================================
The $k$-th power graph of the $n$-cycle, denoted by $\boldsymbol{C}^k_n$, is the graph with the same vertex set as the $n$-cycle where two vertices are connected if their distance on the $n$-cycle is at most $k$. It is therefore the circulant graph on $n$ vertices generated by the first $k$ consecutive integers. In this section, we derive a formula for the number of spanning trees in the $n$-th and $(n-1)$-th power graphs of the $\beta n$-cycle, where $\beta\in\mathbb{N}_{\geqslant2}$. As a consequence we derive its asymptotic behaviour as $n$ goes to infinity.\
The combinatorial Laplacian of an undirected graph $G$ with vertex set $V(G)$ defined as an operator acting on the space of functions is $$\Delta_Gf(x)=\sum_{y\sim x}(f(x)-f(y))$$ where the sum is over all vertices adjacent to $x$. The matrix tree theorem [@MR1271140] states that the number of spanning trees in $G$, $\tau(G)$, is given by $$\tau(G)=\frac{\prod_{k=1}^{\lvert V(G)\rvert-1}\lambda_k}{\lvert V(G)\rvert}$$ where $\lambda_k$, $k=1,\ldots,\lvert V(G)\rvert-1$, are the non-zero eigenvalues of $\Delta_G$. The eigenvectors of the Laplacian on the circulant graph $C^{1,\ldots,n}_{\beta n}$ are given by the characters $\chi_k(x)=e^{2\pi ikx/(\beta n)}$, $k=0,1,\ldots,\beta n-1$. Therefore the non-zero eigenvalues are given by $$\lambda_k=2n-2\sum_{m=1}^n\cos(2\pi km/(\beta n)),\quad k=1,\ldots,\beta n-1.$$ Similarly, the non-zero eigenvalues on $C^{1,\ldots,n-1}_{\beta n}$ are given by $$\lambda_k=2(n-1)-2\sum_{m=1}^{n-1}\cos(2\pi km/(\beta n)),\quad k=1,\ldots,\beta n-1.$$ Figure \[powergraphs\] below illustrates two power graphs of the $24$-cycle.
\[circ\] Let $\beta\geqslant2$ be an integer and $\mu_k=2-2\cos(2\pi k/\beta)$, $k=1,\ldots,\beta-1$, be the non-zero eigenvalues of the Laplacian on the $\beta$-cycle. The number of spanning trees in the $n$-th power graph of the $\beta n$-cycle $\boldsymbol{C}^n_{\beta n}$ for $\beta\geqslant3$, is given by $$\begin{aligned}
\tau(\boldsymbol{C}^n_{\beta n})&=\frac{2^{\beta(n+1)}}{(2\beta)^2}n^{\beta n-2}\left(1+\frac{1}{2n}\right)^{\beta n}(1-(2n+1)^{-\beta})^n\\
&\times\prod_{k=1}^{\lceil\beta/2\rceil-1}\sin^2\left(\frac{\pi(n+1)k}{\beta}-n\operatorname{Arcsin}\left(\frac{n+1}{\sqrt{4n^2/\mu_k+2n+1}}\right)\right)\end{aligned}$$ where $\lceil x\rceil$ denotes the smallest integer greater or equal to $x$. For $\beta=2$, it is given by $$\tau(\boldsymbol{C}^n_{2n})=(2n)^{2n-2}(1+1/n)^n.$$ The number of spanning trees in the $(n-1)$-th power graph of the $\beta n$-cycle $\boldsymbol{C}^{n-1}_{\beta n}$, for $\beta\geqslant3$, is given by $$\begin{aligned}
\tau(\boldsymbol{C}^{n-1}_{\beta n})&=\frac{2^{\beta(n+1)}}{(2\beta)^2}n^{\beta n-2}\left(1-\frac{1}{2n}\right)^{\beta n}\lvert(-1)^\beta-(2n-1)^{-\beta}\rvert^n\\
&\times\prod_{k=1}^{\lceil\beta/2\rceil-1}\sin^2\left(\frac{\pi(n-1)k}{\beta}-n\operatorname{Arcsin}\left(\frac{n-1}{\sqrt{4n^2/\mu_k-(2n-1)}}\right)\right).\end{aligned}$$ For $\beta=2$, it is given by $$\tau(\boldsymbol{C}^{n-1}_{2n})=(2n)^{2n-2}(1-1/n)^n.$$
We emphasise that in the cycle power graphs $\boldsymbol{C}^{n-1}_{\beta n}$ and $\boldsymbol{C}^n_{\beta n}$ there are $\beta$ copies of $n$-cliques as subgraphs of the original graph. This fact appears in the formula by the factor $n^{\beta n-2}=(n^{n-2})^\beta n^{2(\beta-1)}$ since the number of spanning trees in the complete graph on $n$ vertices is $n^{n-2}$.
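The closed form for $\boldsymbol{C}^n_{\beta n}$ can be verified numerically against the defining eigenvalue product (an illustrative sketch, not from the paper; the closed form below applies to $\beta\geqslant3$):

```python
# Cross-check of the closed form for tau(C^n_{beta n}) against the
# matrix tree theorem's eigenvalue product, for small beta and n.
import math

def tau_eigen(beta, n):
    """Product of nonzero Laplacian eigenvalues divided by beta*n."""
    N = beta * n
    prod = 1.0
    for k in range(1, N):
        prod *= 2 * n - 2 * sum(math.cos(2 * math.pi * k * m / N)
                                for m in range(1, n + 1))
    return prod / N

def tau_closed(beta, n):
    """Theorem's closed form for beta >= 3."""
    pref = (2 ** (beta * (n + 1)) / (2 * beta) ** 2 * n ** (beta * n - 2)
            * (1 + 1 / (2 * n)) ** (beta * n)
            * (1 - (2 * n + 1) ** (-beta)) ** n)
    prod = 1.0
    for k in range(1, math.ceil(beta / 2)):
        mu = 2 - 2 * math.cos(2 * math.pi * k / beta)
        ang = (math.pi * (n + 1) * k / beta
               - n * math.asin((n + 1) / math.sqrt(4 * n ** 2 / mu + 2 * n + 1)))
        prod *= math.sin(ang) ** 2
    return pref * prod

for beta, n in ((3, 2), (4, 2), (3, 3), (5, 2)):
    print(beta, n, round(tau_eigen(beta, n)), round(tau_closed(beta, n)))
```

For example, $\beta=3$, $n=2$ gives $384$ by both routes, consistent with the $\beta$ copies of $n$-cliques noted above.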
We prove the theorem only for the first type of graphs $\boldsymbol{C}^n_{\beta n}$. The proof of the second type $\boldsymbol{C}^{n-1}_{\beta n}$ is very similar to the first one. The matrix tree theorem states that $$\tau(\boldsymbol{C}^n_{\beta n})=\frac{1}{\beta n}\prod_{k=1}^{\beta n-1}(2n-2\sum_{m=1}^n\cos(2\pi km/(\beta n))).$$ Lagrange’s trigonometric identity expresses the sum of cosines appearing in the above formula in terms of a quotient of sines: $$2\sum_{m=1}^n\cos(2\pi km/(\beta n))=\frac{\sin((n+1/2)2\pi k/(\beta n))}{\sin(\pi k/(\beta n))}-1.$$ Hence, $$\tau(\boldsymbol{C}^n_{\beta n})=\frac{1}{\beta n}\prod_{k=1}^{\beta n-1}(\sin(\pi k/(\beta n)))^{-1}((2n+1)\sin(\pi k/(\beta n))-\sin(\pi k/(\beta n)+2\pi k/\beta)).$$ Using that there are $\beta n$ spanning trees in the $\beta n$-cycle, that is $\frac{1}{\beta n}\prod_{k=1}^{\beta n-1}(2-2\cos(2\pi k/(\beta n)))=\beta n$, it follows that $$\label{taucycle}
\prod_{k=1}^{\beta n-1}\sin(\pi k/(\beta n))=\frac{\beta n}{2^{\beta n-1}}.$$ For the second factor, as in the proof of Theorem \[dicd\], we split the product over $k=1,\ldots,\beta n-1$ into two products, first when $k$ is a multiple of $\beta$, that is $k=l\beta$ with $l=1,\ldots,n-1$, and second when $k$ is not a multiple of $\beta$, that is, $k=k'+l'\beta$ with $k'=1,\ldots,\beta-1$ and $l'=0,1,\ldots,n-1$. The product over the multiples of $\beta$ reduces to $$\prod_{l=1}^{n-1}2n\sin(\pi l/n)=n^n.$$ We have $$\label{2prod}
\tau(\boldsymbol{C}^n_{\beta n})=\frac{2^{\beta n-1}n^n}{(\beta n)^2}\prod_{k=1}^{\beta-1}\prod_{l=0}^{n-1}((2n+1)\sin(\pi k/(\beta n)+\pi l/n)-\sin(\pi k/(\beta n)+\pi l/n+2\pi k/\beta)).$$ The difference of sines in the above product can be written as $$\label{sine}
(2n+1)\sin(\pi k/(\beta n)+\pi l/n)-\sin(\pi k/(\beta n)+\pi l/n+2\pi k/\beta)=\lvert z_k\rvert\sin(\pi(n+1)k/(\beta n)+\theta_k+\pi l/n)$$ where $$z_k=2n\cos(\pi k/\beta)-i(2n+2)\sin(\pi k/\beta)=\lvert z_k\rvert e^{i\theta_k}.$$ Let $\omega_k=\pi(n+1)k/(\beta n)+\theta_k$, we have $$\begin{aligned}
\prod_{l=0}^{n-1}\sin(\omega_k+\pi l/n)&=\frac{1}{(2i)^n}\prod_{l=0}^{n-1}(e^{i(\omega_k+\pi l/n)}-e^{-i(\omega_k+\pi l/n)})\nonumber\\
&=\frac{1}{(2i)^n}e^{-i\omega_kn}e^{\pi i(n-1)/2}\prod_{l=0}^{n-1}(e^{2i\omega_k}-e^{-2\pi il/n})\nonumber\\
&=\frac{\sin(\omega_kn)}{2^{n-1}}
\label{prodsines}\end{aligned}$$ where in the last equality we used equation (\[unityroots\]). Putting equations (\[2prod\]), (\[sine\]) and (\[prodsines\]) together yields $$\tau(\boldsymbol{C}^n_{\beta n})=\frac{2^{\beta n-1}n^n}{(\beta n)^2}\prod_{k=1}^{\beta-1}\frac{\lvert z_k\rvert^n}{2^{n-1}}\sin(\pi(n+1)k/\beta+n\theta_k).$$ Notice that for even $\beta$, the phase of $z_{\beta/2}$ is $\theta_{\beta/2}=-\pi/2$, so that $\sin(\pi(n+1)/2+n\theta_{\beta/2})=1$. For $\beta=2$, $z_1=-2(n+1)i$, hence $$\tau(\boldsymbol{C}^n_{2n})=(2n)^{2n-2}(1+1/n)^n.$$ For $\beta\geqslant3$, we have $$\tau(\boldsymbol{C}^n_{\beta n})=\frac{2^{n+\beta-2}n^n}{(\beta n)^2}\big(\prod_{k=1}^{\beta-1}\lvert z_k\rvert^n\big)\prod_{k=1}^{\lceil\beta/2\rceil-1}\sin(\pi(n+1)k/\beta+n\theta_k)\sin(\pi(n+1)(\beta-k)/\beta+n\theta_{\beta-k}).$$ For $1\leqslant k\leqslant\lceil\beta/2\rceil-1$, the phase of $z_k$ is $\theta_k=-\operatorname{Arcsin}((2n+2)\sin(\pi k/\beta)/\lvert z_k\rvert)$. The phase of $z_{\beta-k}$ satisfies $$\cos\theta_{\beta-k}=-\cos\theta_k,\quad\sin\theta_{\beta-k}=\sin\theta_k$$ so that, $\theta_{\beta-k}=\pi-\theta_k$. The modulus of $z_k$ is given by $$\lvert z_k\rvert=((2n+1)^2+1-2(2n+1)\cos(2\pi k/\beta))^{1/2}=(4n^2+(2n+1)\mu_k)^{1/2}$$ where $\mu_k=2-2\cos(2\pi k/\beta)$, $k=1,\ldots,\beta-1$, are the non-zero eigenvalues of the Laplacian on the $\beta$-cycle. We have $\sin(\pi k/\beta)=\mu_k^{1/2}/2$. Hence for $1\leqslant k\leqslant\lceil\beta/2\rceil-1$, the phase is given by $\theta_k=-\operatorname{Arcsin}((n+1)/\sqrt{4n^2/\mu_k+2n+1})$. Therefore $$\label{taucirc}
\tau(\boldsymbol{C}^n_{\beta n})=\frac{2^{n+\beta-2}n^n}{(\beta n)^2}\big(\prod_{k=1}^{\beta-1}\lvert z_k\rvert^n\big)\prod_{k=1}^{\lceil\beta/2\rceil-1}\sin^2\left(\frac{\pi(n+1)k}{\beta}-n\operatorname{Arcsin}\left(\frac{(n+1)}{\sqrt{4n^2/\mu_k+2n+1}}\right)\right).$$ The product of the modulus of $z_k$ is given by $$\begin{aligned}
\prod_{k=1}^{\beta-1}\lvert z_k\rvert&=\frac{(2n+1)^{\beta/2}}{2n}\prod_{k=0}^{\beta-1}(2n+1+1/(2n+1)-2\cos(2\pi k/\beta))^{1/2}\nonumber\\
&=\frac{(2n+1)^{\beta/2}}{2n}(2\cosh(\beta\operatorname{Argcosh}(n+1/2+1/(4n+2)))-2)^{1/2}\nonumber\\
&=\frac{(2n+1)^{\beta}}{2n}(1-(2n+1)^{-\beta})
\label{prod_modulus}\end{aligned}$$ where the second equality comes from the identity (see [@louis2015formula section $2$]) $$\prod_{k=0}^{\beta-1}(2\cosh\theta-2\cos(2\pi k/\beta))=2\cosh(\beta\theta)-2.$$ Putting equality (\[prod\_modulus\]) into (\[taucirc\]) gives the theorem.
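The hyperbolic product identity used in the last step is easy to confirm numerically (sketch, not from the paper):

```python
# Check prod_{k=0}^{beta-1} (2 cosh t - 2 cos(2 pi k / beta)) = 2 cosh(beta t) - 2
# on a small grid of beta and t.
import math

def check(beta, t):
    lhs = 1.0
    for k in range(beta):
        lhs *= 2 * math.cosh(t) - 2 * math.cos(2 * math.pi * k / beta)
    rhs = 2 * math.cosh(beta * t) - 2
    return abs(lhs - rhs) <= 1e-9 * abs(rhs)

assert all(check(beta, t) for beta in (2, 3, 5, 8) for t in (0.3, 1.0, 2.5))
print("identity holds on the sampled grid")
```

For $\beta=2$ the identity reduces to $(2\cosh t-2)(2\cosh t+2)=4\sinh^2 t=2\cosh(2t)-2$, which can also be seen directly.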
We point out that the proof above could not be easily applied to other powers of the $\beta n$-cycle, like $\boldsymbol{C}^{n-p}_{\beta n}$, where $p\geqslant2$ or $p\leqslant-1$, because in this case $z_k$ defined in equation (\[sine\]) would also depend on $l$ and the phase $\theta_k$ of $z_k$ cannot be easily determined. As a consequence, the product over $l$ cannot be evaluated in the same way as it is done in the proof. It would be interesting to find a derivation for this class of more general circulant graphs.
From Theorem \[circ\], we derive the asymptotic behaviour of the number of spanning trees in the $n$-th, respectively $(n-1)$-th, power graph of the $\beta n$-cycle as $n\rightarrow\infty$.
Let $\beta\in\mathbb{N}_{\geqslant2}$. The asymptotic number of spanning trees in the $n$-th and $(n-1)$-th power graphs of the $\beta n$-cycle $\boldsymbol{C}^n_{\beta n}$ and $\boldsymbol{C}^{n-1}_{\beta n}$ as $n\rightarrow\infty$ is respectively given by $$\tau(\boldsymbol{C}^n_{\beta n})=\frac{2^{\beta n}}{2\beta}n^{\beta n-2}(e^{\beta/2}+o(1))$$ and $$\tau(\boldsymbol{C}^{n-1}_{\beta n})=\frac{2^{\beta n}}{2\beta}n^{\beta n-2}(e^{-\beta/2}+o(1)).$$
Observing that for all $k\in\{1,\ldots,\lceil\beta/2\rceil-1\}$, $$\lim_{n\rightarrow\infty}\frac{n+1}{\sqrt{4n^2/\mu_k+2n+1}}=\sin(\pi k/\beta)\quad\textnormal{and}\quad\lim_{n\rightarrow\infty}\frac{n-1}{\sqrt{4n^2/\mu_k-(2n-1)}}=\sin(\pi k/\beta)$$ where $\mu_k=2-2\cos(2\pi k/\beta)$, and using relation (\[taucycle\]), we obtain the corollary as a direct consequence of Theorem \[circ\].
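Both limits can be confirmed numerically; the sketch below (not from the paper) also exhibits the $O(1/n)$ convergence rate:

```python
# Numerical check that both arcsin arguments tend to sin(pi k / beta).
import math

def arg_plus(beta, k, n):
    mu = 2 - 2 * math.cos(2 * math.pi * k / beta)
    return (n + 1) / math.sqrt(4 * n ** 2 / mu + 2 * n + 1)

def arg_minus(beta, k, n):
    mu = 2 - 2 * math.cos(2 * math.pi * k / beta)
    return (n - 1) / math.sqrt(4 * n ** 2 / mu - (2 * n - 1))

beta = 5
for k in range(1, math.ceil(beta / 2)):
    s = math.sin(math.pi * k / beta)
    for n in (10, 100, 1000):
        # empirical O(1/n) bound on the sampled grid
        assert abs(arg_plus(beta, k, n) - s) < 2.0 / n
        assert abs(arg_minus(beta, k, n) - s) < 2.0 / n
print("both limits confirmed on a sample grid")
```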
[^1]: The author acknowledges support from the Swiss NSF grant $200021\_132528/1$.
---
author:
- 'S. Andreon'
date: 'Received –, 2012; accepted –, 2012'
title: 'Observational evidence that massive cluster galaxies were forming stars at $z\sim2.5$ and did not grow in mass at later times'
---
Introduction
============
The galaxy mass–assembly history can be reconstructed via the infrared luminosity function (LF), which is sensitive to the growth of the stellar mass of galaxies as a function of time. The available data, almost entirely at $z<1.2$, are usually interpreted as evidence that most of the bright galaxies must have also been largely assembled by this redshift (De Propris et al. 1999; Andreon 2006; De Propris et al. 2007; Muzzin et al. 2008; Andreon et al. 2009). Andreon (2006) was the first work to sample the high redshift range well (six clusters above $z=0.99$), and it excluded several mass growth models, in particular a twofold increase in mass over an 8 Gyr period. While consistent with a scenario where galaxies in clusters are fully assembled at high redshift, later works (e.g. De Propris et al. 2007; Muzzin et al. 2008; Strazzullo et al. 2010) are unable to give a more stringent constraint, mainly because a large redshift baseline and precise $m^*$ are needed to distinguish scenarios, whereas very few clusters at high redshift were known, and almost none was present in the studied samples.
At variance with these works, Mancone et al. (2010) studied candidate clusters, i.e. cluster detections, and claim to have finally reached the epoch of galaxy mass assembly. This epoch is at $z\approx1.3$, because $m^*$ is much fainter at $z\ga1.3$ than it should be for galaxies if there is no mass growth. We are, according to these authors, seeing the rapid mass assembly of cluster galaxies at $z\approx1.3$. However, this result is in tension with the K-band LF of the $z=1.39$ cluster 1WGAJ2235.3-2557 (Strazzullo et al. 2010), with the existence of a developed and tight red sequence at even higher redshifts, as high as $z=1.8$ (Andreon & Huertas-Company 2011, Andreon 2011), and with a very early quenching of the cluster populations (Raichoor & Andreon 2012b). All this evidence implies a peaceful evolution for cluster galaxies up to $z=1.8$.
Since a number of “bona fide" clusters at high redshift have been discovered in the past few years, and deep Spitzer observations are available for many of them, we decided to measure the galaxy mass function of $z>1.4$ clusters.
Throughout this paper, we assume $\Omega_M=0.3$, $\Omega_\Lambda=0.7$, and $H_0=70$ km s$^{-1}$ Mpc$^{-1}$. Magnitudes are quoted in their native system (Vega for I and \[3.6\] bands, AB for the z’ band). Unless otherwise stated, results of the statistical computations are quoted in the form $x\pm y$, where $x$ is the posterior mean and $y$ the posterior standard deviation.
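For reference, the adopted cosmology fixes the distances used throughout (e.g. distance moduli and the physical scale of apertures); a minimal numerical sketch of the standard flat $\Lambda$CDM relations (our code, not the paper's) is:

```python
import math

def comoving_distance_mpc(z, omega_m=0.3, omega_l=0.7, h0=70.0, n=10000):
    """Comoving distance for the flat LCDM cosmology adopted in the paper,
    via trapezoidal integration of (c/H0) * integral dz'/E(z')."""
    c = 299792.458  # speed of light, km/s
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(omega_m * (1.0 + zi) ** 3 + omega_l)
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        integral += w / e
    return (c / h0) * integral * dz

def distance_modulus(z):
    """m - M for an object at redshift z."""
    d_l = (1.0 + z) * comoving_distance_mpc(z)   # luminosity distance, Mpc
    return 5.0 * math.log10(d_l * 1e6 / 10.0)

def kpc_per_arcmin(z):
    """Physical scale subtended by 1 arcmin at redshift z."""
    d_a = comoving_distance_mpc(z) / (1.0 + z)   # angular-diameter distance, Mpc
    return d_a * 1e3 * (math.pi / (180.0 * 60.0))
```

At $z=1.5$ this gives a distance modulus of about 45.2 mag and about 0.5 Mpc per arcmin, which is why the 1 arcmin apertures used later correspond to roughly 500 kpc.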
The sample and the data
=======================
Sample
------
In this work we studied the luminosity and mass function of galaxies in $z>1.4$ clusters with a firm detection of the intracluster medium (ICM). We adopted this criterion to avoid the risk of including in the cluster sample other high-redshift structures that have been named “clusters", but whose nature is uncertain, or simply different (e.g. a proto-cluster). Since we are interested in a clean sample of secure clusters, we applied a severe screening to the list of objects generically called clusters in the literature (see Sect. 3.3 for discussion), only keeping those with a firm Chandra detection of the intracluster medium spatially coincident with a galaxy overdensity. The latter is an unambiguous signature of a deep potential well. By this choice, our cluster sample is incomplete. However, our sample would also be incomplete if every structure called a cluster were included, because what is available today at very high redshift is a collection of objects, not a complete sample.
The adoption of this criterion leads to five $z>1.4$ clusters, listed in Table 1, out of the many “cluster" detections in the literature. Our list of $z>1.4$ clusters does not include the $z\sim1.5$ cluster by Tozzi et al. (2012), because the available Spitzer data sample only part of the cluster and are contaminated by an angularly nearby rich $z=1$ cluster[^1]. Our list does not include the $z=1.62$ structure claimed by Tanaka et al. (2012) to have an ICM detection, because our re-analysis of the deep Chandra data available does not confirm their ICM detection after flagging 4 arcsec aperture regions centered on point sources.
The redshift range of bona fide $z>1.4$ clusters goes from $z=1.41$ to $z=1.8$, as detailed in Table 1. All clusters but one have a redshift known with two-digit (at least) precision. Instead, JKCS041 has a $\pm0.1$ redshift uncertainty, which is a negligible source of uncertainty in this work because $m^*$ changes negligibly with redshift (i.e. $\partial m^* / \partial z$ is negligible) at high redshift. In passing, we note that the presence of a two-digit (spectroscopic) redshift does not necessarily indicate a more precise redshift, as shown by the Gobat et al. (2011) group, whose initial spectroscopic redshift has been lowered by $\delta z=0.08$ (Gobat 2012[^2]), or by MS1241.5+1710, whose initial spectroscopic redshift has been increased by $\delta z=0.237$ (Henry 2000).
All five clusters have Spitzer observations. Table 1 lists the object ID (Column 1), the cluster redshift (Column 2), and the Spitzer exposure time per pixel (Column 3).
  -------------------- ------ -----------------
  ID                   $z$    $t_{exp}$ \[s\]
  -------------------- ------ -----------------
  JKCS041              1.8    1200
  IDCSJ1426.5+3508     1.75   420
  ISCSJ1432.4+3250     1.49   1000
  XMMXCSJ2215.9-1738   1.45   1500
  ISCSJ1438.1+3414     1.41   1000
  -------------------- ------ -----------------

  : The cluster sample.
The data and analysis
---------------------
The basic data used in our analysis are the standard pipeline pBCD (post Basic Calibrated Data) products delivered by the Spitzer Science Center (SSC). These data include flat-field corrections, dark subtraction, linearity corrections, and flux calibration. Additional steps included pointing refinement, distortion correction, and mosaicking. Cosmic rays were rejected during mosaicking by sigma-clipping. The pBCD products do not merge observations taken in different astronomical observation requests (AORs). AORs are therefore mosaicked together using SWARP (Bertin, unpublished), making use of the weight maps. Images of IDCSJ1426.5+3508 were already reduced and distributed by Ashby et al. (2009), so we used them. For the two other clusters at right ascension about $14^h$ we instead use the deeper data available in the archive.
Sources were detected using SExtractor (Bertin & Arnouts 1996), making use of weight maps. Star/galaxy separation was performed using the stellarity index provided by SExtractor. We conservatively adopted a high posterior threshold, rejecting “sure stars" only ($class_{star}>0.95$), in order not to reject galaxies (by unduly putting them in the star class), at the price of leaving some residual stellar contamination in the sample. This contamination is later dealt with statistically, together with background and foreground galaxies on the cluster line-of-sight. The use of a high posterior threshold is very different from using a concentration index to select galaxies, because the latter, at low signal-to-noise or for low-resolution observations, is susceptible to (mis)classifying galaxies as stars. Objects brighter than 12.5 mag are often saturated and were therefore removed from the sample. No cluster galaxy is that bright.
As pointed out by Ashby et al. (2009), images are already moderately crowded with a 420 s exposure, and this makes the catalog completeness limit much brighter than the limiting depth (see also Mauduit et al. 2012). Because crowding is greater in cluster regions, incompleteness is more severe along these lines-of-sight. For example, a circle of 1 arcmin radius centered on IDCSJ1426.5+3508 is 50 % overdense compared to the average line-of-sight. The overdensity is higher for richer clusters. Completeness at \[3.6\]=19.5 mag is 50 % in the general field around IDCSJ1426.5+3508 (in agreement with Ashby et al. 2009), but only a few percentage points in the cluster (overdense) direction. If neglected, the differential (crowding) correction biases the LF parameters (it makes the cluster LF flatter than it actually is, and, via the parameter covariance, biases the characteristic magnitude too). Furthermore, very low values of completeness do not allow reliable LFs to be derived, even when crowding is accounted for by following, for example, Andreon (2001). To avoid the danger associated with an important (cluster-, radial-, and magnitude-dependent) crowding correction, we adopted quite bright magnitude limits, corresponding to 90 % completeness in average lines-of-sight, and we ignored fainter galaxies, as, e.g., in De Propris et al. (2007). The 90 % completeness is estimated by comparing galaxy counts in non-cluster lines-of-sight to crowding-corrected galaxy counts taken from Barmby et al. (2008). We checked with the Ashby et al. (2009) data that this procedure returns a completeness vs magnitude estimate that is very close to the one derived by Ashby et al. (2009) from the recovery rate of simulated point sources added to the images. Our threshold magnitudes are very conservative; for example, Mancone et al. (2010, 2012) used 50 % completeness magnitudes in control-field directions, which are about 2 mag fainter than those we would have chosen.
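The completeness estimate described above (ratio of observed counts to crowding-corrected reference counts, thresholded at 90 %) can be sketched as follows; the binning, reference counts, and helper names are illustrative, not from the paper:

```python
import numpy as np

def completeness_vs_mag(mags_observed, area_obs, ref_counts_per_mag, mag_bins):
    """Completeness per magnitude bin, estimated as the ratio of observed
    number counts to reference (crowding-corrected) counts.
    ref_counts_per_mag: expected counts per bin per unit area, e.g. scaled
    from Barmby et al. (2008)-style field counts (values are illustrative)."""
    obs, _ = np.histogram(mags_observed, bins=mag_bins)
    expected = np.asarray(ref_counts_per_mag, dtype=float) * area_obs
    return np.where(expected > 0, obs / expected, 0.0)

def threshold_magnitude(mag_bins, completeness, min_completeness=0.9):
    """Faintest bin center such that this bin and all brighter ones stay
    above min_completeness (the paper adopts 90 %)."""
    bins = np.asarray(mag_bins, dtype=float)
    centers = 0.5 * (bins[:-1] + bins[1:])
    for i, c in enumerate(completeness):
        if c < min_completeness:
            return centers[i - 1] if i > 0 else None
    return centers[-1]
```

With such a threshold only the bright, essentially complete part of the counts enters the LF fit, so no magnitude-dependent crowding correction is needed.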
To compute the luminosity function (LF), we adopted a Bayesian approach, as done for other clusters (e.g. Andreon 2002, 2006, 2008, 2010; Andreon et al. 2006, 2008a; Meyers et al. 2012). We accounted for the background (galaxies in the cluster line-of-sight but not in the cluster), estimated in adjacent lines-of-sight. We adopted a Schechter (1976) luminosity function for cluster galaxies and a third-order power law for the background distribution. The likelihood expression was taken from Andreon, Punzi & Grado (2005), which is an extension of the Sandage, Tammann & Yahil (1979) likelihood expression to the case where a background is present. We adopted uniform priors for all parameters, except for the faint-end slope $\alpha$, taken to be equal to $-1$ for comparison with lower redshift LF determinations. In particular, we took a uniform prior on $m^*$ between 12.5 mag and our adopted (quite bright, indeed) limiting mag. The Bayesian approach allowed us to easily propagate all uncertainties and their covariance into the luminosity function parameters and derived quantities. As a visual check, we also computed the luminosity function by binning galaxies in magnitude bins (e.g. Zwicky 1957, Oemler 1974, and many papers since then). These LFs are plotted in figures 1 to 5 as points with errors. Our fit instead used unbinned galaxy counts. For all clusters we adopted an aperture of 1 arcmin radius (about 500 kpc).
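As an illustration of the fitting machinery, here is a minimal sketch of an extended unbinned likelihood for a Schechter component on top of a background. For simplicity the background is flat rather than the third-order power law of the paper, $\alpha$ is fixed to $-1$, and the normalizations are assumed known; all names and values are ours:

```python
import numpy as np

def schechter_mag(m, m_star, alpha=-1.0):
    # Schechter (1976) LF written in magnitudes (unnormalized shape)
    x = 10.0 ** (-0.4 * (m - m_star))
    return x ** (alpha + 1.0) * np.exp(-x)

def neg_log_like(mags, m_star, n_cl, bkg_per_mag,
                 m_lo=12.5, m_hi=18.0, alpha=-1.0):
    """Extended unbinned (Poisson) negative log-likelihood for a
    cluster (Schechter) plus background, in the spirit of Sandage,
    Tammann & Yahil (1979) as extended by Andreon, Punzi & Grado (2005).
    Flat background is a simplification of the paper's model."""
    grid = np.linspace(m_lo, m_hi, 2001)
    shape = schechter_mag(grid, m_star, alpha)
    # trapezoidal normalization of the Schechter shape over [m_lo, m_hi]
    norm = np.sum(0.5 * (shape[1:] + shape[:-1]) * np.diff(grid))
    rate = n_cl * schechter_mag(mags, m_star, alpha) / norm + bkg_per_mag
    expected = n_cl + bkg_per_mag * (m_hi - m_lo)
    return expected - np.sum(np.log(rate))

def fit_m_star(mags, n_cl, bkg_per_mag, grid=np.arange(15.0, 18.0, 0.02)):
    # simple grid search over the characteristic magnitude
    nll = [neg_log_like(mags, ms, n_cl, bkg_per_mag) for ms in grid]
    return grid[int(np.argmin(nll))]
```

The marginalization over the other parameters and the third-order background of the actual analysis are omitted here; the sketch only shows why individual (unbinned) magnitudes, rather than binned counts, drive the fit.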
To limit the contamination by galaxies in the cluster foreground, we removed from the sample (with one exception, discussed in the next section) all galaxies that are too blue to be at the cluster redshift, as in Andreon et al. (2004) and other works (e.g. Mancone et al. 2012). We anticipate that this choice has no impact on the results. For rich clusters, we also computed the LF of galaxies of all colors to check our assumption. To estimate the bluest acceptable color a cluster galaxy may plausibly have, we computed the $optical-[3.6]$ color (in the various adopted photometric bands) of an exponentially increasing star formation history (SFH) model, adopting the SFH of the template named Sc in Grasil. In this way most of the stars at the cluster redshift are newborn. We adopted a formation redshift $z=3$ (i.e. the template has very young stellar populations, less than 0.8 to 2.3 Gyr old) and a Salpeter initial mass function with lower/upper limits fixed to 0.15/120 $M_\odot$. Grasil (Silva et al. 1998; Panuzzo et al. 2005) is a code to compute the spectral evolution of stellar systems taking into account the effects of dust, which absorbs and scatters optical and UV photons and emits in the IR-submm region.
Optical magnitudes are derived from two sources: for JKCS041 and XMMXCSJ2215.9-1738 we used CFHTLS Deep z’ bands (from K-band detected WIRDS catalogs, Bielby et al. 2012), while for clusters at right ascension $\sim14^h$ we used NOAO Deep I band catalogs (Jannuzi & Dey 1999). The data are of adequate depth for our purposes: we have one, or at most two, galaxies detected at \[3.6\] and undetected in the optical band (because of our adoption of a bright magnitude cut in the \[3.6\] band). For these optically undetected galaxies, the non-detection makes their color redder than the blue template, and therefore these sources are kept in the sample.
Finally, by integrating the LF we derived the number of cluster galaxies brighter than two limiting magnitudes: a) 18.0 mag, the brightest among all chosen threshold magnitude values, which allows us to properly compare cluster richnesses if $m^*$ does not evolve in the studied redshift range; b) our bright limiting magnitude, which gives the number of cluster galaxies actually fitted. We emphasize that the richnesses computed above account for the existence of background galaxies, for errors on, and fluctuations of, the background counts, as well as for uncertainties derived from having sampled a finite, usually small, number of cluster galaxies.
Results
=======
Results for individual clusters
-------------------------------
### JKCS041 ($z\sim1.8$)
JKCS041 (Andreon et al. 2009) stands out quite clearly as a remarkable galaxy overdensity in the left-hand panel of Figure \[fig:JKCS041\]. It is the most distant cluster in our sample and also one of the most studied, thanks to deep data at various wavelengths. It has been studied in the context of the SZ scaling relations (Culverhouse et al. 2010), and it has been used to measure the evolution of the $L_X-T$ scaling relation (Andreon, Trinchieri, & Pizzolato 2011). The JKCS041 color-magnitude relation has been measured in Andreon & Huertas-Company (2011), whereas the age spread of galaxies on the red sequence is studied in Andreon (2011b). The relation between star formation and environment is determined in Raichoor & Andreon (2012a), and the evolution of the quenching rate with redshift (previously known as the Butcher-Oemler effect) in Raichoor & Andreon (2012b).
Because of the presence of a group in the cluster southeastern outskirts (Andreon & Huertas-Company 2011), in our analysis we exclude the entire southeastern quadrant of JKCS041[^3]. The red-sequence population also shows up clearly in the z’-\[3.6\] color (see the central panel of Fig. \[fig:JKCS041\] and compare with the other clusters). The color distribution seems to indicate the presence of a blue population at z’-\[3.6\] $\approx 5$ mag, i.e. redder than the blue spectrophotometric template (vertical arrow), already pointed out using other filters in Raichoor & Andreon (2012a). There is no evidence of an excess of galaxies bluer than the blue template (i.e. to the left of the vertical arrow in the central panel of Figure \[fig:JKCS041\]).
The luminosity function of the $18\pm 5$ galaxies brighter than $18.3$ mag in the three quadrants is shown in the top right-hand panel of Fig. \[fig:JKCS041\]. The characteristic magnitude is $17.0\pm0.5$ mag (see the bottom right-hand panel for its probability distribution). The cluster has a richness of $19\pm6$ galaxies, after accounting for the unused southeastern quadrant.
The luminosity function of red–sequence (i.e. $6.5<z'-[3.6]<7.5$ mag) galaxies has an identical characteristic magnitude, $16.9\pm0.6$ mag, as expected because most JKCS041 galaxies are on the red sequence.
### IDCSJ1426.5+3508 ($z=1.75$)
As shown in the left-hand panel of Figure \[fig:IDCS\], IDCSJ1426.5+3508 has a dense core of galaxies, whose clean detection obliged us to switch off image filtering when detecting its galaxies with SExtractor. The cluster has been detected, as JKCS041 was, as a galaxy overdensity (Stanford et al. 2011). Shallow Chandra data (Stanford et al. 2011) allow the ICM to be detected, but not characterized (e.g. its temperature cannot be estimated). The ICM is also detected via the Sunyaev-Zeldovich decrement by Brodwin et al. (2012). The cluster displays a red sequence (Stanford et al. 2011), not visible in the central panel of Figure \[fig:IDCS\], but which shows up at $I-[3.6]\sim6$ mag when considering fainter galaxies (our catalog is incomplete at these faint magnitudes, however). Uniquely among the five clusters considered in this paper, IDCSJ1426.5+3508 shows the possible presence of galaxies bluer than the blue spectrophotometric template (central panel). For this reason, the LF computation of IDCSJ1426.5+3508 uses galaxies of all colors.
The luminosity function of the $15\pm 6$ galaxies brighter than $18.0$ mag is shown in the top right-hand panel. We find a characteristic magnitude of $17.3\pm0.4$ mag, but we note that the error amplitude depends on the adopted prior (bottom-right panel): the data allow fainter values of $m^*$, although with low probability, but the prior ($m^*<18.0$ mag) discards them. This situation occurs because the data used are too shallow to bound the faint end of the $m^*$ probability distribution of this cluster. Indeed, the IDCSJ1426.5+3508 data are the shallowest in our sample (see exposure times in Table 1).
IDCSJ1426.5+3508 and JKCS041 have comparable richnesses ($15\pm 6$ vs $19\pm6$ galaxies), although with different color distributions (JKCS041 has a larger fraction of red galaxies).
### ISCSJ1432.4+3250 ($z=1.49$)
ISCSJ1432.4+3250 (left–hand panel of Figure \[fig:z149\]) has, similar to the previous two clusters, been detected as a galaxy overdensity (Brodwin et al. 2011). In terms of spatial distribution, it does not have a compact core of galaxies as IDCSJ1426.5+3508 has. Shallow Chandra data, presented in Brodwin et al. (2011), allow the detection of the ICM, but not its characterization. The cluster red sequence is studied in Snyder et al. (2012) and also shows up at $I-[3.6]\sim6$ mag (see the central panel of Figure \[fig:z149\]). Similar to JKCS041 and unlike IDCSJ1426.5+3508, there is no evidence of a galaxy population bluer than the blue spectrophotometric template (see the central panel).
The luminosity function of the $21\pm 6$ galaxies brighter than $18.15$ mag is shown in the top right–hand panel. We find a characteristic magnitude of $17.4\pm0.4$ mag, but we notice that the error amplitude depends on the adopted prior (bottom-right panel), as for IDCSJ1426.5+3508.
ISCSJ1432.4+3250 has a richness ($17\pm 5$) comparable to the two clusters at higher redshift.
### XMMXCSJ2215.9-1738 ($z=1.45$)
XMMXCSJ2215.9-1738 (left panel of Figure \[fig:cl2215\]) is the most distant X-ray selected cluster (Stanford et al. 2006). The cluster, initially discovered with XMM-Newton, has been re-observed with Chandra (Hilton et al. 2010), and the original XMM detection was found to be heavily contaminated by point sources (already suspected in XMM discovery data by Stanford et al. 2006). Although X-ray selected, hence likely overbright for its mass and temperature (Andreon, Trinchieri, & Pizzolato 2011), the cluster turns out to be underluminous for its temperature assuming a self-similar evolution (Hilton et al. 2010, Andreon, Trinchieri, & Pizzolato 2011). Its color-magnitude relation is studied in Hilton et al. (2009) and Meyers et al. (2012).
The cluster’s red sequence shows up at $z'-[3.6]\sim6.5$ mag (see the central panel of figure \[fig:cl2215\]). Unlike IDCSJ1426.5+3508, there is no evidence of a galaxy population bluer than the blue spectrophotometric template (see the central panel). The luminosity function of the $49\pm 8$ galaxies brighter than $18.5$ mag is shown in the top right-hand panel. We find a characteristic magnitude of $16.6\pm0.3$ mag (see the bottom right-hand panel for its probability distribution). XMMXCSJ2215.9-1738 is richer than the clusters at higher redshift ($37\pm 6$ vs 15 to 20 galaxies).
Richnesses and characteristic magnitude of the luminosity function derived removing the color selection are indistinguishable from those just derived (see Table 2) because of the large dominance of red galaxies.
This cluster, at the spectroscopic redshift $z=1.45$, has a red sequence 0.5 mag bluer than JKCS041, independently confirming that JKCS041 has $z>1.45$ and that therefore JKCS041 has to be kept in the sample of $z>1.4$ clusters[^4]. We emphasize that this color comparison uses homogeneous photometry that is uniformly reduced ($z'$ band by Bielby et al. 2010, this work for \[3.6\] band).
### ISCSJ1438.1+3414 ($z=1.41$)
ISCSJ1438.1+3414 (left-hand panel of Figure \[fig:z141\]) has been detected as a galaxy overdensity (Stanford et al. 2005). Deep Chandra data allowed the ICM to be characterized and the evolution of the $L_X-T$ scaling relation to be measured (Andreon, Trinchieri, & Pizzolato 2011). Its red sequence has been studied in Meyers et al. (2012). The cluster red sequence stands out (see the central panel of Figure \[fig:z141\]), although it seems quite broad. However, to accurately measure the width of the red sequence, a more tailored color measurement is needed. Unlike IDCSJ1426.5+3508, there is no evidence of a galaxy population bluer than the blue template (see the central panel).
The luminosity function of the $47\pm 8$ galaxies brighter than $18.15$ mag is shown in the top right–hand panel. We find a characteristic magnitude of $16.8\pm0.3$ mag. The cluster is quite rich, $42\pm7$ galaxies brighter than 18 mag, as rich as XMMXCSJ2215.9-1738, and much richer than the clusters at higher redshift. Richnesses and characteristic magnitude of the luminosity function derived by removing the color selection are indistinguishable from those derived above (see Table 2) because of the strong dominance of red galaxies.
-------------------- -------------- ------------ --------- ------------------ -------------------------
ID $m^*$ $n(<18.0)$ ref mag $n(<$ ref mag$)$ Notes
(1) (2) (3) (4) (5)
JKCS041 $17.0\pm0.5$ $19\pm6$ 18.30 $18\pm5$ $z'-[3.6]> 4.3$ mag
IDCSJ1426.5+3508 $17.3\pm0.4$ $15\pm6$ 18.00 $15\pm6$ galaxies of all colors
ISCSJ1432.4+3250 $17.4\pm0.4$ $17\pm5$ 18.15 $21\pm6$ $I-[3.6]> 3.8$ mag
XMMXCSJ2215.9-1738 $16.6\pm0.3$ $37\pm6$ 18.50 $52\pm8$ $z'-[3.6]> 3.8$ mag
ISCSJ1438.1+3414 $16.8\pm0.3$ $42\pm7$ 18.15 $47\pm8$ $I-[3.6]> 3.8$ mag
Other fits
JKCS041 $16.9\pm0.6$ $13\pm4$ 18.30 $12\pm4$ $6.5<z'-[3.6]< 7.5$ mag
XMMXCSJ2215.9-1738 $16.5\pm0.3$ $38\pm6$ 18.50 $52\pm8$ galaxies of all colors
ISCSJ1438.1+3414 $16.8\pm0.3$ $43\pm8$ 18.15 $47\pm9$ galaxies of all colors
-------------------- -------------- ------------ --------- ------------------ -------------------------
The richness in col. (5) for JKCS041 refers to a measurement in three quarters of the cluster area; see text.
Collective analysis
-------------------
Figure \[fig:mstarz\] shows the derived $m^*$ values of the five $z>1.4$ clusters, as well as previous determinations from the literature in the \[3.6\] band (Andreon 2006; Muzzin et al. 2008; Mancone et al. 2010, 2012; Stalder et al. 2012), corrected, in the case of Muzzin et al. (2008), for the different faint-end slope adopted. Andreon (2006), Muzzin et al. (2008), and Mancone et al. (2010) fitted, as we also do, an LF with a fixed slope. The plotted errors are heterogeneous and not easily comparable, because they do not include the same terms in the error budget. In particular, the Mancone et al. (2010) results are based on a few spectroscopically confirmed clusters and many putative cluster detections, including false ones, as mentioned by the authors. At $z<1.2$, Muzzin et al. (2008) is consistent with the more precise determination of Andreon (2006), whereas the Mancone et al. (2010) $m^*$ values display a slower change with redshift than the other data. The difference is statistically significant if one assumes that the Mancone et al. (2010) errors are correctly estimated.
The five $z>1.4$ clusters studied in this paper (black open dots) have $m^*$ values that are consistent among themselves. We therefore combined the data of the five clusters by multiplying their likelihoods after tying the characteristic magnitude parameters. We find $m^* = 16.92\pm0.13$ at $z=1.5$, the median redshift of the five clusters (solid dot), based on the 150 member galaxies of the combined sample. This value is 0.8 mag brighter than the values measured by Mancone et al. (2010) in the same redshift range. The difference is statistically significant if one assumes that the Mancone et al. (2010) errors are correctly estimated. As detailed in the technical appendix, we identify the likely reason for the disagreement (Mancone et al. 2010 did not adopt the likelihood expression appropriate for the data used), and we also exclude the possibility that the difference is due to an intrinsic variance of $M^*$.
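As a quick cross-check of the combined fit (which multiplies the full likelihoods), one can take the inverse-variance weighted mean of the five individual $m^*$ values of Table 2, i.e. a Gaussian approximation to the same operation:

```python
# Inverse-variance weighted mean of the five individual m* values
# (a Gaussian approximation to the joint-likelihood fit of the paper).
m_star = [17.0, 17.3, 17.4, 16.6, 16.8]   # Table 2, column (2)
sigma = [0.5, 0.4, 0.4, 0.3, 0.3]         # quoted uncertainties

weights = [1.0 / s ** 2 for s in sigma]
mean = sum(w * m for w, m in zip(weights, m_star)) / sum(weights)
err = (1.0 / sum(weights)) ** 0.5
print(f"m* = {mean:.2f} +/- {err:.2f}")   # close to the 16.92 +/- 0.13 of the full fit
```

The agreement is expected because the individual posteriors are nearly Gaussian; the full likelihood product remains the statistically correct combination.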
We emphasize that in this combined fit, the precise redshift of the five clusters is irrelevant, provided they have $z>1.4$. Therefore, the lack of a spectroscopic redshift for JKCS041 (or any other cluster) is not detrimental for the collective $m^*$ determination (or for individual $m^*$, as it is self–evident). We also emphasize that the highest redshift cluster, JKCS041, only contributes 10 % of the total number of galaxies, and that an indistinguishable result would therefore be found by dropping it from the sample.
Figure \[fig:lfall\] plots the galaxy mass function of the combined five clusters. Mass is defined as the integral of the star formation rate, and it is derived from the \[3.6\] luminosity assuming a single stellar population (SSP) formed at $z_f=2.5$, modeled by the 2007 version of Bruzual & Charlot (2003) spectrophotometric population synthesis code with solar metallicity and a Chabrier (2003) initial mass function. Based on the 150 member galaxies of the combined sample, the mass function is determined with a 40 % error in the $10.5<lgM<12$ M$_\odot$ range. The characteristic mass is $lgM^*=11.30\pm 0.05$ M$_\odot$, where the error does not account for the systematic errors coming from the conversion from luminosity to mass (e.g. about the stellar initial mass function shape).
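Since the mass function is the LF relabeled through a fixed mass-to-light ratio, apparent magnitude maps linearly onto log mass. A sketch of this conversion, anchored on the paper's own pair of values ($m^*=16.92$ corresponding to $lgM^*=11.30$) and assuming the same fixed mass-to-light ratio for all galaxies:

```python
def lg_mass(mag, m_star=16.92, lg_m_star=11.30):
    """Map [3.6] apparent magnitude to lg stellar mass at z ~ 1.5,
    assuming a fixed mass-to-light ratio (the paper's z_f = 2.5 SSP);
    anchored to m* = 16.92 <-> lg M* = 11.30 quoted in the text."""
    return lg_m_star + 0.4 * (m_star - mag)
```

For instance, an 18.0 mag limit corresponds to $lgM\approx10.9$, consistent with the quoted $10.5<lgM<12$ range; the zero point inherits all systematic uncertainties of the SSP assumption (initial mass function, $z_f$, metallicity).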
Figure \[fig:mstarz\_mod\] summarizes the observational data by keeping $m^*$ determinations limited to confirmed clusters (Muzzin et al. 2008 and Mancone et al. 2010 studied a mix of real clusters and cluster detections) and setting the Mancone et al. (2012) and Stalder et al. (2012) determinations aside, because these works use galaxy counts in a magnitude regime where a crowding correction is compelling but was ignored (see also the technical appendix). The characteristic luminosity $m^*$ has the same value at $1.4<z<1.8$ as at $z\sim1$. This is the main result of this work, and it still holds true when keeping all $z<1.3$ data (see Fig. \[fig:mstarz\]).
Figure \[fig:mstarz\_mod\] also plots the luminosity evolution expected for several mass-growth histories. The very steep luminosity evolution drawn in the figure (dotted curve) is a Grasil model in which the mass doubled in the last 8 Gyr, first explained and then ruled out in Andreon (2006). Mass-evolving models are rejected by the data, because mass growth makes galaxies brighter (because more massive) at lower redshift, and thus these models do not fit the data. For example, exponentially declining $\tau$ models are rejected by the data for all $\tau$ values higher than 0.5 Gyr, the precise value depending, however, on the adopted $z_f$. Models involving mergers do not fit the data because mergers make descendant galaxies more massive and therefore brighter, i.e. they move $m^*$ to brighter values going to lower redshift, while the points at “low" ($z\sim1$) redshift are too faint for the $1.4<z<1.8$ point(s).
Therefore, from now on we only consider models with no ongoing (in the last 11 Gyr) mass growth. The solid line shows the luminosity evolution of an SSP, modeled as the previous SSP but formed at $z_f=5$. The dashed line close to this SSP is a Grasil non-evolving mass model (named E both in the Grasil package and in Andreon 2006). This model is a bit more physical than Bruzual & Charlot (2003) because, for example, a metallicity evolution is allowed during the 1 Gyr long star formation episode started at $z=5$ (all other Grasil parameters are kept at their default values). These models are rejected too by the data, being too faint at very high redshift (or too bright at lower redshift if models are normalized at $1.4<z<1.8$). To have an overall flat evolution at $z>>1$ and the observed trend at $z\la 1$, one needs to boost the light at $1.4<z<1.8$ without increasing the cluster mass, otherwise the descendant galaxies would be too massive (bright) for the low ($z\la 1$) redshift data points. This can be achieved by considering an SSP with a formation redshift very close to the highest observed redshift, $z_f=2.5$, a mere 0.5 Gyr age difference from the age of the most distant cluster in the sample, or 0.9 Gyr from the age of the second most distant cluster. To boost the light observed at $1.4<z<1.8$, and thus emitted at early ages (1 to 2 Gyr), it is necessary to use the 2007 version of the Bruzual & Charlot (2003) models or Maraston (2005), and to keep a low $z_f$. In fact, the 2003 release of Bruzual & Charlot (2003) gives galaxies that are too faint (for the data) in the critical 1 to 2 Gyr age range, because of the different treatment of the thermally-pulsing asymptotic giant branch (TP-AGB). A larger $z_f$ does not boost the light enough to reproduce the data: the $z_f=5$ case is depicted in the figure and does not fit the data. Similarly, a $z_f=3$ SSP does not fit the data.
To summarize, models with mergers or recent star formation (in the past 11 Gyr) are rejected because they are too luminous at mid-to-high $z$ (or, equivalently, too faint at very high redshift). Populations that are too old (SSPs with $z_f=5$) are rejected because they are too faint at very high $z$. The similarity of $m^*$ at $1.4<z<1.8$ and $z\sim1$ implies an assembly time earlier than $z=1.8$ and, at the same time, a star formation episode not much earlier than $z_f=2.5$, in order to boost the luminosity of the galaxies observed 0.9 to 2.0 Gyr after it (our $1.4<z<1.8$ galaxies).
Past works dealing with luminosity/mass functions usually give a [*lower*]{} limit on the formation redshift or on the last star-formation episode. The large redshift baseline and the robustly measured $m^*$ values of this work are able to distinguish SSPs with $z_f=2.5$ from those with $z_f=5$ (in Fig. \[fig:mstarz\_mod\] these SSPs are hardly distinguishable below $z\sim1.2$ and widely different at higher redshift), and are also able to indicate that the [*upper*]{} limit of the latest star formation episode did not precede the redshift of the two most distant clusters in the sample by a long time, only by about 0.9 Gyr.
We emphasize that exponentially declining models with small $\tau$ (e.g. 0.1 Gyr) show a luminosity evolution very similar to SSP models, and thus are an equally acceptable fit to the data. Similarly, models with twice the solar metallicity are hardly distinguishable in Fig. \[fig:mstarz\_mod\] from solar metallicity models, so are also acceptable. However, adopting them does not change our conclusion. Finally, we note that the impact of TP-AGB stars on galaxy spectral energy distributions is still under discussion (Zibetti et al. 2011): the lower their contribution to the overall emission in the rest-frame near–infrared band is, the harder it is to fit our data.
Cluster choice
--------------
To be included in this work, a $z>1.4$ structure should have a firm, Chandra, ICM detection that is spatially coincident with a galaxy overdensity. This may seem overly restrictive, and in fact at an early stage of this work we considered other possible criteria, but we found them unsuitable.
Initially, we accepted $z>1.4$ spectroscopically confirmed clusters. However, relying on the common “spectroscopic confirmation" to name a cluster is dangerous at high redshift: the presence of a spectroscopically confirmed cluster-sized galaxy overdensity is not a guarantee of the presence of a cluster. One need only consider the zoo of structures with different names (proto-clusters, redshift spikes), often with several concordant redshifts in areas of a few Mpc$^2$. The usual spectroscopic criteria have been shown to have low reliability by Gal et al. (2008) using a real spectroscopic survey. This is exemplified by the spectroscopic confirmation of the Gobat et al. (2011) $z=2.07$ group: recent spectroscopic data (Gobat et al. 2012[^5]) show that none of the original 11 spectroscopic members used to spectroscopically confirm the group actually belong to it, all being more than $8000$ km s$^{-1}$ away from the group. Given the danger of “spectroscopic confirmation", we chose to require a firm ICM detection.
Later, we considered as a suitable definition of cluster every galaxy overdensity with a spatially coincident X-ray detection, either from Chandra or XMM. However, a weak XMM detection, typical of most clusters in the XMM Deep Cluster Survey (Fassbender 2011) and of other searches, such as Henry et al. (2010), is an ambiguous cluster detection: it may be ICM emission or an AGN misclassified as an extended source because of the low XMM resolution and the low signal-to-noise. This is the case of the Papovich et al. (2010) structure, initially named a cluster and renamed a proto-cluster in Papovich et al. (2012). It was detected as a $4\sigma$ XMM source (as many Fassbender 2011 “clusters" are). Later pointed Chandra follow-up observations, planned to measure the intracluster medium temperature to better than 30 % out to $r_{500}$ (M. Pierre Chandra proposal abstract), revealed a bright point source and no extended emission (Pierre et al. 2012). Similarly, the Gobat et al. (2011) group, with a $3.5\sigma$ XMM detection after point source subtraction, is undetected in deep Chandra observations (Gobat et al. 2011).
Therefore, to avoid the risk of computing the luminosity function of an environment mis-identified as a cluster, i.e. of comparing “apples" (at high redshift) to “oranges" (at low redshift; Andreon & Ettori 1999), we required a firm Chandra ICM detection that is spatially coincident with a galaxy overdensity. The superb Chandra angular resolution allows X-ray point sources ($1''$ wide) to be easily distinguished from extended ($20''$ wide) ICM emission, even in the low signal-to-noise regime, unlike XMM.
Summary
=======
We analyzed deep Spitzer data of the five $z>1.4$ clusters with a firm detection of the ICM spatially coincident with a galaxy overdensity. This definition of cluster gets rid of the many sorts of cluster-sized structures known at high redshift (such as proto-clusters), which may differ from clusters, and allows us to be certain we are comparing “apples to apples" (Andreon & Ettori 1999). The analyzed data are deep (about 1000 s), but to avoid a cluster-, radial-, and magnitude-dependent, unreliable crowding correction, we only consider bright galaxies (brighter than 18.0-18.5 mag), about 2 mag brighter than other authors would choose for data with the same exposure time. The five clusters differ in richness (ISCSJ1438.1+3414 and XMMXCSJ2215.9-1738 are twice as rich as ISCSJ1432.4+3250, IDCSJ1426.5+3508 and JKCS041) and morphological appearance. By adopting the correct expression of the likelihood for the data in hand, we derived the luminosity function in the \[3.6\] band and the characteristic magnitude $m^*$, the latter by marginalizing over the remaining parameters except $\alpha$, which was held fixed. Since the five $m^*$ values are found to be consistent with each other, we combined the data in the unique statistically acceptable way, by multiplying the likelihoods of the individual determinations. We found a characteristic luminosity of $m^* = 16.92\pm0.13$ at $z=1.5$, the median redshift of the five clusters. Assuming a luminosity-to-mass conversion fixed by an SSP with $z_f=2.5$, we found a characteristic mass $\lg M^*=11.30\pm 0.05$ at $z=1.5$ and a mass function determined with about 40% error in the $10.5<\lg M<12$ M$_\odot$ range from the 150 member galaxies of the combined sample.
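The combination step above, multiplying the likelihoods of the individual determinations, reduces in the Gaussian approximation to an inverse-variance weighted mean. A minimal sketch; the per-cluster error of 0.29 mag is an illustrative assumption (chosen only so that five equal determinations combine to roughly the quoted 0.13 mag error), not a value from the paper:

```python
def combine(estimates):
    """Multiply independent Gaussian likelihoods N(m_i, s_i): the product
    is again Gaussian, with inverse-variance weighted mean and error."""
    weights = [1.0 / s ** 2 for _, s in estimates]
    m = sum(w * m_i for w, (m_i, _) in zip(weights, estimates)) / sum(weights)
    s = (1.0 / sum(weights)) ** 0.5
    return m, s

# five consistent determinations of m* with a hypothetical 0.29 mag error each
m_star, err = combine([(16.92, 0.29)] * 5)
```

For mutually consistent determinations this is equivalent to fitting all clusters jointly with a common $m^*$.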
We found that the characteristic luminosity and mass do not evolve between $z\sim1$ and $1.4<z<1.8$, directly ruling out ongoing mass assembly between these epochs because massive galaxies are already present at $z=1.8$. Lower redshift build-up epochs were already ruled out by previous works, leaving only $z>1.8$ as a possible epoch for the mass build-up. Populations that are too old (SSPs with $z_f=5$) are rejected because they are too faint at very high $z$. The observed values of $m^*$ at very high redshift are, however, too bright for galaxies without any star formation shortly preceding the observed redshift. The similarity of $m^*$ at $1.4<z<1.8$ and $z\sim1$ implies a star formation episode no earlier than $z_f=2.5$, in order to boost the luminosity of the galaxies observed at $1.4<z<1.8$ without increasing their mass. For the first time, mass/luminosity functions are able to robustly distinguish small differences in formation redshift ($z_f=2.5$ from $z_f=3$) and to set [*upper*]{} limits on the last star formation episode. This did not precede the redshift of the two most distant clusters in the sample by a long time, only about 0.9 Gyr. In short, $1.4<z<1.8$ is the post-star-forming age of massive cluster galaxies: we found that massive cluster galaxies were still forming stars at $z\sim2.5$ and that they did not grow in mass at later times.
This work is based on observations made with the Spitzer Space Telescope.
Andreon, S. 2001, ApJ, 547, 623
Andreon, S. 2002, A&A, 382, 495
Andreon, S. 2006, MNRAS, 369, 969
Andreon, S. 2010, MNRAS, 407, 263 (A10)
Andreon, S. 2011a, in Astrostatistical Challenges for the New Astronomy, ed. J. Hilbe, Springer Series on Astrostatistics (arXiv:1112.3652)
Andreon, S. 2011b, A&A, 529, L5
Andreon, S., & Ettori, S. 1999, ApJ, 516, 647
Andreon, S., & Huertas-Company, M. 2011, A&A, 526, A11
Andreon, S., Punzi, G., Grado, A., 2005, MNRAS, 360, 727
Andreon, S., Willis, J., Quintana, H., et al. 2004, MNRAS, 353, 353
Andreon, S., Cuillandre, J.-C., Puddu, E., & Mellier, Y. 2006, MNRAS, 372, 60
Andreon, S., Puddu, E., de Propris, R., & Cuillandre, J.-C. 2008a, MNRAS, 385, 979
Andreon, S., Maughan, B., Trinchieri, G., & Kurk, J. 2009, A&A, 507, 147
Andreon, S., Trinchieri, G., & Pizzolato, F. 2011, MNRAS, 412, 2391
Ashby, M. L. N., Stern, D., Brodwin, M., et al. 2009, ApJ, 701, 428
Barmby, P., Huang, J.-S., Ashby, M. L. N., et al. 2008, ApJS, 177, 431
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393
Bielby, R., Hudelot, P., McCracken, H. J., et al. 2012, A&A, 545, A23
Brodwin, M., Stern, D., Vikhlinin, A., et al. 2011, ApJ, 732, 33
Brodwin, M., Gonzalez, A. H., Stanford, S. A., et al. 2012, ApJ, 753, 162
Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000
Chabrier, G. 2003, PASP, 115, 763
Culverhouse, T. L., Bonamente, M., Bulbul, E., et al. 2010, ApJ, 723, L78
De Propris, R., Stanford, S. A., Eisenhardt, P. R., Dickinson, M., & Elston, R. 1999, AJ, 118, 719
De Propris, R., Stanford, S. A., Eisenhardt, P. R., Holden, B. P., & Rosati, P. 2007, AJ, 133, 2209
Fassbender, R., B[ö]{}hringer, H., Nastasi, A., et al. 2011, New Journal of Physics, 13, 125014
Gal, R. R., Lemaux, B. C., Lubin, L. M., Kocevski, D., & Squires, G. K. 2008, ApJ, 684, 933
Gobat, R., Daddi, E., Onodera, M., et al. 2011, A&A, 526, A133
Henry, J. P., Salvato, M., Finoguenov, A., et al. 2010, ApJ, 725, 615
Mancone, C. L., Gonzalez, A. H., Brodwin, M., et al. 2010, ApJ, 720, 284
Mancone, C. L., Baker, T., Gonzalez, A. H., et al. 2012, ApJ, 761, 141
Maraston, C. 2005, MNRAS, 362, 799
Mauduit, J.-C., Lacy, M., Farrah, D., et al. 2012, PASP, 124, 714
Meyers, J., Aldering, G., Barbary, K., et al. 2012, ApJ, 750, 1
Mullis, C. R., Rosati, P., Lamer, G., et al. 2005, ApJ, 623, L85
Muzzin, A., Wilson, G., Lacy, M., Yee, H. K. C., & Stanford, S. A. 2008, ApJ, 686, 966
Jannuzi, B. T., & Dey, A. 1999, Photometric Redshifts and the Detection of High Redshift Galaxies, ASP Conference Series, Vol. 191, Edited by R. Weymann, L. Storrie-Lombardi, M. Sawicki, and R. Brunner. ISBN: 158381-017-X, p. 111
Johnston, R. 2011, A&A Review, 19, 41
Henry, J. P. 2000, ApJ, 534, 565
Hilton, M., Lloyd-Davies, E., Stanford, S. A., et al. 2010, ApJ, 718, 133
Hilton, M., Stanford, S. A., Stott, J. P., et al. 2009, ApJ, 697, 436
Oemler, A., Jr. 1974, ApJ, 194, 1
Panuzzo, P., Silva, L., Granato, G. L., Bressan, A., & Vega, O. 2005, in “The Spectral Energy Distribution of Gas-Rich Galaxies: Confronting Models with Data”, eds. C.C. Popescu and R.J. Tuffs, AIP Conf. Ser., in press, (astro-ph/0501464)
Papovich, C., Momcheva, I., Willmer, C. N. A., et al. 2010, ApJ, 716, 1503
Papovich, C., Bassett, R., Lotz, J. M., et al. 2012, ApJ, 750, 93
Pierre, M., Clerc, N., Maughan, B., et al. 2012, A&A, 540, A4
Raichoor, A., & Andreon, S. 2012a, A&A, 543, A19
Raichoor, A., & Andreon, S. 2012b, A&A, 537, A88
Sandage, A., Tammann, G. A., & Yahil, A. 1979, ApJ, 232, 352
Schechter, P. 1976, ApJ, 203, 297
Silva, L., Granato, G. L., Bressan, A., & Danese, L. 1998, ApJ, 509, 103
Snyder, G. F., Brodwin, M., Mancone, C. M., et al. 2012, ApJ, 756, 114
Stalder, B., Ruel, J., Suhada, R., et al. 2012, ApJ, submitted (arXiv:1205.6478)
Stanford, S. A., Eisenhardt, P. R., Brodwin, M., et al. 2005, ApJ, 634, L129
Stanford, S. A., Romer, A. K., Sabirli, K., et al. 2006, ApJL, 646, L13
Stanford, S. A., Brodwin, M., Gonzalez, A. H., et al. 2012, ApJ, 753, 164
Strazzullo, V., Rosati, P., Pannella, M., et al. 2010, A&A, 524, A17
Tanaka, M., Finoguenov, A., Mirkazemi, M., et al. 2012, PASJ, in press (arXiv:1210.0302)
Tozzi, P., Santos, J. S., Nonino, M., et al. 2012, arXiv:1212.2560
Zibetti, S., Gallazzi, A., Charlot, S., Pierini, D., & Pasquali, A. 2013, MNRAS, 428, 1479
Zwicky, F. 1957, Morphological astronomy, Berlin: Springer
Technical addedum about the LF determination
============================================
Methods for deriving the luminosity function date back at least to Zwicky (1957), with newer methods usually addressing more properly the complicated features of astronomical data not considered by previous methods (see Johnston 2011 for a detailed list of references and Andreon 2011a for a listing of many of the awkward features of astronomical data). For challenging estimations such as those of high redshift clusters, these “complications" include the use of the correct likelihood expression for the data in hand and, in the case of deep data, the (radial-, magnitude-, and cluster-dependent) crowding corrections or the adoption of a bright limiting magnitude. If the objects used to build the LF include putative clusters (i.e. spurious or false cluster detections) or objects of a nature different from clusters (e.g. filaments), this uncertainty should be folded into the likelihood expression. The same is true if clusters have a photometric redshift, unless $m^*$ is constant over the redshift uncertainty range. If galaxies are selected by photometric redshift, or their photometric redshifts are used in the LF determination, the uncertainties (both statistical and systematic) should be folded into the likelihood expression too. Both effects introduce a bias on $m^*$ if neglected, as can be easily appreciated by remembering that contamination and photometric redshift errors act as a convolution filter making the LF broader, thus biasing $m^*$ even in the simplest case (symmetric errors). More complicated bias patterns are introduced when asymmetry is important. If galaxies without optical counterparts are excluded from the LF computation, the likelihood expression should be modified to correct for the bias induced by the forced optical detection requirement.
Mancone et al. (2010, 2012) faced most of these issues but did not adopt the likelihood appropriate for the data used. This is the reason, in our opinion, for the (formally statistically significant) difference in the $m^*$ evolution at $z<1.2$ between Mancone et al. (2010) and the other works, as well as for the (formally statistically significant) difference in $m^*$ between Mancone et al. (2010) and this work at $z\sim 1.5$. Before interpreting these as genuine differences, due for example to two populations of clusters with widely different $m^*$ values, with Mancone et al. (2010) primarily sampling one population and this work the other, one should first make certain that all determinations are robustly derived and refer to clusters as we usually intend them (we consider sheets, filaments, proto-clusters, and false cluster detections as fairly different from clusters); second, note that three of the five clusters studied in this work are likely also in the Mancone et al. (2010) sample (observed with shallower data, however).
For completeness, we also explored whether the difference between our $m^*$ values and those in Mancone et al. (2010) at $z\sim1.5$ may be due to an intrinsic variance of the $M^*$ values. By performing a Monte Carlo simulation, we computed the probability that, if $M^*$ has an intrinsic scatter, five out of five of the clusters studied in this work are all within 0.6 mag of each other, while the $m^*$ of the combined cluster is 0.85 mag brighter (or more) than the Mancone et al. (2010) value. To this aim, as prior probability distribution for the intrinsic (i.e. accounting for measurement errors) scatter at $z\sim1.5$ we adopted the posterior probability distribution computed for the 17 clusters at $0.29<z<1.06$ in Andreon et al. (2004), whose luminosity functions were fitted by holding $\alpha$ fixed, as we and Mancone et al. (2010) both do. This distribution may be approximated by a normal distribution centered on $0.02$ mag, with sigma $0.22$ mag, truncated at zero. The distribution is quite broad, meaning that our simulations allow the possibility that the intrinsic scatter is very large: 5% of our simulations have an intrinsic $M^*$ scatter larger than $0.4$ mag. Note that in the simulations we adopted the actual probability distribution, not the normal approximation mentioned above. We then generated 60000 simulated data sets, each one composed of five clusters, with $m^*$ values having the same errors as our observed values and the Mancone et al. (2010) mean $m^*$ (and intrinsic scatter as detailed above). Finally, we counted how many times the simulated data show mean $m^*$ offsets larger than those in the real data (0.85 mag) with individual $m^*$ values all within 0.6 mag (the observed maximal difference between the individual $m^*$ we measured). We found no such case in 60000 simulations, i.e. the probability of observing a larger disagreement because of the intrinsic variance in $M^*$ is a negligible $2\times 10^{-5}$.
To allow for a possible evolution of the intrinsic scatter between the redshift range where it is measured, $0.3<z<1.1$, and the redshift where we need to know its value, $z\sim1.5$, we performed further simulations assuming that at higher redshift the intrinsic scatter is twice as large (if we instead reduce the scatter at higher redshift, it becomes even more implausible to reproduce the observed $m^*$ offset). Also in this case, we found no case matching the observations out of our 60000 simulations. In short, it is very unlikely that the intrinsic scatter in $M^*$ is the source of the disagreement between the $z\sim1.5$ determinations in these two works.
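The Monte Carlo procedure described above can be sketched as follows. This is a simplified version under stated assumptions: it draws the intrinsic scatter from the normal approximation to the prior (the paper uses the actual posterior), and it assumes a per-cluster measurement error of 0.29 mag (an illustrative value, not listed in the paper). With these inputs, the joint condition (mean offset $\geq 0.85$ mag and spread $\leq 0.6$ mag) is essentially never met, as in the text:

```python
import random

def count_extreme(n_sim=20000, n_cl=5, meas_err=0.29, seed=0):
    """Count simulated five-cluster samples whose mean m* is offset by
    >= 0.85 mag while all individual m* values lie within 0.6 mag of
    each other. The intrinsic scatter is drawn from a normal(0.02, 0.22)
    prior truncated at zero (the normal approximation quoted above)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_sim):
        sigma_int = -1.0
        while sigma_int < 0.0:               # truncation at zero
            sigma_int = rng.gauss(0.02, 0.22)
        sigma_tot = (sigma_int ** 2 + meas_err ** 2) ** 0.5
        m = [rng.gauss(0.0, sigma_tot) for _ in range(n_cl)]  # offsets from the mean
        if abs(sum(m) / n_cl) >= 0.85 and max(m) - min(m) <= 0.6:
            count += 1
    return count
```

The two conditions pull in opposite directions: a large mean offset requires a large scatter, which in turn makes a small spread among five clusters unlikely.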
We emphasize that our search for a physical reason explaining the differences in the mean $m^*$ assumes that the measurements and errors (i.e. the likelihood) in one of the compared works are correct whereas those in the other are not. Therefore, our search, performed at the request of the referee, should not be interpreted as an indication that we believe the observed differences in the mean $m^*$ are genuine.
[^1]: We note that Tozzi et al. (2012) report having Spitzer \[3.6\] images with an exposure time 10 times shorter than the data available in the Spitzer archive. Our discovery of the contamination mentioned below uses the 970 s long exposure, reduced and analyzed like all the other Spitzer data of this work. The contamination was discovered during the LF analysis.
[^2]: http://www.sciops.esa.int/SYS/CONF2011/images/cluster2012Presentations/rgobat\_2012\_esac.pdf
[^3]: A similar contamination is also present for other high redshift clusters, which were not retained in our final sample studied here, such as the $z=1.393$ 1WGAJ2235.3-2557 (Mullis et al. 2005) and the $z\sim1.5$ CXOJ1415.2+3610 (Tozzi et al. 2012) clusters.
[^4]: After the acceptance of this paper, JKCS041 was spectroscopically confirmed to be at high redshift by means of HST spectroscopy.
[^5]: http://www.sciops.esa.int/SYS/CONF2011/images/cluster2012Presentations/rgobat\_2012\_esac.pdf
---
abstract: 'We study absorbing-state phase transitions in two-dimensional Voronoi-Delaunay (VD) random lattices with quenched coordination disorder. Quenched randomness usually changes the criticality and destroys discontinuous transitions in low-dimensional nonequilibrium systems. We performed extensive simulations of the Ziff-Gulari-Barshad (ZGB) model and verified that the VD disorder does not change the nature of its discontinuous transition. Our results corroborate recent findings of Barghathi and Vojta \[Phys. Rev. Lett. [**113**]{}, 120602 (2014)\] stating the irrelevance of topological disorder in a class of random lattices that includes VD, and raise the interesting possibility that disorder in nonequilibrium APTs may, under certain conditions, be irrelevant for phase coexistence. We also verify that the VD disorder is irrelevant for the critical behavior of models belonging to the directed percolation and Manna universality classes.'
address:
- 'Departamento de Física e Matemática, Universidade Federal de São João Del Rei, 36420-000, Ouro Branco, MG, Brazil'
- 'Departamento de Física, Universidade Federal de Viçosa, 36570-000, Viçosa, MG, Brazil'
author:
- 'Marcelo M. de Oliveira'
- 'Sidiney G. Alves'
- 'Silvio C. Ferreira'
title: 'Continuous and discontinuous absorbing-state phase transitions on Voronoi-Delaunay random lattices'
---
[^1]
[^2]
Introduction
============
Nonequilibrium phase transitions from an active (fluctuating) to an inactive (absorbing) phase in spatially extended systems are a topic of broad interest [@Marrobook; @henkel08; @odor04]. The so-called absorbing-state phase transitions (APTs) arise in a wide variety of problems such as heterogeneous catalysis [@zgb], interface growth [@tang], population dynamics and epidemiology [@pastor2014]. Recent experimental realizations in turbulent liquid crystals [@take07], driven suspensions [@pine] and superconducting vortices [@okuma] highlight the importance of this kind of transition.
In analogy with equilibrium phase transitions, it is expected that [*continuous*]{} APTs can be classified in universality classes [@Marrobook; @henkel08]. Generically, single-component systems with short-range interactions exhibiting a continuous APT, in the absence of extra symmetries or conservation laws, belong to the directed percolation (DP) universality class [@gras; @jans], but other robust classes emerge when multiple absorbing states and conservation laws are included [@Marrobook; @henkel08].
Of particular interest is how spatially quenched disorder affects the critical behavior of an APT. In real systems, quenched disorder appears in the form of impurities and defects [@hinri00b]. On a regular lattice, quenched disorder can be added in the form of random deletion of sites or bonds [@noestPRL; @noestPRB; @adr-dic96; @adr-dic98; @vojta06; @DeOliveira2008] or of random spatial variation of the control parameter [@durrett; @salinas08; @vojta14b]. In all the cases above, quenched randomness produces rare regions which are locally supercritical even when the whole system is subcritical. The lifetime of active rare regions is exponentially long in the domain size. The combination of rare regions and their exceedingly long lifetimes can lead to a slow dynamics, with nonuniversal exponents, in some interval of the control parameter $\lambda_c^{(0)}<\lambda<\lambda_c$, where $\lambda_c^{(0)}$ and $\lambda_c$ are the critical points of the clean and disordered systems, respectively. This interval of singularities is called the Griffiths phase (GP) [@vojta06b]. This GP behavior was verified in DP models with uncorrelated disorder irrespective of the disorder strength and corresponds to the universality class of the random transverse-field Ising model [@HooyberghsPRL; @HooyberghsPRE; @vojta05; @DeOliveira2008; @vojta09].
These findings are in agreement with the heuristic Harris criterion [@harris74], which states that uncorrelated quenched disorder is a relevant perturbation if $$d\nu_\perp<2,$$ where $d$ is the dimensionality and $\nu_\perp$ is the correlation-length exponent of the clean model. Note that for DP this inequality is satisfied in all dimensions $d<4$, since $\nu_\perp =$ 1.096854(4), 0.734(4) and 0.581(5), for $d=1$, $2$ and $3$, respectively [@jensen92; @jensen99; @voigt97]. Conversely, simulations of the continuous APT in models with a conserved field in the Manna universality class [@manna], considering uncorrelated lattice dilution below the lattice percolation threshold, provide strong evidence that this kind of disorder is irrelevant although the Harris criterion is satisfied for $d<4$ [@LeePRE2011; @LeePRE2013; @LeePRL].
For equilibrium [*discontinuous*]{} phase transitions, the Imry-Ma criterion [@ma; @ma2] governs the stability of macroscopic phase coexistence: disorder destroys phase coexistence by domain formation in dimensions $d \leq 2$. If the distinct phases are related by a continuous symmetry, the marginal dimension is $d = 4$ [@ma2]. Therefore, first-order phase transitions become rounded in the presence of disorder for $d \leq 2$.
Recent numerical results provide evidence that the Imry-Ma argument for equilibrium systems can be extended to nonequilibrium APTs: irrespective of the uncorrelated disorder strength, Buendia and Rikvold [@buendia; @BuendiaPhyA; @BuendiaPRE] reported that the discontinuous absorbing transition in the Ziff-Gulari-Barshad (ZGB) model for heterogeneous catalysis turns into a continuous one (see also the discussion in [@BustosPRE]). Analogous behavior was observed more recently by Martín [*et al.*]{} [@martin] for a two-dimensional quadratic contact process [@Liu2007].
Another important question is the role played by the disorder inherent to the underlying connectivity of a nonperiodic, random structure of integer dimension, such as the random lattice generated by the Voronoi-Delaunay (VD) triangulation [@hilh08]. This random lattice can be generated from a random (uniform) distribution of $N$ points in a unitary square region. The triplets of points that can be circumscribed by a circle containing no other point form a triangulation. The result is a two-dimensional connected graph with a Poissonian distribution of connectivity and average degree $\bar{q}=6$ [@okabe]. This lattice plays an important role in the description of idealized statistical geometries such as planar cellular structures, soap froths, [*etc*]{}. [@okabe; @hilh08].
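The empty-circumcircle rule above translates directly into code. The sketch below is a deliberately naive $O(N^4)$ construction without periodic boundaries, useful only for checking small lattices; the mean degree it measures falls slightly below the bulk value $\bar{q}=6$ because of convex-hull sites. A production code would use an incremental algorithm and periodic images of the points.

```python
import itertools
import random

def delaunay_edges(pts):
    """Brute-force Delaunay triangulation via the empty-circumcircle rule:
    a triple of points spans a triangle iff no other point lies inside
    its circumscribed circle. O(N^4): fine for checks, not production."""
    edges = set()
    for i, j, k in itertools.combinations(range(len(pts)), 3):
        (ax, ay), (bx, by), (cx, cy) = pts[i], pts[j], pts[k]
        d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < 1e-12:                       # (near-)collinear triple
            continue
        # circumcenter (ux, uy) from the standard perpendicular-bisector formulas
        ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
              + (cx * cx + cy * cy) * (ay - by)) / d
        uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
              + (cx * cx + cy * cy) * (bx - ax)) / d
        r2 = (ax - ux) ** 2 + (ay - uy) ** 2
        if all((px - ux) ** 2 + (py - uy) ** 2 > r2
               for m, (px, py) in enumerate(pts) if m not in (i, j, k)):
            edges.update([(i, j), (i, k), (j, k)])
    return edges

random.seed(1)
N = 50
pts = [(random.random(), random.random()) for _ in range(N)]
edges = delaunay_edges(pts)
degree = [0] * N
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
q_bar = sum(degree) / N   # slightly below 6 without periodic boundaries
```

On the torus (periodic boundaries) Euler's formula forces exactly $\bar{q}=6$; here boundary effects of order $1/\sqrt{N}$ remain.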
Recently, it was found that such VD disorder does not alter the character of the APT exhibited by the clean contact process (CP) [@oliveira2], which is a prototypical model in the DP universality class. These results are in evident contrast with those for uncorrelated disorder, which leads to an infinite-randomness critical point and strong GPs [@DeOliveira2008; @vojta09]. In order to determine the relevance of the disorder in these cases, we can apply the heuristic Harris-Luck criterion [@luck93], according to which the clean critical behavior remains unchanged when the wandering exponent[^3] $\omega$ does not exceed a threshold value given by $$\omega_c=1-\frac{1}{d\nu_\perp}.$$ For independent dilution, $\omega=1/2$, and Luck’s expression reduces to the Harris criterion.
Former numerical estimates of wandering exponents for VD triangulations indicated a value close to that of independent dilution, $\omega=1/2$ [@janke]. So, the clean critical behavior observed for the CP on VD lattices cast doubt on the validity of the Harris-Luck criterion for the DP class [@oliveira2]. This inconsistency was recently resolved [@vojta14c] with the determination of the correct wandering exponent of VD lattices, $\omega=1/4$ in $d=2$, implying the criterion $\nu_\perp>2/3$, not $\nu_\perp>1$, for clean critical behavior.
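The two criteria can be evaluated directly with the exponents quoted above; a quick numerical check for clean DP:

```python
# Harris criterion: uncorrelated quenched disorder is relevant if d * nu_perp < 2.
nu_perp = {1: 1.096854, 2: 0.734, 3: 0.581}   # clean DP values quoted above
harris_relevant = {d: d * nu < 2 for d, nu in nu_perp.items()}

# Harris-Luck criterion: disorder with wandering exponent omega is
# irrelevant when omega < omega_c = 1 - 1/(d * nu_perp).
d = 2
omega_c = 1.0 - 1.0 / (d * nu_perp[d])        # ~0.319 for 2D DP
vd_irrelevant = 0.25 < omega_c                # VD lattices: omega = 1/4
uncorrelated_irrelevant = 0.5 < omega_c       # independent dilution: omega = 1/2
```

This reproduces the statements in the text: uncorrelated disorder is Harris-relevant for DP in all $d<4$, while the VD value $\omega=1/4$ lies below $\omega_c$ for 2D DP (equivalently, $\nu_\perp=0.734>2/3$), so the topological disorder is Luck-irrelevant.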
In the present work, we investigate the role played by the disorder of the VD lattice on the phase coexistence of the ZGB model. We provide evidence that the VD topological disorder does not destroy the phase coexistence and thus permits a discontinuous phase transition. We complement the paper with further evidence of the irrelevance of VD disorder for continuous APTs belonging to the DP [@henkel08] and Manna [@Turcotte1999; @manna] universality classes.
The remainder of this paper is organized as follows. In the next section, we review the model definitions and the details of the simulation methods we used. In Sec. III, we present our results and discussions. Sec. IV is devoted to summarizing our conclusions.
Models and methods {#models}
==================
We constructed the Voronoi-Delaunay lattice with periodic boundary conditions, following the method described in [@Friedberg1984]. For the sake of simplicity, the length of the domain where the $N$ nodes are randomly distributed will be expressed in terms of $L=\sqrt{N}$.
Discontinuous APT:
------------------
The ZGB model [@zgb], a lattice gas model introduced to investigate the reaction of CO oxidation on a catalytic substrate, follows the Langmuir-Hinshelwood mechanism, $$\begin{aligned}
\mbox{CO}_{gas}+*\to \mbox{CO}_{ads} \nonumber \\
\mbox{O}_{2 gas}+2* \to 2\mbox{O}_{ads} \nonumber \\
\mbox{CO}_{ads}+\mbox{O}_{ads}\to \mbox{CO}_{2}+ 2*, \nonumber\end{aligned}$$ where $*$ denotes an empty site, and the subscripts indicate the state (gaseous or adsorbed) of each species. The O$_{2 gas}$ dissociates at the surface and requires two empty sites to adsorb, while CO requires only one site (the model is also called the monomer-dimer model). The product CO$_2$ desorbs immediately upon formation. CO$_{gas}$ molecules arrive at rate $Y$ per site while O$_2$ arrives at rate $(1-Y)$, with $0\leq Y\leq 1$. Varying the control parameter $Y$, the model exhibits phase transitions between an active steady state and one of two absorbing or “poisoned” states, in which the surface is saturated either by oxygen (O) or by CO. The first transition (O-poisoned) is found to be continuous, while the second (CO-poisoned) is strongly discontinuous.
The computer implementation is the following: with probability $Y$ a CO adsorption attempt takes place, and with the complementary probability $(1-Y)$ an O$_2$ adsorption attempt takes place. In the former case, one site is randomly chosen. If the site is occupied, either by O or CO, the attempt fails. If it is empty but one of its first neighbors is occupied by an O, both sites become empty (O and CO react instantaneously). Otherwise, the site becomes occupied by an adsorbed CO molecule. An analogous procedure is followed for an O$_2$ adsorption attempt, but in this case we have to choose at random a $pair$ of first-neighbor sites, and check for the opposite species in all remaining nearest neighbors of the target pair.
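These rules can be sketched as follows. This is a minimal, non-optimized illustration on a periodic square lattice for readability (the simulations in this paper use VD lattices; only the neighbor lists change):

```python
import random

EMPTY, CO, O = 0, 1, 2

def neighbors(x, y, L):
    """First neighbors on a periodic square lattice."""
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def zgb_attempt(lat, Y, L, rng=random):
    """One adsorption attempt of the ZGB rules described in the text."""
    if rng.random() < Y:                              # CO arrives
        s = (rng.randrange(L), rng.randrange(L))
        if lat[s] != EMPTY:
            return                                    # attempt fails
        for n in neighbors(*s, L):
            if lat[n] == O:                           # instantaneous CO+O -> CO2
                lat[n] = EMPTY
                return
        lat[s] = CO
    else:                                             # O2 arrives: needs an empty pair
        s1 = (rng.randrange(L), rng.randrange(L))
        s2 = rng.choice(neighbors(*s1, L))
        if lat[s1] != EMPTY or lat[s2] != EMPTY:
            return
        lat[s1] = lat[s2] = O
        for s, other in ((s1, s2), (s2, s1)):         # each O reacts with an adjacent CO
            for n in neighbors(*s, L):
                if n != other and lat[n] == CO:
                    lat[n] = EMPTY
                    lat[s] = EMPTY
                    break
```

As a sanity check, a pure CO flux ($Y=1$) poisons the lattice with CO, while a pure O$_2$ flux ($Y=0$) deposits only O dimers.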
Continuous APT:
---------------
The CP [@harris-CP] is the prototypical model of the DP class, and is defined on a lattice with each site either active ($\sigma_i=1$) or inactive ($\sigma_i=0$). Transitions from active to inactive occur spontaneously at unit rate. The transition of site $i$ from inactive to active occurs at rate $\lambda\sigma_j/k_j$ for each edge to an active nearest neighbor $j$, where $k_j$ is the degree of site $j$. The computer implementation of the CP on graphs with arbitrary connectivity is as follows [@Marrobook; @DeOliveira2008]: an occupied site is chosen at random. With probability $p=1/(1+\lambda)$ the chosen particle is removed. With the complementary probability $1-p=\lambda/(1+\lambda)$, a nearest neighbor of the selected particle is randomly chosen and, if empty, is occupied; otherwise nothing happens and the simulation proceeds to the next step. Time is incremented by $\delta t = 1/n$, where $n$ is the number of particles. So, the creation mechanism in the CP effectively compensates the local connectivity variation with a reduction of the spreading rate through a particular edge inversely proportional to the connectivity of the site that transmits the new particle. If we modify these rules to create offspring in [*all*]{} empty nearest neighbors of the randomly chosen occupied site we obtain the A model [@a1; @dic-jaf] (in the A model, occupied sites become empty at unit rate, as in the CP). This means that sites with higher coordination number produce more activity than in the CP, enhancing possible “rare region effects” [@vojta06b; @vojta14]. Since contagion occurs more readily in the A model than in the CP, the critical creation rate $\lambda_c$ is smaller, but the two models share the same critical behavior of the DP universality class [@a2].
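A minimal sketch of the CP update on an arbitrary graph (the adjacency list `nbrs` is the only lattice-dependent input; for the A model, the creation branch would instead occupy all empty neighbors):

```python
import random

def cp_step(active, nbrs, lam, rng=random):
    """One event of the contact process: pick an occupied site at random;
    remove it with probability 1/(1+lam), otherwise try to create a
    particle at a randomly chosen nearest neighbor (a no-op if that
    neighbor is already occupied). Returns the time increment 1/n."""
    n = len(active)
    i = rng.choice(tuple(active))
    if rng.random() < 1.0 / (1.0 + lam):
        active.discard(i)                      # spontaneous annihilation
    else:
        active.add(rng.choice(nbrs[i]))        # creation attempt
    return 1.0 / n
```

The `tuple(active)` conversion keeps the sketch short; an efficient code would keep the particle list in an array with O(1) random access and removal.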
The Manna model [@manna], a prototypical model of the Manna class introduced to investigate the dynamics of sandpiles in the context of self-organized criticality, is defined on a lattice where each site assumes integer values (mimicking the number of “sand grains" deposited on the substrate). In the version we investigate, an unlimited number of particles per site is permitted. Sites with a number of particles below the threshold $N_c=2$ are inactive, while those where this number is equal to or larger than $N_c$ are active. The active sites redistribute their particles among their nearest neighbors chosen at random, generating a dynamics that conserves the number of particles under periodic boundary conditions. The Manna model exhibits a continuous phase transition from an active to an inactive state depending on the control parameter $p$, given by the density of particles on the lattice [@Dickman2001]. The absorbing stationary state, where all sites have a number of particles below $N_c$, is characterized by an infinite number of configurations. The computer implementation is analogous to that of the CP: one active site $i$ ($N_i\geqslant N_c$) is randomly chosen. Each of its $N_i$ particles is sent to a randomly chosen nearest neighbor, irrespective of its state. The site $i$ becomes empty (inactive), and the nearest neighbors with $N_j = N_c-1$ particles that received a new one are activated.
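The toppling rule can be sketched likewise, again on an arbitrary adjacency list; the total grain number is conserved by construction:

```python
import random

def manna_topple(grains, active, nbrs, rng=random):
    """Topple one randomly chosen active site (N_i >= N_c = 2): all its
    grains are sent to randomly chosen nearest neighbors, which become
    active if they reach the threshold. Conserves the total grain number."""
    i = rng.choice(tuple(active))
    n_grains = grains[i]
    grains[i] = 0
    active.discard(i)                  # site i is now empty, hence inactive
    for _ in range(n_grains):
        j = rng.choice(nbrs[i])
        grains[j] += 1
        if grains[j] >= 2:             # crossed the threshold N_c
            active.add(j)
```

Note that with more grains than sites the dynamics can never reach an absorbing configuration, since some site must always hold at least two grains.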
Simulation methods
------------------
The central method we used involves the quasi-stationary (QS) state, in which averages are restricted to samples that have not visited an absorbing state [@Marrobook]. To perform the QS analysis we applied the simulation method of Ref. [@qssimPRE]. The method is based on maintaining, and gradually updating, a set of configurations visited during the evolution; when a transition to the absorbing state is imminent, the system is instead placed in one of the saved configurations. Otherwise, the evolution is exactly that of a conventional simulation [@qssimPhysA]. Each realization of the process is initialized in an active state and runs for at least $10^8$ Monte Carlo time steps. Averages are taken in the QS regime, after discarding an initial transient of $10^7$ time steps or more. This procedure is repeated for each realization of disorder. The number of disorder realizations ranged from 20 (for the largest size used, $L=2048$) to $10^3$. Another important quantity is the lifetime in the QS regime, $\tau$; in QS simulations we take $\tau$ as the mean time between successive attempts to visit the absorbing state.
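The QS scheme can be sketched generically: keep a list of previously visited active configurations and, whenever the dynamics would fall into the absorbing state, restart from a random saved configuration instead. The function and parameter names below are illustrative, and the "model" in the example is just a random walk with an absorbing wall at the origin, not one of the models of this paper:

```python
import random

def qs_evolve(state, step, is_absorbing, n_steps, n_saved=20, p_save=0.01, rng=random):
    """Quasi-stationary evolution: absorbing transitions are replaced by
    jumps to a stored configuration; the store is updated gradually."""
    saved = [state]
    for _ in range(n_steps):
        trial = step(state, rng)
        if is_absorbing(trial):
            state = rng.choice(saved)            # imminent absorption: recycle history
        else:
            state = trial
            if rng.random() < p_save:            # gradually refresh the stored set
                if len(saved) < n_saved:
                    saved.append(state)
                else:
                    saved[rng.randrange(n_saved)] = state
    return state

# toy model: unbiased walk on the integers, absorbed at x <= 0
walk = lambda x, rng: x + rng.choice((-1, 1))
final = qs_evolve(5, walk, lambda x: x <= 0, n_steps=20000, rng=random.Random(5))
```

By construction the process never occupies an absorbing configuration, so averages taken along the trajectory are conditioned on survival, which is exactly the QS ensemble.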
For discontinuous APTs, we estimated the transition point through the jump in the order parameter and the finite-size scaling of the maximum of the susceptibility. In the DP class, spreading analysis starting from a single active site (a pre-absorbing configuration) is a very accurate and computationally efficient method [@Marrobook]. For the Manna class, spreading analysis is more cumbersome [@henkel08] due to the infinitely many pre-absorbing configurations. So, we proceeded using a dimensionless moment ratio analysis in the QS state [@dic-jaf], since moment ratios are size-independent at criticality. Here, we analyze the moment ratio $m = \langle \rho^2 \rangle/\langle \rho \rangle ^2$, which assumes a universal value $m_c$ at the clean critical point.
Results
=======
Discontinuous APT
-----------------
First-order transitions are characterized by a discontinuity in the order parameter and thermodynamic densities, with an associated delta-peak behavior of the susceptibility [@henkel08]. At finite volume, however, thermodynamic quantities become continuous and rounded. According to finite-size scaling theory, the rounding and shifting of the coexistence point scale inversely with the system volume $L^d$ [@binder]. Although no analogous scaling theory has been established for nonequilibrium systems yet, some studies show evidence of similar behavior for APTs [@AliSaif2009; @Sinha2012; @DeOliveira2015].
Quasi-stationary analysis remains useful in the context of discontinuous APTs [@DeOliveira2015]. In the QS simulations we observe a discontinuous phase transition from a low-density to a poisoned (absorbing) CO state, as shown in Fig. \[zgbqs\], instead of the rounded (continuous) transition expected for APTs in the presence of relevant disorder [@martin]. The inset of Fig. \[zgbqs\] shows the QS probability distribution of the density of active sites near the transition. We clearly observe a bimodal distribution, which is a hallmark of a discontinuous phase transition [@DeOliveira2015].
![(Color online) QS Density of CO sites in ZGB model on triangular and Voronoi lattices of linear system size $L=100$, showing a discontinuous phase transition to the CO-poisoned absorbing state close to $Y=0.56$. Inset shows the QS distribution for $Y=0.5590$ where the APT takes place.[]{data-label="zgbqs"}](triangVD.pdf){width="8cm"}
In analogy with equilibrium first-order phase transitions, where at the transition point a thermodynamic potential (such as the free energy) is equal for both phases [@binder], we can define the coexistence point as the value of the control parameter at which the areas under the peaks of the QS distribution associated with each phase (active and absorbing) are equal [@DeOliveira2015]. The intercept of the linear fit from this equal-area histogram method yields $Y_c=0.55928(3)$. This value is very close to the coexistence value $Y_c=0.5596(5)$ we found for the regular triangular lattice (see inset of Fig. \[zgbvar\]).
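The equal-area criterion can be illustrated with a small sketch that splits a bimodal QS histogram at the valley between its two main peaks and compares the weights under each; scanning the control parameter for the point where the weights balance locates coexistence. This is only an illustration assuming well-separated peaks, not the production analysis:

```python
import numpy as np

def peak_weights(hist):
    """Split a bimodal QS histogram at the deepest valley between its two
    main peaks and return the probability weight under each peak."""
    prob = np.asarray(hist, dtype=float)
    prob = prob / prob.sum()
    i1 = int(np.argmax(prob))                       # dominant peak
    mask = np.ones(len(prob), dtype=bool)           # exclude its vicinity
    mask[max(i1 - 1, 0):i1 + 2] = False
    i2 = int(np.arange(len(prob))[mask][np.argmax(prob[mask])])
    a, b = sorted((i1, i2))
    cut = a + int(np.argmin(prob[a:b + 1]))         # valley between peaks
    return prob[:cut].sum(), prob[cut:].sum()

# scanning Y and locating where the two weights balance gives the
# coexistence point of the equal-area (equal-histogram) criterion
```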
The maximum of the susceptibility $\chi$, defined as the variance of the order parameter, $\chi=L^d({\langle {\rho^2} \rangle}-{\langle {\rho} \rangle}^2)$, scales as $L^d$ in a discontinuous APT [@AliSaif2009; @Sinha2012; @DeOliveira2015], while its location shifts toward the transition point proportionally to the inverse volume. Figure \[zgbvar\] shows the finite-size scaling of the transition point, which clearly scales inversely with the volume, confirming again that the disordered lattice does not alter the discontinuous nature of the transition.
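The extrapolation behind this finite-size analysis amounts to a linear fit in the inverse volume. A hedged sketch, assuming the functional form $Y_{\max}(L)=Y_c+a\,L^{-d}$ and fed with synthetic data in the test:

```python
import numpy as np

def extrapolate_transition(L, Y_max, d=2):
    """Linear fit Y_max(L) = Y_c + a * L**(-d); returns (Y_c, a)."""
    x = np.asarray(L, dtype=float) ** (-d)
    a, Y_c = np.polyfit(x, np.asarray(Y_max, dtype=float), 1)
    return Y_c, a
```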
![(Color online) Quasi-stationary susceptibility on the ZGB model on VD lattices as a function of $Y$ for different sizes. Inset: finite size scaling for the susceptibility maxima in the range $L=40$ to $320$.[]{data-label="zgbvar"}](zgbvar.pdf){width="8cm"}
Further evidence of the discontinuity of the phase transition in the presence of quenched coordination disorder is shown in Fig. \[bistable\]. Using conventional simulations, we observe bistability of the system around the transition point: depending on the initial density, a homogeneous initial state may converge either to a stationary active state of high CO$_2$ production (and small CO density) or to the CO-poisoned (absorbing) state.
![ Density of CO as a function of time for distinct initial conditions close to the transition point. Initial densities $\rho_{CO} = 0.0,
~0.1,~0.2,\cdots,0.9$, from bottom to top. Linear system size L =100 and $Y=0.5560$.[]{data-label="bistable"}](bistable.pdf){width="8cm"}
These results contrast with those for uncorrelated disorder, for which, no matter its strength, the discontinuous transition is replaced by a continuous one.
Continuous APT
--------------
The spreading analysis for the A model on VD lattices, using the mean number of active sites against time with a single occupied site as the initial condition, provides a critical value $\lambda_c=0.322430(5)$, which is smaller than the $\lambda_c = 0.34047(1)$ found for the regular triangular lattice with $q=6$. This difference is more significant than the one obtained for the CP on these same lattices [@oliveira2], showing that the effect of disorder on the A model is stronger than on the CP. However, the critical behavior remains that of the clean system, exhibiting clear power laws with spreading exponents fully consistent with the DP class (results not shown).
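As an illustration of the spreading protocol, a toy run from a single active seed can be sketched as below. This is a crude synchronous-update stand-in on a square lattice, not the continuous-time dynamics on VD lattices used here; the point is only the bookkeeping of $N(t)$:

```python
import numpy as np

rng = np.random.default_rng(1)

def spread_run(lam, t_max, L=64):
    """One spreading run from a single active seed on an L x L square
    lattice (discrete-time proxy for a contact-process-like model);
    returns the time series N(t) of the number of active sites."""
    active = {(L // 2, L // 2)}
    N = [len(active)]
    for _ in range(t_max):
        new = set(active)
        for x, y in list(active):
            # with probability 1/(1+lam) the particle dies,
            # otherwise it tries to occupy a random nearest neighbour
            if rng.random() < 1.0 / (1.0 + lam):
                new.discard((x, y))
            else:
                dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
                new.add(((x + dx) % L, (y + dy) % L))
        active = new
        N.append(len(active))
    return np.array(N)

# averaging N(t) over many independent runs at the critical point gives
# the spreading power law <N(t)> ~ t^eta used to locate lambda_c
```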
Figure \[fig:FSSVD\] shows that at the critical point the QS density $\rho$ decays as a power law, $\rho \sim L^{-\beta/\nu_\perp}$, with $\beta/\nu_\perp=0.79(1)$. Moreover, the lifetime of the QS state also follows a power law at criticality, $\tau \sim L^{z}$, with $z=1.73(5)$. Both exponent values are close to the DP ones, $\beta/\nu_\perp=0.797(3)$ and $z=1.7674(6)$ [@henkel08]. The inset of Fig. \[fig:FSSVD\] shows the ratio $m={\langle {\rho^2} \rangle}/{\langle {\rho} \rangle}^2$ around criticality for varying system sizes. From these data we found $m_c=1.33(1)$, in agreement with the value $m_c = 1.3257(5)$ found for the DP class in two dimensions [@dic-jaf]. All results presented here confirm the irrelevance of the disorder of the VD lattice for the critical behavior of the A model.
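Both exponents quoted here come from log-log slopes of the FSS data. A minimal sketch (the arrays passed in are placeholders for the measured $\rho(L)$ and $\tau(L)$):

```python
import numpy as np

def powerlaw_exponent(L, y):
    """Least-squares slope of log y versus log L, i.e. s in y ~ L**s."""
    s, _ = np.polyfit(np.log(L), np.log(y), 1)
    return s

# rho(L) at criticality gives s = -beta/nu_perp; tau(L) gives s = z
```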
![\[fig:FSSVD\] (Color online) FSS of the critical A model. Main: Quasistationary density of active sites $\rho$ (stars) and lifetime of the QS state $\tau$ (crosses) as a function of the system size $L$ for $\lambda=0.32243$. Inset: Quasistationary moment ratio $m$ versus $1/L$, for $\lambda = 0.32238$, $\lambda = 0.32242$, $\lambda = 0.32246$, from top to bottom.](FSSVD.pdf){width="8cm"}
Let us now turn our attention to the Manna class. Its correlation length exponent, $\nu_\perp=0.799$ [@Lubeck2002], is larger than the DP value of 0.7333, so the modified Harris-Luck criterion for the relevance of disorder, $(d+1)\nu_\perp<2$, is again not fulfilled on VD lattices [@vojta14c]. The critical point determination using moment ratios is shown in the inset of Fig. \[fig:Mannacrit\], resulting in the estimate $p_c=0.688808(2)$, which is smaller than the triangular lattice threshold $p_c=0.69375(5)$. The critical moment ratio is $m_c=1.35(1)$, which agrees with the value $m_c=1.348(7)$ we found for square lattices[^4] at the threshold $p_c=0.716957(2)$. The critical exponents we obtained using $L\geqslant 256$, $\beta/\nu_\perp=0.78(1)$ and $\nu_\parallel/\nu_\perp=1.54(2)$, are also in striking agreement with the Manna class exponents $\beta/\nu_\perp=0.80(2)$ and $\nu_\parallel/\nu_\perp=1.53(5)$.
![(Color online) Critical Manna model on VD lattices. Main: Critical density of active sites (crosses) and lifetime (stars) against lattice size. Inset: Moment ratio $m={\langle {\rho^2} \rangle}/{\langle {\rho} \rangle}^2$ against inverse of size for $p=0.688800,~0.688805,~0.688810,~$and 0.688815 from top to bottom.[]{data-label="fig:Mannacrit"}](Mannacrit.pdf){width="8cm"}
It is known that critical exponents and moment ratios of the Manna class in $d=2$ obtained via QS analysis are hardly distinguishable from those of the DP class [@henkel08; @Bonachela2007]. In order to provide a more incisive verification that the Manna model on the VD lattice has exponents different from DP, we considered the density around the critical point, which scales as [@Dickman2006] $$\rho(\Delta,L)=\frac{1}{L^{\beta/\nu_\perp}}\mathcal{F}_\rho(L^{1/\nu_\perp} \Delta),$$ where $\Delta=p-p_c$. This implies that $$\left|\frac{\partial \ln \rho}{\partial p}\right| \sim L^{1/\nu_\perp}$$ can be used to obtain the exponent $\nu_\perp$ explicitly. Similarly, for the moment ratio we have $m(\Delta,L)=\mathcal{F}_m(L^{1/\nu_\perp} \Delta)$, implying that $\nu_\perp$ can also be obtained directly from $$\left|\frac{\partial m}{\partial p}\right| \sim L^{1/\nu_\perp}.$$ A similar scaling law is expected for $\tau$. The inset of Fig. \[fig:dxdlb\] shows the moment ratios around the critical point, where the slope clearly increases (in absolute value) with size. The main plot shows the derivatives against size. Using the three methods, we estimate a critical exponent $1/\nu_\perp=1.252(10)$, remarkably close to the Manna-class value $1/\nu_\perp=1.250(18)$ [@henkel08] and definitely ruling out the DP value $1/\nu_\perp=1.364(10)$ [@Marrobook].
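The derivative method can be sketched as follows: estimate $|\partial \ln\rho/\partial p|$ (or $|\partial m/\partial p|$) at $p_c$ by finite differences for each size, then read $1/\nu_\perp$ off a log-log fit against $L$. The helper names below are ours, and the test data are synthetic:

```python
import numpy as np

def central_slope(p, y, i):
    """Centred finite-difference estimate of dy/dp at index i."""
    return (y[i + 1] - y[i - 1]) / (p[i + 1] - p[i - 1])

def inv_nu_perp(sizes, abs_slopes):
    """Log-log slope of |d(ln rho)/dp| (or |dm/dp|) at p_c versus L,
    which estimates the exponent 1/nu_perp."""
    s, _ = np.polyfit(np.log(sizes), np.log(np.abs(abs_slopes)), 1)
    return s
```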
![(Color online) Determination of critical exponent $\nu_\perp$ for Manna model on VD lattices using different quantities $x=\ln \rho,~\ln \tau$ and $m$. Inset: Moment ratio against control parameter around the critical point for $L=256$, 512, 1024, and 2048.[]{data-label="fig:dxdlb"}](dxdlb.pdf){width="8cm"}
Conclusions
===========
We have investigated the effects of quenched coordination disorder on continuous and discontinuous absorbing-state phase transitions. Our extensive simulations of the ZGB model on the VD lattice reveal that the discontinuous nature of the absorbing-state transition featured by the model remains unchanged under this kind of disorder. Recently, it was shown that the Imry-Ma argument can be extended to nonequilibrium situations, including absorbing states, and it was further conjectured that first-order phase transitions cannot appear in low-dimensional disordered systems with an absorbing state. We showed that this is not always true: our results for the ZGB model raise the interesting possibility that disorder in nonequilibrium APTs may, under certain conditions, be irrelevant for phase coexistence. The underlying reason is that the fluctuations induced by the correlated coordination disorder of the VD lattice decay faster than those of uncorrelated disorder and are thus not able to preclude phase coexistence.
In the case of continuous APTs, we performed large-scale simulations of the A and Manna models on a Voronoi-Delaunay random lattice. Our results confirm, as expected, that this kind of disorder does not alter the universality of the continuous transitions, supporting the view that the strong anticorrelations present in the VD random lattice make topological disorder less relevant than uncorrelated randomness.
Our findings corroborate a recent work of Barghathi and Vojta [@vojta14c], which shows systematically that the disorder fluctuations of the VD lattice feature strong anticorrelations and decay faster than those of random uncorrelated disorder. In particular, it was shown that the random VD lattice has wandering exponent $\omega = 1/4$ [@vojta14c]. Hence, in this case, the Harris-Luck criterion implies that random connectivity is irrelevant at a clean critical point for $\nu_\perp > 2/3$, a condition satisfied by both the Manna and DP universality classes. It is important to mention that, in contrast to the A model, which belongs to the DP class, even the strong disorder of uncorrelated lattice dilution (below the lattice percolation threshold) was found to be irrelevant for the Manna class [@LeePRE2011; @LeePRE2013; @LeePRL]. Our results are therefore consistent with these findings, since the coordination disorder of the VD lattice is weaker than lattice dilution. In addition, we determined the exponent $1/\nu_\perp= 1.252(10)$ for the Manna class on the VD lattice, definitely ruling out the DP value $1/\nu_\perp= 1.364(10)$.
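The criterion invoked here can be packaged as a one-line check. We assume the general form of the Harris-Luck condition for relevance, $\nu_\perp < 1/[d(1-\omega)]$, which reduces to the $\nu_\perp < 2/3$ threshold quoted above for $\omega=1/4$ in $d=2$:

```python
def harris_luck_relevant(nu_perp, d=2, omega=0.25):
    """Harris-Luck criterion for topological disorder with wandering
    exponent omega: relevant at the clean critical point iff
    nu_perp < 1 / (d * (1 - omega))  (threshold 2/3 for omega=1/4, d=2)."""
    return nu_perp < 1.0 / (d * (1.0 - omega))

# both clean universality classes clear the threshold on the VD lattice:
print(harris_luck_relevant(0.7333))  # DP    -> False (irrelevant)
print(harris_luck_relevant(0.799))   # Manna -> False (irrelevant)
```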
Further work should include the study of absorbing phase transitions on a three-dimensional random VD lattice, since it does not belong to the class of lattices with constrained total coordination [@vojta14]. In particular, according to the Harris criterion, disorder might be relevant for the Manna class in three dimensions, so the behavior could differ qualitatively between two and three dimensions. It would also be interesting to investigate whether other kinds of correlated disorder are irrelevant for phase coexistence.
This work was supported by CNPq, CAPES and FAPEMIG, Brazil. M.M.O thanks the kind hospitality at the Complex Systems and Statistical Physics Group/University of Manchester, where part of this work was done, and financial support from CAPES, under project BEX 10646/13-2.
[66]{}ifxundefined \[1\][ ifx[\#1]{} ]{}ifnum \[1\][ \#1firstoftwo secondoftwo ]{}ifx \[1\][ \#1firstoftwo secondoftwo ]{}““\#1””@noop \[0\][secondoftwo]{}sanitize@url \[0\][‘\
12‘\$12 ‘&12‘\#12‘12‘\_12‘%12]{}@startlink\[1\]@endlink\[0\]@bib@innerbibempty @noop [**]{} (, , ) @noop [**]{}, Vol. (, , ) @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1103/RevModPhys.87.925) [****, ()](\doibase
10.1103/PhysRevLett.99.234503) @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevB.83.012503) @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevLett.57.90) [****, ()](\doibase 10.1103/PhysRevB.38.2715) [****, ()](\doibase 10.1103/PhysRevE.54.R3090) [****, ()](\doibase 10.1103/PhysRevE.57.1263) [****, ()](\doibase 10.1103/PhysRevLett.96.035701) @noop [****, ()]{} @noop [****, ()]{} @noop [ ()]{} [****, ()](\doibase 10.1103/PhysRevE.89.012112) @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevE.79.011111) @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevA.45.R563) @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevE.56.R6241) @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevLett.35.1399) [****, ()](\doibase 10.1103/PhysRevLett.62.2507) @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevLett.98.050601) [****, ()](\doibase 10.1140/epjb/e2008-00003-7) @noop [**]{} (, , ) [****, ()](\doibase 10.1103/PhysRevE.78.031133) @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevB.69.144208) @noop [****, ()]{} [****, ()](\doibase 10.1088/0034-4885/62/10/201) [****, ()](\doibase 10.1016/0550-3213(84)90501-7) @noop [****, ()]{} @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevE.58.4266) [****, ()](\doibase 10.1103/PhysRevLett.112.075702) @noop [****, ()]{} [****, ()](\doibase
10.1103/PhysRevE.64.056104), [****, ()](\doibase 10.1103/PhysRevE.71.016129) @noop [****, ()]{} [****, ()](http://stacks.iop.org/0034-4885/50/i=7/a=001) [ ()](\doibase 10.1088/1742-5468/2009/07/P07023) [****, ()](\doibase 10.1007/s10955-011-0414-5) @noop [****, ()]{} [****, ()](\doibase 10.1103/PhysRevE.66.046114) [ ()](\doibase 10.1088/1742-5468/2014/08/P08003) [****, ()](\doibase 10.1016/j.physa.2007.04.110) [****, ()](\doibase 10.1103/PhysRevE.73.036131)
[^1]: On leave at: Theoretical Physics Division, School of Physics and Astronomy, The University of Manchester, Manchester, M13 9PL, UK
[^2]: Present address: Departamento de Física e Matemática, Universidade Federal de São João Del Rei, 36420-000, Ouro Branco, MG, Brazil
[^3]: The wandering exponent is associated to the decay of deviations from the average as a function of patch sizes where the averages are computed.
[^4]: Our estimate of $m$ does not agree with that of Ref. [@DaCunha2014] where a restricted version of the Manna model, in which $N_i>2$ is forbidden, was considered.
---
author:
- Ilse Cleeves
- Ryan Loomis
- Richard Teague
- Ke Zhang
- Edwin Bergin
- Karin Öberg
- Crystal Brogan
- Todd Hunter
- Yuri Aikawa
- Sean Andrews
- Jaehan Bae
- Jennifer Bergner
- Kevin Flaherty
- Viviana Guzman
- Jane Huang
- Michiel Hogerheijde
- 'Shih-Ping Lai'
- Laura Pérez
- Charlie Qi
- Luca Ricci
- Colette Salyk
- Kamber Schwarz
- Jonathan Williams
- David Wilner
- Al Wootten
title: |
**Astro 2020 White Paper:\
Realizing the Unique Potential of ALMA to Probe the Gas Reservoir of Planet Formation\
**
---
[**Abstract:**]{} Understanding the origin of the astonishing diversity of exoplanets is a key question for the coming decades. ALMA has revolutionized our view of the dust emission from protoplanetary disks, demonstrating the prevalence of ring and spiral structures that are likely sculpted by young planets in formation. To detect kinematic signatures of these protoplanets and to probe the chemistry of their gas accretion reservoir will require the imaging of molecular spectral line emission at high angular and spectral resolution. However, the current sensitivity of ALMA limits these important spectral studies to only the nearest protoplanetary disks. Although some promising results are emerging, including the identification of the snowlines of a few key molecules and the first attempt at detecting a protoplanet’s spiral wake, it is not yet possible to search for these important signatures in a diverse population of protoplanetary disks. Harnessing the tremendous power of (sub)mm observations to pinpoint and characterize the chemistry of planets in formation will require a major increase of ALMA’s spectral sensitivity ($5-10\times$), increase in bandwidth ($2\times$) at high spectral resolution, and improved angular resolution ($2\times$) in the 2030 era.
Introduction
============
[r]{}[0.46]{} {width="0.99\linewidth"}
Today $>3900$ exoplanets are confirmed, and this number will continue to grow with current ([*TESS*]{} and [*GAIA*]{}) and future missions ([*JWST, PLATO, WFIRST*]{}, etc). Even given current observational biases, the diversity of exoplanetary system architectures is astonishing. ALMA is currently leading a revolution in our understanding of the [*origins*]{} of this diversity, allowing us for the first time to peer deep into protoplanetary disks and capture images of planet formation in action. (Sub)millimeter dust continuum observations reveal the evolution of disk midplane solids as protoplanets form, as exemplified by a recent ALMA Large Program (DSHARP; Fig. \[dsharp\]) of 20 disks revealing numerous dark/bright rings, spiral structures, and azimuthal asymmetries (with typical size scales of 5 – 10 au) that are generally thought to be sculpted by the presence of hidden planets in their infancy. However, the dust can only tell a fraction of the story: it is the gas that traces $99\%$ of a protoplanetary disk’s mass, encodes all of the kinematic information, and reveals the chemical reservoir for planet formation. With ALMA, we are just now beginning to unlock the unique diagnostic potential of gas-phase spectroscopic observations and link the physical [*and*]{} chemical properties of protoplanetary disks with their forming planets.
As we enter a new era when the characterization of exoplanetary atmospheres becomes routine, ALMA shows promise to be a transformative instrument in connecting exoplanets with the story of their origins. Achieving this potential, however, will require both spatially and spectrally resolving key diagnostic line emission at relevant physical scales (such data are inherently $\sim 2$ orders of magnitude less sensitive than the continuum). Moreover, such studies must cover a representative sample of disks that span a range of evolutionary states, disk morphologies, and environments. Here the current limitations of ALMA become apparent. Presently, to achieve $\sim10-15$ au resolution for spectroscopic study of [*only five*]{} targets requires a 130 hr ALMA Large Program (PI: [Ö]{}berg). These disks reside at distances of $\sim140$ pc, but in order to study the closest disks in a massive star forming environment (e.g., Orion), we must reach out to $\sim400$ pc. Improving spectral surface brightness sensitivity and simultaneous bandwidth to observe more diagnostic lines at once will therefore be critical for comprehensive spectral studies of protoplanetary disks in the coming decades.
Here we present key science drivers for spectroscopic study of protoplanetary systems in the (sub)mm regime, highlighting the present state of the art and areas where deficiencies in current capabilities motivate the significant upgrades outlined in the ALMA 2030 Development Roadmap [@Carpenter2019]. In particular, we show that a 5-$10\times$ increase in spectral sensitivity coupled with an increase in spectral agility and bandwidth will both dramatically improve our capability to directly detect protoplanets [*and*]{} massively expand the sample size of surveys investigating the chemical environment in which exoplanets form.
Kinematic Detection of Planets in Formation
===========================================
To directly confront planet formation theories, we must find planets during their formation, while still embedded in the disk. Previously, there have been two main approaches to this goal. The first exploits the high angular resolution of extreme adaptive optics (XAO; e.g. Gemini Planet Imager and SPHERE/VLT) to try to detect thermal emission from the young planet, or H$\alpha$ emission from accretion [@Wagner2018]. However, searches in nearby disks with significant mm dust continuum substructure have resulted in many upper limits, suggesting that protoplanets are generally much cooler, or accrete significantly less vigorously than predicted. The second approach is the detection of circumplanetary disks (CPD) at (sub)mm wavelengths. Though @Zhu2018 predict ALMA could detect CPDs down to 0.03 lunar masses, CPD emission has not yet been detected [e.g., @Andrews2018].
The spectroscopic imaging power of ALMA has led to a new approach for planet detection through searches for gas kinematic perturbations due to the gravitational influence of embedded protoplanets [@Perez2018; @Pinte2018; @Teague2018a]. Embedded
[r]{}[0.5]{} {width="0.96\linewidth"}
planets drive spiral wakes, resulting in local density enhancements and changes in the gas velocity due to the gas pressure gradient (Fig. \[fig:HD163296hydro\]). These effects result in two clear observables. First, the planet clears some material along its orbit, creating a gas deficit. Second, density variations perturb the radial pressure gradient and the rotational velocity of the gas [@Kanagawa2015], an effect which has already been identified in a handful of sources [@Teague2018a; @Teague2018c].
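The magnitude of such velocity perturbations can be anchored with the standard first-order expression for pressure-supported rotation, $\delta v/v_{\rm K} \simeq \tfrac{1}{2}(h/r)^2\,{\rm d}\ln P/{\rm d}\ln r$ — a textbook relation, not a result of the works cited here. A sketch with illustrative numbers:

```python
import numpy as np

GM_SUN = 887.0  # GM in (km/s)^2 au for 1 M_sun, so v_K(1 au) ~ 29.8 km/s

def dv_pressure(r_au, h_over_r, dlnP_dlnr, m_star=1.0):
    """First-order deviation of the gas rotation velocity from Keplerian
    (in km/s) due to the radial pressure gradient:
        dv / v_K ~ 0.5 * (h/r)**2 * dlnP/dlnr
    (negative, i.e. sub-Keplerian, for an outwardly decreasing pressure)."""
    v_k = np.sqrt(GM_SUN * m_star / r_au)
    return 0.5 * h_over_r**2 * dlnP_dlnr * v_k

# at 100 au around a solar-mass star, with h/r = 0.1 and dlnP/dlnr = -2.75,
# dv ~ -40 m/s: a percent-level deviation, the scale probed by these studies
```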
Though these signatures are intriguing, [*a more definitive method will be to directly image the spiral pattern of the wake*]{} (Fig. \[fig:HD163296hydro\]b). Identifying wakes provides two significant advantages. First, since detection is not limited to the inner disk regions where the mm grains reside, protoplanet searches can extend across the entire [*gas*]{} disk. At larger radial separations, studies in the NIR (e.g., with [*JWST*]{} or ELTs) will also be feasible as contamination from the stellar PSF
[r]{}[0.53]{} {width="1\linewidth"}
is reduced. Second, wake signatures are typically [*larger*]{} spatially than CPDs, making them accessible at lower spatial resolution.
However, ALMA currently lacks the sensitivity required to resolve spatial scales comparable to the ring/gap structures in the dust continuum ($\sim 5$ au) for any but the most nearby disks. Fig. \[fig:TWHya\] demonstrates the current state of the art in high angular resolution kinematic studies, with 6.6 hr on-source time towards the nearest disk TW Hya (d=60 pc) in $^{12}$CO(3-2), and 8 au resolution. Hints of azimuthal structures are observed, albeit amid significant noise. Confirmation will require significantly more integration time even toward nearby TW Hya so that more optically thin tracers can be used. Exploiting the true power of this technique in a sample of protoplanetary disks (unavoidably at larger distances) will require both high angular resolution (at least $2\times$) to achieve the requisite 5 au resolution and significantly higher sensitivity to overcome the commensurate decrease in surface brightness sensitivity.
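The cost of pushing spectral imaging to smaller beams and larger distances can be made concrete with a back-of-envelope scaling: at fixed brightness-temperature noise, the flux per beam drops as the beam area, and noise integrates down as $1/\sqrt{t}$, so the required time grows as the fourth power of the linear resolution gain (and, for a fixed physical scale, of the distance). A sketch, using the Orion-versus-TW Hya distances from the text:

```python
def time_factor(resolution_gain, distance_ratio=1.0):
    """Relative on-source time needed to keep the same brightness-
    temperature noise when shrinking the beam by `resolution_gain` and/or
    observing a target `distance_ratio` times farther at the same
    physical scale: flux per beam falls as beam area, noise ~ 1/sqrt(t),
    hence t ~ (resolution_gain * distance_ratio)**4."""
    return float(resolution_gain * distance_ratio) ** 4

# same 5 au physical resolution at Orion (~400 pc) vs TW Hya (~60 pc):
# time_factor(1, 400 / 60) ~ 2000x more time, absent sensitivity upgrades
```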
The Chemical Environment of Forming Planets
===========================================
The chemistry and physics of planet formation are intimately linked (Fig. \[fig:c2o\]), and we are just beginning to scratch the surface of this connection. With ALMA we can now directly observe snowlines where volatiles freeze out of the gas phase, and we can probe the indirect effects of physical evolution on chemistry. Even with observations limited to a handful of the most nearby protoplanetary disks, it is rapidly becoming clear that their chemistry is [*actively evolving*]{}. Some of the strongest evidence for these deviations from a simple inherited interstellar chemistry comes from synergistic ALMA and [*Herschel*]{} observations, showing respectively that both CO and water vapor are strongly depleted in disk surfaces compared to interstellar abundances [@hogerheijde2011; @miotello2017; @du2017].
![[*[**Top**]{} Cartoon of the radial distribution of key disk components. [**Bottom**]{} Midplane C/O ratio prediction compared to Solar for gas and ice. The C/O ratio changes radially due to the freeze out of species like H$_2$O, CO$_2$, and CO [@Oberg2011].*]{}[]{data-label="fig:c2o"}]({Planet_cartoon2}.png){width="0.98\linewidth"}
These tantalizing results suggest that the evolution of the disk chemical environment may play an important role in setting the range of planetary compositions, but many of the most crucial observations of gas are prohibitively expensive and thus currently limited in scope and sample size. [*We still do not know what the most common disk compositions are, and therefore we do not know what the most probable exoplanet compositions are likely to be.*]{} As exoplanet atmospheric characterization capabilities rapidly improve [e.g., @madhusudhan2018], such information will be critical in designing programs for follow-up atmospheric characterization of confirmed exoplanets from missions such as [*TESS*]{}.
Below we describe three key science questions for uncovering the chemical environment of planet formation. First, it is increasingly clear that the observed composition of disk surface layers is inconsistent with that of earlier interstellar stages [e.g., @cleeves2018iau]. It is therefore crucial to trace the evolution of disk chemistry in a statistically significant sample of sources across a wide range of physical environments and ages. Second, investigations of the interface between disk surface layers and the icy grains in the planet-forming midplane will be critical to interpreting the impact of gaseous chemical evolution on planetary inheritance. Here, direct and indirect ALMA observations of snowlines will be highly complementary with upcoming infrared studies of the disk ices and inner disk gas. Finally, emission from complex organic species in disks is inherently weak, but offers a powerful tool to constrain the interstellar inheritance of prebiotic material. An increase in surface brightness sensitivity at sub(mm) wavelengths would be transformative for each of these goals, and expanded instantaneous bandwidths would allow many to be achieved simultaneously.
[**What is the range of possible disk compositions, and which are common?**]{} The leading explanation for the aforementioned differences between the gas-phase carbon, oxygen, nitrogen, and sulfur abundances and interstellar values is that the volatiles are being sequestered into ice-coated grains that grow into larger pebbles or even bodies such as comets or planetesimals. This process preferentially removes oxygen (in the form of water) from the observable surface layers of the disk [@bergin2016; @cleeves2018iau], which enhances the C/O ratio in the gas [@Oberg2011 Fig. \[fig:c2o\]]. Under high C/O conditions, abundant hydrocarbons such as C$_2$H will form [@du2015], suggesting that observations of these hydrocarbons may be useful as a proxy for tracing disk chemical evolution. For example, the older ($\sim 8$ Myr) disk TW Hya requires a C/O ratio $\gtrsim 1$ to reproduce the brightness of the observed C$_2$H lines [@bergin2016], while the younger ($\sim0.5$ Myr) disk IM Lup only requires C/O $\sim 0.8$. Similarly, observations of optically thin N-bearing species such as H$^{13}$CN can be used to constrain the disk nitrogen content [@cleeves2018].
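The radial trend of Fig. \[fig:c2o\] can be reproduced with a toy volatile budget: removing each major oxygen carrier from the gas as it freezes out drives the gas-phase C/O upward. The abundances below are illustrative round numbers loosely inspired by the @Oberg2011 picture, not values taken from this white paper:

```python
# illustrative volatile inventory; abundances per H are assumed round numbers
SPECIES = {            # name: (carbon atoms, oxygen atoms, abundance)
    "CO":  (1, 1, 1.5e-4),
    "CO2": (1, 2, 0.3e-4),
    "H2O": (0, 1, 0.9e-4),
}

def gas_CtoO(frozen):
    """Gas-phase C/O ratio when the species in `frozen` are locked in ice."""
    gas = [v for s, v in SPECIES.items() if s not in frozen]
    carbon = sum(nc * x for nc, no, x in gas)
    oxygen = sum(no * x for nc, no, x in gas)
    return carbon / oxygen

print(gas_CtoO(set()))           # inside the H2O snowline -> 0.6
print(gas_CtoO({"H2O"}))         # between H2O and CO2 snowlines -> ~0.86
print(gas_CtoO({"H2O", "CO2"}))  # beyond the CO2 snowline: only CO -> 1.0
```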
Furthermore, with upcoming observations anticipated from [*JWST*]{}, we will be able to search for the “missing” ices at the same radii that ALMA probes the gas using broadband ice absorption features [@aikawa2012], and also test for radial transport of icy-coated dust grains into the terrestrial planet forming region by investigating volatile chemistry in the inner disk with [*JWST*]{} MIRI. For example, if the evolving grains transport extensive amounts of water into the inner disk, we should be able to see an inner gas-phase water enhancement, which would enrich the atmospheres of forming giant planets, potentially explaining close in gas giant exoplanets with water rich atmospheres [e.g., @pinhas2018].
However, we are still in the regime of small-number statistics, limited in our ability to detect key species sensitive to C/N/O, like C$_2$H and isotopologues of HCN and CO, toward a large sample of disks ($\sim$ a few hundred). ALMA surveys have yielded relatively few detections of the CO isotopologues compared to expectations from models with interstellar abundances [@ansdell2016]. By improving ALMA’s spectral line sensitivity, we have the potential to determine [*in a statistical way*]{} the most common compositions planets can inherit from their disks.
[**How do snowlines mediate the chemical and physical disk evolution?**]{} The freeze-out of different volatiles (H$_2$O, CO$_2$, and CO) as ice onto dust grains may dramatically improve the ability of grains to coagulate into larger bodies [@Ros2013; @Banzatti2015] and also shifts the balance of ice- versus gas-phase carbon, oxygen, nitrogen, etc., directly impacting the resulting initial chemical composition that a forming planet may inherit [see Fig. \[fig:c2o\] and @Oberg2011]. However, complicating this picture, if dust grains have grown to sufficiently large sizes, they may start to “blur” the specific snowline locations as the grains drift inward [@Piso2015; @Piso2016]. Therefore [*direct measurements*]{} of snowline locations are critical for identifying the locations of these threshold regions (Fig. \[fig:c2o\]).
The midplane CO snowline around sun-like stars is expected to occur between $10-40$ au, readily accessible with ALMA. Peering through highly optically-thick surface layers down to the midplane, however, requires the use of weakly emitting, optically-thin isotopologues as tracers. $^{13}$C$^{18}$O has emerged as a promising diagnostic, successfully employed by @Zhang2017 to unambiguously identify the mid-plane CO snowline at 21 au in TW Hya (d=60 pc) with ALMA. Similar studies for a larger sample of T Tauri disks are not feasible with the current sensitivity, however; imaging $^{13}$C$^{18}$O in a single disk at the distance of Taurus (d=140 pc) would require $\sim 30$ hr of on-source integration time. A 5-10$\times$ increase in spectral sensitivity would allow surveys of minimum-mass solar nebula type disks (60 M$_\oplus$ of solids plus 0.01 $M_\odot$ of H/He) across a number of local star-forming regions.
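The payoff of a sensitivity upgrade follows directly from the radiometer equation (noise $\propto 1/\sqrt{t}$): a gain $g$ in sensitivity cuts the required time by $g^2$. A back-of-envelope sketch using the $\sim30$ hr figure quoted above:

```python
def scaled_time(t_hours, sensitivity_gain):
    """Radiometer equation: noise ~ 1/sqrt(t), so reaching the same noise
    level with a sensitivity gain g takes a factor g**2 less time."""
    return t_hours / sensitivity_gain**2

# the ~30 hr 13C18O integration quoted above for a Taurus-distance disk:
print(scaled_time(30, 5))    # 5x more sensitive  -> 1.2 hr
print(scaled_time(30, 10))   # 10x more sensitive -> 0.3 hr
```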
Directly accessing the H$_2$O midplane snowline is more challenging for ALMA because of its compact radial distribution [within $1-5$ au; @Zhang2013; @Blevins2016], and a lack of optimal transitions. However, several weak warm/hot ($E_{U}\sim$ 100s to 1000s of K) transitions of H$_2$O, and H$_2^{18}$O in the (sub)mm offer hope for detecting, and even resolving the distribution of water at larger radii along its snow-surface interface. A tentative detection of H$_2$O and H$_2^{18}$O at 321-322 GHz has been reported in a disk 120 pc away [@Carr2018]. Only with a more sensitive ALMA can we push these studies forward, connecting water observations at larger radii with observations closer to the star from facilities such as [*JWST*]{} to provide a cohesive picture of water chemistry across a large sample.
[**What is our interstellar organic inheritance?**]{} It is currently unclear whether the molecular inventory of disks, particularly the midplane, is set by interstellar inheritance or an active disk chemistry. During the early prestellar phase, a rich chemistry has already begun, including abundant water and organics [@jimenezserra2016; @caselli2010]. Models suggest that some material, including water and organics, can be preserved in disks [e.g., @visser2009; @cleeves2014wat; @cleeves2016org; @drozdovskaya2018].
Although organic molecules are widely observed at earlier stages of star formation, low inherent gas-phase column densities make their detection in protoplanetary disks challenging. The deep integrations required, however, pay off with optically thin emission, which allows the gas-phase organic properties to be probed throughout the vertical extent of the disk, including closer to the midplane if non-thermal desorption is efficient. Moreover, the closely spaced lines of these species enable key disk physical properties, such as temperatures and densities, to be constrained, fundamentally anchoring physical models.
ALMA has provided the first detections of “complex” organics like CH$_3$CN, CH$_3$OH, and HCOOH toward nearby protoplanetary disks [@Oberg_2015; @Walsh_2016; @Favre_2018]. Observations of CH$_3$CN show the strong potential of organics as unambiguous tracers of excitation conditions [@Loomis_2018; @Bergner_2018]. Even these observations, however, are limited by prohibitively large integration times and lower resolutions, restricting our understanding at planet forming spatial scales. A 5-10$\times$ better spectral line sensitivity would enable organics to be used as a powerful probe of disk inheritance [*and*]{} physical/kinematic structure (§2) across a larger sample of disks. Larger instantaneous bandwidths ($\geq2\times$) would allow more diagnostic transitions to be observed at once, enabling all the key science goals described here to be simultaneously achievable.
Recommendations
===============
ALMA is leading a revolution in our understanding of planet formation. Nonetheless, the current limited spectral surface brightness sensitivity of ALMA restricts the study of the crucial [*gas*]{} component of planet formation to a handful of the most nearby objects in a non-representative sample of environments. Harnessing the tremendous power of (sub)mm observations to pinpoint and chemically characterize planets in formation requires a 5-10$\times$ improvement of ALMA’s spectral sensitivity and increased bandwidth ($\geq2\times$) at high spectral resolution for simultaneous observation of diagnostic lines in the 2030 era. These goals can be realized with a combination of increased collecting area, improved receivers, and increasing the bandwidth, efficiency, and data rates of the ALMA signal processing system.
, Y., [et al.]{} 2012, , 538, A57
, S. M., [et al.]{} 2018, , 869, L41
, M., [et al.]{} 2016, , 828, 46
, A., [et al.]{} 2015, , 815, L15
, E. A., [et al.]{} 2016, , 831, 101
, J. B., [et al.]{} 2018, , 857, 69
, S. M., [et al.]{} 2016, , 818, 22
, J., [et al.]{} 2019, arXiv e-prints
, J. S., [Najita]{}, J. R., & [Salyk]{}, C. 2018, RNAAS, 2, 169
, P., [et al.]{} 2010, , 521, L29
, L. I. 2018, in IAUS, ed. [Cunningham]{}, [Millar]{}, & [Aikawa]{}, Vol. 332, 57–68
, L. I., [et al.]{} 2014, Science, 345, 1590
—. 2016, , 819, 13
—. 2018, , 865, 155
, M. N., [et al.]{} 2018, , 476, 4949
, F., [et al.]{} 2017, , 842, 98
, F., [Bergin]{}, E. A., & [Hogerheijde]{}, M. R. 2015, , 807, L32
, C., [et al.]{} 2018, , 862, L2
, M. R., [et al.]{} 2011, Science, 334, 338
, J., [et al.]{} 2018, , 852, 122
, I., [et al.]{} 2016, , 830, L6
, K. D., [et al.]{} 2015, , 448, 994
, R. A., [et al.]{} 2018, , 859, 131
, N. [Atmospheric Retrieval of Exoplanets]{}, 104
, A., [et al.]{} 2017, , 599, A113
, K. I., [et al.]{} 2015, , 520, 198
, K. I., [Murray-Clay]{}, R., & [Bergin]{}, E. A. 2011, , 743, L16
, S., [Casassus]{}, S., & [Ben[í]{}tez-Llambay]{}, P. 2018, , 480, L12
, A., [et al.]{} 2018, , 480, 5314
, C., [et al.]{} 2018, , 860, L13
, A.-M. A., [et al.]{} 2015, , 815, 109
, A.-M. A., [Pegues]{}, J., & [[Ö]{}berg]{}, K. I. 2016, , 833, 203
, K. & [Johansen]{}, A. 2013, , 552, A137
, R., [et al.]{} 2018, , 860, L12
—. 2018, , 868, 113
, R., [et al.]{} 2009, , 495, 881
, K., [et al.]{} 2018, , 863, L8
, C., [et al.]{} 2016, , 823, L10
, K., [et al.]{} 2017, Nature Astronomy, 1, 0130
—. 2013, , 766, 82
, Z., [Andrews]{}, S. M., & [Isella]{}, A. 2018, , 479, 1850
---
abstract: 'The sparse generalized eigenvalue problem plays a pivotal role in a large family of high-dimensional learning tasks, including sparse Fisher’s discriminant analysis, canonical correlation analysis, and sufficient dimension reduction. However, the theory of the sparse generalized eigenvalue problem remains largely unexplored. In this paper, we exploit a non-convex optimization perspective to study this problem. In particular, we propose the truncated Rayleigh flow method (Rifle) to estimate the leading generalized eigenvector and show that it converges linearly to a solution with the optimal statistical rate of convergence. Our theory involves two key ingredients: (i) a new analysis of the gradient descent method on non-convex objective functions, as well as (ii) a fine-grained characterization of the evolution of sparsity patterns along the solution path. Thorough numerical studies are provided to back up our theory. Finally, we apply our proposed method in the context of sparse sufficient dimension reduction to two gene expression data sets.'
author:
- 'Kean Ming Tan, Zhaoran Wang, Han Liu, and Tong Zhang'
bibliography:
- 'reference.bib'
title: 'Sparse Generalized Eigenvalue Problem: Optimal Statistical Rates via Truncated Rayleigh Flow'
---
Introduction {#section:introduction}
============
A broad class of high-dimensional statistical problems such as sparse canonical correlation analysis (CCA), sparse Fisher’s discriminant analysis (FDA), and sparse sufficient dimension reduction (SDR) can be formulated as the sparse generalized eigenvalue problem (GEP). In detail, let $\Ab\in\RR^{d\times d}$ be a symmetric matrix and $\Bb \in \RR^{d\times d}$ be positive definite. For the symmetric-definite matrix pair $(\Ab,\Bb)$, the sparse generalized eigenvalue problem aims to obtain a sparse vector $\vb^*\in \RR^d$ satisfying $$\Ab \vb^* = \lambda_{\max} (\Ab,\Bb)\, \Bb \vb^*,$$ where $\vb^*$ is the leading generalized eigenvector corresponding to the largest generalized eigenvalue $\lambda_{\max} (\Ab,\Bb)$ of the matrix pair $(\Ab, \Bb)$. Let $s=\|\vb^* \|_0$ be the number of non-zero entries in $\vb^*$, and we assume that $s$ is much smaller than $d$.
In real-world applications, the matrix pair $(\Ab,\Bb)$ is a population quantity that is unknown in general. Instead, we can only access $(\hat{\Ab},\hat{\Bb})$, which is an estimate of $(\Ab,\Bb)$, i.e., $$\hat{\Ab} = \Ab + \Eb_{\Ab} \quad \mathrm{and}\quad \hat{\Bb} = \Bb + \Eb_{\Bb},$$ where $\Eb_{\Ab}$ and $\Eb_{\Bb}$ are stochastic errors due to finite sample estimation. For those statistical applications considered in this paper, $\Eb_{\Ab}$ and $\Eb_{\Bb}$ are symmetric. We aim to approximate $\vb^*$ based on $\hat{\Ab}$ and $\hat{\Bb}$ by approximately solving the following optimization problem $$\label{Eq:gep opt}
\underset{\vb\in\RR^{d}}{\mathrm{maximize}} \; \vb^T \hat{\Ab} \vb, \qquad \mathrm{subject\; to\;} \vb^T \hat{\Bb} \vb = 1, \qquad \|\vb\|_0 \le s.$$ There are two major challenges in solving (\[Eq:gep opt\]). Firstly, the problem in (\[Eq:gep opt\]) requires maximizing a convex objective function over a non-convex set, which is NP-hard even if $\hat{\Bb}$ is the identity matrix [@moghaddam2006generalized; @moghaddam2006spectral]. Secondly, in the high-dimensional setting in which the dimension $d$ is much larger than the sample size $n$, $\hat{\Bb}$ is in general singular, and classical algorithms for solving generalized eigenvalue problems are not directly applicable [@golub2012matrix].
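For context, when no sparsity constraint is imposed and $\hat{\Bb}$ is well-conditioned, the classical symmetric-definite problem is solved directly by standard dense routines; the sketch below uses SciPy's `eigh`, which accepts a matrix pair, on small simulated matrices (all variable names and the toy matrices are ours, not from the paper):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
A = (A + A.T) / 2                 # symmetric
M = rng.standard_normal((d, d))
B = M @ M.T + d * np.eye(d)       # positive definite

# eigh solves the symmetric-definite pair A v = lambda B v directly;
# eigenvalues are returned in ascending order.
vals, vecs = eigh(A, B)
v_star, lam_max = vecs[:, -1], vals[-1]

# The leading eigenpair satisfies the defining relation.
assert np.allclose(A @ v_star, lam_max * B @ v_star)
```

The sparse, high-dimensional regime of (\[Eq:gep opt\]), where $\hat{\Bb}$ is singular, is precisely where such dense solvers break down.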
In this paper, we propose a non-convex optimization algorithm to approximately solve (\[Eq:gep opt\]). The proposed algorithm iteratively performs a gradient ascent step on the generalized Rayleigh quotient $\vb^T \hat{\Ab} \vb/\vb^T \hat{\Bb} \vb$, and a truncation step that preserves the top $k$ entries of $\vb$ with the largest magnitudes while setting the remaining entries to zero. Here $k$ is a tuning parameter that controls the sparsity level of the solution. Strong theoretical guarantees are established for the proposed method. In particular, let $\{\vb_t\}_{t=0}^L$ be the solution sequence resulting from the proposed algorithm, where $L$ is the total number of iterations and $\vb_0$ is the initialization point. We prove that, under mild conditions, $$\label{eq:w1}
\| \vb_t - \vb^* \|_2 \le \underbrace{\nu^t}_{\text{optimization error}} + \underbrace{\xi(\Ab,\Bb)\cdot \big[\rho(\Eb_{\Ab}, 2k+s) + \rho(\Eb_{\Bb}, 2k+s)\big]}_{\text{statistical error}} \qquad (t = 1,\ldots, L).$$ The quantities $\nu \in (0,1)$ and $\xi(\Ab,\Bb)$ depend on the population matrix pair $(\Ab, \Bb)$. These quantities will be specified in Section \[section:theory\]. Meanwhile, $\rho(\Eb_{\Ab}, 2k+s)$ is defined as $$\label{eq:w3}
\rho(\Eb_{\Ab}, 2k+s) = \sup_{\|\ub\|_2 = 1,\, \|\ub\|_0 \le 2k+s} \big| \ub^T \Eb_{\Ab} \ub \big|,$$ and $\rho(\Eb_{\Bb}, 2k+s)$ is defined similarly. In (\[eq:w1\]), the first term on the right-hand side quantifies the exponential decay of the optimization error, while the second term characterizes the statistical error due to finite sample estimation. In particular, for the aforementioned statistical problems such as sparse CCA, sparse FDA, and sparse SDR, we can show that $$\label{eq:w2}
\max\big\{\rho(\Eb_{\Ab}, 2k+s),\, \rho(\Eb_{\Bb}, 2k+s)\big\} \le C\sqrt{\frac{(2k+s) \log d}{n}}$$ with high probability. Consequently, for any properly chosen $k$ that is of the same order as $s$, the algorithm achieves an estimator of $\vb^*$ with the optimal statistical rate of convergence $\sqrt{s \log d/n}$.
The sparse generalized eigenvalue problem in (\[Eq:gep opt\]) is closely related to the classical matrix computation literature (see, e.g., [@golub2012matrix] for a survey, and more recent results in [@ge2016efficient]). There are two key differences between our results and these previous works. First, we have an additional non-convex constraint on the sparsity level, which allows us to handle the high-dimensional setting. Second, due to the existence of stochastic errors, we allow the normalization matrix $\hat{\Bb}$ to be rank-deficient, while in the classical setting $\hat{\Bb}$ is assumed to be positive definite. In comparison with existing generalized eigenvalue algorithms, our algorithm keeps the iterative solution sequence within a basin that only involves a few coordinates of $\vb$, such that the corresponding submatrix of $\hat{\Bb}$ is positive definite. Furthermore, in this way our algorithm ensures that the statistical error in (\[eq:w1\]) only involves the largest sparse eigenvalues of the stochastic errors $\Eb_{\Ab}$ and $\Eb_{\Bb}$, as defined in (\[eq:w3\]). In contrast, a straightforward application of classical matrix perturbation theory gives a statistical error term that involves the largest eigenvalues of $\Eb_{\Ab}$ and $\Eb_{\Bb}$, which are much larger than their sparse eigenvalues [@stewart1990].\
**Notation:** Let $\vb = (v_1,\ldots,v_d)^T \in \RR^d$. We define the $\ell_q$-norm of $\vb$ as $\|\vb\|_q = (\sum_{j=1}^d |v_j|^q)^{1/q}$ for $1\le q < \infty$. For a symmetric matrix $\Zb$, let $\lambda_{\max}(\Zb)$ and $\lambda_{\min}(\Zb)$ be its largest and smallest eigenvalues, respectively. If $\Zb$ is positive definite, we define its condition number as $\kappa(\Zb) = \lambda_{\max}(\Zb)/\lambda_{\min} (\Zb)$. We denote by $\lambda_k(\Zb)$ the $k$th eigenvalue of $\Zb$, and by $\|\Zb\|_2 = \sup_{\|\vb\|_2 =1}\; \|\Zb \vb \|_2$ the spectral norm of $\Zb$. For $F\subset \{1,\ldots,d\}$, let $\Zb_F\in \RR^{|F|\times |F|}$ be the submatrix of $\Zb$ whose rows and columns are restricted to the set $F$. We define $\rho(\Zb, s) = \sup_{\|\ub\|_2 = 1, \|\ub\|_0 \leq s} | \ub^T \Zb \ub |$.
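The sparse quantity $\rho(\Zb, s)$ is NP-hard to compute in general, but for toy dimensions it can be evaluated exactly by enumerating supports, since on a fixed support $F$ the supremum is the largest absolute eigenvalue of $\Zb_F$. A brute-force sketch for numerically checking the theory (the helper name is ours; demo-sized $d$ only):

```python
import itertools
import numpy as np

def rho(Z, s):
    """rho(Z, s) = sup over unit u with ||u||_0 <= s of |u' Z u|, for
    symmetric Z, by enumerating all size-s supports (exponential in d)."""
    d = Z.shape[0]
    best = 0.0
    for F in itertools.combinations(range(d), s):
        w = np.linalg.eigvalsh(Z[np.ix_(F, F)])
        best = max(best, abs(w[0]), abs(w[-1]))  # largest |eigenvalue| on F
    return best

rng = np.random.default_rng(1)
Z = rng.standard_normal((6, 6))
Z = (Z + Z.T) / 2
# rho grows with the sparsity level and equals the spectral norm at s = d.
assert rho(Z, 2) <= rho(Z, 4) <= rho(Z, 6)
assert np.isclose(rho(Z, 6), np.linalg.norm(Z, 2))
```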
Sparse Generalized Eigenvalue Problem and Its Applications {#section:previous work}
==========================================================
Many high-dimensional multivariate statistics methods can be formulated as special instances of (\[Eq:gep opt\]). For instance, when $\hat{\Bb} = \Ib$, (\[Eq:gep opt\]) reduces to sparse principal component analysis (PCA), which has received considerable attention within the past decade (among others, [@zou2006sparse; @d2007direct; @d2008optimal; @witten2009penalized; @ma2013sparse; @cai2013sparse; @yuan2013truncated; @vu2013fantope; @vu2013minimax; @birnbaum2013minimax; @wang2014tighten; @gu2014sparse]). In the following, we provide three examples in which $\hat{\Bb}$ is not the identity matrix. We start with sparse Fisher’s discriminant analysis for classification problems (among others, [@tibshirani2003class; @guo2007regularized; @leng2008sparse; @clemmensen2012sparse; @mai2012direct; @mai2015multiclass; @kolar2015optimal; @gaynanova2015optimal; @fan2015quadro]).
\[example:fda\] **Sparse Fisher’s discriminant analysis:** Given $n$ observations with $K$ distinct classes, Fisher’s discriminant problem seeks a low-dimensional projection of the observations such that the between-class variance, $\bSigma_b$, is large relative to the within-class variance, $\bSigma_w$. Let $\hat{\bSigma}_b$ and $\hat{\bSigma}_w$ be estimates of $\bSigma_b$ and $\bSigma_w$, respectively. To obtain a sparse leading discriminant vector, one solves $$\label{Eq:FDA}
\underset{\vb}{\mathrm{maximize}} \; \vb^T \hat{\bSigma}_b \vb, \qquad \mathrm{subject\; to\;}
\vb^T \hat{\bSigma}_w \vb = 1, \qquad \|\vb\|_0 \le s.$$ This is a special case of (\[Eq:gep opt\]) with $\hat{\Ab} = \hat{\bSigma}_b$ and $\hat{\Bb}= \hat{\bSigma}_w$.
Next, we consider sparse canonical correlation analysis which explores the relationship between two high-dimensional random vectors [@witten2009penalized; @chen2013sparse; @gao2014sparse; @gao2015minimax].
\[example:cca\] **Sparse canonical correlation analysis:** Let $\bX$ and $\bY$ be two random vectors. Let $\bSigma_{x}$ and $\bSigma_{y}$ be the covariance matrices for $\bX$ and $\bY$, respectively, and let $\bSigma_{xy}$ be the cross-covariance matrix between $\bX$ and $\bY$. To obtain sparse leading canonical direction vectors, we solve $$\label{Eq:CCA}
\underset{\vb_x,\vb_y}{\mathrm{maximize}} \; \vb_x^T \hat{\bSigma}_{xy} \vb_y, \qquad \mathrm{subject\; to\;}
\vb^T_x \hat{\bSigma}_x \vb_x = \vb^T_y \hat{\bSigma}_y \vb_y = 1, \quad \|\vb_x\|_0 \le s_x, \quad \|\vb_y\|_0 \le s_y,$$ where $s_x$ and $s_y$ control the cardinality of $\vb_x$ and $\vb_y$. This is a special case of (\[Eq:gep opt\]) with $$\hat{\Ab}= \begin{pmatrix} \mathbf{0} & \hat{\bSigma}_{xy} \\
\hat{\bSigma}_{xy}^T & \mathbf{0} \end{pmatrix},
\qquad
\hat{\Bb}= \begin{pmatrix} \hat{\bSigma}_{x} &\mathbf{0} \\
\mathbf{0} & \hat{\bSigma}_{y} \end{pmatrix},
\qquad
\vb = \begin{pmatrix} \vb_x \\ \vb_y
\end{pmatrix}.$$
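The block construction above is mechanical to assemble from sample covariances; a small sketch with simulated data (the variable names and the toy data-generating process are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, dx, dy = 500, 4, 3
X = rng.standard_normal((n, dx))
Y = 0.5 * X[:, :dy] + rng.standard_normal((n, dy))   # Y correlated with X

# Joint sample covariance, then the three blocks Sigma_x, Sigma_y, Sigma_xy.
S = np.cov(np.hstack([X, Y]), rowvar=False)
Sx, Sy, Sxy = S[:dx, :dx], S[dx:, dx:], S[:dx, dx:]

# Assemble the symmetric-definite pair (A_hat, B_hat) of the CCA example.
A_hat = np.block([[np.zeros((dx, dx)), Sxy],
                  [Sxy.T, np.zeros((dy, dy))]])
B_hat = np.block([[Sx, np.zeros((dx, dy))],
                  [np.zeros((dy, dx)), Sy]])

assert np.allclose(A_hat, A_hat.T)                   # A_hat is symmetric
assert np.all(np.linalg.eigvalsh(B_hat) > 0)         # B_hat is positive definite
```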
Theoretical guarantees for sparse CCA were established recently. @chen2013sparse proposed a non-convex optimization algorithm for solving (\[Eq:CCA\]) with theoretical guarantees. However, their algorithm involves obtaining accurate estimators of $\bSigma_x^{-1}$ and $\bSigma_y^{-1}$, which are in general difficult to obtain without imposing additional structural assumptions on $\bSigma_x^{-1}$ and $\bSigma_y^{-1}$. In a follow-up work, @gao2014sparse proposed a two-stage procedure that attains the optimal statistical rate of convergence [@gao2015minimax]. However, they require the matrix $\bSigma_{xy}$ to be low-rank, and the second stage of their procedure requires the normality assumption. In contrast, our proposal only requires the condition in (\[eq:w2\]), which is much weaker and holds for general sub-Gaussian distributions.
In the sequel, we consider a general regression problem with a univariate response $Y$ and $d$-dimensional covariates $\bX$, with the goal of inferring the conditional distribution of $Y$ given $\bX$. Sufficient dimension reduction is a popular approach for reducing the dimensionality of the covariates [@li1991sliced; @cook1999dimension; @dennis2000save; @cook2007fisher; @cook2008principal; @ma2013review]. Many SDR problems can be formulated as generalized eigenvalue problems [@li2007sparse; @chen2010coordinate]. For simplicity, we consider a special case of SDR, the sparse sliced inverse regression.
\[example:sir\] **Sparse sliced inverse regression:** Consider the model $$Y= f(\vb_1^T \bX,\ldots,\vb_K^T \bX,\epsilon),$$ where $\epsilon$ is the stochastic error independent of $\bX$, and $f(\cdot)$ is an unknown link function. @li1991sliced proved that under certain regularity conditions, the subspace spanned by $\vb_1,\ldots,\vb_K$ can be identified. Let $\bSigma_x$ be the covariance matrix for $\bX$ and let $\bSigma_{E(\bX\mid Y)}$ be the covariance matrix of the conditional expectation $E(\bX\mid Y)$. The first leading eigenvector of the subspace spanned by $\vb_1,\ldots,\vb_K$ can be identified by solving $$\label{Eq:sir}
\underset{\vb}{\mathrm{maximize}} \; \vb^T \hat{\bSigma}_{E(\bX\mid Y)}\vb, \qquad \mathrm{subject\; to\;}
\vb^T \hat{\bSigma}_x \vb = 1, \qquad \|\vb\|_0 \le s.$$ This is a special case of (\[Eq:gep opt\]) with $\hat{\Ab}= \hat{\bSigma}_{E(\bX\mid Y)}$ and $\hat{\Bb} = \hat{\bSigma}_{x}$.
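In practice, $\hat{\bSigma}_{E(\bX\mid Y)}$ is commonly formed by the slicing estimator of @li1991sliced: sort observations by the response, average the covariates within each slice, and take the weighted covariance of the slice means. A hedged sketch of this standard estimator (the helper name and toy model are ours):

```python
import numpy as np

def sliced_cov(X, y, n_slices=10):
    """Slice-based estimate of Cov(E[X|Y]): average X within slices of
    sorted y, then form the weighted covariance of the slice means."""
    n, d = X.shape
    order = np.argsort(y)
    xbar = X.mean(axis=0)
    M = np.zeros((d, d))
    for idx in np.array_split(order, n_slices):
        m = X[idx].mean(axis=0) - xbar
        M += (len(idx) / n) * np.outer(m, m)
    return M

rng = np.random.default_rng(3)
n, d = 2000, 5
X = rng.standard_normal((n, d))
y = X[:, 0] + 0.1 * rng.standard_normal(n)   # Y depends on the first coordinate

M = sliced_cov(X, y)
# The estimate concentrates on the single active direction e_1.
assert M[0, 0] > 5 * np.abs(M[1:, 1:]).max()
```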
Many authors have proposed methodologies for sparse sliced inverse regression [@li2006sparse; @zhu2006sliced; @li2008sliced; @chen2010coordinate; @yin2015sequential]. More generally, in the context of sparse SDR, @li2007sparse and @chen2010coordinate reformulated sparse SDR problems into the sparse generalized eigenvalue problem in (\[Eq:gep opt\]). However, most of these approaches lack algorithmic and non-asymptotic statistical guarantees for the high-dimensional setting. Our results are directly applicable to many sparse SDR problems.
Truncated Rayleigh Flow Method {#section:algorithm}
==============================
\[sec:tgd\] We propose an iterative algorithm to estimate $\vb^*$, which we refer to as the truncated Rayleigh flow method (Rifle). More specifically, the optimization problem (\[Eq:gep opt\]) can be rewritten as $$\underset{\vb\in\RR^d}{\mathrm{maximize}} \; \frac{\vb^T \hat{\Ab} \vb}{\vb^T \hat{\Bb} \vb}, \qquad\mathrm{subject\; to\; } \|\vb\|_0 \le s,$$ where the objective function is called the generalized Rayleigh quotient. At each iteration of the algorithm, we compute the gradient of the generalized Rayleigh quotient and update the solution along its ascent direction. To achieve sparsity, a truncation operation is performed within each iteration. Let $\mathrm{Truncate}(\vb,F)$ be the truncated vector of $\vb$ obtained by setting $v_i = 0$ for $i\notin F$ for an index set $F$. We summarize the details in Algorithm \[alg:tgd\].
**Input**: matrices $\hat{\Ab}$, $\hat{\Bb}$, initial vector $\vb_0$, cardinality $k \in \{1,\ldots,d\}$, and step size $\eta$.
Let $t=1$. Repeat the following until convergence:
1. $\rho_{t-1} \leftarrow \vb_{t-1}^T \hat{\Ab} \vb_{t-1}/ \vb_{t-1}^T \hat{\Bb} \vb_{t-1}$.
2. $\Cb\leftarrow \Ib+ (\eta/\rho_{t-1})\cdot (\hat{\Ab}-\rho_{t-1}\hat{\Bb}) $.
3. $\vb_t' \leftarrow\Cb \vb_{t-1}/\|\Cb \vb_{t-1}\|_2$.
4. Let $F_t = \mathrm{supp}(\vb_t' ,k)$ contain the indices of $\vb_t'$ with the largest $k$ absolute values and $\mathrm{Truncate}(\vb_t',F_t)$ be the truncated vector of $\vb_t'$ by setting $(\vb_t')_i = 0$ for $i\notin F_t$.
5. $\hat{\vb}_t \leftarrow \mathrm{Truncate}(\vb'_t,F_t)$.
6. $\vb_t \leftarrow \hat{\vb}_t / \|\hat{\vb}_t\|_2$.
7. $t \leftarrow t+1$.
**Output**: $\vb_t$.
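The steps of Algorithm \[alg:tgd\] translate almost line-for-line into NumPy. The sketch below is our own minimal implementation, not the authors' reference code; it is checked on the easy case $\hat{\Bb} = \Ib$ (sparse PCA) with a planted sparse leading eigenvector and a uniform initialization:

```python
import numpy as np

def rifle(A, B, v0, k, eta, n_iter=500):
    """Minimal sketch of truncated Rayleigh flow: gradient ascent on the
    generalized Rayleigh quotient, renormalization, and hard truncation."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(n_iter):
        rho = (v @ A @ v) / (v @ B @ v)                    # step 1
        C = np.eye(len(v)) + (eta / rho) * (A - rho * B)   # step 2
        v = C @ v
        v /= np.linalg.norm(v)                             # step 3
        F = np.argsort(np.abs(v))[-k:]                     # step 4: top-k support
        vt = np.zeros_like(v)                              # step 5: truncate
        vt[F] = v[F]
        v = vt / np.linalg.norm(vt)                        # step 6
    return v

# Sanity check with B = I (sparse PCA) and a planted sparse eigenvector.
d, s = 30, 3
u = np.zeros(d)
u[:s] = 1 / np.sqrt(s)
A = 5 * np.outer(u, u) + 0.01 * np.eye(d)
B = np.eye(d)
v = rifle(A, B, np.ones(d), k=s, eta=0.5)
assert abs(u @ v) > 0.99      # recovers the planted direction
```

With an indefinite or rank-deficient $\hat{\Bb}$, the step size and initialization matter; the theory in the next section makes this precise.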
Algorithm \[alg:tgd\] requires the choice of a step size $\eta$, a tuning parameter $k$ on the cardinality of the solution, and an initialization $\vb_0$. As suggested by our theory, we need $\eta$ to be sufficiently small such that $\eta \lambda_{\max} (\hat{\Bb})<1$. The tuning parameter $k$ can be selected using cross-validation or based on prior knowledge. While the theoretical results in Theorem \[theorem:main\] suggest that a good initialization may be beneficial, our numerical results illustrate that the proposed algorithm is robust to different initializations $\vb_0$. A more thorough discussion can be found in the following section.
Theoretical Results {#section:theory}
===================
In this section, we show that if the matrix pair $(\Ab,\Bb)$ has a unique sparse leading eigenvector, then Algorithm \[alg:tgd\] can accurately recover the population eigenvector from the noisy matrix pair $(\hat{\Ab},\hat{\Bb})$. Recall from the introduction that $\Ab$ is symmetric and $\Bb$ is positive definite. This condition ensures that all generalized eigenvalues are real. We denote by $\vb^*$ the leading generalized eigenvector of $(\Ab,\Bb)$. Let $V = \mathrm{supp}(\vb^*)$ be the index set corresponding to the non-zero elements of $\vb^*$, and let $|V| = s$. Throughout the paper, for notational convenience, we employ $\lambda_j$ and $\hat{\lambda}_j$ to denote the $j$th generalized eigenvalues of the matrix pairs $(\Ab,\Bb)$ and $(\hat{\Ab},\hat{\Bb})$, respectively.
Our theoretical results depend on several quantities that are specific to the generalized eigenvalue problem. Let $$\label{Eq:crawford}
\mathrm{cr}(\Ab,\Bb) = \underset{\vb: \|\vb\|_2=1}{\min} \; \left[(\vb^T\Ab \vb)^2+(\vb^T\Bb \vb)^2 \right]^{1/2}> 0$$ denote the Crawford number of the symmetric-definite matrix pair $(\Ab,\Bb)$ (see, for instance, [@stewart1979pertubation]). For any set $F\subset \{1,\ldots,d\}$ with cardinality $|F|=k'$, let $$\label{Eq:inf crawford}
\mathrm{cr}(k') = \inf_{F : |F|\le k'} \mathrm{cr}(\Ab_F,\Bb_F);\qquad
\epsilon(k') = \sqrt{ \rho(\Eb_{\Ab},k')^2+\rho(\Eb_{\Bb},k')^2},$$ where $\rho(\Eb_{\Ab},k')$ is defined in (\[eq:w3\]). Moreover, let $\lambda_j(F)$ and $\hat{\lambda}_j(F)$ denote the $j$th generalized eigenvalue of the matrix pair $(\Ab_F,\Bb_F)$ and $(\hat{\Ab}_F,\hat{\Bb}_F)$, respectively. In the following, we start with an assumption that these quantities are upper bounded for sufficiently large $n$.
\[ass:large n\] For any sufficiently large $n$, there exist constants $b,c >0$ such that $$\frac{\epsilon(k')}{\mathrm{cr}(k')}\le b; \; \qquad \rho(\Eb_{\Bb},k') \le c \lambda_{\min} (\Bb)$$ for any $k' \ll n$, where $\epsilon(k')$ and $\mathrm{cr}(k')$ are defined in (\[Eq:inf crawford\]).
Provided that $n$ is large enough, it can be shown that the above assumption holds with high probability for most statistical models. More details can be found later in this section. We will use the following implications of Assumption \[ass:large n\] in the theoretical analysis, which are implied by matrix perturbation theory [@stewart1979pertubation; @stewart1990]. In detail, by applications of Lemmas \[lemma:eigenvalue\] and \[lemma:perturbed pair\] in Appendix \[proof:theorem:main\], we have that for any $F\subset \{1,\ldots,d\}$ with $|F|=k'$, there exist constants $a,c$ such that $$(1-a) \lambda_j (F)\le \hat{\lambda}_j (F) \le (1+a)\lambda_j(F);
\qquad
(1-c) \lambda_{j} (\Bb_F) \le \lambda_{j} (\hat{\Bb}_F) \le (1+c)\lambda_{j} (\Bb_F),$$ and $$\label{eq:kappa}
c_{\mathrm{lower}}\cdot \kappa(\Bb)
\le \kappa(\hat{\Bb}_F) \le c_{\mathrm{upper}} \cdot \kappa(\Bb),$$ where $c_{\mathrm{lower}}=(1-c)/(1+c)$ and $c_{\mathrm{upper}}= (1+c)/(1-c)$. Here $c$ is the same constant as in Assumption \[ass:large n\].
Our main results involve several more quantities. Let $$\label{Eq:eigengap}
\Delta \lambda = \underset{j>1}{\min} \; \frac{\lambda_1-(1+a)\lambda_j}{\sqrt{1+\lambda_1^2} \sqrt{1+(1-a)^2 \lambda_j^2}}$$ denote the eigengap for the generalized eigenvalue problem [@stewart1979pertubation; @stewart1990]. Meanwhile, let $$\label{eq:w8}
\gamma = \frac{(1+a) \lambda_2}{(1-a) \lambda_1} ;\qquad \qquad
\omega (k')
=\frac{2}{ \Delta \lambda \cdot \mathrm{cr}(k') } \cdot \epsilon(k')$$ be an upper bound on the ratio between the second largest and largest generalized eigenvalues of the matrix pair $(\hat{\Ab}_F,\hat{\Bb}_F)$, and the statistical error term, respectively. Let $$\theta = 1-
\frac{1-\gamma}{30\cdot (1+c) \cdot c_{\mathrm{upper}}^2\cdot \eta \cdot \lambda_{\max} (\Bb) \cdot \kappa^2(\Bb) \cdot [c_{\mathrm{upper}}\kappa(\Bb)+\gamma]}$$ be some constant that depends on the matrix $\Bb$.
The following theorem shows that under suitable conditions, Algorithm \[alg:tgd\] approximately recovers the leading generalized eigenvector $\vb^*$.
\[theorem:main\] Let $k' = 2k+s$ and assume that $\Delta \lambda > \epsilon(k')/\mathrm{cr}(k')$. Choose $k=Cs$ for sufficiently large $C$ and $\eta$ such that $\eta\lambda_{\max} (\Bb)<1/(1+c)$ and $$\nu = \sqrt{1+2 [(s/k)^{1/2} + s/k] }\cdot \sqrt{
1-\frac{1+c}{8} \cdot \eta \cdot \lambda_{\min} ({\Bb})\cdot
\left[ \frac{1-\gamma}{ c_{\mathrm{upper}} \kappa ({\Bb})+\gamma}\right]} < 1,$$ where $c$ and $c_{\mathrm{upper}}$ are constants defined in Assumption \[ass:large n\] and (\[eq:kappa\]). Given Assumption \[ass:large n\] and an initialization vector $\vb_0$ with $\|\vb_0\|_2=1$ satisfying $$\small
\frac{|(\vb^*)^T\vb_0|}{\|\vb^*\|_2} \ge \omega(k') + \theta,$$ we have $$\label{eq:w5}
\sqrt{1- \frac{|(\vb^*)^T {\vb}_t|}{\|\vb^*\|_2}} \le \nu^t \cdot \sqrt{1-\frac{|(\vb^*)^T\vb_0|}{\|\vb^*\|_2}}+ \sqrt{10}\cdot\frac{\omega (k')}{1-\nu}.$$
The proof of this theorem will be presented in Section \[sketch\]. The implications of Theorem \[theorem:main\] are as follows. For simplicity of discussion, assume that $(\vb^*)^T {\vb}_t$ and $(\vb^*)^T {\vb}_0$ are positive without loss of generality. Since ${\vb}_t$ and $\vb_0$ are unit vectors, we have $$1- \frac{|(\vb^*)^T {\vb}_t|}{\|\vb^*\|_2} = \frac{1}{2}\left\|\vb_t - \frac{\vb^*}{\|\vb^*\|_2} \right\|_2^2,\qquad 1- \frac{|(\vb^*)^T {\vb}_0|}{\|\vb^*\|_2} = \frac{1}{2}\left\|\vb_0 - \frac{\vb^*}{\|\vb^*\|_2} \right\|_2^2.$$ In other words, (\[eq:w5\]) states that the $\ell_2$ distance between $\vb^*/\|\vb^*\|_2$ and $\vb_t$ can be upper bounded by two terms. The first term on the right-hand side of (\[eq:w5\]) quantifies the optimization error, which decreases to zero at a geometric rate since $\nu<1$. Meanwhile, the second term on the right-hand side of (\[eq:w5\]) quantifies the statistical error $\omega(k')$. The quantity $\omega(k')$ depends on $\epsilon(k') = \sqrt{ \rho(\Eb_{\Ab},k')^2+\rho(\Eb_{\Bb},k')^2}$, where $\rho(\Eb_{\Ab},k')$ and $\rho(\Eb_{\Bb},k')$ are defined in (\[eq:w3\]). For a broad range of statistical models, these quantities converge to zero at the rate of $\sqrt{s\log d/n}$.
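The displayed identity is simply the expansion $\|\vb_t - \vb^*/\|\vb^*\|_2\|_2^2 = 2 - 2\,(\vb^*)^T\vb_t/\|\vb^*\|_2$ for unit vectors; a quick numerical check (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
v_star = rng.standard_normal(8)          # arbitrary (unnormalized) v*
u = v_star / np.linalg.norm(v_star)
v_t = rng.standard_normal(8)
v_t /= np.linalg.norm(v_t)               # unit iterate
if u @ v_t < 0:                          # the positivity convention assumed above
    v_t = -v_t

lhs = 1 - (u @ v_t)
rhs = 0.5 * np.linalg.norm(v_t - u) ** 2
assert np.isclose(lhs, rhs)
```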
Theorem \[theorem:main\] involves a condition on initialization: the cosine of the angle between $\vb^*$ and the initialization $\vb_0$ needs to be strictly larger than a constant, since $\epsilon(k')$ goes to zero for sufficiently large $n$. There are several approaches to obtain such an initialization for different statistical models. For instance, @johnstone2012consistency proposed a diagonal thresholding procedure to obtain an initial vector in the context of sparse PCA. Also, @chen2013sparse generalized the diagonal thresholding procedure for sparse CCA. An initial vector $\vb_0$ can also be obtained by solving a convex relaxation of (\[Eq:gep opt\]). This is explored in @gao2014sparse for sparse CCA. We refer the reader to @johnstone2012consistency and @gao2014sparse for a more detailed discussion.
Proof Sketch of Theorem \[theorem:main\] {#sketch}
----------------------------------------
To establish Theorem \[theorem:main\], we first quantify the error introduced by maximizing the empirical version of the generalized eigenvalue problem, restricted to a superset of $V$ ($V\subset F$), that is, $$\label{Eq:vbF}
\vb(F) = \underset{\vb \in \RR^d}{\arg \max} \; \vb^T \hat{\Ab} \vb, \qquad \mathrm{subject \; to\; } \vb^T \hat{\Bb} \vb = 1, \quad \mathrm{supp}(\vb) \subset F.$$ Then we establish an error bound between $\vb_t'$ in Step 3 of Algorithm \[alg:tgd\] and $\vb(F)$. Finally, we quantify the error introduced by the truncation step in Algorithm \[alg:tgd\]. These results are stated in Lemmas \[lemma:perturbation vec\]–\[lemma:truncation\] in Appendix \[proof:theorem:main\].
Applications to Sparse CCA, PCA, and FDA
----------------------------------------
In this section, we provide some discussions on the implications of Theorem \[theorem:main\] in the context of sparse CCA, PCA, and FDA. First, we consider the sparse CCA model. For the sparse CCA model, we assume $$\begin{pmatrix}
\bX\\ \bY
\end{pmatrix}\sim N (\mathbf{0},\bSigma); \qquad
\bSigma = \begin{pmatrix}
\bSigma_x & \bSigma_{xy}\\ \bSigma_{xy}^T & \bSigma_y
\end{pmatrix}.$$ The following proposition characterizes the rate of convergence between $\hat{\bSigma}$ and $\bSigma$. It follows from Lemma 6.5 of @gao2014sparse.
\[prop:concentration\] Let $\hat{\bSigma}_x$, $\hat{\bSigma}_y$, and $\hat{\bSigma}_{xy}$ be empirical estimates of $\bSigma_x$, $\bSigma_y$, and $\bSigma_{xy}$, respectively. For any $C>0$ and positive integer $\overbar{k}$, there exists a constant $C'>0$ such that $$\rho(\hat{\bSigma}_x - \bSigma_x ,\overbar{k}) \le C\sqrt{\frac{\overbar{k}\log d}{n}};
\quad
\rho(\hat{\bSigma}_y - \bSigma_y ,\overbar{k}) \le C\sqrt{\frac{\overbar{k}\log d}{n}};
\quad
\rho(\hat{\bSigma}_{xy} - \bSigma_{xy},\overbar{k}) \le C\sqrt{\frac{\overbar{k}\log d}{n}},$$ with probability greater than $1-\exp (-C' \overbar{k}\log d)$.
Recall from Example \[example:cca\] the definitions of $\hat{\Ab}$ and $\hat{\Bb}$ in the context of sparse CCA. Choosing $k$ to be of the same order as $s$, Proposition \[prop:concentration\] implies that $\rho(\Eb_{\Ab}, k')$ and $\rho(\Eb_{\Bb},k')$ in Theorem \[theorem:main\] are upper bounded by the order of $\sqrt{s\log d/n}$ with high probability. Thus, as the optimization error decays to zero, we obtain an estimator with the optimal statistical rate of convergence [@gao2015minimax]. In comparison with @chen2013sparse [@gao2014sparse], we do not require structural assumptions on $\bSigma_x$, $\bSigma_y$, and $\bSigma_{xy}$, as discussed in Section \[section:previous work\]. Furthermore, we can establish the same error bounds as in Proposition \[prop:concentration\] for sub-Gaussian distributions. Since our theory only relies on these error bounds, rather than the distributions of $\bX$ and $\bY$, we can easily handle sub-Gaussian distributions in comparison with the results in [@gao2014sparse].
In addition, our results have direct implications for sparse PCA and sparse FDA. For sparse PCA, assume that $\bX \sim N(\mathbf{0},\bSigma)$. As mentioned in Section \[section:previous work\], sparse PCA is a special case of the sparse generalized eigenvalue problem when $\hat{\Bb}= \mathbf{I}$ and $\hat{\Ab} = \hat{\bSigma}$. Theorem \[theorem:main\] and Proposition \[prop:concentration\] imply that our estimator achieves the optimal statistical rate of convergence of $\sqrt{s\log d/n}$ [@cai2013sparse; @vu2013minimax]. Recall the sparse FDA problem in Example \[example:fda\]. For simplicity, we assume there are two classes with means $\boldsymbol{\mu}_1$ and $\boldsymbol{\mu}_2$, and covariance matrix $\bSigma$. Then it can be shown that the between-class and within-class covariance matrices take the form $\bSigma_b = (\boldsymbol{\mu}_1-\boldsymbol{\mu}_2) (\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)^T$ and $\bSigma_w =\bSigma$. If we further assume the data are sub-Gaussian, using results similar to Proposition \[prop:concentration\], we have that $\rho(\hat{\bSigma}_b-\bSigma_b, k')$ and $\rho(\hat{\bSigma}_w-\bSigma_w,k')$ are upper bounded by the order of $\sqrt{s\log d/n}$ with high probability. Similar results were established in @fan2015quadro.
Numerical Studies {#section:simulation}
=================
We perform numerical studies to evaluate the performance of our proposal, Rifle, compared to some existing methods. We consider sparse Fisher’s discriminant analysis and sparse canonical correlation analysis, each of which can be recast as the sparse generalized eigenvalue problem (\[Eq:gep opt\]), as shown in Examples \[example:fda\]–\[example:cca\]. Our proposal involves an initial vector $\vb_0$ and a tuning parameter $k$ on the cardinality. In our simulation studies, we generate each entry of $\vb_0$ from a standard normal distribution. We then standardize the vector $\vb_0$ such that $\|\vb_0 \|_2 = 1$. We first run Algorithm \[alg:tgd\] with a large value of the truncation parameter $k$. The solution is then used as the initial value for Algorithm \[alg:tgd\] with a smaller value of $k$. This type of initialization has been considered in the context of sparse PCA and has been shown to yield good empirical performance [@yuan2013truncated].
Fisher’s Discriminant Analysis {#subsec:simFDA}
------------------------------
We consider a high-dimensional classification problem using sparse Fisher’s discriminant analysis. The data consist of an $n\times d$ matrix $\Xb$ with $d$ features measured on $n$ observations, each of which belongs to one of $K$ classes. We let $\xb_i$ denote the $i$th row of $\Xb$, and let $C_k \subset\{1,\ldots,n\}$ contain the indices of the observations in the $k$th class, with $n_k = |C_k|$ and $\sum_{k=1}^K n_k = n$.
Recall from Example \[example:fda\] that this is a special case of the sparse generalized eigenvalue problem with $\hat{\Ab}= \hat{\bSigma}_b$ and $\hat{\Bb} = \hat{\bSigma}_w$. Let $\hat{\boldsymbol{\mu}}_k = \sum_{i\in C_k} \xb_i/n_k$ be the estimated mean for the $k$th class. The standard estimates for $\bSigma_w$ and $\bSigma_b$ are $$\hat{\bSigma}_w = \frac{1}{n} \sum_{k=1}^K \sum_{i\in C_k} (\xb_i-\hat{\boldsymbol{\mu}}_k)(\xb_i-\hat{\boldsymbol{\mu}}_k)^T; \qquad
\hat{\bSigma}_b = \frac{1}{n}\sum_{k=1}^K n_k \hat{\boldsymbol{\mu}}_k\hat{\boldsymbol{\mu}}_k^T.$$ We consider two simulation settings similar to that of @witten2009penalized:
1. Binary classification: in this example, we set $\boldsymbol{\mu}_1 = \mathbf{0}$, $\mu_{2j} = 0.5$ for $j\in\{2,4,\ldots,40\}$, and $\mu_{2j} =0$ otherwise. Let $\bSigma$ be a block diagonal covariance matrix with five blocks, each of dimension $d/5\times d/5$. The $(j,j')$th element of each block takes value $0.7^{|j-j'|}$. As suggested by @witten2009penalized, this covariance structure is intended to mimic the covariance structure of gene expression data. The data are simulated as $\xb_i \sim N(\boldsymbol{\mu}_k,\bSigma)$ for $i\in C_k$.
2. Multi-class classification: there are $K=4$ classes in this example. Let $\mu_{kj} = (k-1)/3$ for $j\in\{2,4,\ldots,40\}$ and $\mu_{kj} =0$ otherwise. The data are simulated as $\xb_i \sim N(\boldsymbol{\mu}_k,\bSigma)$ for $i\in C_k$, with the same covariance structure as in the binary classification setting. As noted in @witten2009penalized, a one-dimensional projection of the data fully captures the class structure.
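To make the first setting and the plug-in scatter estimates concrete, the data-generating mechanism and the estimates $\hat{\bSigma}_w$ and $\hat{\bSigma}_b$ can be sketched as follows (Python; the function names and the sanity check at the end are ours, and the experiments themselves were run in R):

```python
import numpy as np

def ar_block_cov(d, n_blocks=5, rho=0.7):
    """Block-diagonal covariance with AR(rho) blocks of size d/n_blocks."""
    b = d // n_blocks
    idx = np.arange(b)
    block = rho ** np.abs(idx[:, None] - idx[None, :])
    return np.kron(np.eye(n_blocks), block)

def simulate_binary(n_per_class, d, rng):
    """Binary setting: class 1 has mean 0; class 2 has mean 0.5 on
    features 2, 4, ..., 40 (1-based); both share the covariance Sigma."""
    Sigma = ar_block_cov(d)
    mu2 = np.zeros(d)
    mu2[1:40:2] = 0.5                                    # 1-based indices 2, 4, ..., 40
    L = np.linalg.cholesky(Sigma)
    X1 = rng.normal(size=(n_per_class, d)) @ L.T         # class 1
    X2 = rng.normal(size=(n_per_class, d)) @ L.T + mu2   # class 2
    return np.vstack([X1, X2]), np.repeat([1, 2], n_per_class)

def scatter_matrices(X, y):
    """Plug-in estimates of Sigma_w and Sigma_b defined above."""
    n, d = X.shape
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for k in np.unique(y):
        Xk = X[y == k]
        mu_k = Xk.mean(axis=0)
        R = Xk - mu_k
        Sw += R.T @ R                                    # within-class scatter
        Sb += Xk.shape[0] * np.outer(mu_k, mu_k)         # uncentered between-class part
    return Sw / n, Sb / n

rng = np.random.default_rng(0)
X, y = simulate_binary(250, 1000, rng)
Sw, Sb = scatter_matrices(X, y)
# sanity check: Sigma_w + Sigma_b equals the uncentered second moment of the data
assert np.allclose(Sw + Sb, X.T @ X / X.shape[0])
```

The final assertion reflects the usual analysis-of-variance decomposition under the uncentered definition of $\hat{\bSigma}_b$ above.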
Four approaches are compared: (i) our proposal Rifle; (ii) $\ell_1$-penalized logistic or multinomial regression implemented using the R package glmnet; (iii) $\ell_1$-penalized FDA with a diagonal estimate of $\bSigma_w$, implemented using the R package penalizedLDA [@witten2009penalized]; and (iv) the direct approach to sparse discriminant analysis [@mai2012direct; @mai2015multiclass], implemented using the R packages dsda and msda for binary and multi-class classification, respectively. For each method, models are fit on the training set with tuning parameters selected using 5-fold cross-validation, and are then evaluated on the test set. In addition to these methods, we consider an oracle estimator that uses the theoretical direction $\vb^*$, computed from the population quantities $\bSigma_w$ and $\bSigma_b$.
To compare the different proposals, we report the misclassification error on the test set and the number of non-zero features selected by each model. The results for 500 training samples and 1000 test samples, with $d=1000$ features, are reported in Table \[Table:fda\]. From Table \[Table:fda\], we see that our proposal has the lowest misclassification error among the competing methods. The method of @witten2009penalized has the highest misclassification error in both simulation settings, since it does not take into account the dependencies among the features. The methods of @mai2012direct and @mai2015multiclass perform slightly worse than our proposal in terms of misclassification error; moreover, they select a large number of features, which renders interpretation difficult. In contrast, the number of features selected by our proposal is very close to that of the oracle estimator.
                           $\ell_1$-penalized   $\ell_1$-FDA    direct          our proposal    oracle
  ----------- ----------   -------------------- --------------- --------------- --------------- ---------------
  Binary      Error        29.21 (0.59)         295.85 (1.08)   26.92 (0.49)    19.18 (0.56)    8.23 (0.19)
              Features     111.79 (1.50)        23.45 (0.82)    122.85 (2.17)   49.15 (0.57)    41 (0)
  Multi-class Error        497.43 (1.69)        495.61 (1.14)   244.25 (1.39)   213.82 (1.56)   153.18 (0.80)
              Features     61.99 (2.51)         23.76 (2.23)    109.30 (2.33)   55.68 (0.36)    41 (0)
: The number of misclassified observations out of 1000 test samples and the number of non-zero features (with standard errors in parentheses) for the binary and multi-class classification problems, averaged over 200 data sets. Models are trained on 500 training samples with $d=1000$ features.
\[Table:fda\]
Canonical Correlation Analysis {#subsec:simCCA}
------------------------------
In this section, we study the relationship between two sets of random variables $\bX \in \RR^{d/2}$ and $\bY\in\RR^{d/2}$ in the high-dimensional setting using sparse CCA. Let $\bSigma_x$ and $\bSigma_y$ be the covariance matrices of $\bX$ and $\bY$, and let $\bSigma_{xy}$ be the cross-covariance matrix between $\bX$ and $\bY$. Assume that $(\bX,\bY) \sim N (\mathbf{0},\bSigma)$ with $$\bSigma = \begin{pmatrix} \bSigma_x & \bSigma_{xy} \\ \bSigma_{xy}^T & \bSigma_y \end{pmatrix}; \qquad \bSigma_{xy} = \bSigma_x \vb_x^* \lambda_1 (\vb_y^*)^T \bSigma_y,$$ where $0<\lambda_1<1$ is the largest generalized eigenvalue and $\vb_x^*$ and $\vb_y^*$ are the leading pair of canonical directions. The data consist of two $n\times ({d/2})$ matrices $\Xb$ and $\Yb$, whose rows are generated independently according to $(\xb_i,\yb_i) \sim N(\mathbf{0},\bSigma)$. The goal of CCA is to estimate the canonical directions $\vb^*_x$ and $\vb_y^*$ based on the data matrices $\Xb$ and $\Yb$.
Let $\hat{\bSigma}_x$, $\hat{\bSigma}_y$ be the sample covariance matrices of $\bX$ and $\bY$, and let $\hat{\bSigma}_{xy}$ be the sample cross-covariance matrix of $\bX$ and $\bY$. Recall from Example \[example:cca\] that the sparse CCA problem can be recast as the generalized eigenvalue problem with $$\hat{\Ab}= \begin{pmatrix} \mathbf{0} & \hat{\bSigma}_{xy} \\
\hat{\bSigma}_{xy}^T & \mathbf{0} \end{pmatrix},
\qquad
\hat{\Bb}= \begin{pmatrix} \hat{\bSigma}_{x} &\mathbf{0} \\
\mathbf{0} & \hat{\bSigma}_{y} \end{pmatrix},
\qquad
\vb = \begin{pmatrix} \vb_x \\ \vb_y
\end{pmatrix}.$$ In our simulation setting, we set $\lambda_1 = 0.9$, $v_{x,j}^* = v_{y,j}^* = 1/\sqrt{5}$ for $j\in\{1,6,11,16,21\}$, and $v_{x,j}^* = v_{y,j}^* =0$ otherwise. We then normalize $\vb_x^*$ and $\vb_y^*$ such that $(\vb_x^*)^T \bSigma_x \vb_x^* = (\vb_y^*)^T \bSigma_y \vb_y^*=1$. We take $\bSigma_x$ and $\bSigma_y$ to be block diagonal matrices with five blocks, each of dimension $d/10\times d/10$, where the $(j,j')$th element of each block takes value $0.7^{|j-j'|}$.
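The population quantities in this setting can be constructed as follows (a Python sketch with hypothetical function names; we take the five AR blocks to have dimension $(d/2)/5$ so that they tile $\bSigma_x$):

```python
import numpy as np

def cca_population_cov(d, lam=0.9, rho=0.7):
    """Population covariance for the sparse CCA simulation:
    Sigma_xy = lam * Sigma_x v_x* (v_y*)' Sigma_y with normalized directions."""
    p = d // 2
    b = p // 5
    idx = np.arange(b)
    block = rho ** np.abs(idx[:, None] - idx[None, :])
    Sx = np.kron(np.eye(5), block)            # block-diagonal AR(rho) covariance
    Sy = Sx.copy()
    v = np.zeros(p)
    v[[0, 5, 10, 15, 20]] = 1 / np.sqrt(5)    # support {1, 6, 11, 16, 21} (1-based)
    vx = v / np.sqrt(v @ Sx @ v)              # normalize so that v' Sigma_x v = 1
    vy = v / np.sqrt(v @ Sy @ v)
    Sxy = lam * Sx @ np.outer(vx, vy) @ Sy    # cross-covariance block
    Sigma = np.block([[Sx, Sxy], [Sxy.T, Sy]])
    return Sigma, vx, vy

Sigma, vx, vy = cca_population_cov(600)
# population version of the (A, B) pair in the generalized eigenvalue formulation
p = 300
A = np.block([[np.zeros((p, p)), Sigma[:p, p:]], [Sigma[:p, p:].T, np.zeros((p, p))]])
Bmat = np.block([[Sigma[:p, :p], np.zeros((p, p))], [np.zeros((p, p)), Sigma[p:, p:]]])
```

With this construction, the concatenated vector $(\vb_x^*, \vb_y^*)$ attains generalized Rayleigh quotient $\lambda_1 = 0.9$ for the pair above, as expected.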
We compare our proposal to @witten2009penalized, implemented using the R package PMA. Their proposal involves choosing two tuning parameters that control the sparsity of the estimated directional vectors, which we select using cross-validation. We run our method with multiple values of $k$, and report results for $k\in\{15,25\}$, since the results are qualitatively similar for large $n$. The outputs of both our proposal and that of @witten2009penalized are normalized to have norm one, whereas the true parameters $\vb_x^*$ and $\vb_y^*$ are normalized with respect to $\bSigma_x$ and $\bSigma_y$. To evaluate the performance of the two methods, we therefore renormalize $\vb_x^*$ and $\vb_y^*$ to have norm one, and compute the squared $\ell_2$ distance between the estimated and true directional vectors. The results for $d=600$ and $s=10$, averaged over 100 data sets, are plotted in Figure \[Fig:CCA\].
From Figure \[Fig:CCA\], we see that our proposal outperforms @witten2009penalized uniformly across different sample sizes. This is not surprising, since @witten2009penalized uses diagonal estimates of $\bSigma_x$ and $\bSigma_y$ to compute the directional vectors. Our proposal is not sensitive to the truncation parameter $k$ when the sample size is sufficiently large. In addition, we see that the squared $\ell_2$ distance for our proposal scales as $s\log (d)/n$, as suggested by Theorem \[theorem:main\].
Data Application
================
In this section, we apply our method in the context of sparse sliced inverse regression as in Example \[example:sir\]. The data sets we consider are:
1. Leukemia [@golub1999molecular]: 7,129 gene expression measurements from 25 patients with acute myeloid leukemia and 47 patients with acute lymphoblastic leukemia. The data are available from . This data set was recently analyzed in the context of sparse sufficient dimension reduction by @yin2015sequential.
2. Lung cancer [@spira2007airway]: 22,283 gene expression measurements from large airway epithelial cells sampled from 97 smokers with lung cancer and 90 smokers without lung cancer. The data are publicly available from GEO at accession number GDS2771.
We preprocess the leukemia data set following @golub1999molecular and @yin2015sequential. In particular, we set gene expression readings of 100 or fewer to 100, and readings of 16,000 or more to 16,000. We then remove genes whose difference and ratio between the maximum and minimum readings are less than 500 and 5, respectively, and apply a log-transformation to the data. This gives a data matrix $\Xb$ with 72 rows/samples and 3,571 columns/genes. For the lung cancer data, we simply select the 2,000 genes with the largest variance, as in @petersen2015fused. This gives a data matrix with 167 rows/samples and 2,000 columns/genes. We further standardize both data sets so that each gene has mean zero and variance one.
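The preprocessing pipeline can be sketched as follows (Python; we read the filter as keeping genes that pass both thresholds, and the toy input below is synthetic rather than the actual microarray data):

```python
import numpy as np

def preprocess_leukemia(X_raw):
    """Preprocessing described above: clip readings to [100, 16000], drop
    genes whose max-min difference is below 500 or max/min ratio is below 5,
    log-transform, then standardize each gene. Rows are samples."""
    X = np.clip(X_raw, 100, 16000)
    diff = X.max(axis=0) - X.min(axis=0)
    ratio = X.max(axis=0) / X.min(axis=0)
    X = np.log(X[:, (diff >= 500) & (ratio >= 5)])
    return (X - X.mean(axis=0)) / X.std(axis=0)

# toy check on synthetic readings
rng = np.random.default_rng(0)
X_pp = preprocess_leukemia(rng.uniform(50, 20000, size=(72, 50)))
```

After standardization, each retained gene has mean zero and variance one, matching the description in the text.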
Recall from Example \[example:sir\] that in order to apply our method, we need the estimates $\hat{\Ab}= \hat{\bSigma}_{E(\bX\mid Y)}$ and $\hat{\Bb} = \hat{\bSigma}_{x}$. The quantity $\hat{\bSigma}_x$ is simply the sample covariance matrix of $\bX$. Let $n_1$ and $n_2$ be the number of samples of the two classes in the data set. Let $\hat{\bSigma}_{x,1}$ and $\hat{\bSigma}_{x,2}$ be the sample covariance matrices calculated using only data from class one and class two, respectively. Then, the covariance matrix of the conditional expectation can be estimated by $$\hat{\bSigma}_{E[\bX\mid Y]} = \hat{\bSigma}_{x} - \frac{1}{n}\sum_{k=1}^2 n_k \hat{\bSigma}_{x,k},$$ where $n=n_1+n_2$ [@li1991sliced; @li2006sparse; @zhu2006sliced; @li2008sliced; @chen2010coordinate; @yin2015sequential]. As mentioned in Section \[section:simulation\], we run Algorithm \[alg:tgd\] with a large value of $k$, and use its solution as the initial value for Algorithm \[alg:tgd\] with a smaller value of $k$. Let $\hat{\vb}_t$ be the output of Algorithm \[alg:tgd\]. Similar to @yin2015sequential, we plot box-plots of the sufficient predictor, $\Xb \hat{\vb}_t$, for the two classes in each data set. The results with $k=25$ for the leukemia and lung cancer data sets are shown in Figures \[Fig:realdata\](a)-(b), respectively.
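A minimal Python sketch of these plug-in estimates for a binary response (the function name and the synthetic data are ours); with two classes, $\hat{\bSigma}_{E[\bX\mid Y]}$ reduces exactly to the rank-one between-class scatter matrix:

```python
import numpy as np

def sir_matrices(X, y):
    """Plug-in estimates for sparse SIR with a binary response:
    A = Cov(E[X|Y]) = Sigma_x - (1/n) sum_k n_k Sigma_{x,k}, B = Sigma_x."""
    n = X.shape[0]
    Sigma_x = np.cov(X, rowvar=False, bias=True)          # sample covariance of X
    within = np.zeros_like(Sigma_x)
    for k in np.unique(y):
        Xk = X[y == k]
        within += Xk.shape[0] * np.cov(Xk, rowvar=False, bias=True)
    return Sigma_x - within / n, Sigma_x

# synthetic two-class data: shift of 1 in every coordinate
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(40, 5)),
               rng.normal(1.0, 1.0, size=(40, 5))])
y = np.repeat([0, 1], 40)
A_hat, B_hat = sir_matrices(X, y)
```

By the law of total covariance, `A_hat` equals $\frac{1}{n}\sum_k n_k (\hat{\boldsymbol{\mu}}_k - \bar{\xb})(\hat{\boldsymbol{\mu}}_k - \bar{\xb})^T$, which has rank one when there are two slices.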
From Figure \[Fig:realdata\](a), we see that for the leukemia data set, the sufficient predictors for the two groups are much better separated than in @yin2015sequential. Moreover, our proposal is a one-step procedure with theoretical guarantees, whereas their proposal is sequential and comes without theoretical guarantees. For the lung cancer data set, there is some overlap between the sufficient predictors for subjects with and without lung cancer. These results are consistent with the literature, where the lung cancer data set is known to pose a much harder classification problem than the leukemia data set [@fan2008high; @petersen2015fused].
Discussion
==========
We propose the truncated Rayleigh flow for solving the sparse generalized eigenvalue problem. The proposed method successfully handles ill-conditioned normalization matrices that arise in the high-dimensional setting due to finite-sample estimation, and enjoys geometric convergence to a solution with the optimal statistical rate of convergence. The proposed method and theory apply to a broad family of important statistical problems, including sparse FDA, sparse CCA, and sparse SDR. In contrast to existing theory, ours requires neither a structural assumption on $(\Ab,\Bb)$ nor a normality assumption on the data.
Acknowledgement {#acknowledgement .unnumbered}
===============
We thank Ashley Petersen for providing us the lung cancer data set. Han Liu was supported by NSF CAREER Award DMS1454377, NSF IIS1408910, NSF IIS1332109, NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841. Tong Zhang was supported by NSF IIS-1250985, NSF IIS-1407939, and NIH R01AI116744. Kean Ming Tan was supported by NSF IIS-1250985 and NSF IIS-1407939. Zhaoran Wang was supported by Microsoft Research PhD fellowship.
Proof of Theorem \[theorem:main\] {#proof:theorem:main}
=================================
We first state a series of lemmas that will be used in proving Theorem \[theorem:main\]. The proofs of the technical lemmas are deferred to Appendix \[appendixB\]. We start with some results from perturbation theory for eigenvalue and generalized eigenvalue problems [@golub2012matrix].
\[lemma:eigenvalue\] Let $\Jb$ and $\Jb+\Eb_{\Jb}$ be $d\times d$ symmetric matrices. Then, for all $k\in \{1,\ldots,d\}$, $$\lambda_k(\Jb)+\lambda_{\min}(\Eb_{\Jb}) \le \lambda_k (\Jb+\Eb_{\Jb}) \le \lambda_k(\Jb)+\lambda_{\max}(\Eb_{\Jb}).$$
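Lemma \[lemma:eigenvalue\] is Weyl's inequality for symmetric perturbations; the following quick numerical check on hypothetical random symmetric matrices illustrates it (Python; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
J = rng.normal(size=(d, d)); J = (J + J.T) / 2    # symmetric matrix
E = rng.normal(size=(d, d)); E = (E + E.T) / 2    # symmetric perturbation
lam_J = np.linalg.eigvalsh(J)                     # eigenvalues in ascending order
lam_JE = np.linalg.eigvalsh(J + E)
lam_E = np.linalg.eigvalsh(E)
# Weyl: each eigenvalue moves by at most [lambda_min(E), lambda_max(E)]
assert np.all(lam_J + lam_E[0] <= lam_JE + 1e-10)
assert np.all(lam_JE <= lam_J + lam_E[-1] + 1e-10)
```

In words, a symmetric perturbation can shift each eigenvalue by no more than the extreme eigenvalues of the perturbation itself.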
In the sequel, we state a result on the perturbed generalized eigenvalues for a symmetric-definite matrix pair $(\Jb,\Kb)$ in the following lemma, which follows directly from Theorem 3.2 in @stewart1979pertubation and Theorem 8.7.3 in @golub2012matrix.
\[lemma:perturbed pair\] Let $(\Jb,\Kb)$ be a symmetric-definite matrix pair and let ($ \Jb+\Eb_{\Jb}$,$ \Kb+\Eb_{\Kb}$) be the perturbed matrix pair. Assume that $\Eb_{\Jb}$ and $\Eb_{\Kb}$ satisfy $$\epsilon =\sqrt{ \|\Eb_{\Jb}\|_{2}^2 + \|\Eb_{\Kb}\|_{2}^2} < \mathrm{cr}(\Jb,\Kb),$$ where $\mathrm{cr}(\Jb,\Kb)$ is as defined in (\[Eq:crawford\]). Then, $( \Jb+\Eb_{\Jb}$, $ \Kb+\Eb_{\Kb})$ is a symmetric-definite matrix pair. Let $ \lambda_k ( \Jb+\Eb_{\Jb}$, $ \Kb+\Eb_{\Kb})$ be the $k$th generalized eigenvalue of the perturbed matrix pair. Then, $$\frac{ \lambda_k( \Jb, \Kb) \cdot \mathrm{cr}(\Jb,\Kb)-\epsilon}{\mathrm{cr}(\Jb,\Kb) + \epsilon\cdot \lambda_k( \Jb, \Kb) } \le {\lambda}_k ( \Jb+\Eb_{\Jb}, \Kb+\Eb_{\Kb}) \le \frac{\lambda_k( \Jb, \Kb) \cdot \mathrm{cr}(\Jb,\Kb)+\epsilon}{\mathrm{cr}(\Jb,\Kb) - \epsilon\cdot \lambda_k( \Jb, \Kb) }.$$
Recall from Section \[section:theory\] that $\vb^*$ is the first generalized eigenvector of $(\Ab,\Bb)$ with generalized eigenvalue $\lambda_1$, and that $V=\mathrm{supp}(\vb^*)$. For any given set $F$ such that $V\subset F$, let ${\lambda}_k(F)$ and $\hat{\lambda}_k(F)$ be the $k$th generalized eigenvalues of $(\Ab_F,\Bb_F)$ and $(\hat{\Ab}_F,\hat{\Bb}_F)$, respectively. Under Assumption \[ass:large n\], an application of Lemma \[lemma:perturbed pair\] yields $$\frac{\hat{\lambda}_2(F)}{\hat{\lambda}_1(F)} \le \gamma,$$ where $\gamma = (1+a)\lambda_2 /[(1-a)\lambda_1]$ is as defined in (\[eq:w8\]). Let $\yb(F) = \vb(F)/\|\vb(F)\|_2$ and $\yb^* = \vb^*/\|\vb^*\|_2$, so that $\|\yb(F)\|_2 = \|\yb^*\|_2=1$. Next, we prove that $\yb(F)$ is close to $\yb^*$ whenever the set $F$ contains the support of $\yb^*$. The result follows from Theorem 4.3 in @stewart1979pertubation.
\[lemma:perturbation vec\] Let $F$ be a set such that $ V\subset F$ with $|F|=k'>s$ and let $$\delta (F) = \sqrt{\|\Eb_{\Ab,F}\|_2^2+\|\Eb_{\Bb,F}\|_2^2 }.$$ Let $$\chi(\lambda_1(F),\hat{\lambda}_k(F)) = \frac{|\lambda_1(F)-\hat{\lambda}_k(F)|}{\sqrt{1+\lambda_1(F)^2}\cdot \sqrt{1+\hat{\lambda}_k(F)^2}}; \qquad \Delta \hat{\lambda}(F) = \underset{k> 1}{\min} \;\chi (\lambda_1(F),\hat{\lambda}_k(F)) >0.$$ If $\delta(F)/\Delta \hat{\lambda}(F) < \mathrm{cr}(\hat{\Ab}_F,\hat{\Bb}_F)$, then $$\frac{\|\vb(F) -\vb^*\|_2}{\|\vb^*\|_2}\le \frac{\delta(F)}{ \Delta \hat{\lambda} (F)\cdot \mathrm{cr}(\hat{\Ab}_F,\hat{\Bb}_F) }.$$ This implies that $$\|\yb(F) -\yb^*\|_2\le \frac{2 }{ \Delta \lambda \cdot \mathrm{cr}(k')}\cdot \epsilon(k'),$$ where $\Delta \lambda$, $\mathrm{cr}(k')$, and $\epsilon(k')$ are as defined in (\[Eq:eigengap\]) and (\[Eq:inf crawford\]).
We now present a key lemma that measures the progress of the gradient descent step. It requires an initial solution that is sufficiently close to the optimizer of (\[Eq:vbF\]). With some abuse of notation, we treat $\yb(F)$ as a $k'$-dimensional vector restricted to the set $F \subset \{1,\ldots,d\}$ with $|F| = k'$. Recall that $c>0$ is the arbitrary small constant stated in Assumption \[ass:large n\] and that $c_{\mathrm{upper}}$ is defined as $(1+c)/(1-c)$.
\[lemma:key\] Let $F\subset \{1,\ldots,d\}$ be some set with $|F| = k' $. Given any $\tilde{\vb}$ such that $\|\tilde{\vb}\|_2 =1$ and $\tilde{\vb}^T \yb(F) >0$, let $\rho = \tilde{\vb}^T \hat{\Ab}_F\tilde{\vb}/ \tilde{\vb}^T{\hat{\Bb}_F} \tilde{\vb}$, and let $\vb' = \Cb_F\tilde{\vb} /\|\Cb_F \tilde{\vb}\|_2$, where $$\Cb_F = \Ib + (\eta/\rho) \cdot (\hat{\Ab}_F-\rho \hat{\Bb}_F)$$ and $\eta>0$ is some positive constant. Let $\delta =1-\yb(F)^T \tilde{\vb}$. If $\eta$ is sufficiently small such that $$\eta \cdot \lambda_{\max} (\Bb) < 1/(1+c),$$ and $\delta$ is sufficiently small such that $$\delta \le \min \left( \frac{1}{8 c_{\mathrm{upper}}\kappa(\Bb)}, \frac{1/\gamma-1}{3c_{\mathrm{upper}}\kappa(\Bb)} ,
\frac{1-\gamma}{30\cdot (1+c) \cdot c_{\mathrm{upper}}^2\cdot \eta \cdot \lambda_{\max} (\Bb) \cdot \kappa^2(\Bb) \cdot [c_{\mathrm{upper}}\kappa(\Bb)+\gamma]}
\right),$$ then under Assumption \[ass:large n\], we have $$\yb(F)^T \vb' \ge\yb(F)^T \tilde{\vb} +\frac{1+c}{8} \cdot \eta\cdot \lambda_{\min} (\Bb)\cdot [1-\yb(F)^T \tilde{\vb}]\cdot \left[ \frac{1-\gamma}{ c_{\mathrm{upper}} \kappa ({\Bb})+\gamma}\right].$$
The following lemma characterizes the error introduced by the truncation step. It follows directly from Lemma 12 in @yuan2013truncated.
\[lemma:truncation\] Consider $\yb'$ with $F'=\mathrm{supp}(\yb')$ and $ |F'| = \overbar{k}$. Let $F$ be the indices of ${\yb}$ with the largest $k$ absolute values, with $|F|=k$. If $\|\yb'\|_2 = \|{\yb}\|_2 = 1$, then $$\begin{split}
&|\mathrm{truncate}(\yb,F)^T \yb'|\\
&\quad\ge |\yb^T \yb'| - (\overbar{k}/k)^{1/2} \cdot \min \left( \|\yb - \yb'\|_2, \; [1+(\overbar{k}/k)^{1/2}]\cdot \|\yb - \yb'\|_2^2 \right).
\end{split}$$
The following lemma quantifies the progress of each iteration of Algorithm \[alg:tgd\].
\[lemma:combine\] Assume that $k> s$, where $s$ is the cardinality of the support of $\yb^*=\vb^*/\|\vb^*\|_2$, and $k$ is the truncation parameter in Algorithm \[alg:tgd\]. Let $k' = 2k + s$ and let $$\nu = \sqrt{1+2 [(s/k)^{1/2} + s/k] }\cdot \sqrt{
1-\frac{1+c}{8} \cdot\eta \cdot \lambda_{\min} ({\Bb})\cdot
\left[ \frac{1-\gamma}{c_{\mathrm{upper}}\kappa ({\Bb})+\gamma}\right]}.$$ Under the same conditions as in Lemma \[lemma:key\], we have $$\sqrt{1- |(\yb^*)^T \hat{\vb}_t| } \le \nu \sqrt{1- |(\yb^*)^T \vb_{t-1}|} + \sqrt{10} \cdot \omega (k').$$
In the following, we proceed with the proof of Theorem \[theorem:main\]. Recall from Algorithm \[alg:tgd\] that we define $\vb_t = \hat{\vb}_t /\|\hat{\vb}_t\|_2$. Since $\|\vb_t'\|_2=1$, and $\hat{\vb}_t$ is the truncated version of $\vb_t'$, we have that $\|\hat{\vb}_t\|_2\leq 1$. This implies that $|(\yb^*)^T \vb_t| \ge |(\yb^*)^T \hat{\vb}_t|$. By Lemma \[lemma:combine\], we have $$\label{Eq:proof theorem main1}
\begin{split}
\sqrt{1- |(\yb^*)^T {\vb}_t| }&\le \sqrt{1- |(\yb^*)^T \hat{\vb}_t| }\\
&\le \nu \sqrt{1- |(\yb^*)^T \vb_{t-1}|} + \sqrt{10} \cdot \omega (k').
\end{split}$$ By recursively applying Lemma \[lemma:combine\], we have for all $t\ge 0$, $$\sqrt{1- |(\yb^*)^T {\vb}_t|} \le \nu^t \sqrt{1- |(\yb^*)^T \vb_{0}|} + \sqrt{10} \cdot \omega (k')/(1-\nu),$$ as desired.
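To spell out the final recursive step: writing $a_t = \sqrt{1- |(\yb^*)^T \vb_t|}$ and $b = \sqrt{10}\cdot \omega(k')$, the recursion unrolls as $$a_t \le \nu a_{t-1} + b \le \nu (\nu a_{t-2} + b) + b \le \cdots \le \nu^t a_0 + b \sum_{j=0}^{t-1} \nu^j \le \nu^t a_0 + \frac{b}{1-\nu},$$ where the final inequality uses the geometric series bound $\sum_{j=0}^{t-1} \nu^j \le 1/(1-\nu)$, which is valid whenever $0 \le \nu < 1$.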
Proof of Technical Lemmas {#appendixB}
=========================
Proof of Lemma \[lemma:perturbation vec\]
-----------------------------------------
The first part of the lemma, namely the inequality $$\frac{\|\vb(F) -\vb^*\|_2}{\|\vb^*\|_2}\le \frac{\delta(F)}{ \Delta \hat{\lambda} \cdot \mathrm{cr}(\hat{\Ab}_F,\hat{\Bb}_F) },$$ follows directly from Theorem 4.3 in @stewart1979pertubation. We now prove the second part of the lemma. Setting $\yb(F) = \vb(F)/\|\vb(F)\|_2$ and $\yb^* = {\vb^*}/\|{\vb^*}\|_2$ so that $\|\yb(F)\|_2= 1 $ and $\|{\yb^*}\|_2=1$, we have $$\begin{split}
\|\yb(F) - {\yb}^* \|_2&\le \left\| \frac{\vb(F)}{\|\vb (F)\|_2} - \frac{{\vb^*}}{\|{\vb^*}\|_2} \right\|_2 \\
&\le \frac{1}{\|\vb(F)\|_2 \cdot\|{\vb^*}\|_2} \cdot \| \vb(F)\cdot \|{\vb^*}\|_2 - {\vb^*} \cdot \|\vb(F)\|_2 \|_2\\
&\le \frac{2}{\|\vb^*\|_2} \cdot \| \vb(F) - {\vb^*} \|_2 \\
&\le 2 \frac{\delta(F)}{ \Delta \hat{\lambda} \cdot \mathrm{cr}(\hat{\Ab}_F,\hat{\Bb}_F) }
\end{split}$$ where the third inequality holds by adding and subtracting $\vb(F) \cdot \|\vb (F)\|_2$. By definition, $\delta(F) \le \epsilon(k')$, $\Delta \hat{\lambda} \ge \Delta \lambda$, and $ \mathrm{cr}(\hat{\Ab}_F,\hat{\Bb}_F)\ge \mathrm{cr}(k')$. Thus, we obtain $$\|\yb(F)-\yb^*\|_2 \le \frac{2}{\Delta \lambda \cdot \mathrm{cr}(k')} \cdot \epsilon(k').$$
Proof of Lemma \[lemma:key\]
----------------------------
Recall that $F\subset \{1,\ldots,d\}$ is some set with cardinality $|F| = k'$. Also, recall that $\yb(F)$ is proportional to the largest generalized eigenvector of $(\hat{\Ab}_F,\hat{\Bb}_F)$. Throughout the proof, we write $\hat{\kappa}$ to denote $\kappa (\hat{\Bb}_F)$ for notational convenience. In addition, we use the notation $\|\vb\|_{\hat{\Bb}_F}^2$ to indicate $\vb^T \hat{\Bb}_F \vb$.
Let $\bxi_j$ be the $j$th generalized eigenvector of $(\hat{\Ab}_F,\hat{\Bb}_F)$ corresponding to $\hat{\lambda}_j(F)$ such that $$\bxi_j^T \hat{\Bb}_F\bxi_k = \begin{cases}
1 & \mathrm{if\; } j=k,\\
0 & \mathrm{if\; } j\ne k.
\end{cases}$$ Write $\tilde{\vb} =\sum_{j=1}^{k'} \alpha_j \bxi_j$; by definition, $\yb(F) = \bxi_1 / \|\bxi_1\|_2$. By assumption, $\yb(F)^T\tilde{\vb} = 1-\delta$, which implies that $\|\yb(F)-\tilde{\vb}\|_2^2 = 2\delta$. Also, note that $$\begin{split}
\|\tilde{\vb} -\yb(F)\|_{\hat{\Bb}_F}^2 &=\|\tilde{\vb} -\alpha_1 \bxi_1- (\yb(F)- \alpha_1 \bxi_1)\|_{\hat{\Bb}_F}^2\\
&= \|\tilde{\vb} - \alpha_1 \bxi_1\|_{\hat{\Bb}_F}^2+\|\yb(F) - \alpha_1 \bxi_1\|_{\hat{\Bb}_F}^2- 2 [\yb(F)-\alpha_1\bxi_1]^T \hat{\Bb}_F(\tilde{\vb}-\alpha_1\bxi_1)
\end{split}$$ Since $\yb(F)-\alpha_1 \bxi_1$ is orthogonal to $\tilde{\vb} - \alpha_1 \bxi_1$ under the normalization of $\hat{\Bb}_F$, we have $$\label{Eq:lemma:keyproof1}
\sum_{j=2}^{k'} \alpha_j^2 = \|\tilde{\vb} -\alpha_1 \bxi_1\|_{\hat{\Bb}_F}^2 \le \|\tilde{\vb} -\yb(F)\|_{\hat{\Bb}_F}^2 \le 2 \lambda_{\max} (\hat{\Bb}_F) \delta,$$ in which the last inequality holds by an application of Hölder’s inequality and the fact that $\|\yb(F)-\tilde{\vb}\|_2^2 = 2\delta$. Moreover, we have $$\label{Eq:lemma:keyproof2}
\sum_{j=1}^{k'} \alpha_j^2 = \|\tilde{\vb}\|_{\hat{\Bb}_F}^2 \ge \lambda_{\max} (\hat{\Bb}_F) / \hat{\kappa} \qquad \mathrm{and} \qquad \alpha_1^2 \ge \lambda_{\max} (\hat{\Bb}_F)/\hat{\kappa}- \sum_{j=2}^{k'} \alpha_j^2 \ge \frac{2\lambda_{\max} (\hat{\Bb}_F)}{3\hat{\kappa} } ,$$ where the last inequality is obtained by (\[Eq:lemma:keyproof1\]) and the assumption that $\delta \le 1/[8c_{\mathrm{upper}}\kappa(\Bb)]$.
We also need a lower bound on $\|\yb(F)\|_{\hat{\Bb}_F}$. By the triangle inequality, we have $$\begin{split}
\label{Eq:lemma:keyproof3}
\|\yb(F)\|_{\hat{\Bb}_F} &\ge \|\tilde{\vb} \|_{\hat{\Bb}_F}- \| \tilde{\vb} -\yb(F)\|_{\hat{\Bb}_F}\ge \sqrt{\sum_{j=1}^{k'} \alpha_j^2} - \sqrt{\lambda_{\max} (\hat{\Bb}_F)} \cdot \|\tilde{\vb}-\yb(F)\|_2
\\
&\ge \frac{1}{2}\sqrt{\sum_{j=1}^{k'} \alpha_j^2} + \frac{1}{2} \sqrt{\frac{\lambda_{\max} (\hat{\Bb}_F)}{\hat{\kappa}}} - \sqrt{2\lambda_{\max}(\hat{\Bb}_F) \delta}
\ge \frac{1}{2} \alpha_1,
\end{split}$$ where the second inequality holds by the definition of $\|\tilde{\vb}\|_{\hat{\Bb}_F}$ and an application of Hölder’s inequality, the third inequality follows from (\[Eq:lemma:keyproof2\]), and the last inequality follows from the fact that $1/2 \cdot \sqrt{\lambda_{\max}(\hat{\Bb}_F) / \hat{\kappa}} \ge \sqrt{2\lambda_{\max}(\hat{\Bb}_F) \delta}$ under the assumption that $\delta \le 1/[8c_{\mathrm{upper}}\kappa(\Bb)]$.\
**Lower and upper bounds for $[\hat{\lambda}_1(F)-\rho]/\rho$:** To obtain a lower bound for the quantity $\yb(F)^T\vb'$, we need both lower bound and upper bound for the quantity $[\hat{\lambda}_1(F)-\rho]/\rho$. Recall that $\rho = \tilde{\vb}^T \hat{\Ab}_F \tilde{\vb}/ \tilde{\vb}^T \hat{\Bb}_F \tilde{\vb}$. Using the fact that $ \tilde{\vb}^T \hat{\Ab}_F \tilde{\vb} = \sum_{j=1}^{k'} \alpha_j^2 \hat{\lambda}_j (F)$, we obtain $$\label{Eq:lemma:keyproof4}
\frac{\hat{\lambda}_1 (F) - \rho}{\rho} = \frac{\sum_{j=1}^{k'} [\hat{\lambda}_1(F) - \hat{\lambda}_j (F) ]\alpha_j^2 }{\sum_{j=1}^{k'} \hat{\lambda}_j (F) \alpha_j^2} \le \frac{\hat{\lambda}_1 (F) \sum_{j=2}^{k'} \alpha_j^2}{\hat{\lambda}_1 (F) \alpha_1^2 } \le \frac{2 \lambda_{\max} (\hat{\Bb}_F) \delta }{\alpha_1^2} \le 3\delta \hat{\kappa},$$ where the second to the last inequality holds by (\[Eq:lemma:keyproof1\]) and the last inequality holds by (\[Eq:lemma:keyproof2\]). We now establish a lower bound for $[\hat{\lambda}_1(F)-\rho]/\rho$. First, we observe that $$\label{Eq:lemma:keyproof5}
\delta \le 2\delta - \delta^2 = (1-\delta)^2 + 1 - 2(1-\delta)\yb(F)^T \tilde{\vb} = \|\tilde{\vb} -(1-\delta) \yb(F)\|_2^2 \le \|\tilde{\vb} - \alpha_1 \bxi_1\|_2^2,$$ where the first equality follows from the fact that $\yb(F)^T \tilde{\vb} = 1-\delta$, and the second inequality holds by the fact that $(1-\delta) \yb(F)$ is the scalar projection of $\yb(F)$ onto the vector $\bxi_1$. Thus, we have $$\label{Eq:lemma:keyproof6}
\begin{split}
&\frac{\hat{\lambda}_1 (F) - \rho}{\rho} =\frac{\sum_{j=1}^{k'} [\hat{\lambda}_1(F) - \hat{\lambda}_j (F) ]\alpha_j^2 }{\sum_{j=1}^{k'} \hat{\lambda}_j (F) \alpha_j^2} \ge \frac{[\hat{\lambda}_1(F) - \hat{\lambda}_2 (F)]\sum_{j=2}^{k'} \alpha_j^2}{\hat{\lambda}_1 (F) \alpha_1^2 + \hat{\lambda}_2 (F) \sum_{j=2}^{k'} \alpha_j^2 }\\
&\quad= \frac{[\hat{\lambda}_1(F) - \hat{\lambda}_2 (F)]\cdot \|\tilde{\vb} - \alpha_1 \bxi_1\|_{\hat{\Bb}_F}^2}{\hat{\lambda}_1 (F) \alpha_1^2 + \hat{\lambda}_2 (F) \cdot \|\tilde{\vb} - \alpha_1 \bxi_1\|_{\hat{\Bb}_F}^2 }\ge \frac{(1-\gamma) \cdot [\lambda_{\max}(\hat{\Bb}_F)/\hat{\kappa}] \cdot \|\tilde{\vb} - \alpha_1 \bxi_1\|_2^2}{\alpha_1^2 + \gamma \cdot [\lambda_{\max}(\hat{\Bb}_F)/\hat{\kappa}]\cdot \|\tilde{\vb} - \alpha_1 \bxi_1\|_{2}^2}\\
&\quad \ge \frac{(1-\gamma) \cdot \lambda_{\max}(\hat{\Bb}_F) \cdot \delta}{\alpha_1^2\cdot \hat{\kappa} + \gamma \cdot \lambda_{\max}(\hat{\Bb}_F)\cdot \delta},
\end{split}$$ where the second to the last inequality holds by dividing the numerator and denominator by $\hat{\lambda}_1(F)$ and using the upper bound $\hat{\lambda}_2(F)/\hat{\lambda}_1(F) \le \gamma$, and the last inequality holds by (\[Eq:lemma:keyproof5\]).\
**Lower bound for $\|\Cb_F \tilde{\vb}\|_2^{-1}$:** We first establish an upper bound for $\|\Cb_F \tilde{\vb}\|_2^2$. By the definition of $\rho = \tilde{\vb}^T \hat{\Ab}_F \tilde{\vb} / \tilde{\vb}^T \hat{\Bb}_F \tilde{\vb}$, we have $$\tilde{\vb}^T \hat{\Ab}_F \tilde{\vb} - \rho\, \tilde{\vb}^T \hat{\Bb}_F \tilde{\vb} = 0.$$ Moreover, by the definition of $\tilde{\vb} = \sum_{j=1}^{k'} \alpha_j \bxi_j$ and the fact that $\hat{\Ab}_F \bxi_j = \hat{\lambda}_j(F) \hat{\Bb}_F \bxi_j$, we have $$\label{Eq:lemma:keyproof7}
\begin{split}
\|(\hat{\Ab}_F-\rho \hat{\Bb}_F) \tilde{\vb}\|_2^2 = \left\|\sum_{j=1}^{k'} \alpha_j \hat{\Ab}_F \bxi_j - \rho \sum_{j=1}^{k'} \alpha_j \hat{\Bb}_F \bxi_j \right\|_2^2 = \left\|\sum_{j=1}^{k'} \alpha_j [\hat{\lambda}_j (F) - \rho] \hat{\Bb}_F \bxi_j \right\|_2^2.
\end{split}$$ Thus, by (\[Eq:lemma:keyproof7\]) and the fact that $\tilde{\vb}^T \hat{\Ab}_F \tilde{\vb} - \rho \tilde{\vb}^T \hat{\Bb}_F \tilde{\vb} = 0$, we obtain $$\label{Eq:lemma:keyproof8}
\|\Cb_F \tilde{\vb}\|_2^2 = \left\| \left[\Ib + \frac{\eta}{\rho} (\hat{\Ab}_F-\rho \hat{\Bb}_F) \right]\tilde{\vb} \right\|_2^2 =1 +\left\|\sum_{j=1}^{k'} \alpha_j \cdot \left( \frac{\eta}{\rho} \right)\cdot [\hat{\lambda}_j (F) - \rho] \cdot \hat{\Bb}_F \bxi_j \right\|_2^2.$$ It remains to establish an upper bound for the second term in the above equation. Note that by the assumption that $ \delta \le 1/(3\cdot c_{\mathrm{upper}} \kappa)\cdot(1/\gamma-1)$ and (\[Eq:lemma:keyproof4\]), we have $$\hat{\lambda}_2 (F) \le \rho \le \hat{\lambda}_1 (F).$$ Moreover, since $\|\tilde{\vb}\|_2^2=1$, we have $\alpha_1^2 \le \lambda_{\max} (\hat{\Bb}_F)$. Thus, $$\label{Eq:lemma:keyproof9}
\begin{split}
&\left\|\sum_{j=1}^{k'} \alpha_j \cdot \left( \frac{\eta}{\rho} \right)\cdot [\hat{\lambda}_j (F) - \rho] \cdot \hat{\Bb}_F \bxi_j \right\|_2^2\\
&\quad \le \alpha_1^2 (\hat{\lambda}_1(F) -\rho)^2 \lambda_{\max}(\hat{\Bb}_F) \cdot (\eta / \rho)^2 + \lambda_{\max} (\hat{\Bb}_F) \sum_{j=2}^{k'} \alpha_j^2 \cdot (\eta/\rho)^2 [\hat{\lambda}_{j}(F)-\rho]^2\\
&\quad \le \lambda_{\max}^2 (\hat{\Bb}_F) \cdot \eta^2 \cdot (3\delta \hat{\kappa})^2 + \lambda_{\max} (\hat{\Bb}_F) \cdot \eta^2 \cdot [{\hat{\lambda}_1 (F)/\rho-1}]^2\cdot \sum_{j=2}^{k'} \alpha_j^2 \\
&\quad \le \lambda_{\max}^2 (\hat{\Bb}_F) \cdot \eta^2 \cdot (3\delta \hat{\kappa})^2 + 2 \lambda_{\max}^2 (\hat{\Bb}_F) \cdot \eta^2 \cdot \delta \cdot(3\delta \hat{\kappa})^2\\
&\quad = 9 \cdot \lambda_{\max}^2 (\hat{\Bb}_F) \cdot \eta^2 \cdot \delta^2 \cdot \hat{\kappa}^2+ 18 \cdot \lambda_{\max}^2 (\hat{\Bb}_F) \cdot \eta^2 \cdot \delta^3 \cdot \hat{\kappa}^2,
\end{split}$$ where the second inequality is from (\[Eq:lemma:keyproof4\]) and the third inequality follows from (\[Eq:lemma:keyproof1\]). Substituting (\[Eq:lemma:keyproof9\]) into (\[Eq:lemma:keyproof8\]), we have $$\label{Eq:lemma:keyproof10}
\begin{split}
\|\Cb_F \tilde{\vb}\|_2^2 &\le 1 + 9 \cdot \lambda_{\max}^2 (\hat{\Bb}_F) \cdot \eta^2 \cdot \delta^2 \cdot \hat{\kappa}^2+ 18 \cdot \lambda_{\max}^2 (\hat{\Bb}_F) \cdot \eta^2 \cdot \delta^3 \cdot \hat{\kappa}^2\\
&\le 1 + 12 \cdot \lambda_{\max}^2 (\hat{\Bb}_F) \cdot \eta^2 \cdot \delta^2 \cdot \hat{\kappa}^2,
\end{split}$$ where the last inequality follows from the fact that $2\delta \le 1/4$, which holds by the assumption that $\delta \le 1/(8c_{\mathrm{upper}} \kappa)$. Meanwhile, note that the second term in the upper bound is less than one by the assumption $\delta \le 1/(8c_{\mathrm{upper}} \kappa)$ and $\eta c_{\mathrm{upper}} \lambda_{\max} (\Bb)<1$. Hence, by invoking (\[Eq:lemma:keyproof10\]) nd the fact that $1/\sqrt{1+y} \ge 1-y/2$ for $|y| <1$, we have $$\label{Eq:lemma:keyproof11}
\|\Cb_F \tilde{\vb}\|_2^{-1}\ge 1 - 6 \cdot \lambda_{\max}^2 (\hat{\Bb}_F) \cdot \eta^2 \cdot \delta^2 \cdot \hat{\kappa}^2.$$\
**Lower bound for $\yb(F)^T \Cb_F \tilde{\vb}$:** We have $$\label{Eq:lemma:keyproof12}
\begin{split}
\yb(F)^T \Cb_F \tilde{\vb} &= \yb(F)^T \tilde{\vb} + \frac{\eta}{\rho} \cdot \yb(F)^T (\hat{\Ab}_F-\rho \hat{\Bb}_F) \tilde{\vb}\\
&= 1-\delta + \frac{\eta}{\rho }\cdot [\hat{\lambda}_1 (F) - \rho ]\cdot \yb(F)^T \hat{\Bb}_F \tilde{\vb}\\
&=1-\delta + \frac{\eta}{\rho }\cdot [\hat{\lambda}_1 (F) - \rho ]\cdot \biggl( \alpha_1 \cdot \frac{ \bxi_1^T \hat{\Bb}_F \bxi_1}{\|\bxi_1\|_2} \biggr)\\
&=1-\delta + \eta \cdot \alpha_1 \cdot \biggl[\frac{\hat{\lambda}_1 (F) - \rho}{\rho} \biggr]\cdot \|\yb(F)\|_{\hat{\Bb}_F}\\
&\ge 1-\delta + \frac{1}{2}\cdot \eta \cdot \alpha_1^2 \cdot \biggl[ \frac{(1-\gamma) \cdot \lambda_{\max}(\hat{\Bb}_F) \cdot \delta}{\alpha_1^2\cdot \hat{\kappa} + \gamma \cdot \lambda_{\max}(\hat{\Bb}_F)\cdot \delta}
\biggr] \\
&\ge 1-\delta + \frac{1}{2}\cdot \eta \cdot \frac{\alpha_1^2 \cdot (1-\gamma) \cdot \delta}{\hat{\kappa} + \gamma}\\
&\ge 1-\delta + \frac{1}{3}\cdot \eta \cdot \lambda_{\min}(\hat{\Bb}_F)\cdot \frac{ (1-\gamma) \cdot \delta}{ (\hat{\kappa} + \gamma)},
\end{split}$$ where the first inequality follows from (\[Eq:lemma:keyproof3\]) and (\[Eq:lemma:keyproof6\]), the second inequality uses the fact that $\alpha_1^2 \le \lambda_{\max} (\hat{\Bb}_F)$, and the last inequality follows from (\[Eq:lemma:keyproof2\]).\
**Combining the results:** We now establish a lower bound on $\yb(F)^T \vb'$. From (\[Eq:lemma:keyproof11\]) and (\[Eq:lemma:keyproof12\]), we have $$\label{Eq:lemma:keyproof13}
\begin{split}
\yb(F)^T \vb' &= \yb(F)^T \Cb_F \tilde{\vb} \cdot \|\Cb_F \tilde{\vb}\|^{-1}_2\\
&\ge \left(1-\delta + \frac{1}{3} \cdot \eta \cdot \lambda_{\min}(\hat{\Bb}_F) \cdot \left[
\frac{(1-\gamma)\cdot \delta}{ (\hat{\kappa}+ \gamma) }
\right] \right) \cdot \left( 1 - 6 \cdot \lambda_{\max}^2 (\hat{\Bb}_F) \cdot \eta^2\cdot \delta^2 \cdot \hat{\kappa}^2 \right)\\
&\ge 1-\delta + \frac{1}{3} \cdot \eta\cdot \lambda_{\min} (\hat{\Bb}_F)\cdot \left[ \frac{(1-\gamma)\cdot \delta}{ (\hat{\kappa}+\gamma)}\right] - 6 \cdot \lambda_{\max}^2 (\hat{\Bb}_F)\cdot \eta^2\cdot \delta^2\cdot \hat{\kappa}^2\\
&\quad - 2 \cdot \hat{\kappa}^2\cdot \eta^3 \cdot \lambda_{\max}^3 (\hat{\Bb}_F) \cdot \delta^2\cdot \left[ \frac{(1-\gamma)\cdot \delta}{ (\hat{\kappa}+\gamma) }\right]\\
&\ge 1-\delta + \frac{1}{3} \cdot \eta\cdot \lambda_{\min} (\hat{\Bb}_F)\cdot \left[ \frac{(1-\gamma)\cdot \delta}{ (\hat{\kappa}+\gamma)}\right] - 6.25 \cdot \lambda_{\max}^2 (\hat{\Bb}_F)\cdot \eta^2\cdot \delta^2\cdot \hat{\kappa}^2\\
&\ge 1-\delta + \frac{1}{8}\cdot \eta\cdot \lambda_{\min} (\hat{\Bb}_F) \cdot \left[ \frac{(1-\gamma) \delta
}{(\hat{\kappa} + \gamma)}\right],
\end{split}$$ in which the third inequality holds by the assumption that the step size $\eta$ is sufficiently small such that $\eta \lambda_{\max} (\hat{\Bb}_F)<1$, and the last inequality holds under the condition that $$\frac{1-\gamma}{ (\hat{\kappa}+\gamma)} \ge 30 \eta \lambda_{\max} (\hat{\Bb}_F) \delta \hat{\kappa}^2,$$ which is implied, under Assumption \[ass:large n\], by the inequality $$\delta \le
\frac{1-\gamma}{30\cdot (1+c) \cdot c_{\mathrm{upper}}^2\cdot \eta \cdot \lambda_{\max} (\Bb) \cdot \kappa^2 \cdot (c_{\mathrm{upper}}\kappa+\gamma)}.$$ By Assumption \[ass:large n\], we have $$\yb(F)^T \vb' \ge 1-\delta + \frac{1+c}{8}\cdot \eta\cdot \lambda_{\min} ({\Bb})\cdot [1-\yb(F)^T \tilde{\vb}]\cdot \left( \frac{1-\gamma}{ c_{\mathrm{upper}}{\kappa}+\gamma}\right),$$ as desired.
Proof of Lemma \[lemma:combine\]
--------------------------------
Recall that $V$ is the support of $\vb^*$, the population leading generalized eigenvector, and that $\yb^* = \vb^*/\|\vb^*\|_2$. Let $F_{t-1} = \mathrm{supp}(\vb_{t-1})$, $F_{t} = \mathrm{supp}(\vb_{t})$, and let $F=F_{t-1}\cup F_t \cup V$. Note that the cardinality of $F$ is at most $k' = 2k+s$, since $|F_t|=|F_{t-1}| = k$. Let $$\vb_t' = \Cb_F \vb_{t-1}/ \| \Cb_F \vb_{t-1}\|_2,$$ where $\Cb_F$ is the submatrix of $\Cb$ restricted to the rows and columns indexed by $F$. We note that ${\vb}_t'$ is equivalent to the one in Algorithm \[alg:tgd\], since the elements of $\vb_t'$ outside of the set $F$ take value zero.
Applying Lemma \[lemma:key\] with the set $F$, we obtain $$\yb(F)^T \vb_t' \ge \yb(F)^T \vb_{t-1} + \frac{1+c}{8} \cdot \eta\cdot \lambda_{\min} ({\Bb})\cdot [1-\yb(F)^T {\vb_{t-1}}] \cdot \left[ \frac{1-\gamma}{c_\mathrm{upper}\kappa ({\Bb})+\gamma}\right].$$ Subtracting both sides of this inequality from one and rearranging the terms, we obtain $$\label{Eq:lemma:combine}
1-\yb(F)^T \vb_t' \le [ 1- \yb(F)^T \vb_{t-1} ] \cdot \left\{
1-\frac{1+c}{8}\cdot \eta \cdot \lambda_{\min} ({\Bb})\cdot
\left[ \frac{1-\gamma}{c_\mathrm{upper}\kappa ({\Bb})+\gamma}\right]\right\}.$$ This implies that $$\label{Eq:lemma:combine2}
\|\yb(F)- \vb_t'\|_2 \le \| \yb(F)- \vb_{t-1} \|_2 \cdot \sqrt{
1-\frac{1+c}{8}\cdot \eta \cdot \lambda_{\min} ({\Bb})\cdot
\left[ \frac{1-\gamma}{c_\mathrm{upper}\kappa ({\Bb})+\gamma}\right]}.$$
By Lemma \[lemma:perturbation vec\], we have $\|\yb(F)- \yb\|_2 \le \omega (k')$. By the triangle inequality, we have $$
\begin{split}
\|\yb-\vb_t'\|_2 &\le \|\yb(F)- \vb_t'\|_2+ \|\yb(F)- \yb\|_2\\
&\le \| \yb(F)- \vb_{t-1} \|_2 \cdot \sqrt{
1-\frac{1+c}{8}\cdot \eta \cdot \lambda_{\min} ({\Bb})\cdot
\left[ \frac{1-\gamma}{c_\mathrm{upper}\kappa ({\Bb})+\gamma}\right]}+ \omega (k')\\
&\le \| \yb- \vb_{t-1} \|_2 \cdot \sqrt{
1-\frac{1+c}{8}\cdot \eta \cdot \lambda_{\min} ({\Bb})\cdot
\left[ \frac{1-\gamma}{c_\mathrm{upper}\kappa ({\Bb})+\gamma}\right]}+2\omega(k'),
\end{split}$$ where the second inequality follows from (\[Eq:lemma:combine2\]). Since $\|\ub - \wb\|_2^2 = 2\,(1-\ub^T \wb)$ for unit vectors $\ub$ and $\wb$, dividing both sides by $\sqrt{2}$ shows that this is equivalent to $$\label{Eq:lemma:combine3}
\sqrt{1- |\yb^T \vb_t'| } \le \sqrt{1- |\yb^T \vb_{t-1}| } \cdot \sqrt{
1-\frac{1+c}{8} \cdot \eta \cdot \lambda_{\min} ({\Bb})\cdot
\left[ \frac{1-\gamma}{c_\mathrm{upper}\kappa ({\Bb})+\gamma}\right]}+\sqrt{2}\cdot \omega(k').$$
We define $$\nu = \sqrt{1+2 [(s/k)^{1/2} + s/k] }\cdot \sqrt{
1-\frac{1+c}{8} \cdot\eta \cdot \lambda_{\min} ({\Bb})\cdot
\left[ \frac{1-\gamma}{c_\mathrm{upper}\kappa ({\Bb})+\gamma}\right]}.$$ By Lemma \[lemma:truncation\] and picking $k>s$, we have $$\label{Eq:lemma:combine4}
\begin{split}
\sqrt{1- |\yb^T \hat{\vb}_t| } &\le \sqrt{1- |\yb^T \vb_t'| +
[(s/k)^{1/2} + s/k] \cdot [1-|\yb^T \vb_t'|^2] }\\
&\le \sqrt{1- |\yb^T \vb_t'|} \cdot \sqrt{1+ [(s/k)^{1/2} + s/k] \cdot [1+|\yb^T \vb_t'|]}\\
&\le \sqrt{1- |\yb^T \vb_t'|} \cdot \sqrt{1+2 [(s/k)^{1/2} + s/k] }\\
&\le \nu \sqrt{1- |\yb^T \vb_{t-1}|} + \sqrt{10} \cdot \omega (k'),
\end{split}$$ where the third inequality holds using the fact that $|\yb^T \vb'_t|\le 1$, and the last inequality holds by (\[Eq:lemma:combine3\]).
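For intuition, the contraction (\[Eq:lemma:combine4\]) can be unrolled over iterations. Writing $a_t = \sqrt{1- |\yb^T \hat{\vb}_t|}$ and assuming each iterate satisfies the conditions of the lemma (a sketch for intuition, not part of the formal statement), the recursion $a_t \le \nu\, a_{t-1} + \sqrt{10}\cdot \omega(k')$ with $\nu<1$ gives by induction $$a_t \le \nu^t \cdot a_0 + \sqrt{10}\cdot \omega(k')\cdot \frac{1-\nu^t}{1-\nu} \le \nu^t \cdot a_0 + \frac{\sqrt{10}\cdot \omega (k')}{1-\nu},$$ i.e., geometric convergence of $\hat{\vb}_t$ toward $\yb$ up to a statistical-error floor of order $\omega(k')$.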
---
abstract: 'Based on a sample of $1.31$ billion $J/\psi$ events collected with the BESIII detector, we report the study of the doubly radiative decay $\eta^\prime\to \gamma\gamma\pi^0$ for the first time, where the $\eta^\prime$ meson is produced via the $J/\psi\to \gamma\eta^\prime$ decay. The branching fraction of $\eta^\prime\to \gamma\gamma\pi^0$ inclusive decay is measured to be ${\cal B}(\eta^\prime\to \gamma\gamma\pi^0)_{\text{Incl.}}$ = $(3.20\pm0.07\mbox{(stat)}\pm0.23\mbox{(sys)})\times 10^{-3}$, while the branching fractions of the dominant process $\eta^\prime\rightarrow\gamma\omega$ and the nonresonant component are determined to be ${\cal B}(\eta^\prime\to \gamma\omega)\times {\cal B}(\omega\to \gamma\pi^0) = (23.7 \pm1.4\mbox{(stat)}\pm1.8\mbox{(sys)})\times 10^{-4}$ and ${\cal B}(\eta^\prime\to \gamma\gamma\pi^0)_{\text{NR}} = (6.16\pm0.64\mbox{(stat)} \pm0.67\mbox{(sys)})\times 10^{-4}$, respectively. In addition, the $M^2_{\gamma\gamma}$-dependent partial widths of the inclusive decay are also presented.'
author:
- |
[M. Ablikim$^{1}$, M. N. Achasov$^{9,d}$, S. Ahmed$^{14}$, X. C. Ai$^{1}$, O. Albayrak$^{5}$, M. Albrecht$^{4}$, D. J. Ambrose$^{45}$, A. Amoroso$^{50A,50C}$, F. F. An$^{1}$, Q. An$^{47,38}$, J. Z. Bai$^{1}$, O. Bakina$^{23}$, R. Baldini Ferroli$^{20A}$, Y. Ban$^{31}$, D. W. Bennett$^{19}$, J. V. Bennett$^{5}$, N. Berger$^{22}$, M. Bertani$^{20A}$, D. Bettoni$^{21A}$, J. M. Bian$^{44}$, F. Bianchi$^{50A,50C}$, E. Boger$^{23,b}$, I. Boyko$^{23}$, R. A. Briere$^{5}$, H. Cai$^{52}$, X. Cai$^{1,38}$, O. Cakir$^{41A}$, A. Calcaterra$^{20A}$, G. F. Cao$^{1,42}$, S. A. Cetin$^{41B}$, J. F. Chang$^{1,38}$, G. Chelkov$^{23,b,c}$, G. Chen$^{1}$, H. S. Chen$^{1,42}$, J. C. Chen$^{1}$, M. L. Chen$^{1,38}$, S. Chen$^{42}$, S. J. Chen$^{29}$, X. Chen$^{1,38}$, X. R. Chen$^{26}$, Y. B. Chen$^{1,38}$, X. K. Chu$^{31}$, G. Cibinetto$^{21A}$, H. L. Dai$^{1,38}$, J. P. Dai$^{34,h}$, A. Dbeyssi$^{14}$, D. Dedovich$^{23}$, Z. Y. Deng$^{1}$, A. Denig$^{22}$, I. Denysenko$^{23}$, M. Destefanis$^{50A,50C}$, F. De Mori$^{50A,50C}$, Y. Ding$^{27}$, C. Dong$^{30}$, J. Dong$^{1,38}$, L. Y. Dong$^{1,42}$, M. Y. Dong$^{1,38,42}$, Z. L. Dou$^{29}$, S. X. Du$^{54}$, P. F. Duan$^{1}$, J. Z. Fan$^{40}$, J. Fang$^{1,38}$, S. S. Fang$^{1,42}$, X. Fang$^{47,38}$, Y. Fang$^{1}$, R. Farinelli$^{21A,21B}$, L. Fava$^{50B,50C}$, F. Feldbauer$^{22}$, G. Felici$^{20A}$, C. Q. Feng$^{47,38}$, E. Fioravanti$^{21A}$, M. Fritsch$^{22,14}$, C. D. Fu$^{1}$, Q. Gao$^{1}$, X. L. Gao$^{47,38}$, Y. Gao$^{40}$, Z. Gao$^{47,38}$, I. Garzia$^{21A}$, K. Goetzen$^{10}$, L. Gong$^{30}$, W. X. Gong$^{1,38}$, W. Gradl$^{22}$, M. Greco$^{50A,50C}$, M. H. Gu$^{1,38}$, Y. T. Gu$^{12}$, Y. H. Guan$^{1}$, A. Q. Guo$^{1}$, L. B. Guo$^{28}$, R. P. Guo$^{1}$, Y. Guo$^{1}$, Y. P. Guo$^{22}$, Z. Haddadi$^{25}$, A. Hafner$^{22}$, S. Han$^{52}$, X. Q. Hao$^{15}$, F. A. Harris$^{43}$, K. L. He$^{1,42}$, F. H. Heinsius$^{4}$, T. Held$^{4}$, Y. K. Heng$^{1,38,42}$, T. Holtmann$^{4}$, Z. L. Hou$^{1}$, C. Hu$^{28}$, H. M. Hu$^{1,42}$, J. F. 
Hu$^{50A,50C}$, T. Hu$^{1,38,42}$, Y. Hu$^{1}$, G. S. Huang$^{47,38}$, J. S. Huang$^{15}$, X. T. Huang$^{33}$, X. Z. Huang$^{29}$, Z. L. Huang$^{27}$, T. Hussain$^{49}$, W. Ikegami Andersson$^{51}$, Q. Ji$^{1}$, Q. P. Ji$^{15}$, X. B. Ji$^{1,42}$, X. L. Ji$^{1,38}$, L. W. Jiang$^{52}$, X. S. Jiang$^{1,38,42}$, X. Y. Jiang$^{30}$, J. B. Jiao$^{33}$, Z. Jiao$^{17}$, D. P. Jin$^{1,38,42}$, S. Jin$^{1,42}$, T. Johansson$^{51}$, A. Julin$^{44}$, N. Kalantar-Nayestanaki$^{25}$, X. L. Kang$^{1}$, X. S. Kang$^{30}$, M. Kavatsyuk$^{25}$, B. C. Ke$^{5}$, P. Kiese$^{22}$, R. Kliemt$^{10}$, B. Kloss$^{22}$, O. B. Kolcu$^{41B,f}$, B. Kopf$^{4}$, M. Kornicer$^{43}$, A. Kupsc$^{51}$, W. Kühn$^{24}$, J. S. Lange$^{24}$, M. Lara$^{19}$, P. Larin$^{14}$, H. Leithoff$^{22}$, C. Leng$^{50C}$, C. Li$^{51}$, Cheng Li$^{47,38}$, D. M. Li$^{54}$, F. Li$^{1,38}$, F. Y. Li$^{31}$, G. Li$^{1}$, H. B. Li$^{1,42}$, H. J. Li$^{1}$, J. C. Li$^{1}$, Jin Li$^{32}$, K. Li$^{33}$, K. Li$^{13}$, Lei Li$^{3}$, P. R. Li$^{42,7}$, Q. Y. Li$^{33}$, T. Li$^{33}$, W. D. Li$^{1,42}$, W. G. Li$^{1}$, X. L. Li$^{33}$, X. N. Li$^{1,38}$, X. Q. Li$^{30}$, Y. B. Li$^{2}$, Z. B. Li$^{39}$, H. Liang$^{47,38}$, Y. F. Liang$^{36}$, Y. T. Liang$^{24}$, G. R. Liao$^{11}$, D. X. Lin$^{14}$, B. Liu$^{34,h}$, B. J. Liu$^{1}$, C. X. Liu$^{1}$, D. Liu$^{47,38}$, F. H. Liu$^{35}$, Fang Liu$^{1}$, Feng Liu$^{6}$, H. B. Liu$^{12}$, HuanHuan Liu$^{1}$, HuiHui Liu$^{16}$, H. M. Liu$^{1,42}$, J. Liu$^{1}$, J. B. Liu$^{47,38}$, J. P. Liu$^{52}$, J. Y. Liu$^{1}$, K. Liu$^{40}$, K. Y. Liu$^{27}$, L. D. Liu$^{31}$, P. L. Liu$^{1,38}$, Q. Liu$^{42}$, S. B. Liu$^{47,38}$, X. Liu$^{26}$, Y. B. Liu$^{30}$, Y. Y. Liu$^{30}$, Z. A. Liu$^{1,38,42}$, Zhiqing Liu$^{22}$, H. Loehner$^{25}$, X. C. Lou$^{1,38,42}$, H. J. Lu$^{17}$, J. G. Lu$^{1,38}$, Y. Lu$^{1}$, Y. P. Lu$^{1,38}$, C. L. Luo$^{28}$, M. X. Luo$^{53}$, T. Luo$^{43}$, X. L. Luo$^{1,38}$, X. R. Lyu$^{42}$, F. C. Ma$^{27}$, H. L. Ma$^{1}$, L. L. Ma$^{33}$, M. M. Ma$^{1}$, Q. M. 
Ma$^{1}$, T. Ma$^{1}$, X. N. Ma$^{30}$, X. Y. Ma$^{1,38}$, Y. M. Ma$^{33}$, F. E. Maas$^{14}$, M. Maggiora$^{50A,50C}$, Q. A. Malik$^{49}$, Y. J. Mao$^{31}$, Z. P. Mao$^{1}$, S. Marcello$^{50A,50C}$, J. G. Messchendorp$^{25}$, G. Mezzadri$^{21B}$, J. Min$^{1,38}$, T. J. Min$^{1}$, R. E. Mitchell$^{19}$, X. H. Mo$^{1,38,42}$, Y. J. Mo$^{6}$, C. Morales Morales$^{14}$, N. Yu. Muchnoi$^{9,d}$, H. Muramatsu$^{44}$, P. Musiol$^{4}$, Y. Nefedov$^{23}$, F. Nerling$^{10}$, I. B. Nikolaev$^{9,d}$, Z. Ning$^{1,38}$, S. Nisar$^{8}$, S. L. Niu$^{1,38}$, X. Y. Niu$^{1}$, S. L. Olsen$^{32}$, Q. Ouyang$^{1,38,42}$, S. Pacetti$^{20B}$, Y. Pan$^{47,38}$, M. Papenbrock$^{51}$, P. Patteri$^{20A}$, M. Pelizaeus$^{4}$, H. P. Peng$^{47,38}$, K. Peters$^{10,g}$, J. Pettersson$^{51}$, J. L. Ping$^{28}$, R. G. Ping$^{1,42}$, R. Poling$^{44}$, V. Prasad$^{1}$, H. R. Qi$^{2}$, M. Qi$^{29}$, S. Qian$^{1,38}$, C. F. Qiao$^{42}$, L. Q. Qin$^{33}$, N. Qin$^{52}$, X. S. Qin$^{1}$, Z. H. Qin$^{1,38}$, J. F. Qiu$^{1}$, K. H. Rashid$^{49,i}$, C. F. Redmer$^{22}$, M. Ripka$^{22}$, G. Rong$^{1,42}$, Ch. Rosner$^{14}$, X. D. Ruan$^{12}$, A. Sarantsev$^{23,e}$, M. Savrié$^{21B}$, C. Schnier$^{4}$, K. Schoenning$^{51}$, W. Shan$^{31}$, M. Shao$^{47,38}$, C. P. Shen$^{2}$, P. X. Shen$^{30}$, X. Y. Shen$^{1,42}$, H. Y. Sheng$^{1}$, W. M. Song$^{1}$, X. Y. Song$^{1}$, S. Sosio$^{50A,50C}$, S. Spataro$^{50A,50C}$, G. X. Sun$^{1}$, J. F. Sun$^{15}$, S. S. Sun$^{1,42}$, X. H. Sun$^{1}$, Y. J. Sun$^{47,38}$, Y. Z. Sun$^{1}$, Z. J. Sun$^{1,38}$, Z. T. Sun$^{19}$, C. J. Tang$^{36}$, X. Tang$^{1}$, I. Tapan$^{41C}$, E. H. Thorndike$^{45}$, M. Tiemens$^{25}$, I. Uman$^{41D}$, G. S. Varner$^{43}$, B. Wang$^{30}$, B. L. Wang$^{42}$, D. Wang$^{31}$, D. Y. Wang$^{31}$, K. Wang$^{1,38}$, L. L. Wang$^{1}$, L. S. Wang$^{1}$, M. Wang$^{33}$, P. Wang$^{1}$, P. L. Wang$^{1}$, W. Wang$^{1,38}$, W. P. Wang$^{47,38}$, X. F. Wang$^{40}$, Y. Wang$^{37}$, Y. D. Wang$^{14}$, Y. F. Wang$^{1,38,42}$, Y. Q. Wang$^{22}$, Z. 
Wang$^{1,38}$, Z. G. Wang$^{1,38}$, Z. H. Wang$^{47,38}$, Z. Y. Wang$^{1}$, Zongyuan Wang$^{1}$, T. Weber$^{22}$, D. H. Wei$^{11}$, P. Weidenkaff$^{22}$, S. P. Wen$^{1}$, U. Wiedner$^{4}$, M. Wolke$^{51}$, L. H. Wu$^{1}$, L. J. Wu$^{1}$, Z. Wu$^{1,38}$, L. Xia$^{47,38}$, L. G. Xia$^{40}$, Y. Xia$^{18}$, D. Xiao$^{1}$, H. Xiao$^{48}$, Z. J. Xiao$^{28}$, Y. G. Xie$^{1,38}$, Y. H. Xie$^{6}$, Q. L. Xiu$^{1,38}$, G. F. Xu$^{1}$, J. J. Xu$^{1}$, L. Xu$^{1}$, Q. J. Xu$^{13}$, Q. N. Xu$^{42}$, X. P. Xu$^{37}$, L. Yan$^{50A,50C}$, W. B. Yan$^{47,38}$, W. C. Yan$^{47,38}$, Y. H. Yan$^{18}$, H. J. Yang$^{34,h}$, H. X. Yang$^{1}$, L. Yang$^{52}$, Y. X. Yang$^{11}$, M. Ye$^{1,38}$, M. H. Ye$^{7}$, J. H. Yin$^{1}$, Z. Y. You$^{39}$, B. X. Yu$^{1,38,42}$, C. X. Yu$^{30}$, J. S. Yu$^{26}$, C. Z. Yuan$^{1,42}$, Y. Yuan$^{1}$, A. Yuncu$^{41B,a}$, A. A. Zafar$^{49}$, Y. Zeng$^{18}$, Z. Zeng$^{47,38}$, B. X. Zhang$^{1}$, B. Y. Zhang$^{1,38}$, C. C. Zhang$^{1}$, D. H. Zhang$^{1}$, H. H. Zhang$^{39}$, H. Y. Zhang$^{1,38}$, J. Zhang$^{1}$, J. J. Zhang$^{1}$, J. L. Zhang$^{1}$, J. Q. Zhang$^{1}$, J. W. Zhang$^{1,38,42}$, J. Y. Zhang$^{1}$, J. Z. Zhang$^{1,42}$, K. Zhang$^{1}$, L. Zhang$^{1}$, S. Q. Zhang$^{30}$, X. Y. Zhang$^{33}$, Y. Zhang$^{1}$, Y. H. Zhang$^{1,38}$, Y. N. Zhang$^{42}$, Y. T. Zhang$^{47,38}$, Yu Zhang$^{42}$, Z. H. Zhang$^{6}$, Z. P. Zhang$^{47}$, Z. Y. Zhang$^{52}$, G. Zhao$^{1}$, J. W. Zhao$^{1,38}$, J. Y. Zhao$^{1}$, J. Z. Zhao$^{1,38}$, Lei Zhao$^{47,38}$, Ling Zhao$^{1}$, M. G. Zhao$^{30}$, Q. Zhao$^{1}$, Q. W. Zhao$^{1}$, S. J. Zhao$^{54}$, T. C. Zhao$^{1}$, Y. B. Zhao$^{1,38}$, Z. G. Zhao$^{47,38}$, A. Zhemchugov$^{23,b}$, B. Zheng$^{48,14}$, J. P. Zheng$^{1,38}$, W. J. Zheng$^{33}$, Y. H. Zheng$^{42}$, B. Zhong$^{28}$, L. Zhou$^{1,38}$, X. Zhou$^{52}$, X. K. Zhou$^{47,38}$, X. R. Zhou$^{47,38}$, X. Y. Zhou$^{1}$, K. Zhu$^{1}$, K. J. Zhu$^{1,38,42}$, S. Zhu$^{1}$, S. H. Zhu$^{46}$, X. L. Zhu$^{40}$, Y. C. Zhu$^{47,38}$, Y. S. Zhu$^{1,42}$, Z. A. Zhu$^{1,42}$, J. 
Zhuang$^{1,38}$, L. Zotti$^{50A,50C}$, B. S. Zou$^{1}$, J. H. Zou$^{1}$\
(BESIII Collaboration)\
]{}
title: 'Observation of the doubly radiative decay $\eta^{\prime}\to \gamma\gamma\pi^0$'
---
Introduction
============
The $\eta^\prime$ meson provides a unique stage for understanding the distinct symmetry-breaking mechanisms present in low-energy quantum chromodynamics (QCD) [@qcd-sym1; @qcd-sym2; @qcd-sym3; @bes3-eta; @bes3-eta2] and its decays play an important role in exploring the effective theory of QCD at low energy [@XPhT]. Recently, the doubly radiative decay $\eta^\prime\to \gamma\gamma\pi^0$ was studied in the frameworks of the linear $\sigma$ model (L$\sigma$M) and the vector meson dominance (VMD) model [@VMD_LsigmaM_0; @VMD_LsigmaM]. It has been demonstrated that the contributions from the VMD are dominant. Experimentally, only an upper limit of the nonresonant branching fraction of ${\cal B}(\eta^\prime\to \gamma\gamma\pi^0)_{\text{NR}}<8\times 10^{-4}$ at the 90% confidence level has been determined by the GAMS-2000 experiment [@GAMS_3].
In this article, we report the first measurement of the branching fraction of the inclusive $\eta^\prime\to \gamma\gamma\pi^0$ decay and the determination of the $M^2_{\gamma\gamma}$-dependent partial widths, where $M_{\gamma\gamma}$ is the invariant mass of the two radiative photons. The inclusive decay is defined as the $\eta^\prime$ decay into the final state $\gamma\gamma\pi^0$, including all possible intermediate contributions from the $\rho$ and $\omega$ mesons below the $\eta^\prime$ mass threshold and the nonresonant contribution from excited vector mesons above the $\eta^\prime$ mass threshold. Since the contribution from mesons above the $\eta^\prime$ threshold derives from their low-mass tails and behaves like a contact term, we refer to it as "nonresonant". The branching fraction for the nonresonant $\eta^\prime\to \gamma\gamma\pi^0$ decay is obtained from a fit to the $\gamma\pi^0$ invariant mass distribution, excluding the coherent contributions from the $\rho$ and $\omega$ intermediate states. The measurement of the $M^2_{\gamma\gamma}$-dependent partial widths provides direct input to theoretical calculations of the transition form factors of $\eta^\prime\to \gamma\gamma\pi^0$ and improves the theoretical understanding of $\eta^\prime$ decay mechanisms.
Experimental Details
====================
The source of $\eta^\prime$ mesons is the radiative $J/\psi\to \gamma\eta^\prime$ decay in a sample of $1.31\times 10^{9}$ $J/\psi$ events [@NJpsi09; @NJpsi] collected by the BESIII detector. Details on the features and capabilities of the BESIII detector can be found in Ref. [@BEPCII].
The response of the BESIII detector is modeled with a Monte Carlo (MC) simulation based on [geant4]{} [@geant1]. The program [evtgen]{} [@evtgen] is used to generate a $J/\psi\to \gamma\eta^\prime$ MC sample with an angular distribution of $1 + \cos^2\theta_\gamma$, where $\theta_\gamma$ is the angle of the radiative photon relative to the positron beam direction in the $J/\psi$ rest frame. The decays $\eta^\prime\to \gamma\omega(\rho)$, $\omega(\rho)\to \gamma\pi^0$ are generated using the helicity amplitude formalism. For the nonresonant $\eta^\prime\to \gamma\gamma\pi^0$ decay, the VMD model [@VMD_LsigmaM_0; @VMD_LsigmaM] is used to generate the MC sample with $\rho(1450)$ or $\omega(1650)$ exchange. Inclusive $J/\psi$ decays are generated with the [kkmc]{} generator [@kkmc]; the known $J/\psi$ decay modes are generated by [evtgen]{} [@evtgen] with branching fractions set to the Particle Data Group (PDG) world average values [@PDG14]; the remaining unknown decays are generated with [lundcharm]{} [@lund].
Event Selection and Background Estimation
=========================================
Electromagnetic showers are reconstructed from clusters of energy deposits in the electromagnetic calorimeter (EMC). The energy deposited in nearby time-of-flight (TOF) counters is included to improve the reconstruction efficiency and energy resolution. Photon candidate showers must have a minimum energy of 25 MeV in the barrel region ($|\cos\theta|<0.80$) or 50 MeV in the end cap region ($0.86<|\cos\theta|<0.92$). Showers in the region between the barrel and the end caps are poorly measured and are excluded from the analysis. Only events without charged particles are subjected to further analysis. The average event vertex of each run is taken as the origin for the selected candidates. To select $J/\psi\to \gamma\eta^\prime$, $\eta^\prime\to \gamma\gamma\pi^0$ $(\pi^0\to \gamma\gamma)$ signal events, only events with exactly five photon candidates are retained.
To improve the resolution and reduce background, a five-constraint (5C) kinematic fit imposing energy-momentum conservation and a $\pi^0$ mass constraint is performed under the $\gamma\gamma\gamma\pi^0$ hypothesis, where the $\pi^0$ candidate is reconstructed from a pair of photons. For events with more than one $\pi^0$ candidate, the combination with the smallest $\chi^{2}_{5C}$ is selected. Only events with $\chi^{2}_{5C}<30$ are retained. The $\chi^{2}_{5C}$ distribution is shown in Fig. \[m2gam\_bf5c\] for events in the $\eta^{\prime}$ signal region, $|M_{\gamma\gamma\pi^{0}} - M_{\eta^{\prime}}|<25$ MeV/c$^2$, where $M_{\eta^{\prime}}$ is the nominal $\eta^\prime$ mass from the PDG [@PDG14]. To suppress multi-$\pi^0$ backgrounds and remove miscombined $\pi^0$ candidates, an event is vetoed if any two of the five selected photons (other than the pair forming the $\pi^0$ candidate) satisfy $|M_{\gamma\gamma} - M_{\pi^0}|<18$ MeV/c$^2$, where $M_{\pi^0}$ is the nominal $\pi^0$ mass. After the application of the above requirements, the most energetic photon is taken as the primary photon from the $J/\psi$ decay, and the remaining two photons and the $\pi^0$ are used to reconstruct the $\eta^\prime$ candidates. Figure \[etafit\_R\] shows the $\gamma\gamma\pi^0$ invariant mass spectrum.
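The candidate selection just described (choose the photon pairing with the smallest $\chi^{2}_{5C}$, require $\chi^{2}_{5C}<30$, and apply the $\pi^0$ veto) can be sketched schematically as follows. The data structure and function name are hypothetical illustrations; the actual analysis runs inside the BESIII offline software framework.

```python
def select_candidate(combinations, chi2_cut=30.0,
                     pi0_mass=0.1350, veto_halfwidth=0.018):
    """Schematic event selection (hypothetical structure, illustration only).

    Each element of `combinations` describes one gamma-gamma pairing tried
    as the pi0 candidate: its 5C-fit chi2 and the invariant masses
    (GeV/c^2) of the other photon pairs in the event.
    """
    # Keep the pairing with the smallest 5C chi2.
    best = min(combinations, key=lambda c: c["chi2_5c"])
    if best["chi2_5c"] >= chi2_cut:
        return None  # fails the chi2_5C < 30 requirement
    # pi0 veto: reject the event if any other photon pair is pi0-like.
    if any(abs(m - pi0_mass) < veto_halfwidth
           for m in best["other_pair_masses"]):
        return None
    return best
```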
![Distribution of $\chi^{2}_{5C}$ from the 5C kinematic fit for the inclusive $\eta^{\prime}$ decay. Dots with error bars are data; the heavy (black) solid curve is the sum of the signal and the expected backgrounds from MC simulations; the light (red) solid curve is the signal component, normalized to the fitted yield; the (green) dotted curve is the class I background; and the (pink) dot-dashed curve is the class II background.[]{data-label="m2gam_bf5c"}](chisq_inetaReg_etap_1210.eps){width="6.5cm"}
![Results of the fit to $M_{\gamma\gamma\pi^0}$ for the selected inclusive $\eta^\prime\to \gamma\gamma\pi^0$ signal events. The (black) dots with error bars are the data.[]{data-label="etafit_R"}](etapsb-fit_1231-2016_draft.eps){width="6.5cm"}
Detailed MC studies indicate that no peaking background remains after all selection criteria are applied. The background is divided into two classes. Class I background events come from $J/\psi\to \gamma\eta^\prime$ with the $\eta^\prime$ decaying into final states other than the signal final state. These events accumulate near the lower side of the $\eta^\prime$ signal region and are mainly from $\eta^\prime\to \pi^0\pi^0\eta$ ($\eta\to \gamma\gamma$), $\eta^\prime\to 3\pi^0$, and $\eta^\prime\to \gamma\gamma$, shown as the (green) dotted curve in Fig. \[etafit\_R\]. Class II background events are mainly from $J/\psi$ decays to final states without an $\eta^\prime$, such as $J/\psi\to \gamma\pi^0\pi^0$ and $J/\psi\to \omega\eta$ ($\omega\to \gamma\pi^0$, $\eta\to \gamma\gamma$), which contribute a smooth distribution under the $\eta^\prime$ signal region, displayed as the (pink) dot-dashed curve in Fig. \[etafit\_R\].
$\eta^\prime\to \gamma\gamma\pi^0$ (Inclusive) $\eta^\prime\to \gamma\omega, \omega\to \gamma\pi^0$ $\eta^\prime\to \gamma\gamma\pi^0$ (Nonresonant)
--------------------------------------- ------------------------------------------------ ------------------------------------------------------ -------------------------------------------------- --
$N^{\eta^\prime}$ $3435\pm76\pm244$ $2340\pm141\pm180$ $655\pm68\pm71$
$\epsilon$ 16.1% 14.8% 15.9%
${\mathcal B}~(10^{-4})$ $32.0\pm0.7\pm2.3$ $23.7\pm1.4\pm1.8^{a}$ $6.16\pm0.64\pm0.67$
${\mathcal B}_{\text{PDG}}~(10^{-4})$ – $21.7\pm1.3^{b}$ $<8$
Predictions $(10^{-4})$ 57 [@VMD_LsigmaM_0],65 [@VMD_LsigmaM] – –
\[tab:br\]
Signal Yields and Branching Fractions
=====================================
A fit to the $\gamma\gamma\pi^0$ invariant mass distribution is performed to determine the inclusive $\eta^\prime\to \gamma\gamma\pi^0$ signal yield. The probability density function (PDF) for the signal component is represented by the signal MC shape, which is obtained from the signal MC sample generated with an incoherent mixture of $\rho$, $\omega$ and the nonresonant components according to the fractions obtained in this analysis. Both the shape and the yield for the class I background are fixed to the MC simulations and their expected intensities. The shape for the class II background is described by a third-order Chebychev polynomial, and the corresponding yield and PDF parameters are left free in the fit to data. The fit range is 0.70$-$1.10 GeV/c$^2$. Figure \[etafit\_R\] shows the results of the fit. The fit quality assessed with the binned distribution is $\chi^2/\text{n.d.f}=108/95=1.14$. The signal yield and the MC-determined signal efficiency for the inclusive $\eta^\prime$ decay are summarized in Table \[tab:br\].
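As a rough cross-check (ours, not from the paper), the inclusive branching fraction follows from the fitted yield and efficiency in Table \[tab:br\] via ${\cal B} = N^{\eta^\prime} / [\epsilon \cdot N_{J/\psi} \cdot {\cal B}(J/\psi\to\gamma\eta^\prime) \cdot {\cal B}(\pi^0\to\gamma\gamma)]$. The two PDG branching fractions below are approximate values quoted for illustration and should be taken from the current PDG.

```python
# Back-of-the-envelope reproduction of the inclusive result in Table tab:br.
N_sig  = 3435        # fitted eta' -> gamma gamma pi0 yield (Table tab:br)
eff    = 0.161       # MC signal efficiency (Table tab:br)
N_jpsi = 1.3106e9    # number of J/psi events

B_jpsi_gammaetap = 5.13e-3  # B(J/psi -> gamma eta'), approximate PDG value
B_pi0_gg         = 0.988    # B(pi0 -> gamma gamma), approximate PDG value

B_incl = N_sig / (eff * N_jpsi * B_jpsi_gammaetap * B_pi0_gg)
print(f"B(eta' -> gamma gamma pi0) ~ {B_incl:.2e}")  # ~3.2e-3, as quoted
```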
In this analysis, the partial widths are obtained from the efficiency-corrected signal yields in each $M^2_{\gamma\gamma}$ bin $i$ for the inclusive $\eta^\prime \to \gamma\gamma\pi^0$ decay. The resolution in $M^2_{\gamma\gamma}$ is found to be about $5\times10^2$ (MeV/c$^2$)$^2$ from the MC simulation, which is much smaller than $1.0\times 10^4$ (MeV/c$^2$)$^2$, a statistically reasonable bin width, and hence no unfolding is necessary. The $\eta^\prime$ signal yield in each $M^2_{\gamma\gamma}$ bin is obtained by performing bin-by-bin fits to the $\gamma\gamma\pi^0$ invariant mass distributions using the fit procedure described above. The background-subtracted, efficiency-corrected signal yield is then used to obtain the partial width for each $M^2_{\gamma\gamma}$ interval, where the PDG value is used for the total width of the $\eta^{\prime}$ meson [@PDG14]. The results for $d\Gamma(\eta^\prime\to \gamma\gamma\pi^0)/dM^2_{\gamma\gamma}$ in each $M^2_{\gamma\gamma}$ interval are listed in Table \[tab:BR-FF\] and depicted in Fig. \[Form\_factor\], where the contributions from each component obtained from the MC simulations are normalized to the yields from the fit to $M_{\gamma\pi^0}$ displayed in Fig. \[Inter\_fit\].
  ----------------------------------------------------------------- ---------------------- ---------------------- ---------------------- ---------------------- ----------------------
  $M^2_{\gamma\gamma}$ ((GeV/c$^2$)$^2$)                             $[0.0, 0.01]$          $[0.01, 0.04]$         $[0.04, 0.06]$         $[0.06, 0.09]$         $[0.09, 0.12]$
  $d\Gamma(\eta^\prime\to \gamma\gamma\pi^0)/dM^2_{\gamma\gamma}$    $3.17\pm0.44\pm0.24$   $2.57\pm0.18\pm0.19$   $2.60\pm0.15\pm0.18$   $1.87\pm0.12\pm0.14$   $1.76\pm0.11\pm0.13$
  $M^2_{\gamma\gamma}$ ((GeV/c$^2$)$^2$)                             $[0.12, 0.16]$         $[0.16, 0.20]$         $[0.20, 0.25]$         $[0.25, 0.28]$         $[0.28, 0.31]$
  $d\Gamma(\eta^\prime\to \gamma\gamma\pi^0)/dM^2_{\gamma\gamma}$    $1.63\pm0.10\pm0.12$   $1.76\pm0.09\pm0.13$   $1.97\pm0.10\pm0.14$   $2.00\pm0.17\pm0.15$   $1.07\pm0.20\pm0.08$
  $M^2_{\gamma\gamma}$ ((GeV/c$^2$)$^2$)                             $[0.31, 0.36]$         $[0.36, 0.42]$         $[0.42, 0.64]$
  $d\Gamma(\eta^\prime\to \gamma\gamma\pi^0)/dM^2_{\gamma\gamma}$    $0.34\pm0.06\pm0.03$   $0.12\pm0.03\pm0.01$   $0.06\pm0.01\pm0.01$
  ----------------------------------------------------------------- ---------------------- ---------------------- ---------------------- ---------------------- ----------------------
\[tab:BR-FF\]
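As an arithmetic consistency check (ours, not the paper's), integrating the binned spectrum of Table \[tab:BR-FF\] over $M^2_{\gamma\gamma}$ and dividing by the $\eta^\prime$ total width should approximately reproduce the inclusive branching fraction. The total width used below is an approximate PDG value, quoted for illustration.

```python
# (M^2 bin in (GeV/c^2)^2, dGamma/dM^2 in keV/(GeV/c^2)^2), from Table tab:BR-FF
bins = [
    ((0.00, 0.01), 3.17), ((0.01, 0.04), 2.57), ((0.04, 0.06), 2.60),
    ((0.06, 0.09), 1.87), ((0.09, 0.12), 1.76), ((0.12, 0.16), 1.63),
    ((0.16, 0.20), 1.76), ((0.20, 0.25), 1.97), ((0.25, 0.28), 2.00),
    ((0.28, 0.31), 1.07), ((0.31, 0.36), 0.34), ((0.36, 0.42), 0.12),
    ((0.42, 0.64), 0.06),
]
gamma_total_keV = 196.0  # eta' total width, approximate PDG value

partial = sum((hi - lo) * d for (lo, hi), d in bins)  # integrated width, keV
print(f"integrated partial width: {partial:.3f} keV")
print(f"implied branching fraction: {partial / gamma_total_keV:.2e}")
# consistent with the (3.20 +/- 0.07 +/- 0.23) x 10^-3 quoted above
```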
![Partial width (in keV) versus $M^2_{\gamma\gamma}$ for the inclusive $\eta^\prime\to \gamma\gamma\pi^0$ decay. The errors include statistical and systematic uncertainties. The (blue) histogram is the sum of an incoherent mixture of the $\rho$, $\omega$, and nonresonant components from MC simulations; the (black) dotted curve is the $\omega$ contribution; the (red) dot-dashed curve is the $\rho$ contribution; and the (green) dashed curve is the nonresonant contribution. All components are normalized using the yields obtained in Fig. \[Inter\_fit\].[]{data-label="Form_factor"}](M2gamgam_DataandMC0505-2017.eps){width="6.5cm"}
![Distribution of the invariant mass $M_{\gamma\pi^0}$ and fit results in the $\eta^\prime$ mass region. The points with error bars are data; the (black) dotted curve is the $\omega$ contribution; the (red) long-dashed curve is the $\rho$ contribution; the (blue) short-dashed curve is the $\rho$-$\omega$ interference contribution; the (green) long-dashed curve is the nonresonant contribution; the (pink) histogram is the class II background; the (black) short dot-dashed curve is the combinatorial background from $\eta^\prime\to \gamma\omega$, $\gamma\rho$. The (blue) solid line shows the total fit function.[]{data-label="Inter_fit"}](Mgampi0_Interf_09162016_VMD-draft.eps){width="6.5cm"}
Assuming that the inclusive decay $\eta^\prime\to \gamma\gamma\pi^0$ can be attributed to the vector mesons $\rho$ and $\omega$ and a nonresonant contribution, we perform a fit to the $\gamma\pi^0$ invariant mass to determine the branching fraction of the nonresonant $\eta^\prime\to \gamma\gamma\pi^0$ decay, using the $\eta^\prime$ signal events with $|M_{\gamma\gamma\pi^0} - M_{\eta^\prime}|<25$ MeV/c$^{2}$. In the fit, the $\rho$-$\omega$ interference is considered, but possible interference between the $\omega$ ($\rho$) and the nonresonant process is neglected. To validate the fit, we also determine the product branching fraction for the decay chain $\eta^\prime\to \gamma\omega$, $\omega\to \gamma\pi^0$. Figure \[Inter\_fit\] shows the $M_{\gamma\pi^0}$ distribution. Since the two radiative photons are indistinguishable, two entries are filled into the histogram for each event. For the PDF of the coherent $\omega$ and $\rho$ contributions in $\eta^\prime\to \gamma\gamma\pi^0$, we use $[\varepsilon(M_{\gamma\pi^0})\times E^3_{\gamma^{\eta^\prime}}\times E^3_{\gamma^{\omega(\rho)}}\times |\text{BW}_{\omega}(M_{\gamma\pi^0}) + \alpha e^{i\theta}\text{BW}_{\rho}(M_{\gamma\pi^0})|^2\times \text{B}^2_{\eta^\prime}\times \text{B}^2_{\omega(\rho)}]\otimes \text{G}(0, \sigma)$, where $\varepsilon(M_{\gamma\pi^{0}})$ is the detection efficiency determined from MC simulations; $E_{\gamma^{\eta^\prime(\omega/\rho)}}$ is the energy of the transition photon in the rest frame of the $\eta^\prime$ ($\omega/\rho$); $\text{BW}_{\omega}(M_{\gamma\pi^0})$ is a relativistic Breit-Wigner (BW) function, and $\text{BW}_{\rho}(M_{\gamma\pi^0})$ is a relativistic BW function with a mass-dependent width [@GS]. The masses and widths of the $\rho$ and $\omega$ mesons are fixed to their PDG values [@PDG14].
$\text{B}^2_{\eta^\prime(\omega/\rho)}$ is the Blatt-Weisskopf centrifugal barrier factor for the $\eta^\prime$ ($\omega/\rho$) decay vertex with radius $R=0.75$ fm [@BR-factor1; @BR-factor2], used to damp the divergent tail caused by the factor $E^3_{\gamma^{\eta^\prime(\omega/\rho)}}$. The Gaussian function $\text{G}(0, \sigma)$ parameterizes the detector resolution. The combinatorial background arises from pairings of the $\pi^0$ with the photon from the $\eta^\prime$ decay, and its PDF is described by a fixed shape from the MC simulation. The ratio of yields between the combinatorial background and the coherent sum of the $\rho$-$\omega$ signals is fixed from the MC simulations. The shape of the nonresonant $\eta^\prime\to \gamma\gamma\pi^0$ signal is determined from the MC simulation, and its yield is determined in the fit. The class I background discussed above is fixed in shape and yield to the MC simulation. Finally, the shape of the class II background is obtained from the $\eta^\prime$ mass sidebands (738$-$788 and 1008$-$1058 MeV/c$^{2}$), and its normalization is fixed in the fit. The $M_{\gamma\pi^0}$ mass range used in the fit is 0.20$-$0.92 GeV/c$^2$. The interference phase $\theta$ between the $\rho$ and $\omega$ components is left free in the fit. Due to the low statistics of the $\rho$ contribution, the ratio $\alpha$ of the $\rho$ and $\omega$ intensities is fixed to the ratio of ${\cal B}(\eta^\prime \to \gamma\rho)\cdot {\cal B}(\rho\to \gamma\pi^0)$ to ${\cal B}(\eta^\prime \to \gamma\omega)\cdot{\cal B}(\omega\to \gamma\pi^0)$ from the PDG [@PDG14]. Figure \[Inter\_fit\] shows the results. The yields for the $\rho$, the $\omega$, and their interference are determined to be $183\pm15$, $2340\pm141$, and $174\pm92$, respectively.
The signal yields and efficiencies as well as the corresponding branching fractions for the $\eta^\prime\to \gamma\omega(\omega\to \gamma\pi^0)$ and nonresonant decays are summarized in Table \[tab:br\].
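The coherent $\rho$-$\omega$ line shape entering the PDF above can be illustrated with a minimal sketch. It uses fixed-width relativistic Breit-Wigner amplitudes and omits the efficiency, the $E^3_\gamma$ factors, the mass-dependent $\rho$ width, and the barrier factors, so it is qualitative only; the masses and widths are approximate PDG values.

```python
import cmath

def rel_bw(m, m0, width):
    # Fixed-width relativistic Breit-Wigner amplitude; the analysis uses a
    # mass-dependent width for the rho, which is simplified away here.
    return 1.0 / complex(m0**2 - m**2, -m0 * width)

def coherent_intensity(m, alpha, theta):
    """|BW_omega + alpha*exp(i*theta)*BW_rho|^2 at M(gamma pi0) = m (GeV/c^2)."""
    m_omega, g_omega = 0.78265, 0.00849  # approximate PDG mass/width (GeV)
    m_rho,   g_rho   = 0.77526, 0.1478
    amp = (rel_bw(m, m_omega, g_omega)
           + alpha * cmath.exp(1j * theta) * rel_bw(m, m_rho, g_rho))
    return abs(amp) ** 2
```

The narrow $\omega$ dominates near 0.78 GeV/c$^2$, with the broad $\rho$ and the relative phase $\theta$ shaping the tails, qualitatively as in Fig. \[Inter\_fit\].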
Systematic Uncertainties
========================
The systematic uncertainties on the branching fraction measurements are summarized in Table \[tab:error\]. The uncertainty due to the photon reconstruction is determined to be 1% per photon as described in Ref. [@number]. The uncertainties associated with the other selection criteria (the kinematic fit with $\chi^2_{5C}<30$, the requirement of exactly five photons, and the $\pi^0$ veto, $|M_{\gamma\gamma} - M_{\pi^0}|>18$ MeV/c$^2$) are studied with a control sample of $J/\psi\to \gamma\eta^{\prime}$, $\eta^{\prime}\to \gamma\omega$, $\omega\to \gamma\pi^{0}$ decays. The systematic effect of each selection criterion is estimated from the ratio of the number of events with and without the corresponding requirement. The resulting efficiency differences between data and MC (2.7%, 0.5%, and 1.9%, respectively) are taken as the corresponding systematic uncertainties.
In the fit for the inclusive $\eta^\prime$ decay, the signal shape is fixed to the MC simulation. The associated uncertainty is estimated by convolving the signal shape with a Gaussian function to account for the difference in mass resolution between data and MC simulation. In the fit to the $\gamma\pi^{0}$ distribution, alternative fits with the mass resolution left free and with the radius $R$ in the barrier factor changed from 0.75 fm to 0.35 fm are performed, and the changes in the signal yields are taken as the uncertainty due to the signal shape.
In the fit to the $M_{\gamma\gamma\pi^{0}}$ distribution, the signal shape is described with an incoherent sum of contributions from processes involving $\rho$ and $\omega$ and nonresonant processes obtained from MC simulation, where the nonresonant process is modeled with the VMD model. A fit with an alternative signal model for the different components, *i.e.* a coherent sum for the $\rho$-, $\omega$-components and a uniform angular distribution in phase space (PHSP) for the nonresonant process, is performed. The resultant changes in the branching fractions are taken as the uncertainty related to the signal model. An alternate fit to the $M_{\gamma\pi^{0}}$ distribution is performed, where the PDF of the nonresonant decay is extracted from the PHSP MC sample. The changes in the measured branching fractions are considered to be the uncertainty arising from the signal model.
In the fit to the $M_{\gamma\pi^{0}}$ distribution, the uncertainty due to the fixed relative $\rho$ intensity is evaluated by changing its expectation by one standard deviation. An alternative fit in which the ratio of yields between combinatorial backgrounds and the coherent sum of $\rho-\omega$ signals is changed by one standard deviation from the MC simulation is performed, and the change observed in the signal yield is assigned as the uncertainty. A series of fits using different fit ranges is performed and the maximum change of the branching fraction is taken as a systematic uncertainty.
The uncertainty due to the class I background is estimated by varying the numbers of expected background events by one standard deviation according to the errors on the branching fraction values in PDG [@PDG14]. The uncertainty due to the class II background is evaluated by changing the order of the Chebychev polynomial from 3 to 4 for the fit to the $\eta^{\prime}$ inclusive decay, and varying the ranges of $\eta^{\prime}$ sidebands for the fit to the $\gamma\pi^{0}$ invariant mass distribution, respectively.
The number of $J/\psi$ events is $N_{J/\psi} = (1310.6\pm 10.5)\times 10^{6}$ [@NJpsi09; @NJpsi], corresponding to an uncertainty of 0.8%. The branching fractions for the $J/\psi\to \gamma\eta^\prime$ and $\pi^0\to \gamma\gamma$ decays are taken from the PDG [@PDG14], and their uncertainties are propagated as a systematic uncertainty. The total systematic errors are 7.1%, 7.7%, and 10.8% for the inclusive decay, the $\omega$ contribution, and the nonresonant decay, respectively, as summarized in Table \[tab:error\].
  Source (uncertainties in %)   $\eta^\prime_{\text{Incl.}}$   $\eta^{\prime}_{\omega}$   $\eta^{\prime}_{\text{NR}}$
  ----------------------------- ------------------------------ -------------------------- -----------------------------
  Photon detection              5.0                            5.0                        5.0
  5C kinematic fit              2.7                            2.7                        2.7
  Number of photons             0.5                            0.5                        0.5
  $\pi^{0}$ veto                1.9                            1.9                        1.9
  Signal shape                  0.5                            1.5                        2.3
  Signal model                  1.7                            1.0                        4.3
  $\rho$ relative intensity     –                              1.3                        4.9
  Combinatorial backgrounds     –                              1.3                        0.8
  Fit range                     0.8                            1.6                        2.1
  Class I background            0.1                            0.2                        0.6
  Class II background           0.3                            1.8                        4.2
  Cited branching fractions     3.1                            3.1                        3.1
  Number of $J/\psi$ events     0.8                            0.8                        0.8
  Total systematic error        7.1                            7.7                        10.8
\[tab:error\]
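The totals in the last row correspond to adding the individual contributions in quadrature, under the usual assumption of uncorrelated sources. A quick numerical cross-check (values transcribed from the table, with the "–" entries set to zero):

```python
import math

# Per-source systematic uncertainties in %, one list per column of the table
# (inclusive, omega contribution, nonresonant); "-" entries entered as 0.0.
sources = {
    "incl":  [5.0, 2.7, 0.5, 1.9, 0.5, 1.7, 0.0, 0.0, 0.8, 0.1, 0.3, 3.1, 0.8],
    "omega": [5.0, 2.7, 0.5, 1.9, 1.5, 1.0, 1.3, 1.3, 1.6, 0.2, 1.8, 3.1, 0.8],
    "nr":    [5.0, 2.7, 0.5, 1.9, 2.3, 4.3, 4.9, 0.8, 2.1, 0.6, 4.2, 3.1, 0.8],
}

# Quadrature sum for each column, rounded to one decimal as in the table.
totals = {k: round(math.sqrt(sum(x * x for x in v)), 1) for k, v in sources.items()}
print(totals)  # {'incl': 7.1, 'omega': 7.7, 'nr': 10.8}
```

The three totals reproduce the quoted 7.1%, 7.7%, and 10.8% exactly.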
Summary
=======
In summary, with a sample of $1.31\times 10^{9}$ $J/\psi$ events collected with the BESIII detector, the doubly radiative decay $\eta^{\prime}\to \gamma\gamma\pi^{0}$ has been studied. The branching fraction of the inclusive decay is measured for the first time to be ${\cal B}(\eta^{\prime}\to \gamma\gamma\pi^{0})_{\text{Incl.}} = (3.20\pm0.07\mbox{(stat)}\pm0.23\mbox{(sys)})\times 10^{-3}$. The $M^{2}_{\gamma\gamma}$-dependent partial decay widths are also determined. In addition, the branching fraction for the nonresonant decay is determined to be ${\cal B}(\eta^{\prime}\to \gamma\gamma\pi^{0})_{\text{NR}}$ = $(6.16\pm0.64\mbox{(stat)}\pm0.67\mbox{(sys)})\times 10^{-4}$, which agrees with the upper limit measured by the GAMS-2000 experiment [@GAMS_3]. As a validation of the fit, the product branching fraction involving the $\omega$ intermediate state is obtained to be ${\cal B}(\eta^{\prime}\to \gamma\omega)\cdot{\cal B}(\omega\to \gamma\pi^{0})$ = $(2.37\pm0.14\mbox{(stat)} \pm0.18\mbox{(sys)})\times 10^{-3}$, which is consistent with the PDG value [@PDG14]. These results are useful for testing QCD calculations of the transition form factor and provide valuable input to the theoretical understanding of light-meson decay mechanisms.
Acknowledgments {#acknowledgments .unnumbered}
===============
The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts No. 11125525, No. 11235011, No. 11322544, No. 11335008, No. 11335009, No. 11425524, No. 11505111, No. 11635010, No. 11675184; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); the Collaborative Innovation Center for Particles and Interactions (CICPI); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts No. U1232201, No. U1332201, No. U1532257, No. U1532258; CAS under Contracts No. KJCX2-YWN29, No. KJCX2-YW-N45; 100 Talents Program of CAS; National 1000 Talents Program of China; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contracts No. Collaborative Research Center CRC 1044, FOR 2359; Istituto Nazionale di Fisica Nucleare, Italy; Koninklijke Nederlandse Akademie van Wetenschappen (KNAW) under Contract No. 530-4CDP03; Ministry of Development of Turkey under Contract No. DPT2006K-120470; The Swedish Research Council; U.S. Department of Energy under Contracts No. DE-FG02-05ER41374, No. DE-SC-0010118, No. DE-SC-0010504, No. DE-SC-0012069; U.S. National Science Foundation; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt; WCU Program of National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.
[99]{} J. Steinberger, Phys. Rev. [**76**]{}, 1180 (1949); S. L. Adler, Phys. Rev. [**177**]{}, 2426 (1969); J. S. Bell and R. Jackiw, Nuovo Cim. [**A 60**]{}, 47 (1969); W. A. Bardeen, Phys. Rev. [**184**]{}, 1848 (1969).
J. Wess and B. Zumino, Phys. Lett. [**B 37**]{}, 95 (1971).
E. Witten, Nucl. Phys. [**B223**]{}, 422 (1983).
H. B. Li, J. Phys. [**G 36**]{}, 085009 (2009).
A. Kupsc, Int. J. Mod. Phys. [**E 18**]{}, 1255 (2009).
J. Gasser and H. Leutwyler, Nucl. Phys. [**B250**]{}, 465 (1985); H. Neufeld and H. Rupertsberger, Z. Phys. [**C 68**]{}, 91 (1995).
R. Jora, Nucl. Phys. Proc. Suppl. [**207-208**]{}, 224 (2010).
R. Escribano, PoS QNP 2012, 079 (2012).
D. Alde [*et al.*]{} (GAMS-2000 Collaboration), Z. Phys. [**C 36**]{}, 603 (1987).
M. Ablikim [*et al.*]{} (BESIII Collaboration), Chin. Phys. [**C 36**]{}, 915 (2012).
M. Ablikim [*et al.*]{} (BESIII Collaboration), Chin. Phys. [**C 41**]{}, 013001 (2017).
M. Ablikim [*et al.*]{} (BES Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. [**A 614**]{}, 345 (2010).
S. Agostinelli [*et al.*]{} (GEANT4 Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. [**A 506**]{}, 250 (2003).
D. J. Lange, Nucl. Instrum. Meth. [**A 462**]{}, 1 (2001); R. G. Ping, Chin. Phys. [**C 32**]{}, 599 (2008).
S. Jadach, B. F. L. Ward and Z. Was, Comput. Phys. Commun. [**130**]{}, 260 (2000); Phys. Rev. [**D 63**]{}, 113009 (2001).
C. Patrignani [*et al.*]{} (Particle Data Group), Chin. Phys. [**C 40**]{}, 100001 (2016).
J. C. Chen, G. S. Huang, X. R. Qi, D. H. Zhang, and Y. S. Zhu, Phys. Rev. [**D 62**]{}, 034003 (2000).
J. P. Lees [*et al.*]{} (BaBar Collaboration), Phys. Rev. [**D 88**]{}, 032013 (2013).
S. U. Chung, Phys. Rev. [**D 48**]{}, 1225 (1993).
F. von Hippel and C. Quigg, Phys. Rev. [**D 5**]{}, 624 (1972).
M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. [**D 83**]{}, 012003 (2011).
---
abstract: 'A new construction of codes from old ones is considered, it is an extension of the matrix-product construction. Several linear codes that improve the parameters of the known ones are presented.'
title: 'New Linear Codes from Matrix-Product Codes with Polynomial Units'
---
<span style="font-variant:small-caps;">Fernando Hernando[^1]</span>
<span style="font-variant:small-caps;">Diego Ruano[^2]</span>
Introduction
============
Matrix-product codes were initially considered in [@Blackmore-Norton; @Ozbudak]. They are an extension of several classic constructions of new codes from old ones, like the Plotkin $u|u+v$-construction. In this article we consider this construction with cyclic codes, namely matrix-product codes with polynomial units, where the entries of the matrix used to define the codes are polynomials instead of elements of the finite field. The codes obtained with this construction are quasi-cyclic codes [@LF]. This class of codes became important after it was shown that some codes in it meet a modified Gilbert-Varshamov bound [@Kas].
An extension of the lower bound on the minimum distance from [@Ozbudak] is obtained. This bound is sharp for a matrix-product code of nested codes; however, it is not sharp in this new setting, that is, we obtain codes with minimum distance beyond this bound. By investigating the construction of the words of possible minimum weight of a matrix-product code, we are able to sift an exhaustive search and obtain three matrix-product codes with polynomial units that improve the parameters of the codes in [@cota]. Another four linear codes, improving the parameters of the previously known linear codes, are obtained from them.
Matrix-Product Codes with Polynomial Units {#sec:mp}
==========================================
Let ${\mathbb{F}_q}$ be the finite field with $q$ elements, $C_1, \ldots, C_s \subset \mathbb{F}_q^m$ cyclic codes of length $m$, and $A=(a_{i,j})$ an $s\times l$-matrix, with $s\leq l$, whose entries are units in ${\mathbb{F}_q}[x]/(x^m -1)$. A unit in ${\mathbb{F}_q}[x]/(x^m -1)$ is a polynomial of degree lower than $m$ whose greatest common divisor with $x^m -1$ is $1$ (they are co-prime). We remark that the cyclic codes generated by $f$ and by $f u$, with $f \mid x^m -1$ and $\gcd (u,x^m-1)=1$, are the same code. The so-called matrix-product code with polynomial units $C=[C_1 \cdots C_s] \cdot A$ is the set of all matrix products $[c_1 \cdots c_s] \cdot A$ where $c_i\in C_i \subset {\mathbb{F}_q}[x]/(x^m -1)$ for $i=1,\ldots, s$.
The $i$-th column of any codeword is an element of the form $\sum_{j=1}^s a_{j,i} c_j\in \mathbb{F}_q[x]/(x^m-1)$, so the codewords can be viewed as $$\label{VectorCodeword}
c=\left(\sum_{j=1}^s a_{j,1} c_j, \ldots , \sum_{j=1}^s a_{j,l} c_j \right)
\in \left(\mathbb{F}_q[x]/(x^m -1)\right)^l.$$
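This column-by-column computation is easy to carry out by machine. The snippet below is our own minimal sketch (not code from the paper): polynomials over $\mathbb{F}_2$ are encoded as integer bitmasks, and a codeword is assembled as in the display above, for a toy example with $m=3$, $A=\left(\begin{smallmatrix}1 & x\\ 0 & 1\end{smallmatrix}\right)$, $c_1=1+x+x^2$, $c_2=1+x$:

```python
def polymul_mod(a, b, m):
    """Product of two GF(2)[x] polynomials (bitmask ints), reduced by x^m = 1."""
    res = 0
    for i in range(m):
        if (a >> i) & 1:
            res ^= b << i
    # fold the high part back down, using x^(m+k) = x^k:
    return (res & ((1 << m) - 1)) ^ (res >> m)

def mp_codeword(c, A, m):
    """Columns of [c_1 ... c_s] . A with entries in F_2[x]/(x^m - 1)."""
    s, l = len(A), len(A[0])
    word = []
    for i in range(l):
        col = 0
        for j in range(s):
            col ^= polymul_mod(A[j][i], c[j], m)  # addition over GF(2) is XOR
        word.append(col)
    return word

# m = 3, A = [[1, x], [0, 1]], c_1 = 1 + x + x^2, c_2 = 1 + x:
word = mp_codeword([0b111, 0b011], [[0b001, 0b010], [0b000, 0b001]], 3)
print([bin(p) for p in word])  # ['0b111', '0b100'], i.e. (1 + x + x^2, x^2)
```

Here $x \cdot (1+x+x^2) = 1+x+x^2$ modulo $x^3-1$, so the second column is $(1+x+x^2) + (1+x) = x^2$.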
One can generate $C$ with the matrix: $$G=\left(
\begin{tabular}{cccccc}
$a_{1,1}f_1$ & $a_{1,2}f_1$ & $\cdots$ & $a_{1,s}f_1$& $\cdots$ & $a_{1,l}f_1$\\
$a_{2,1}f_2$ & $a_{2,2}f_2$& $\cdots$ & $a_{2,s}f_2$ & $\cdots$ & $a_{2,l}f_2$\\
$\vdots$ & $\vdots$& $\cdots$ & $\vdots$& $\cdots$ & $\vdots$\\
$a_{s,1}f_s$ & $a_{s,2}f_s$& $\cdots$ & $a_{s,s}f_s$ & $\cdots$ & $a_{s,l}f_s$\\
\end{tabular}\right),$$ where $f_i$ is the generator polynomial of $C_i$, $i=1,\ldots,s$. That is, we have that $C = \{ (h_1, \ldots, h_s) G ~|~ h_i \in {\mathbb{F}_q}[x] \mathrm{~with~degree~} < m - \deg(f_i), i=1,\ldots ,s \}$, and it follows that $C$ is a quasi-cyclic code.
Let $C_i$ be a $[m,k_i,d_i]$ cyclic code, then the matrix-product code with polynomial units $C=[C_1 \cdots C_s] \cdot A$ is a linear code over $\mathbb{F}_q$ with length $lm$ and dimension $k=k_1+\cdots+k_s$ if the matrix $A$ has full rank over $\mathbb{F}_q[x]/(x^m-1)$.
The length follows from the construction of the code. Let $A$ be an $s\times l$ matrix with $s\leq l$ which has full rank, and let $c_i\in C_i$ for $i=1,\ldots,s$ be such that $[c_1,\ldots,c_s]\neq [0,\ldots,0]$. Since $A$ has rank equal to $s$, it follows that $[c_1,\ldots,c_s]\cdot A\neq [0,\ldots,0]$. Therefore, $\# C=\#\{[c_1,\ldots,c_s]\cdot A\mid c_i\in C_i, i=1,\ldots,s\}= (\# C_1) \cdots (\# C_s)=q^{k_1+\cdots+k_s}$.
We denote by $R_i= (a_{i,1},\ldots,a_{i,l})$ the element of $({\mathbb{F}_q}[x]/(x^m -1))^l$ consisting of the $i$-th row of $A$, for $i=1,\ldots,s$. We consider $C_{R_i}$, the ${\mathbb{F}_q}[x]/(x^m -1)$-submodule of $({\mathbb{F}_q}[x]/(x^m -1))^l$ generated by $R_1,\ldots, R_i$. In other words, $C_{R_i}$ is a linear code over a ring, and we denote by $D_i$ the minimum Hamming weight of the nonzero words of $C_{R_i}$, $D_i = \min \{ wt (x) {~ | ~}x \in C_{R_i}, x \neq 0 \}$. We obtain a lower bound for the minimum distance of $C$ by extending the proof of the main result in [@Ozbudak].
\[lowerbound\] Let $C$ be the matrix-product code with polynomial units $[C_1 \cdots C_s] \cdot A$ where $A$ has full rank over $\mathbb{F}_q[x]/(x^m-1)$. Then $$\label{distancia}
d(C)\geq d^\ast= \min\{d_1D_1,d_2D_2, \ldots ,d_s D_s\},$$ where $d_i = d(C_i)$, $D_i = d(C_{R_i})$ and $C_{R_i}$ is as described above.
Any codeword of $C$ is of the form $c=[c_1 \cdots
c_s]\cdot A$. Let us suppose that $c_r \neq 0$ and $c_i=0$ for all $i>r$. It follows that $[c_{j,1}x^{j-1}, \cdots, c_{j,s}x^{j-1}] \cdot A \in
C_{R_r}$ for $j=1,\ldots,m$, where $c_i = c_{1,i} + c_{2,i} x + \cdots + c_{m,i} x^{m-1}$. Since $c_r \neq 0$, it has at least $d_r$ monomials with non-zero coefficient. Suppose $c_{{i_v},r} \neq 0$, for $v=1,\ldots, d_r$. For each $v=1,\ldots, d_r$, the product $[c_{{i_v},1}x^{i_v-1}, \cdots, c_{{i_v},s}x^{i_v-1}]\cdot A$ is a non-zero codeword in $C_{R_r}$, since $A$ has full rank. Therefore the weight of $[c_{{i_v},1}x^{i_v-1}, \cdots, c_{{i_v},s}x^{i_v-1}]\cdot A$ is greater than or equal to $D_r$, and the weight of $c$ is greater than or equal to $d_rD_r$.
If $C_1, \ldots, C_s \subset \mathbb{F}_q^m$ are linear codes of length $m$ and $A=(a_{i,j}) \in \mathcal{M}({\mathbb{F}_q}, s \times l)$ is a matrix with $s\leq l$, then $C=[C_1 \cdots C_s] \cdot A$ is a matrix-product code, initially considered in [@Blackmore-Norton; @Ozbudak]. We denote by $R_i= (a_{i,1},\ldots,a_{i,l})$ the element of $\mathbb{F}_q^l$ consisting of the $i$-th row of $A$, for $i=1,\ldots,s$. We let $D_i$ be the minimum distance of the code $C_{R_i}$ generated by $\langle R_1,\ldots, R_i\rangle$ in ${\mathbb{F}_q}^l$. In [@Ozbudak] the following lower bound for the minimum distance of the matrix-product code $C$ is obtained: $d(C)\geq \min\{d_1D_1,d_2D_2, \ldots ,d_s D_s\}$, where $d_i$ is the minimum distance of $C_i$. If we consider nested codes $C_1, \ldots, C_s$, the previous bound is sharp for matrix-product codes [@hlr]. However, if we consider a matrix-product code with polynomial units, then the bound from Proposition \[lowerbound\] is not sharp in general, as one can see in the examples stated below.
Let us consider the same approach as that of [@hlr] to construct a codeword with minimum weight in this more general setting: set $c_{1}, \ldots, c_{p}\in {\mathbb{F}_q}[x]/(x^m -1)$ such that $c_{1}=\cdots = c_{p}$, with $wt(c_{p}) = d_p$, and $c_{p+1}=\ldots=c_s = 0$. Let $r = \sum_{i=1}^p r_i R_i$, with $r_i \in {\mathbb{F}_q}[x]/(x^m-1)$, be a word in $C_{R_{p}}$ with weight $D_p$. If $c'_i = r_i c_i$ then $$[c'_1 \cdots c'_s]\cdot A=c_1\left(\sum_{j=1}^p a_{j,1} r_j,
\ldots,\sum_{j=1}^p a_{j,l} r_j \right)=c_p r.$$
Although, for a cyclic code $C$ and a unit $g$ in ${\mathbb{F}_q}[x]/(x^m-1)$, one has $C = \{ c g {~ | ~}c \in C \}$, the weight of $c$ is, in general, different from that of $c g$. Hence, the weight of $c_p r$ is greater than or equal to $d_p D_p$, and may be strictly larger. We remark that this phenomenon allows us to obtain codes with minimum distance beyond the lower bound.
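This weight change under multiplication by a unit is easy to observe numerically. In the sketch below (our own bitmask arithmetic over $\mathbb{F}_2$, not taken from the paper), the weight-$2$ word $1+x$ multiplied by the unit $x+x^2+x^4$ in $\mathbb{F}_2[x]/(x^5-1)$ gives a word of weight $4$:

```python
def mulmod(a, b, m):
    """GF(2)[x] product of bitmask polynomials, reduced by x^m = 1."""
    r = 0
    for i in range(m):
        if (a >> i) & 1:
            r ^= b << i
    return (r & ((1 << m) - 1)) ^ (r >> m)

def polygcd(a, b):
    """Euclidean gcd in GF(2)[x] on bitmask polynomials."""
    while b:
        while a and a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        a, b = b, a
    return a

wt = lambda p: bin(p).count("1")  # Hamming weight = number of set bits

m, c, g = 5, 0b00011, 0b10110            # c = 1 + x,  g = x + x^2 + x^4
assert polygcd(g, (1 << m) | 1) == 1     # g is coprime to x^5 - 1, i.e. a unit
print(wt(c), wt(mulmod(c, g, m)))        # 2 4
```

The `polygcd` check confirms that $g$ is coprime to $x^5-1$, so $c$ and $cg$ generate the same cyclic code while having different weights.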
New linear codes: Plotkin construction with polynomials {#sec:newcodes}
=======================================================
Obtaining a sharper bound than the one in the previous section is a very hard problem; indeed, it amounts to computing the minimum distance of a quasi-cyclic code. However, by analyzing the lower bound $d^\ast$, we have performed a search to find codes with good parameters. An exhaustive search in this family is only feasible if one considers some extra conditions; these conditions should be necessary for having good parameters, but not sufficient. We will assume further particular conditions that allowed us to carry out the search successfully, discarding a significant number of cases. We have used the structure obtained in the previous section for matrix-product codes with polynomial units from nested codes, and we have obtained some binary linear codes improving the parameters of the previously known codes.
Let $s=l=2$, and $A$ the matrix $$A=\left(\begin{matrix}
g_1 & g_2 \\
0 & g_4 \\
\end{matrix}\right),$$ where $g_1, g_2, g_4$ are units in $\mathbb{F}_2[x]/(x^m-1)$. In this way $A$ has full rank over $\mathbb{F}_2[x]/(x^m-1)$, with $D_1=2$ and $D_2=1$. We may also consider this family of codes as an extension of the Plotkin $u\mid u+v$-construction.
For nested matrix-product codes the bound $d^\ast = \min \{ d_1 D_1, \ldots, d_s D_s \}$ is sharp. Furthermore, by [@hlr Theorem 1] we have some words with weight $d_i D_i$ for $i=1, \ldots, s$. We follow the construction of these words and choose a matrix $A$ in such a way that they have weight larger than $d_iD_i$. Let $C_1 = (f_1)$ and $C_2 = (f_2)$, with $f_1 \mid f_2$ (that is, $C_1 \supset C_2$), and $C=[C_1C_2]\cdot A$. We consider $h_1, h_2 \in \mathbb{F}_2[x]$ such that $wt(f_1 h_1) = d_1$ and $wt (f_2 h_2) =d_2$, and $r_1, r_2 \in \mathbb{F}_2[x]/(x^m -1)$ such that $r_1 R_1 + r_2 R_2$ is a codeword with minimum Hamming weight in $C_{R_2}$, that is, with weight $1$. Thus, the words $[f_1 h_1, 0] \cdot A= (f_1 h_1 g_1, f_1 h_1 g_2)$ and $[f_2 h_2 r_1, f_2 h_2 r_2]\cdot A= ( f_2 h_2 r_1 g_1, f_2 h_2 (r_1 g_2 + r_2 g_4) )$ have weight greater than or equal to $2d_1$ and $d_2$, respectively.
In particular, the words with minimum Hamming weight in $C_{R_2}$ are generated by $R_2$, for $r_1=0$, and $g_4 R_1 -g_2 R_2$, for $r_1=g_4$, $r_2=-g_2$. Therefore, the words of $C$ with possible minimum weight are: $(f_1 h_1 g_1 , f_1 h_1 g_2)$, $(0, f_2 h_2 g_4)$ and $(f_2 h_2 g_1 g_4, 0)$. Hence, we want to get $f_1 h_1 g_1$ or $f_1 h_1 g_2$ with weight greater than $d_1$ and $f_2 h_2 g_4 $ and $f_2 h_2 g_1 g_4$ with weight greater than $d_2$.
We shall also assume that $d_2 > 2 d_1$; therefore we only need $f_1 h_1 g_1$ or $f_1 h_1 g_2$ to have weight greater than $d_1$ in order to have a chance of improving the lower bound from Proposition \[lowerbound\].
Moreover, we may consider $g_1=1$ without loss of generality: notice that $f_2$ and $f_2g_1$ define the same cyclic code, hence a codeword is of the form $(f_1h_1g_1,f_1h_1g_2+f_2h_2g_1)$. Multiplying by $g_1^{-1}$ we obtain $(f_1h_1,f_1h_1(g_2/g_1)+f_2h_2)$, where $g=g_2/g_1$ is a unit.
Summarizing, we have performed a sifted search following these criteria: we consider matrix-product codes with polynomial units $C=[C_1C_2]\cdot A$, where $C_1, C_2$ are nested cyclic codes of the same length with $d_2 > 2 d_1$, and a matrix $$A=\left(\begin{matrix}
1 & g \\
0 & 1
\end{matrix}\right),$$ with $g$ a unit in $\mathbb{F}_2[x]/(x^m -1)$ such that $wt(f_1 h_1 g) > d_1$.
We have compared the minimum distance of these binary linear codes with the ones in [@cota] using [@ma]. We pre-computed a table containing all the cyclic codes up to length $55$, their parameters and their words of minimum weight. We obtained the following linear codes whose parameters are better than the ones previously known:
From [@cota] New codes
--------------- ----------------------------------
$[94,25,26]$ ${\mathcal{C}}_1=[94,25,27]$
$[102,28,27]$ ${\mathcal{C}}_2=[102,28,28] $
$[102,29,26]$ ${\mathcal{C}}_3=[102, 29, 28] $
${\mathcal{C}}_1=[C_1,C_2] \cdot A$, where $C_1=(f_1)$ and $C_2=(f_2)$ with:
- $f_1=x^{23} + x^{22} + x^{21} + x^{20} + x^{18} + x^{17} + x^{16} +
x^{14} + x^{13} + x^{11} + x^{10}+ x^9 + x^5 + x^4 + 1,$
- $f_2=(x^{47}-1)/(x+1),$
- $g=x^{20} + x^{19} + x^{13} + x^{12} + x^{11} + x^9 + x^7 +
x^4 + x^3 + x^2 + 1.$
${\mathcal{C}}_2=[C_1,C_2]\cdot A$, where $C_1=(f_1)$ and $C_2=(f_2)$ with:
- $f_1=x^{25} + x^{23} + x^{22} + x^{21} + x^{20} + x^{18} + x^{16} +
x^{11 }+ x^{10} + x^8 + x^7 + x^6 +x^5 + x^4 + x + 1,$
- $f_2=(x^{51}-1)/(x^2+x+1),$
- $g=x^{20} + x^{15 }+ x^{14} + x^{10} + x^9 + x^7 + 1.$
${\mathcal{C}}_3=[C_1,C_2] \cdot A$, where $C_1=(f_1)$ and $C_2=(f_2)$ with:
- $f_1=x^{24} + x^{23} + x^{21} + x^{19} + x^{18} + x^{15} + x^{14} +
x^{13} + x^{12} + x^{11} + x^9+ x^8+ x^6 + x^4 + 1,$
- $f_2=(x^{51}-1)/(x^2+x+1),$
- $g=x^{50} + x^{49} + x^{48} + x^{46} + x^{44} + x^{43} + x^{42} + x^{41} + x^{38} + x^{37} + x^{36} +
x^{34} + x^{32} + x^{29} + x^{27} + x^{25} + x^{24} + x^{19} + x^{17} + x^{15} + x^{13} + x^{12} +
x^{10} + x^8 + x^5 + x + 1.$
Moreover, by operating on ${\mathcal{C}}_3$ we obtain four more codes.
From [@cota] New codes Method
--------------- -------------------------------- ------------------------------------------
$[101,29,26]$ ${\mathcal{C}}_4 =[101,29,27]$ Puncture Code(${\mathcal{C}}_3$,[102]{})
$[101,28,26]$ ${\mathcal{C}}_5 =[101,28,28]$ Shorten Code(${\mathcal{C}}_3$,[101]{})
$[100,28,26]$ ${\mathcal{C}}_6 =[100,28,27]$ Puncture Code(${\mathcal{C}}_5$,[101]{})
$[103,29,27]$ ${\mathcal{C}}_7 =[103,29,28]$ Extend Code(${\mathcal{C}}_3$)
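The operations in the Method column are the standard puncturing, shortening, and (parity) extension of a binary linear code. The toy sketch below illustrates their effect on the minimum distance using the $[3,2,2]$ single-parity-check code (our own example; the actual codes above are far larger):

```python
from itertools import product

def min_dist(code):
    """Minimum distance of a linear code = minimum nonzero weight."""
    return min(sum(c) for c in code if any(c))

# [3,2,2] single-parity-check code: all even-weight words of length 3.
code = [c for c in product([0, 1], repeat=3) if sum(c) % 2 == 0]

def puncture(code, i):   # delete coordinate i: [n,k,d] -> [n-1, k, >= d-1]
    return sorted({c[:i] + c[i+1:] for c in code})

def shorten(code, i):    # keep words with c_i = 0, then delete coordinate i:
    return sorted({c[:i] + c[i+1:] for c in code if c[i] == 0})  # [n-1, k-1, >= d]

def extend(code):        # append an overall parity bit: [n+1, k, d or d+1]
    return [c + (sum(c) % 2,) for c in code]

print(min_dist(code), min_dist(puncture(code, 2)),
      min_dist(shorten(code, 2)), min_dist(extend(code)))  # 2 1 2 2
```

Puncturing may lose one unit of distance, shortening preserves it, and extending can only keep or increase it, which matches the parameter changes in the table above.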
In addition, a good number of new quasi-cyclic codes attaining the best known lower bounds are obtained with this method; one can find $434$ of these codes in [@chen-web].
Acknowledgements {#acknowledgements .unnumbered}
================
The authors would like to thank M. Greferath for his course at Claude Shannon Institute and P. Beelen and T. Høholdt for helpful comments on this paper.
[99]{}
Tim Blackmore and Graham H. Norton. Matrix-product codes over [$\Bbb F\sb q$]{}. , 12(6):477–500, 2001.
Wieb Bosma, John Cannon, and Catherine Playoust. The magma algebra system. [I]{}. the user language. , 24(3-4):235–265, 1997.
Eric Zhi Chen. Web database of binary [QC]{} codes. Online available at <http://www.tec.hkr.se/~chen/research/codes/searchqc2.htm>. Accessed on 2009-03-27.
Markus Grassl. . Online available at <http://www.codetables.de>, 2007. Accessed on 2009-03-27.
Fernando Hernando, Kristine Lally, and Diego Ruano. Construction and decoding of matrix-product codes from nested codes. , 20:497–507, 2009.
T. Kasami. A [G]{}ilbert-[V]{}arshamov bound for quasi-cyclic codes of rate [$1/2$]{}. , IT-20:679, 1974.
Kristine Lally and Patrick Fitzpatrick. Algebraic structure of quasicyclic codes. , 111(1-2):157–175, 2001.
Ferruh [Ö]{}zbudak and Henning Stichtenoth. Note on [N]{}iederreiter-[X]{}ing’s propagation rule for linear codes. , 13(1):53–56, 2002.
[^1]: Is supported in part by the Claude Shannon Institute, Science Foundation Ireland Grant 06/MI/006 (Ireland) and by MEC MTM2007-64704 and Junta de CyL VA025A07 (Spain).
[^2]: Is supported in part by MEC MTM2007-64704 and Junta de CyL VA065A07 (Spain).
---
author:
- 'Ingryd Pereira$^{1}$'
- 'Diego Santos$^{2}$'
bibliography:
- 'mybib.bib'
- 'paper.bib'
title: 'OMG Emotion Challenge - ExCouple Team'
---
The proposed model uses only the audio modality. The expression of emotion through the voice is one of the essential forms of human communication [@weninger2013acoustics]. Although the voice is a reliable source of affective information, recognizing affect from the voice is a complicated task [@liu2015emotional; @schuller2011recognising].
One of the challenges when processing audio is the representation of the audio characteristics. For a long time, handcrafted transformations, such as the MFCC, have stood out in this area. However, traditional feature extraction techniques lose too much information from the audio, unlike deep learning models, which make it possible to use the lowest-level representations of raw speech, such as spectral characteristics, for speech recognition and to learn this transformation automatically. Deep learning models also allow the use of convolutional and pooling operations to represent and deal with some typical forms of speech variability (e.g., differences in vocal tract length between speakers, different speech styles, etc.) [@deng2014deep].
However, deep learning models require a large amount of labeled training data to perform well, and the scarcity of available emotional data makes the task of emotion recognition challenging.
Semi-supervised learning can overcome the problem of scarce labeled data. For the OMG Challenge, we use a GAN, which is trained in an unsupervised fashion, to learn the audio representation; this representation is then used as input for the model that predicts the arousal and valence values. The benefit of this approach is that the part of the model that represents the audio can be trained on any database, with a much larger amount of data, since no labels are required for its training. Doing so also yields a general model of audio representation, which allows the model to be used in different tasks and on different databases without retraining.
To develop the application used for this challenge, we use a BEGAN, which employs an autoencoder as its discriminator. The encoder part of this autoencoder learns how to perform the audio representation. For the BEGAN training, we use the IEMOCAP database, which is one of the largest emotional databases available. Training ran for 100 epochs, with batch size 16 and a $\gamma$ value of 0.7.
We only use the audio modality of the database, but all files are provided in MP4 video format. Therefore, as preprocessing, the application extracts the audio from all videos in the database and saves it in WAV format. The next step is to resample the audio to 16 kHz. Each audio track is then decomposed into non-overlapping 1-second chunks. Finally, the raw audio is converted to a spectrogram via the Short-Time Fourier Transform, with an FFT size of 1024 and a hop length of 512.
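A minimal sketch of this preprocessing chain is given below, using a pure-NumPy STFT. The Hann window and the reading of the length $512$ as the hop size are our assumptions, since the text does not specify them:

```python
import numpy as np

SR, N_FFT, HOP = 16000, 1024, 512  # sample rate, FFT size, hop length

def spectrogram(chunk, n_fft=N_FFT, hop=HOP):
    """Magnitude STFT of one 1-second, 16 kHz audio chunk."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(chunk) - n_fft) // hop
    frames = np.stack([chunk[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape (n_frames, n_fft//2 + 1)

chunk = np.random.default_rng(0).standard_normal(SR)  # stand-in for one chunk
S = spectrogram(chunk)
print(S.shape)  # (30, 513)
```

For a 1-second chunk at 16 kHz, this yields 30 frames of 513 frequency bins each, which is the input fed to the representation module.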
Figure \[abstraction\] presents an abstraction of the developed model. The model uses the preprocessed audio as input to the representation module pre-trained by the BEGAN. The encoder output is the input to a set of convolutional layers followed by dense layers with *tanh* activation, which predict the arousal and valence values (values between -1 and 1).
![Abstraction of the classifier and prediction models[]{data-label="abstraction"}](absDesafio.jpg)
In preprocessing, the application divides the audio into 1-second pieces and performs the prediction for each of these pieces. At the end of the prediction process, it is therefore necessary to aggregate the results from each piece to obtain the arousal and valence values for the whole audio clip. To do this, we use the median of the values predicted for the 1-second pieces of the given audio: the median of the set of predicted arousal values represents the arousal of that audio, and the same process is applied to obtain the valence value.
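The aggregation step can be sketched as follows, with hypothetical per-chunk predictions; note that the median is robust to a single outlying chunk such as the fourth one below:

```python
from statistics import median

# Hypothetical per-chunk predictions for one clip, one (arousal, valence)
# pair per 1-second chunk:
chunks = [(0.10, -0.30), (0.40, -0.10), (0.20, -0.20), (0.90, 0.50), (0.15, -0.25)]

arousal = median(a for a, _ in chunks)  # clip-level arousal
valence = median(v for _, v in chunks)  # clip-level valence
print(arousal, valence)  # 0.2 -0.2
```

A mean would be pulled toward the outlier chunk, whereas the median keeps the clip-level estimate close to the majority of chunks.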
![Box plot with the CCC of the Arousal and Valence values predicts[]{data-label="boxplot"}](boxplot.jpg)
Figure \[boxplot\] shows the box plot of the predicted arousal and valence values over 10 runs of the model. The baseline presented in the work of Barros et al. [@barros2018omg] achieves a CCC of 0.15 for arousal and 0.21 for valence; as can be seen in Figure \[boxplot\], our model obtains better results.
[1]{} Liu, Mengmeng and Chen, Hui and Li, Yang and Zhang, Fengjun *“Emotional tone-based audio continuous emotion recognition”*. International Conference on Multimedia Modeling. Springer. 2015
Schuller, Bj[ö]{}rn and Batliner, Anton and Steidl, Stefan and Seppi, Dino *“Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge”*. Speech Communication. Elsevier. 2011
Deng, Li and Yu, Dong and others *“Deep learning: methods and applications”*. Foundations and Trends in Signal Processing. Now Publishers, Inc.. 2014
Weninger, Felix and Eyben, Florian and Schuller, Bj[ö]{}rn W and Mortillaro, Marcello and Scherer, Klaus R *“On the acoustics of emotion in audio: what speech, music, and sound have in common”*. Frontiers in psychology. Frontiers Media SA. 2013
Barros, Pablo and Churamani, Nikhil and Lakomkin, Egor and Siqueira, Henrique and Sutherland, Alexander and Wermter, Stefan *“The OMG-Emotion Behavior Dataset”*. arXiv preprint arXiv:1803.05434. 2018
---
abstract: 'Fast feedback control and safety guarantees are essential in modern robotics. We present an approach that achieves both by combining novel robust model predictive control (MPC) with function approximation via (deep) neural networks (NNs). The result is a new approach for complex tasks with nonlinear, uncertain, and constrained dynamics as are common in robotics. Specifically, we leverage recent results in MPC research to propose a new robust setpoint tracking MPC algorithm, which achieves reliable and safe tracking of a dynamic setpoint while guaranteeing stability and constraint satisfaction. The presented robust MPC scheme constitutes a one-layer approach that unifies the often separated planning and control layers, by directly computing the control command based on a reference and possibly obstacle positions. As a separate contribution, we show how the computation time of the MPC can be drastically reduced by approximating the MPC law with a NN controller. The NN is trained and validated from offline samples of the MPC, yielding statistical guarantees, and used in lieu thereof at run time. Our experiments on a state-of-the-art robot manipulator are the first to show that both the proposed robust and approximate MPC schemes scale to real-world robotic systems.'
author:
- 'Julian Nubert$^{1,2}$, Johannes Köhler$^3$, Vincent Berenz$^4$, Frank Allgöwer$^3$, and Sebastian Trimpe$^1$ [^1] [^2][^3][^4][^5]'
bibliography:
- 'icra\_ral\_2020.bib'
title: '**Safe and Fast Tracking Control on a Robot Manipulator: Robust MPC and Neural Network Control**'
---
Introduction
============
The need to handle complexity becomes more prominent in modern control design, especially in robotics. First of all, complexity often stems from tasks or system descriptions that are high-dimensional and nonlinear. Second, not only classic control properties such as nominal stability or step-response characteristics are of interest, but also additional guarantees such as stability under uncertain conditions or satisfaction of hard constraints on inputs and states. In particular, the ability to *robustly* guarantee safety becomes absolutely essential when humans are involved within the process, such as for automated driving or human-robot interaction (HRI). Finally, many robotic systems and tasks require fast acting controllers in the range of milliseconds, which is exacerbated by the need to run algorithms on resource-limited hardware.
Designing controllers for such challenging applications often involves the combination of several different conceptual layers. For example, classical robot manipulator control involves trajectory planning in the task space, solving for the inverse kinematics of a single point (i.e., the setpoint) or multiple points (task space trajectory), and the determination of required control commands in the state space [@Siciliano2008]. These approaches can be affected by corner cases of one of the components; for example, solving for the inverse kinematics may not be trivial for redundant robots. For many complex scenarios, a direct approach is hence desirable for tracking of (potentially unreachable) reference setpoints in task space.
![Apollo robot with two LBR4+ arms (at MPI-IS Tübingen). The end effector tracks the reference encircled in green, while guaranteeing stability and constraint satisfaction at all times (e.g., avoiding obstacles).[]{data-label="fig:robot_experiment"}](./images/ApolloMovementDenoted.png){width="1.0\linewidth"}
In this paper, we propose a single-layer approach for robot tracking control that handles all aforementioned challenges. We achieve this by combining (and extending) recent robust model predictive control (RMPC) and function approximation via supervised learning with (deep) neural networks (NNs). The proposed RMPC can handle nonlinear systems, constraints, and uncertainty. In order to overcome the computational complexity inherent in the online MPC optimization, we present a solution that approximates the RMPC with supervised learning yielding a NN as an explicit control law and a speed improvement by two orders of magnitude. Through experiments on a KUKA LBR4+ robotic manipulator (see Figure \[fig:robot\_experiment\]), we demonstrate – for the first time – the feasibility of both the novel robust MPC and its NN approximation for robot control.
### Related Work {#related-work .unnumbered}
MPC can handle nonlinear constraints and is applicable to nonlinear systems [@rawlings2009model]; however, disturbances or uncertainty can compromise the safety guarantees of nominal MPC schemes. RMPC overcomes this by preserving safety and stability despite disturbances and uncertainty.
Recent advances in computationally efficient RMPC schemes allow for guaranteeing constraint satisfaction despite uncertainty. For instance, tube-based MPC does so by predicting a tube around the nominal (predicted) trajectory that confines the actual (uncertain) system trajectory. A robust constraint tightening scheme for linear systems is presented in [@chisci2001systems]. In [@villanueva2017robust], an approach based on min-max differential inequalities is presented to achieve robustness for the nonlinear case. In this work, we build upon the novel *nonlinear* constraint tightening approach in [@KoehlerCompEff18], which provides slightly more conservative results than the approach in [@villanueva2017robust], but is far more computationally efficient.
We herein extend [@KoehlerCompEff18] to setpoint tracking. Setpoint tracking MPC, as introduced in [@LIMON18] for nonlinear systems, enables the controller to track piece-wise constant output reference signals. A robust version for linear systems is presented in [@limon2010robust]. To obtain a robust version for nonlinear systems, we optimize the size of the terminal set around the artificial steady state online, similarly to what is done in [@kohler19nonlinear] for nominal MPC. None of the aforementioned robust or setpoint tracking MPC approaches has been applied to a real-world, safety-critical system of complexity similar to the robot arm considered herein.
Approximate MPC (AMPC) allows for running high-performance control on relatively cheap hardware by using supervised learning (e.g., NNs) to approximate the implicit solution of the optimization problem. Recently, theoretical approaches for AMPC of linear systems were presented in [@DeepMPCChen18; @zhang2019near; @karg18; @chen2019large], which use projection/active set iterations for feasibility [@DeepMPCChen18; @chen2019large], statistical validation [@karg18], and duality for performance bounds [@zhang2019near]. Herein, we leverage the AMPC approach for *nonlinear* systems recently proposed in [@hertneck18], which yields a NN control law that inherits the MPC’s guarantees (in a statistical sense) through robust design and statistical validation.
MPC for robotic manipulators is investigated, for example, in [@Faulwasser17; @carron2019data]. However, both of these approaches assume a trajectory in the joint space to be given beforehand. In [@PredInvKin19], reference tracking in the task space is achieved by using task scaling to solve the inverse kinematics of a redundant manipulator, taking kinematic limits into account. None of these approaches considers safety guarantees or robustness under uncertainty. Robust MPC schemes are not widely used in robotics (yet), but tube and funnel approaches have recently been explored for robust robot motion planning [@majumdar2017funnel; @singh2017robust; @fridovich2018planning]. However, to the best of our knowledge, no experimental implementation of an MPC design with theoretically guaranteed robustness exists yet for a robotic system.
### Contributions {#contributions .unnumbered}
This paper makes contributions in three main directions: *(i)* robust setpoint tracking MPC, *(ii)* approximate MPC via supervised learning with NNs, and *(iii)* their application to real robotic systems.
*(i)* We present a new RMPC setpoint tracking approach that combines the RMPC [@KoehlerCompEff18] with the MPC setpoint tracking in [@LIMON18] by proposing online optimized terminal ingredients to improve performance subject to safety constraints. The resulting robust approach provides safety guarantees in the face of disturbances and uncertainties while yielding fully integrated robot control in a single layer (i.e., robust motion planning and feedback control). *(ii)* The presented AMPC builds and improves upon the approach in [@hertneck18] by providing a novel, less conservative validation criterion that also considers model mismatch, which is crucial for robot experiments. The proposed AMPC considerably improves performance due to fast NN evaluation, while providing statistical guarantees on safety. *(iii)* Finally, this work comprises the first experimental implementations of both the RMPC based on [@KoehlerCompEff18] and the AMPC originating from [@hertneck18]. To the best of our knowledge, this is the first experimental implementation of nonlinear tracking RMPC with safety properties theoretically guaranteed by design.
Problem Formulation {#sec:general_approach}
===================
We consider disturbed nonlinear continuous-time systems $$\label{equ:system_ct}
\dot{x}_t=f_{\mathrm{c}}(x_t,u_t)+d_{\mathrm{w},\mathrm{c},t},~y_t=o(x_t,u_t),$$ with state $x_t \in \mathbb{R}^n$, control input $u_t \in \mathbb{R}^m$, output $y_t \in \mathbb{R}^q$, nominal dynamics $f_\mathrm{c}$ and model mismatch $d_{\mathrm{w},\mathrm{c},t} \in \mathcal{W}(x_t,u_t)$ with some known compact set $\mathcal{W}$. For the nonlinear state, input and output constraint set $\mathcal{Z}$, we consider $$\label{equ:constraint_formulation}
\mathcal{Z} = \{(x,u) \in \mathbb{R}^{n+m} | \bar{g}_j(x,u,o(x,u)) \leq 0,~j = 1,\dots,p\}. \nonumber$$ In the following, we denote $g_j(x,u):=\bar{g}_j(x,u,o(x,u))$ and omit the time index $t$ when clear from context.
### Objective {#objective .unnumbered}
Given an output reference $y^{\mathrm{d}}_t$, the control goal is to exponentially stabilize the optimal reachable setpoint while ensuring robust constraint satisfaction, i.e., $(x_t,u_t) \in \mathcal{Z}~\forall t\geq 0$. This should hold irrespective of the reference, even for a non-reachable output reference $y^\mathrm{d}$. To meet the requirements of modern robotics, the controller should operate at fast update rates, ideally on the order of milliseconds.
Such control problems are ubiquitous in robotics and other areas and combine the challenges of safe and fast tracking for complex (i.e., nonlinear, uncertain, constrained) systems.
Methods: RMPC Setpoint Tracking & AMPC {#sec:rmpc_design}
======================================
In this section, we introduce the RMPC scheme based on [@KoehlerCompEff18] (Sec. \[sec:Method\_RMPC\]) and extend it to robust output tracking (Sec. \[sec:Method\_Tracking\]). Following this, we show how the online control can be accelerated by moving the optimization offline using AMPC (Sec. \[sec:Method\_AMPC\]) as an extension to the approach in [@hertneck18].
Robust MPC Design {#sec:Method_RMPC}
-----------------
To ensure fast feedback, the piece-wise constant MPC control input $\pi_{\text{MPC}}$ is combined with a continuous-time control law $\kappa(x)$, i.e. the closed-loop input is given by $$\label{equ:controlled_system}
u_t=\pi_{\text{MPC}}\left(x_{t_k},y^{\mathrm{d}}_{t_k}\right) + \kappa(x_t),~\forall t \in [t_k, t_k+h),$$ where $h$ denotes the sampling time of the RMPC, $t_k=k h$ the sampling instant, and $\pi_{\text{MPC}}$ the piece-wise constant MPC control law. Denote $f_{\mathrm{c},\kappa}(x,v)=f_{\mathrm{c}}(x,v+\kappa(x))$, $g_{j,\kappa}(x,v)=g_j(x,v+\kappa(x))$, $o_\kappa(x,v)=o(x,v+\kappa(x))$ and $\mathcal{W}_\kappa(x,v)=\mathcal{W}(x,v+\kappa(x))$.
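To illustrate this sampled-data structure, the following Python sketch (with a toy scalar system and placeholder controllers, not the RMPC designed below) holds the MPC input constant over each sampling interval of length $h$, while the feedback $\kappa$ acts at every integration step:

```python
import numpy as np

def simulate_sampled_data(f, pi_mpc, kappa, x0, y_d, h=0.1, dt=0.001, T=1.0):
    """Simulate u_t = pi_mpc(x_{t_k}, y_d) + kappa(x_t): the MPC input is
    held constant over each sampling interval [t_k, t_k + h), while the
    stabilizing feedback kappa is evaluated at every integration step."""
    x, traj = x0, [x0]
    steps_per_sample = int(round(h / dt))
    n_samples = int(round(T / h))
    for k in range(n_samples):
        v = pi_mpc(x, y_d)              # piecewise-constant MPC input
        for _ in range(steps_per_sample):
            u = v + kappa(x)            # continuous-time feedback on top
            x = x + dt * f(x, u)        # explicit Euler integration
            traj.append(x)
    return np.array(traj)

# Toy example: scalar integrator with a proportional placeholder "MPC".
f = lambda x, u: u
pi_mpc = lambda x, y_d: 0.5 * (y_d - x)   # stand-in for the real RMPC
kappa = lambda x: -0.1 * x
traj = simulate_sampled_data(f, pi_mpc, kappa, x0=0.0, y_d=1.0)
```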
### Incremental Stability
For the design of the RMPC, we assume that the feedback $\kappa$ ensures incremental exponential stability, similar to [@KoehlerCompEff18 Ass. 9].
\[ass:loc\_inc\_stab\] There exists an incremental Lyapunov function $V_{\delta}:\mathbb{R}^n\times\mathbb{R}^n\rightarrow\mathbb{R}_{\geq 0}$ and constants $c_{\delta,\mathrm{l}},c_{\delta,\mathrm{u}},c_j,\rho_{\mathrm{c}} > 0$ s.t. the following properties hold $\forall(z,v+\kappa(z))\in\mathcal{Z}$, $x\in\mathbb{R}^n$:
$$\begin{aligned}
\label{equ:inc_stab_equ_1}
& c_{\delta,\mathrm{l}} ||x-z||^2 \leq V_{\delta}(x,z) \leq c_{\delta,\mathrm{u}} ||x-z||^2, \\
\label{equ:inc_stab_equ_2}
& g_{j,\kappa}(x,v)-g_{j,\kappa}(z,v) \leq c_j \sqrt{V_{\delta}(x,z)}, \\
\label{equ:inc_stab_equ_3}
& \frac{d}{dt}V_{\delta}(x,z) \leq -2\rho_{\mathrm{c}} V_{\delta}(x,z), \end{aligned}$$
with $\dot{x} = f_{\mathrm{c},\kappa}(x,v)$, $\dot{z} = f_{\mathrm{c},\kappa}(z,v)$. Furthermore, the following norm-like inequality holds $\forall x_1,x_2,x_3\in\mathbb{R}^n$: $$\label{equ:norm_like_cond}
\sqrt{V_{\delta}(x_1,x_2)}+\sqrt{V_{\delta}(x_2,x_3)}\geq \sqrt{V_{\delta}(x_1,x_3)}.$$
The first and third conditions formulate exponential stability, while the second is fulfilled for locally Lipschitz continuous $g_{j,\kappa}$. Incremental stability is a rather general condition, among others allowing for the usage of standard polytopic and ellipsoidal Lyapunov functions $V_{\delta}$ (i.e., $V_{\delta}(x,z)=\|x-z\|_{P_\delta}^2$), which satisfy the norm-like condition due to the triangle inequality. Compare [@KoehlerCompEff18 Remark 1] for a general discussion.
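As a quick numerical sanity check (not part of the design procedure), one can verify that a quadratic choice $V_{\delta}(x,z)=\|x-z\|_{P_\delta}^2$ with positive definite $P_\delta$ satisfies the norm-like condition, since $\sqrt{V_{\delta}}$ is then the norm induced by $P_\delta$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic incremental Lyapunov function V_delta(x, z) = (x-z)^T P (x-z)
A = rng.standard_normal((3, 3))
P = A.T @ A + 0.1 * np.eye(3)          # random positive definite weight

def sqrt_V(x, z):
    d = x - z
    return np.sqrt(d @ P @ d)          # the induced norm ||x - z||_P

# The norm-like condition is exactly the triangle inequality for ||.||_P:
for _ in range(1000):
    x1, x2, x3 = rng.standard_normal((3, 3))
    assert sqrt_V(x1, x2) + sqrt_V(x2, x3) >= sqrt_V(x1, x3) - 1e-12
```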
### Tube
In this work, we use $V_\delta$ to characterize the tube around the nominal trajectory, which evolves according to the nominal system dynamics $\dot{z} = f_{\mathrm{c},\kappa}(z,v)$. The predicted tube is parameterized by $\mathbb{X}_{\tau|t}=\{x|~V_{\delta}(x,z_{\tau|t})\leq s_{\tau|t}^2\}$, where $z_{\tau|t}$ denotes the nominal prediction and the tube size $s_{\tau|t}\geq 0$ is a scalar. For the construction of the tube, and hence for the design of the RMPC controller, we use a characterization of the magnitude of the occurring uncertainties.
### Disturbance Description
To characterize the magnitude of the uncertainties arising from the model mismatch $d_{\mathrm{w},\mathrm{c},t} \in \mathcal{W}(x_t,u_t)$, we need a (possibly constant) function $\overline{w}_{\mathrm{c}}$. Given $\mathcal{W}(x,u)$, it is possible to construct $\overline{w}_{\mathrm{c}}$ satisfying $$\begin{aligned}
\label{eq:V_delta_w_diff}
& \frac{d}{dt}\sqrt{V_\delta(x,z)} + \rho_{\mathrm{c}}\sqrt{V_\delta(x,z)} \leq \bar{w}_{\mathrm{c}}(z,v,\sqrt{V_{\delta}(z,v)}),\\
& \dot{x}=f_{\mathrm{c},\kappa}(x,v)+d_{\mathrm{w},\mathrm{c}},~\dot{z}=f_{c,\kappa}(z,v),
\forall~ d_{\mathrm{w},\mathrm{c}}\in\mathcal{W}_\kappa(x,v).\nonumber\end{aligned}$$ The state and input dependency of $\bar{w}_\mathrm{c}$ can, e.g., represent larger uncertainty during highly dynamic operation due to parametric uncertainty. For simplicity, we only consider a positive constant $\overline{w}_{\mathrm{c}}>0$ in the following; for details regarding the general case, see [@KoehlerCompEff18].
### Tube Dynamics and Design Quantities {#sec:design_quantities}
By using inequality \[eq:V\_delta\_w\_diff\], the tube propagation is given by $\dot{s}_t = -\rho_{\mathrm{c}} s_t+\overline{w}_{\mathrm{c}}$, yielding $s_t=\frac{\overline{w}_{\mathrm{c}}}{\rho_{\mathrm{c}}}(1-e^{-\rho_{\mathrm{c}} t})$. To allow for an efficient online optimization, we consider the discrete-time system $x^+=f_{\mathrm{d},\mathrm{w},\kappa}(x,v)=f_{\mathrm{d},\kappa}(x,v)+d_{\mathrm{w},\mathrm{d}}$, where $f_{\mathrm{d},\kappa}$ is the discretization of $f_{\mathrm{c},\kappa}$ with sampling time $h$ and $d_{\mathrm{w},\mathrm{d}}\in\mathcal{W}_d(x,v)$ denotes the discrete-time model mismatch. Given the sampling time $h$, the corresponding discrete-time tube size is given by $s_{k\cdot h}=\frac{1-\rho_{\mathrm{d}}^k}{1-\rho_{\mathrm{d}}}\overline{w}_\mathrm{d}$ with $\rho_\mathrm{d}=e^{-\rho_{\mathrm{c}} h}$, $\overline{w}_{\mathrm{d}}=s_h$. The discrete-time model mismatch satisfies $\sqrt{V_{\delta}(f_{\mathrm{d},\kappa}(x,v)+d_{\mathrm{w},\mathrm{d}},f_{\mathrm{d},\kappa}(x,v))}\leq \overline{w}_{\mathrm{d}}$, $\forall d_{\mathrm{w},\mathrm{d}}\in\mathcal{W}_{\mathrm{d}}(x,v)$. The contraction rate $\rho_\mathrm{d}$ determines how quickly the tube grows, while $s_{k \cdot h}$ denotes the size of the tube around the nominal trajectory, which bounds the uncertainties.
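The discrete-time tube size can be checked numerically; the sketch below (with hypothetical values for $\rho_{\mathrm{c}}$, $h$ and $\overline{w}_{\mathrm{c}}$) confirms that iterating $s^+ = \rho_{\mathrm{d}} s + \overline{w}_{\mathrm{d}}$ from $s_0=0$ reproduces the closed-form expression $s_{k\cdot h}$:

```python
import numpy as np

rho_c, h, w_bar_c = 2.0, 0.05, 0.3          # hypothetical design constants
rho_d = np.exp(-rho_c * h)                  # discrete-time contraction rate
# continuous-time tube size after one sampling interval gives w_bar_d = s_h:
w_bar_d = w_bar_c / rho_c * (1 - np.exp(-rho_c * h))

# Discrete recursion s^+ = rho_d * s + w_bar_d, starting from s_0 = 0 ...
s = 0.0
sizes = []
for k in range(20):
    sizes.append(s)
    s = rho_d * s + w_bar_d

# ... matches the closed form s_{k h} = (1 - rho_d^k) / (1 - rho_d) * w_bar_d.
closed_form = [(1 - rho_d**k) / (1 - rho_d) * w_bar_d for k in range(20)]
# The tube size converges to w_bar_d / (1 - rho_d) as k grows.
```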
Robust Setpoint Tracking {#sec:Method_Tracking}
------------------------
A standard MPC design (cf. [@rawlings2009model]) minimizes the squared distance $\|x(k|t)-x^{\mathrm{d}}\|_Q^2 + \|u(k|t)-u^{\mathrm{d}}\|_R^2$ to some desired setpoint $(x^\mathrm{d},u^\mathrm{d})$, which requires a feasible target reference in the state and input space. For the considered problem of (robust) setpoint tracking of the output $y$ (the end effector position in Sec. \[sec:experiment\]), this would require a (usually unknown) mapping of the form $x^\mathrm{d} = m_x(y^\mathrm{d}),~u^\mathrm{d} = m_u(y^\mathrm{d})$.
In our specific use case of controlling a robotic manipulator, $m_x$ corresponds to the inverse kinematics. For MPC-based robot control such mappings are used in [@Faulwasser17; @carron2019data], which we particularly avoid within our work.
The proposed approach is a combination of [@KoehlerCompEff18] and [@LIMON18] and can hence be seen as an extension of [@kohler19nonlinear] to the robust case. The following optimization problem characterizes the proposed RMPC scheme for setpoint tracking and avoids the need to provide $m_x,m_u$:
\[equ:rmpc\_opt\_problem\] $$\begin{aligned}
{2}
& V_N(x_{t},y^{\mathrm{d}}_{t}) & & = \!\min_{v(\cdot|t),x^{\mathrm{s}},v^{\mathrm{s}},\alpha} J_N(x_{t},y^{\mathrm{d}}_{t};v(\cdot|t),y^{\mathrm{s}},x^{\mathrm{s}},v^{\mathrm{s}}) \nonumber \\
& \text{subject to} & & x(0|t) = x_t, \nonumber \\
\label{equ:dyn_pred}
& & & x(k+1|t) = f_{\mathrm{d},\kappa}(x(k|t), v(k|t)), \\
\label{equ:tightened_const}
& & & g_{j,\kappa}(x(k|t),v(k|t)) + c_j \frac{1-\rho_\mathrm{d}^k}{1-\rho_\mathrm{d}}\overline{w}_\mathrm{d} \leq 0, \\
\label{equ:new_constraints}
& & & x^s = f_{\mathrm{d},\kappa}(x^\mathrm{s},v^\mathrm{s}), \quad y^\mathrm{s} = o_\kappa(x^\mathrm{s},v^\mathrm{s}), \\
\label{equ:constr_term_set_1}
& & & \frac{\overline{w}_\mathrm{d}}{1-\rho_{\mathrm{d}}} \leq \alpha \leq -\frac{g_{j,\kappa}(x^\mathrm{s},v^\mathrm{s})}{c_j}, \\
\label{equ:constr_term_set_2}
& & & x(N|t) \in \mathcal{X}_{\mathrm{f}}(x^\mathrm{s},\alpha), \\
& & & k = 0,...,N-1, \quad j = 1,...,p, \nonumber\end{aligned}$$
with the objective function $$\label{equ:objective_func}
\!\begin{aligned}
& J_N(x_{t},y^\mathrm{d}_{t};v(\cdot|t),y^\mathrm{s},x^\mathrm{s},v^\mathrm{s}) \\
= & \sum_{k=0}^{N-1} \left(||x(k|t)-x^{\mathrm{s}}||_Q^2 + ||v(k|t)-v^{\mathrm{s}}||_R^2\right) \\
& + V_{\mathrm{f}}(x(N|t),x^{\mathrm{s}}) + ||y^{\mathrm{s}}-y^{\mathrm{d}}_t||_{Q_{\mathrm{o}}}^2,
\end{aligned}$$ $Q,R,Q_{\mathrm{o}} \succ 0$. The terminal set is given as $$\label{equ:constr_term_set_3}
\mathcal{X}_{\mathrm{f}}(x^{\mathrm{s}},\alpha) := \{x \in \mathbb{R}^n | \sqrt{V_\delta(x,x^{\mathrm{s}})} + \frac{1-\rho_{\mathrm{d}}^N}{1-\rho_{\mathrm{d}}}\overline{w}_{\mathrm{d}} \leq \alpha \}.$$ The optimization problem is solved at time $t$ with the initial state $x_t$. The optimal input sequence is denoted by $v^*(\cdot|t)$ and the resulting control law by $\pi_{\text{MPC}}(x_t,y_t^{\mathrm{d}})=v^*(0|t)$. The predictions along the horizon $N$ are made w.r.t. the nominal system description in \[equ:dyn\_pred\], and the constraints in \[equ:tightened\_const\] are tightened with the tube size $s_{k \cdot h}$. In the following, we explain in more detail the considered objective function $J_N$ in \[equ:objective\_func\], the conditions on the terminal set $\mathcal{X}_{\mathrm{f}}$ in \[equ:constr\_term\_set\_1\] and \[equ:constr\_term\_set\_2\], and the setpoint tracking constraints in \[equ:new\_constraints\].
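For intuition, the tightening margin added to a constraint $g_j$ in \[equ:tightened\_const\] grows with the tube size along the prediction horizon. A small numerical sketch with hypothetical constants:

```python
import numpy as np

# Tightening of a constraint g_j <= 0 over the horizon: at prediction step k
# the nominal constraint is tightened by c_j * (1 - rho_d^k)/(1 - rho_d) * w_bar_d,
# i.e. by c_j times the tube size at step k (hypothetical numbers below).
rho_d, w_bar_d, c_j, N = 0.8, 0.05, 1.5, 10
margins = np.array([c_j * (1 - rho_d**k) / (1 - rho_d) * w_bar_d
                    for k in range(N)])
# margins[0] == 0: no tightening on the current (measured) state, and the
# margin grows monotonically with k, approaching c_j * w_bar_d / (1 - rho_d).
```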
### Objective Function
To track the external output reference $y^{\mathrm{d}}$, we use the setpoint tracking formulation introduced by Limon et al. [@LIMON18]. Additional decision variables $(x^{\mathrm{s}},v^{\mathrm{s}})$ are used to define an artificial steady-state via \[equ:new\_constraints\]. The first part of the objective function $J_N$ ensures that the MPC steers the system to the artificial steady-state, while the term $\|y^{\mathrm{s}}-y^{\mathrm{d}}_t\|_{Q_{\mathrm{o}}}^2$ ensures that the output $y^{\mathrm{s}}$ at the artificial steady-state $(x^{\mathrm{s}},v^{\mathrm{s}})$ tracks the desired output $y^{\mathrm{d}}$. In Theorem \[thm:main\], we prove exponential stability of the optimal (safely reachable) steady-state, as an extension of [@LIMON18; @kohler19nonlinear] to the robust setting.
### New Terminal Ingredients
The main approach in MPC design for ensuring stability and recursive feasibility is to introduce terminal ingredients, i.e., a terminal cost $V_{\mathrm{f}}$ and a terminal set $\mathcal{X}_{\mathrm{f}}$. Determining the setpoint $(x^{\mathrm{s}},v^{\mathrm{s}})$ online, as well as the presence of disturbances, further complicates their design.
The proposed approach determines the terminal set size $\alpha$ online, using one additional scalar variable similar to [@kohler19nonlinear], which is less conservative than the design in [@LIMON18]. Furthermore, by parametrizing the terminal set $\mathcal{X}_{\mathrm{f}}$ with the incremental Lyapunov function $V_{\delta}$, we can derive intuitive formulas that ensure robust recursive feasibility in terms of the lower and upper bounds on $\alpha$ in \[equ:constr\_term\_set\_1\]. As a result, we improve and extend [@LIMON18; @kohler19nonlinear] to the case of nonlinear robust setpoint tracking. The properties of the terminal ingredients are summarized in the following proposition.
\[thm:set\] The constraints \[equ:new\_constraints\], \[equ:constr\_term\_set\_1\] and \[equ:constr\_term\_set\_2\], together with \[equ:constr\_term\_set\_3\] and the terminal controller $k_{\mathrm{f}} = v^{\mathrm{s}}$, provide a terminal set that ensures the following properties needed for robust recursive feasibility (cf. [@KoehlerCompEff18 Ass. 7]).
- The terminal set constraint $x(N|t)\in\mathcal{X}_{\mathrm{f}}(x^{\mathrm{s}},\alpha)$ is robust recursively feasible for fixed values $x^{\mathrm{s}},v^{\mathrm{s}},\alpha$.
- The tightened state and input constraints are satisfied within the terminal region.
The candidate state $\tilde{x}^+$, which satisfies $\sqrt{V_{\delta}(\tilde{x}^+,x^+)}\leq \rho_{\mathrm{d}}^N\overline{w}_{\mathrm{d}}$ (cf. [@KoehlerCompEff18 Ass. 7]), satisfies the terminal constraint by using $$\begin{aligned}
&\sqrt{V_{\delta}(\tilde{x}^+,x^{\mathrm{s}})}\stackrel{\eqref{equ:inc_stab_equ_3},\eqref{eq:V_delta_w_diff}}{\leq} \rho_{\mathrm{d}}\sqrt{V_{\delta}(x,x^{\mathrm{s}})}+\rho_{\mathrm{d}}^N\overline{w}_{\mathrm{d}}\\
\stackrel{\eqref{equ:constr_term_set_3}}{\leq}& \rho_{\mathrm{d}}\alpha-\rho_{\mathrm{d}}\frac{1-\rho_{\mathrm{d}}^N}{1-\rho_{\mathrm{d}}}\overline{w}_{\mathrm{d}}+\rho_{\mathrm{d}}^N\overline{w}_{\mathrm{d}}\stackrel{\eqref{equ:constr_term_set_1}}{\leq} \alpha-\frac{1-\rho_{\mathrm{d}}^N}{1-\rho_{\mathrm{d}}}\overline{w}_{\mathrm{d}}.\end{aligned}$$ Satisfaction of the tightened constraints inside the terminal set follows with $$\begin{aligned}
&g_{j,\kappa}(x,v^{\mathrm{s}})+c_j\frac{1-\rho_{\mathrm{d}}^N}{1-\rho_{\mathrm{d}}}\overline{w}_{\mathrm{d}}\\
\stackrel{\eqref{equ:inc_stab_equ_2}}{\leq}& g_{j,\kappa}(x^{\mathrm{s}},v^{\mathrm{s}})+c_j(\sqrt{V_{\delta}(x,x^{\mathrm{s}})}+\frac{1-\rho_{\mathrm{d}}^N}{1-\rho_{\mathrm{d}}}\overline{w}_{\mathrm{d}})\stackrel{\eqref{equ:constr_term_set_1},\eqref{equ:constr_term_set_3}}{\leq}0.\qedhere\end{aligned}$$
In addition to the presented terminal set, we consider some Lipschitz continuous terminal cost $V_{\mathrm{f}}$, which satisfies the following conditions in the terminal set with some $c>0$
\[equ:terminal\_cost\] $$\begin{aligned}
\label{equ:terminal_cost_1}
& V_{\mathrm{f}}(f_{\mathrm{d},\kappa}(x,v^{\mathrm{s}}),x^{\mathrm{s}})-V_{\mathrm{f}}(x,x^{\mathrm{s}})\leq -\|x-x^{\mathrm{s}}\|_Q^2, \\
\label{equ:terminal_cost_2}
& V_{\mathrm{f}}(x,x^{\mathrm{s}})\leq c\|x-x^{\mathrm{s}}\|^2.\end{aligned}$$
For the computation of the terminal cost for nonlinear systems with varying setpoints, we refer to [@LIMON18; @kohler19nonlinear].
### Offline/Online Computations
The procedure for performing the offline calculations can be found in Algorithm \[alg:rmpc\_off\_calc\]. One approach to compute suitable functions $V_{\delta},\kappa,V_{\mathrm{f}}$ using a quasi-LPV parametrization and linear matrix inequalities (LMIs) is described in [@KoehlerQINF18]. The subsequent online calculations can then be performed according to Algorithm \[alg:rmpc\_on\_calc\].
1. Determine a stabilizing feedback $\kappa$ and a corresponding incremental Lyapunov function $V_{\delta}$ (Ass. \[ass:loc\_inc\_stab\]).
2. Compute a constant $\overline{w}_{\mathrm{c}}$ satisfying \[eq:V\_delta\_w\_diff\].
3. Compute constants $c_j$ satisfying \[equ:inc\_stab\_equ\_2\].
4. Define the sampling time $h$ and compute $\rho_{\mathrm{d}}$, $\overline{w}_{\mathrm{d}}$ as described in Section \[sec:design\_quantities\].
5. Determine a terminal cost $V_{\mathrm{f}}(x,x^{\mathrm{s}})$ satisfying \[equ:terminal\_cost\].
1. Solve the MPC problem \[equ:rmpc\_opt\_problem\].
2. Apply the input $u_t=\pi_{\text{MPC}}(x_{t_k},y_{t_k}^{\mathrm{d}})+\kappa(x_t),~t \in [t_k,t_k+h)$.
### Closed-Loop Properties
In the following, we derive the closed-loop properties of the proposed scheme. The set of safely reachable steady-state outputs is given by $\mathbb{Y}_{\mathrm{s}}:=\{y^{\mathrm{s}} \in \mathbb{R}^q |~g_{j}(m_x(y^{\mathrm{s}}),m_u(y^{\mathrm{s}}))+c_j\overline{w}_{\mathrm{d}}/(1-\rho_{\mathrm{d}})\leq 0,~j=1,\dots,p\}$. The optimal (safely reachable) setpoint $y_{\mathrm{opt}}^{\mathrm{s}}$ is the minimizer of the steady-state optimization problem $\min_{y^{\mathrm{s}}\in\mathbb{Y}_{\mathrm{s}}}\|y^{\mathrm{s}}-y^{\mathrm{d}}\|_{Q_{\mathrm{o}}}^2$.
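As a simple illustration of this steady-state optimization (a hypothetical special case, not the robot setting): if $\mathbb{Y}_{\mathrm{s}}$ were a box and $Q_{\mathrm{o}}=I$, the optimal safely reachable setpoint would simply be the projection of $y^{\mathrm{d}}$ onto the box:

```python
import numpy as np

# For a box-shaped set of safely reachable outputs and Q_o = I, the optimal
# setpoint y_opt is the projection (clipping) of y_d onto the box.
lo, hi = np.array([-1.0, 0.0]), np.array([1.0, 2.0])   # hypothetical Y_s

def optimal_setpoint(y_d):
    return np.clip(y_d, lo, hi)

y_d = np.array([3.0, -1.0])              # unreachable desired output
y_opt = optimal_setpoint(y_d)            # closest safely reachable output
```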
The following technical condition is necessary to ensure convergence to the optimal steady-state, compare [@LIMON18], [@kohler19nonlinear].
\[ass:Limon\] There exist (typically unknown) unique functions $m_x,m_u$, that are Lipschitz continuous. Furthermore, the set of safe output references $\mathbb{Y}_{\mathrm{s}}$ is convex.
Consequently, safe operation as well as stability and convergence are guaranteed due to the following theorem.
\[thm:main\] Let Assumption \[ass:loc\_inc\_stab\] hold and suppose that the optimization problem is feasible at $t=0$. Then the optimization problem is recursively feasible and the posed constraints $\mathcal{Z}$ are satisfied for the resulting closed loop (Algorithm \[alg:rmpc\_on\_calc\]), i.e., the system operates safely. Suppose further that Assumption \[ass:Limon\] holds and $y^{\mathrm{d}}$ is constant. Then the optimal (safely reachable) setpoint $x^{\mathrm{s}}_{\mathrm{opt}}$ is practically exponentially stable for the closed-loop system and the output $y$ practically exponentially converges to $y_{\mathrm{opt}}^{\mathrm{s}}$.
The safety properties of the proposed scheme are due to the RMPC theory in [@KoehlerCompEff18], using the known contraction rate $\rho_{\mathrm{d}}$ and the constant $\overline{w}_{\mathrm{d}}$ (bounding the uncertainty) to compute a safe constraint tightening in \[equ:tightened\_const\]. Proposition \[thm:set\] ensures that the novel design of the terminal ingredients using \[equ:new\_constraints\], \[equ:constr\_term\_set\_1\] and \[equ:constr\_term\_set\_2\] also satisfies the conditions in [@KoehlerCompEff18 Ass. 7] for fixed values $x^{\mathrm{s}},v^{\mathrm{s}},\alpha$. The stability/convergence properties of the considered formulation are based on the non-empty terminal set ($\alpha>0$) with corresponding terminal cost and convexity of $\mathbb{Y}_{\mathrm{s}}$ (Ass. \[ass:Limon\]), which allow for an incremental change in $y^{\mathrm{s}}$ towards the desired output $y^{\mathrm{d}}$, compare [@LIMON18; @kohler19nonlinear] for details. Thus, the Lyapunov arguments in [@kohler19nonlinear] remain valid with a quadratically bounded Lyapunov function $V_t$ satisfying $V_{t+1}-V_t\leq -\gamma\|x_t-x_{\mathrm{opt}}^{\mathrm{s}}\|^2+\alpha_{\mathrm{w}}(\overline{w}_{\mathrm{d}})$ with a positive definite function $\alpha_{\mathrm{w}}$ from [@KoehlerCompEff18] bounding the effect of the model mismatch. This implies practical exponential stability of $x_{\mathrm{opt}}^{\mathrm{s}}$, and thus the output $y$ (practically) converges to a neighborhood of the optimal setpoint $y^{\mathrm{s}}_{\mathrm{opt}}$.
*Practical* stability implies that the system only converges to a neighborhood (with size depending on the model mismatch $\overline{w}_{\mathrm{d}}$) around the optimal setpoint $x^{\mathrm{s}}_{\mathrm{opt}}$.
\[rem:non\_conv\] Convexity of $\mathbb{Y}_s$ and uniqueness of the functions $m_x$, $m_u$ (Ass. \[ass:Limon\]) are strong assumptions for general nonlinear problems. In particular, for the considered redundant 7-DOF robotic manipulator (Sec. \[sec:experiment\]), the functions $m_x,m_u$ are not unique (potentially multiple optimal steady-states) and the feasible steady-state manifold $\mathbb{Y}_{\mathrm{s}}$ is not convex (collision avoidance constraint). Nevertheless, the safety properties are not affected by Assumption \[ass:Limon\], and in the experimental implementation, the RMPC typically converges to some (not necessarily unique) steady-state.
Approximate MPC {#sec:Method_AMPC}
---------------
In the following, we introduce the AMPC, which provides an explicit approximation $\pi_{\text{approx}}$ of the RMPC control law $\pi_{\text{MPC}}$, yielding a significant decrease in computational complexity. As demonstrated in the numerical study in [@chen2019large Sec. 9.4], approximate MPC without additional modifications will in general not satisfy the constraints. Consequently, the core idea of the presented AMPC approach is to compensate for inaccuracies of the approximation by introducing additional robustness within the RMPC design. In the following, we present a solution to obtain statistical guarantees (Sec. \[sec:statistical\_guarantees\]) for the application of the resulting AMPC. To that end, we introduce a validation criterion (Prop. \[prop:ampc\_stab\], Sec. \[sec:val\_criterion\]) that improves on the one in [@hertneck18] and is more suitable for real-world applications.
### Validation Criterion {#sec:val_criterion}
The following proposition provides a sufficient condition for AMPC safety guarantees.
\[prop:ampc\_stab\] Let Assumption \[ass:loc\_inc\_stab\] hold. Suppose the model mismatch between the real and the nominal system satisfies $$\begin{aligned}
\label{equ:ampc_stab_equ1}
& \sqrt{V_\delta(f_{\mathrm{d},\kappa}(x,v),f_{\mathrm{d},\kappa}(x,v)+d_{\mathrm{w},\mathrm{d}})} \leq \bar{w}_{\mathrm{d},\mathrm{model}}, \end{aligned}$$ $ \forall (x,v+\kappa(x))\in\mathcal{Z}$, $\forall d_{w,d}\in\mathcal{W}_{\mathrm{d}}(x,v)$. If $\pi_{\text{MPC}}$ is designed with some $\overline{w}_{\mathrm{d}}$ and the approximation $\pi_{\text{approx}}$ satisfies $$\!\begin{aligned}
\label{equ:ass_within_prop_2}
& \sqrt{V_\delta(f_{\mathrm{d},\kappa}(x,\pi_{\text{approx}}(x)), f_{\mathrm{d},\kappa}(x,\pi_{\text{MPC}}(x)))} \\
& \quad \leq \overline{w}_{\mathrm{d},\text{approx}}:=\overline{w}_{\mathrm{d}}-\overline{w}_{\mathrm{d},\mathrm{model}},
\end{aligned}$$ for any state $x$ for which \[equ:rmpc\_opt\_problem\] is feasible, then the AMPC ensures the same properties as the RMPC in Theorem \[thm:main\].
We use the following bound on the perturbed AMPC: $$\begin{aligned}
& \sqrt{V_\delta(f_{\mathrm{d},\kappa}(x,\pi_{\text{approx}})+d_{\mathrm{w},\mathrm{d}}, f_{\mathrm{d},\kappa}(x,\pi_{\text{MPC}}))}\\
& \stackrel{\text{\eqref{equ:norm_like_cond}}}{\leq} \sqrt{V_\delta(f_{\mathrm{d},\kappa}(x,\pi_{\text{approx}})+d_{\mathrm{w},\mathrm{d}}, f_{\mathrm{d},\kappa}(x,\pi_{\text{approx}}))} \\
& + \sqrt{V_\delta(f_{\mathrm{d},\kappa}(x,\pi_{\text{approx}}), f_{\mathrm{d},\kappa}(x,\pi_{\text{MPC}}))} \\
& \stackrel{\eqref{equ:ampc_stab_equ1},\eqref{equ:ass_within_prop_2}}{\leq} \bar{w}_{\mathrm{d},\mathrm{model}} + \bar{w}_{\mathrm{d},\text{approx}} \stackrel{\eqref{equ:ass_within_prop_2}}{=} \bar{w}_{\mathrm{d}}.\end{aligned}$$ Then, the properties follow from Theorem \[thm:main\].
### Statistical Guarantees {#sec:statistical_guarantees}
In practice, guaranteeing a specified error $\overline{w}_{\mathrm{d},\text{approx}}$ for all possible values $(x,y^{\mathrm{d}})$ with a supervised learning approach is difficult, especially for deep NNs. However, it is possible to make statistical statements about $\pi_{\text{approx}}$ using Hoeffding’s inequality [@hoeffding63]. For the statistical guarantees, we adopt the approach from [@hertneck18] and use our improved validation criterion as introduced in Proposition \[prop:ampc\_stab\].
\[ass:deterministic\] The prestabilized, disturbed system dynamics $f_{\mathrm{d},\mathrm{w},\kappa}$ characterize a deterministic (possibly unknown) map.
We validate full trajectories under the AMPC with independent and identically distributed (i.i.d.) initial conditions and setpoints. Due to Assumption \[ass:deterministic\], the trajectories themselves are then also i.i.d. Specifically, we define a trajectory as $$\label{equ:X_i}
\!\begin{aligned}
& X_i := \{ x(k), k \in \mathbb{N}: x_i(0)~\text{feasible at}~t=0, \\
& \qquad x(k+1) = f_{\mathrm{d},\mathrm{w},\kappa}(x(k),\pi_{\text{approx}}(x(k),y_i^d)) \}.
\end{aligned}$$ Further, we consider the indicator function based on \[equ:ass\_within\_prop\_2\]: $$\nonumber
\label{equ:hertneck_indic_func}
I(X_i)=
\begin{cases}
1,& \text{if } \sqrt{V_\delta(f_{\mathrm{d},\kappa}(x,\pi_{\text{approx}}), f_{\mathrm{d},\kappa}(x,\pi_{\text{MPC}}))} \\
& \quad \leq \overline{w}_{\mathrm{d},\text{approx}},~\forall x \in X_i \\
0, & \text{otherwise}.
\end{cases}$$ The indicator measures whether, at any time step along the trajectory, there is a discrepancy larger than $\overline{w}_{\mathrm{d},\text{approx}}$ between the ideal trajectory with $\pi_{\text{MPC}}$ and the trajectory with the approximated input $\pi_{\text{approx}}$. The empirical risk is given as $\tilde{\mu} = \frac{1}{b} \sum_{j=1}^{b} I(X_j)$ for $b$ sampled trajectories, while $\mu$ denotes the true expected value of the random variable. With Hoeffding’s inequality, the following lemma can be derived.
\[lem:stat\_val\] [@hertneck18 Lemma 1] Suppose Assumption \[ass:deterministic\] holds. Then the condition $\mathbb{P}\left[I(X_i) = 1 \right] \geq \mu_{\mathrm{crit}} := \tilde{\mu} - \sqrt{-\ln(\delta_h/2)/(2b)}$, holds at least with confidence $1-\delta_h$.
In practice, it is not possible to check trajectories $X_i$ of infinite length. Since in our definition the reference $y_i^{\mathrm{d}}$ is fixed along the whole trajectory, we perform the validation until the system settles to a steady state within a certain threshold.
We provide the following illustration: given a large enough number of successfully validated trajectories, we obtain a high empirical risk, e.g., $\tilde{\mu} \approx 99\%$. This result ensures, with confidence of, e.g., $(1-\delta_h)\approx 99\%$, that \[equ:ass\_within\_prop\_2\] holds at least with probability $\mu_{\mathrm{crit}}$ (e.g., $\mu_{\mathrm{crit}} \approx 98\%$) for a new trajectory with initial condition $(x,y^{\mathrm{d}})$. Thus, with high probability, the guarantees in Proposition \[prop:ampc\_stab\] (safety and stability) hold.
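The bound in Lemma \[lem:stat\_val\] is straightforward to evaluate; the following sketch computes $\mu_{\mathrm{crit}}$ for hypothetical validation numbers:

```python
import numpy as np

def mu_crit(mu_tilde, b, delta_h):
    """Lower bound on P[I(X_i) = 1] that holds with confidence 1 - delta_h,
    obtained from Hoeffding's inequality: mu_crit = mu_tilde - sqrt(-ln(delta_h/2)/(2b))."""
    return mu_tilde - np.sqrt(-np.log(delta_h / 2.0) / (2.0 * b))

# Example: 10,000 validated trajectories, 99.5% of which pass the criterion,
# evaluated at confidence level 1 - delta_h = 99% (hypothetical numbers).
bound = mu_crit(mu_tilde=0.995, b=10_000, delta_h=0.01)
```

Note that the gap between $\tilde{\mu}$ and $\mu_{\mathrm{crit}}$ shrinks with the number of validated trajectories $b$, so tighter statistical guarantees simply require more offline validation runs.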
### Algorithm
The overall procedure for the AMPC is summarized in Algorithm \[alg:ampc\], following Hertneck et al. [@hertneck18].
1. Choose $\overline{w}_{\mathrm{d}}$, determine $\overline{w}_{\mathrm{d},\mathrm{model}}$, and calculate $\overline{w}_{\mathrm{d},\text{approx}}$.
2. Design the RMPC according to Algorithm \[alg:rmpc\_off\_calc\].
3. Learn $\pi_{\text{approx}} \approx \pi_{\text{MPC}}$.
4. Validate $\pi_{\text{approx}}$ according to Lemma \[lem:stat\_val\]; if the validation fails, repeat the learning from step 3.
Robot Experiments {#sec:experiment}
=================
We demonstrate the proposed RMPC and AMPC approaches on a KUKA LBR4+ robotic manipulator (Fig. \[fig:robot\_experiment\]).
Robotic System
--------------
Several works have investigated the dynamics formulation of the KUKA LBR4+ and LBR iiwa robotic manipulators [@Jubien14; @Sturz17], with dynamic equations of the form $M(q)\ddot{q}+b(q,\dot{q}) = \tau$. Here, $\tau$ denotes the applied torques and $q,\dot{q},\ddot{q}$ the joint angles, velocities and accelerations [@Siciliano2008].
### System Formulation
In this work, we leverage existing low-level controllers as an inverse-dynamics inner-loop feedback linearization, ending up with a kinematic model that assumes direct control of the joint accelerations, i.e., $\ddot{q}_t = u_t$. Such a description is not uncommon for designing higher-level controllers in robotics; compare, e.g., the MPC scheme in [@carron2019data] based on a kinematic model. As the control objective, we aim to track a given reference $y^{\mathrm{d}}$ in the task space with the manipulator end effector position, defined as $y$. Since this position only depends on the first four joints, we consider those for our control design. The resulting nonlinear system with state $x_t= [ q_t,\dot{q}_t ]^\top$ is given by $$\begin{aligned}
\dot{x}_t=[\dot{q}_t^\top ,u_t^\top]^\top,~y_t=o(x_t,u_t),~x_t\in\mathbb{R}^8,~u_t\in\mathbb{R}^4.\end{aligned}$$ The output $y=o(x,u)$ is given by the forward kinematics of the robot: $$\label{equ:apollo_endeff_pos_2}
\begin{gathered}
y = o(x,u) = \\
\begin{pmatrix}
\scriptstyle -C_1c_{q_1}s_{q_2} - C_2s_{q_4}(s_{q_1}s_{q_3} -c_{q_1}c_{q_2}c_{q_3}) - C_2c_{q_1}c_{q_4}s_{q_2} \\
\scriptstyle C_2s_{q_4}(c_{q_1}s_{q_3}+c_{q_2}c_{q_3}s_{q_1}) -C_1s_{q_1}s_{q_2} - C_2c_{q_4}s_{q_1}s_{q_2} \\
\scriptstyle C_1c_{q_2} + C_2c_{q_2}c_{q_4} +C_2c_{q_3}s_{q_2}s_{q_4} + C_3
\end{pmatrix},
\end{gathered}$$ where $s_{q_i}$ and $c_{q_i}$ denote the sine and cosine of $q_i$, respectively, and $C_1=0.4, C_2=0.578, C_3=0.31$.
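As an illustration, the forward kinematics above can be transcribed directly into code; this is a sketch using the stated constants, not the implementation used on the robot:

```python
from math import sin, cos

C1, C2, C3 = 0.4, 0.578, 0.31  # link constants from the kinematics equation

def end_effector_position(q):
    """Forward kinematics y = o(x, u) for the first four joints,
    transcribed term by term from the displayed equation."""
    q1, q2, q3, q4 = q
    s, c = sin, cos
    y1 = (-C1 * c(q1) * s(q2)
          - C2 * s(q4) * (s(q1) * s(q3) - c(q1) * c(q2) * c(q3))
          - C2 * c(q1) * c(q4) * s(q2))
    y2 = (C2 * s(q4) * (c(q1) * s(q3) + c(q2) * c(q3) * s(q1))
          - C1 * s(q1) * s(q2)
          - C2 * c(q4) * s(q1) * s(q2))
    y3 = C1 * c(q2) + C2 * c(q2) * c(q4) + C2 * c(q3) * s(q2) * s(q4) + C3
    return (y1, y2, y3)

home = end_effector_position((0.0, 0.0, 0.0, 0.0))  # -> (0.0, 0.0, 1.288)
```

At the zero configuration the arm points straight up, so the end effector height is simply $C_1+C_2+C_3=1.288$, a convenient sanity check.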
### Constraints
States and inputs are subject to the following polytopic constraints: the joint angles $q_i$ are restricted to less than $180^\circ$ in magnitude (exact values can be found in [@nubert19]), the joint velocities to $|\dot{q}|\leq 2.3\frac{\text{rad}}{\text{s}}$, and the joint accelerations to $|\ddot{q}|\leq 8\frac{\text{rad}}{\text{s}^2}$.
More interestingly, we also impose constraints on the output function $y$ to ensure obstacle avoidance in the Cartesian space. We approximate the obstacles with differentiable functions, compare Figure \[fig:output\_constraints\]. This allows for a simpler implementation and design.
![Visualization of the output constraints. We use (quadratic) differentiable functions to over-approximate the non-differentiable obstacles.[]{data-label="fig:output_constraints"}](./images/constraints.PNG){width="0.97\linewidth"}
For example, $$\label{equ:obst_vert_outp_constr}
g_p(x,u) = -(y_1 - y_1^{\mathrm{o}}) -C \left((y_3 - y_3^{\mathrm{o}})^2 + (y_2 - y_2^{\mathrm{o}})^2\right)\leq 0, \nonumber$$ models the box-shaped obstacle, with the obstacle position $y^{\mathrm{o}}=(y_1^{\mathrm{o}},y_2^{\mathrm{o}},y_3^{\mathrm{o}})$, the end effector $y=(y_1,y_2,y_3)$, and here $C=2$. Similarly, we introduce a nonlinear constraint that prevents the robot from hitting itself (see Figure \[fig:output\_constraints\]).
This constraint formulation uses a simple (conservative) over-approximation and assumes static obstacles. Both limitations can be addressed by using the exact reformulation in [@Zhang2017] based on duality and using the robust extension in [@Soloperto18] for uncertain moving obstacles.
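A pointwise evaluation of such a paraboloid constraint can be sketched as follows; the obstacle position and test points are made-up illustrative values:

```python
def g_obstacle(y, y_obs, C=2.0):
    """Differentiable paraboloid over-approximation of a box obstacle,
    following the displayed constraint: g(y) <= 0 means the end
    effector is in the admissible region."""
    return (-(y[0] - y_obs[0])
            - C * ((y[2] - y_obs[2]) ** 2 + (y[1] - y_obs[1]) ** 2))

y_o = (0.6, 0.0, 0.5)  # assumed obstacle position (m)

g_clear   = g_obstacle((0.9, 0.0, 0.5), y_o)  # -0.3 -> constraint satisfied
g_blocked = g_obstacle((0.3, 0.0, 0.5), y_o)  #  0.3 -> inside forbidden region
g_lateral = g_obstacle((0.3, 0.5, 0.5), y_o)  # -0.2 -> laterally clear
```

The admissible set is everything in front of a paraboloid opening away from the robot: points behind the obstacle along $y_1$ are forbidden on-axis but become admissible again with sufficient lateral clearance.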
Robust MPC Design {#robust-mpc-design}
-----------------
In general, the dynamic compensation introduced in the previous subsection is not exact and hence, the resulting model mismatch needs to be addressed in the robust design.
### Determination of Disturbance Level {#sec:dist_det}
For the determination of the disturbance level, we sample trajectories for a specified sampling time and compare the observed trajectory to the nominal prediction for each discrete-time step. The deviation of the two determines the disturbance bound introduced for the discrete-time case, i.e. $d_{\mathrm{w},\mathrm{d}}$. In Figure \[fig:obs\_dist\], a plot of the $\infty$-norm of the observed disturbance with respect to the applied acceleration is shown.
![Observed disturbance with respect to the applied acceleration for a sampling rate of $2.5\,\text{Hz}$. Proportionality-like behavior is apparent.[]{data-label="fig:obs_dist"}](./images/mismatch.eps){width="0.8\linewidth"}
The maximal observed model mismatch satisfies $\|d_{\mathrm{w},\mathrm{d}}\|_{\infty}\leq 0.06$. As a precaution, we add some tolerance and use $\|d_{\mathrm{w},\mathrm{d}}\|_\infty \leq 0.1$ for our design. From the figure, it can be seen that the induced disturbance can be larger for higher accelerations. This behavior is not surprising, since the low-level controllers have more difficulty following the reference acceleration for more dynamic movements. Using $\overline{w}_{\mathrm{c}}(x,u)=c_0+c_1\|u\|_{\infty}+c_2\|\dot{q}\|_\infty$ instead of a constant bound could help to further decrease conservatism (compare [@KoehlerCompEff18]). Furthermore, the uncertainty could also be reduced by improving the kinematic model using data, as done e.g. in [@carron2019data] with an additional Gaussian process (GP) error model.
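The estimation procedure described above can be sketched as follows; the one-joint nominal model and the synthetic "observed" data are placeholders for the recorded robot trajectories:

```python
import random

def nominal_step(q, qd, u, dt):
    """One-step prediction of the kinematic model q_ddot = u."""
    return q + dt * qd + 0.5 * dt ** 2 * u, qd + dt * u

def disturbance_bound(samples, dt, tolerance=0.04):
    """Maximal inf-norm deviation between observed and predicted states
    over all samples, plus a safety margin (0.06 + 0.04 = 0.1 matches
    the bound used in the design above)."""
    worst = 0.0
    for q, qd, u, q_obs, qd_obs in samples:
        q_nom, qd_nom = nominal_step(q, qd, u, dt)
        worst = max(worst, abs(q_obs - q_nom), abs(qd_obs - qd_nom))
    return worst + tolerance

# Synthetic single-joint data: the "real" system deviates from the
# nominal prediction by an error that grows with |u|, as in the figure.
random.seed(0)
dt = 0.4  # 2.5 Hz sampling
samples = []
for _ in range(200):
    q, qd, u = random.uniform(-3, 3), random.uniform(-2, 2), random.uniform(-8, 8)
    err = random.uniform(-0.06, 0.06) * abs(u) / 8.0
    q_nom, qd_nom = nominal_step(q, qd, u, dt)
    samples.append((q, qd, u, q_nom + err, qd_nom + err))

bound = disturbance_bound(samples, dt)  # lies in (0.04, 0.1]
```

The returned bound plays the role of $\|d_{\mathrm{w},\mathrm{d}}\|_\infty$ plus tolerance; on the real system the samples would of course come from measured trajectories rather than a synthetic error model.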
### Computations {#sec:rmpc_computations}
The offline computations are done according to Algorithm \[alg:rmpc\_off\_calc\]. We consider a quadratic incremental Lyapunov function $V_{\delta}(x,z)=\|x-z\|_{P_\delta}^2$ and a linear feedback $\kappa(x)=K_{\delta}x$, both computed using tailored LMIs (incorporating , ), compare [@nubert19]. The terminal cost $V_{\mathrm{f}}$ is given by the LQR infinite horizon cost. The online computations from Algorithm \[alg:rmpc\_on\_calc\] are performed in a real-time C++ environment by deploying the CasADi C++ API for solving the involved optimization problem [@Andersson2018]. The feedback $\kappa(x_t)$ is updated with a rate of $1\,\text{kHz}$ – hence, it can be considered as being continuous-time for all practical purposes. Furthermore, $\pi_{\text{MPC}}$ is evaluated every $h=400 \,\text{ms}$.
In general, $\mathbb{Y}_{\mathrm{s}}$ is non-convex due to the collision avoidance constraints and hence Assumption \[ass:Limon\] is not satisfied, compare Remark \[rem:non\_conv\]. Issues owing to local minima were not observed in the considered experiments.
Experimental Results RMPC
-------------------------
With the RMPC design, we demonstrate a reliable and safe way for controlling the end effector position of the robotic manipulator. An exemplary trajectory on the real system can be observed in Figure \[fig:robot\_experiment\], where the end effector tracks the reference, which is set by the user.
![ Experimental (solid) and simulation (dashed) data of RMPC (blue-colored) and AMPC (orange-colored) with the same reference $y_t^{\mathrm{d}}$. Reference $y^\mathrm{d}$ is continuously moving for $t \in [0,11]$s; constant, but unreachable in the interval $t\in[11,15]$s; and moving again after a step for $t \in [15,19]$s. Left: Tracking error $\|y_t-y_t^\mathrm{d}\|_{2}$. Right: Relative closed-loop input $\|\frac{u_t}{u_{\mathrm{max}}}\|_\infty$, with $u_t=\kappa(x_t)+\pi_{\text{MPC}}(x_{t_k})$ and $u_t=\kappa(x_t)+\pi_{\text{approx}}(x_{t_k})$.](./images/track_err.eps "fig:"){width="0.492\linewidth"} ![ Experimental (solid) and simulation (dashed) data of RMPC (blue-colored) and AMPC (orange-colored) with the same reference $y_t^{\mathrm{d}}$. Reference $y^\mathrm{d}$ is continuously moving for $t \in [0,11]$s; constant, but unreachable in the interval $t\in[11,15]$s; and moving again after a step for $t \in [15,19]$s. Left: Tracking error $\|y_t-y_t^\mathrm{d}\|_{2}$. Right: Relative closed-loop input $\|\frac{u_t}{u_{\mathrm{max}}}\|_\infty$, with $u_t=\kappa(x_t)+\pi_{\text{MPC}}(x_{t_k})$ and $u_t=\kappa(x_t)+\pi_{\text{approx}}(x_{t_k})$.](./images/input.eps "fig:"){width="0.492\linewidth"}
\[fig:track\_err\_and\_cont\_inp\]
It can be seen that even though the direct path is obstructed by an obstacle, the controller finds a solution while keeping a safe distance from the obstacle. In Fig. \[fig:track\_err\_and\_cont\_inp\], the tracking error and closed-loop input of the RMPC for an exemplary use-case can be observed. The controller is able to track the reference. However, due to the computational complexity and the delay it induces, the controller has a larger tracking error in intervals of changing set points (interval $[2,11]$s in Fig. \[fig:track\_err\_and\_cont\_inp\]). Note that the constraint tightening of the considered RMPC method only restricts future control actions, and thus the scheme can in principle utilize the full input magnitude. However, due to the combination of the velocity constraint and the long sampling time $h$, the full input magnitude is only utilized by the AMPC with its faster sampling time. More experimental results can be observed in the supplementary video[^6].
Integrating the tracking control within a single optimization problem and automatically resolving corner cases such as unreachable setpoints are particular features that make the deployment of the approach simple, safe, and reliable in practice. As expected by the considered robust design, in thousands of runs (one run corresponds to one initial condition and one output reference), the robot never came close to hitting any of the obstacles (e.g. $\text{1:15}\,\text{min}$ in the video). This is the result of using the conservative bound $\overline{w}_{\mathrm{d}}$ on the model mismatch, implying safe but conservative operation. Furthermore, the controller is able to steer the end effector along interesting trajectories in order to avoid potential collisions in an optimal way (e.g. video: $\text{1:50}\,\text{min}$).
AMPC Design
-----------
For the robot control, the AMPC is designed according to Algorithm 3. For this purpose, we first design an RMPC with a sampling time of $h=40\,\text{ms}$, i.e., ten times faster than the previous RMPC. To simplify the learning problem, we only consider the self-collision avoidance constraint. Therefore, the MPC control law $\pi_{\text{MPC}}$ depends on the state $x \in\mathbb{R}^8$ and the desired reference $y^{\mathrm{d}} \in\mathbb{R}^3$, i.e., on $11$ parameters in total.
To obtain the necessary precision for the AMPC, interesting questions emerged regarding the structure of the used NN, its training procedure and the sampling of the (ground truth) RMPC. Regarding the depth of the network, our observations confirm insights in [@karg18]: deep NNs are better suited to obtain an explicit policy representation.
A tradeoff exists between the higher expressiveness and the slower training of deeper networks. We decided to use a fully connected NN with 20 hidden layers, consecutively shrinking the layers from $1024$ neurons in the first hidden layer to $4$ in the output layer. This results in roughly $5 \cdot 10^6$ trainable parameters in total. All hidden neurons are *ReLU*-activated, whereas the output layer is activated linearly. Other techniques such as batch normalization, regularization, or skip connections did not help to improve the approximation.
The RMPC control law $\pi_{\text{MPC}}(x,y^{\mathrm{d}})$ can become relatively large in magnitude, which makes the regression more difficult. We circumvent this problem by directly learning the applied input $u^*(x,y^{\mathrm{d}})=\pi_{\text{MPC}}(x,y^{\mathrm{d}})+\kappa(x)\in\left[-8\frac{\text{rad}}{\text{s}^2},8\frac{\text{rad}}{\text{s}^2}\right]$. This can be seen as a zero-centered normalization of the reference output, which allows us to achieve significantly smaller approximation errors. In addition, $\pi_{\text{approx}}$ can be readily evaluated online, since $\kappa$ is known.
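This normalization can be written down compactly; the toy two-state gain below is a made-up stand-in for the actual feedback $K_\delta$:

```python
def kappa(x, K):
    """Pre-stabilizing linear feedback kappa(x) = K x (gain assumed known)."""
    return [sum(kij * xj for kij, xj in zip(row, x)) for row in K]

def nn_target(pi_mpc, x, K):
    """Training target: the bounded applied input u* = pi_MPC + kappa(x),
    which is zero-centered and limited to [-8, 8] rad/s^2."""
    return [p + k for p, k in zip(pi_mpc, kappa(x, K))]

def pi_approx_from_net(net_out, x, K):
    """Online: since kappa is known, the MPC part pi_approx can be
    recovered from the network output by subtracting kappa(x)."""
    return [u - k for u, k in zip(net_out, kappa(x, K))]
```

The transformation is exactly invertible, so nothing is lost by regressing $u^*$ instead of $\pi_{\text{MPC}}$; the network simply sees a better-conditioned target.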
For the training, we use a set of approximately $50 \cdot 10^6$ datapoints obtained by offline sampling of the RMPC. Our training corpus consists of a combination of random sampling $\{\left(x^{(j)},y^{\mathrm{d}(j)}\right),\pi_{\text{MPC}}\left(x^{(j)},y^{\mathrm{d}(j)}\right)\}$ and trajectory-based sampling $\{\left(x^{(j)} \in X_i,y^{\mathrm{d}(i)} \right),\pi_{\text{MPC}}\left(x^{(j)},y^{\mathrm{d}(i)}\right) \}$ over i.i.d. trajectories $X_i$, each with a random initial condition $x^{(i)}$ and reference $y^{\mathrm{d}(i)}$. The former helps the network cover all regions of the parameter space, whereas the latter emphasizes the regions of highest interest.
Given the AMPC design, we next aim to perform the validation as per Sec. \[sec:Method\_AMPC\]. We execute the validation in simulation, which is deterministic. We account for the model mismatch with a separate term during the validation (cf. Prop. \[prop:ampc\_stab\]). We found that performing the validation for the considered system and controller tasks is demanding. Currently, we are able to satisfy the criterion for approximately $90$% of all sampled points. While this is not fully satisfactory for high-probability guarantees on full trajectories, it is still helpful for understanding the quality of the learned controller. While no failure cases were observed in the experiments reported herein, performing such *a priori* validation for the robot implementation is subject to future work.
Experimental Results AMPC
-------------------------
With the AMPC design, we are able to obtain a $10$ times faster feedback rate on the robot while at the same time reducing the computational demand by a factor of $20$ compared to the RMPC ($1\,\text{ms}$ vs. $200\,\text{ms}$ evaluation time, $10$ times faster update rate). Due to the short evaluation time of less than $1\,\text{ms}$, the control input can be applied immediately within the current sampling interval of $40\,\text{ms}$, instead of performing the optimization for the predicted next state. This results in a response time in the interval $[1,40]\,\text{ms}$ for the AMPC instead of $[400,800]\,\text{ms}$ for the RMPC. The resulting, more aggressive input can be observed in Figure \[fig:track\_err\_and\_cont\_inp\]. Note that the AMPC sometimes violates the input constraints in the shown experiment. This is mainly due to the combination of a large control gain in the pre-stabilization $\kappa$ and large measurement noise in the experiment. To circumvent this problem, the noise could be considered in the design, or a less aggressive feedback $\kappa$ could be used.
We emphasize that the results and achieved performance are significant, considering the $11$ parameters of the nonlinear MPC, while standard explicit MPC approaches are only applicable to small- to medium-scale linear problems.
Conclusion
==========
The approach developed in this paper achieves safe and fast tracking control on complex systems such as modern robots by combining robust MPC and NN control.
The proposed robust MPC ensures safe operation (stability, constraint satisfaction) despite uncertain system descriptions. What is more, the MPC scheme simplifies complex tracking control tasks to a single design step by joining otherwise often separate planning and control layers: real-time control commands are directly computed for given reference and constraints. Our experiments on a KUKA LBR4+ arm are the first to demonstrate such robust MPC on a real robotic system. The proposed RMPC thus provides a complete framework for tracking control of complex robotic tasks.
We tackled the computational complexity of MPC in fast robotics applications by proposing an approximate MPC. This approach replaces the online optimization with the evaluation of a NN, which is trained and validated in an offline fashion on a suitably defined robust MPC. The proposed approach demonstrates significant speed and performance improvements. Again, the presented experiments are the first to demonstrate the suitability of such NN-based control on real robots. Providing *a priori* statistical guarantees for such robot experiments by further improving the learning and validation procedures are relevant topics for future work.
Acknowledgments
===============
The authors thank A. Marco and F. Solowjow for helpful discussions, and their colleagues at MPI-IS who contributed to the Apollo robot platform.
[^1]: $^{1}$Max Planck Institute for Intelligent Systems, Intelligent Control Systems Group, 70569 Stuttgart, Germany (email: {nubert, trimpe}@is.mpg.de).
[^2]: $^{2}$ETH Zürich (M.Sc. stud.), 8092 Zürich, Switzerland ([email protected]).
[^3]: $^{3}$Inst. for Systems Theory and Automatic Control, Univ. of Stuttgart, 70550 Stuttgart, Germany ({johannes.koehler, frank.allgower}@ist.uni-stuttgart.de).
[^4]: $^{4}$Max Planck Institute for Intelligent Systems, Autonomous Motion Dept., 72076 Tübingen, Germany ([email protected]).
[^5]: This work was supported in part by the Max Planck Society, the Cyber Valley Initiative, and the German Research Foundation (grant GRK 2198/1).
[^6]: <https://youtu.be/c5EekdSl9To>
---
abstract: 'Resonance modes in single crystal sapphire ($\alpha$-Al$_2$O$_3$) exhibit extremely high electrical and mechanical Q-factors ($\approx 10^9$ at 4K), which are important characteristics for electromechanical experiments at the quantum limit. We report the first cooldown of a bulk sapphire sample below superfluid liquid helium temperature (1.6K) to as low as 25mK. The electromagnetic properties were characterised at microwave frequencies, and we report the first observation of electromagnetically induced thermal bistability in whispering gallery modes due to the $T^3$ temperature dependence of the material thermal conductivity and the ultra-low dielectric loss tangent. We identify “magic temperatures” between 80 and 2100 mK, the lowest ever measured, at which the onset of bistability is suppressed and the frequency-temperature dependence is annulled. These phenomena at low temperatures make sapphire suitable for quantum metrology and ultra-stable clock applications, including the possible realization of the first quantum limited sapphire clock.'
author:
- 'Daniel L. Creedon'
- 'Michael E. Tobar'
- 'Jean-Michel'
- Yarema Reshitnyk
- Timothy Duty
date: 'August 6, 2010'
title: 'Single Crystal Sapphire at milli-Kelvin Temperatures: Observation of Electromagnetically Induced Thermal Bistability in High Q-factor Whispering Gallery Modes'
---
Experiments to couple superconducting qubits based on Josephson junctions to microwave resonators ($Q\approx 10^4$) at cryogenic temperatures have been well represented in recent scientific literature for a diverse range of circuit quantum electrodynamic applications. This includes generating non-classical states of microwave cavities such as Fock states, where the limit on producing these non-classical fields is due to the finite photon lifetime (or linewidth) of the resonator, as well as detecting a nanomechanical resonator at or near the ground state [@RocheleauNature; @OconnellNature; @HofheinzNature; @HofheinzNature2; @Osborn07; @DutySuperconducting; @Castellanos07]. Sapphire resonators are of particular interest for future experiments due to their extremely low loss, with electronic $Q$-factors of order $10^9$ at 1.8K [@LuitenBook; @Hartnett2006apl], and mechanical $Q$-factors as high as $5 \times 10^8$ at 4.2K [@lockeparametric; @systemssmalldissipation]. The thermal, mechanical, and bulk electronic properties of sapphire have been characterised extensively over a wide range of temperatures from room temperature to superfluid liquid helium (1.6K) using WG mode techniques [@JerzyMTT; @JerzyMST], but have never been examined in the regime approaching the absolute zero of temperature. Sapphire resonators at millikelvin temperature have the potential to play an important role in the next generation of quantum electronics and metrology experiments by virtue of this anomalously high $Q$-factor. A significant body of research already exists in which sapphire has been used at cryogenic temperatures as a parametric transducer in an effort to reach the Standard Quantum Limit [@lockeparametric; @lockeparametric2; @tobarparametric; @lockeparametric3; @TobarSQL; @Cuthb]. Oscillators can be prepared in their quantum ground state due to very low thermal phonon occupation when $T \ll hf/k_B$, where $h$ and $k_B$ are Planck’s and Boltzmann’s constants respectively.
For microwave oscillators such as those based on single-crystal sapphire resonators, the corresponding temperature regime is in the experimentally accessible millikelvin range, making them ideal candidates for quantum measurement experiments. It is thus important to characterise such devices in this unexplored ultra-low temperature regime.
![\[fig1\]Network analyser measurement of the WGH$_{20,0,0}$ mode in transmission at 50 mK. The excitation power was varied in steps of 5dB from -50dBm to -15dBm, and the observed mode frequency was downshifted. The -50dBm and -45dBm curves are the highest in frequency and lie on top of one another. The threshold power is defined to be the power incident on the resonator which is sufficient to shift the mode frequency by one bandwidth from the ‘unperturbed’ lowest power measurement](fig1.jpg){width="86mm"}
In this Article we report on the first measurements of the electromagnetic properties of a single-crystal sapphire resonator at millikelvin temperature. The resonator used was a highest purity HEMEX-grade sapphire from Crystal Systems, similar to that used in several Cryogenic Sapphire Oscillator (CSO) [@TobarMann; @Hartnett2006apl; @Locke2006rsi] and Whispering Gallery Maser Oscillator (WHIGMO) experiments [@Pyb2005apl; @Benmessai2008prl; @Creedon2010; @Benmessai2007el]. Here, we report on the observation of the lowest frequency-temperature turning points for Whispering Gallery (WG) mode resonances ever measured, as well as making the first observation of a thermal bistability effect in sapphire for this ultra-low temperature regime. We give a model to predict thermal bistability threshold power and show that the effect is dependent on the thermal conductivity of the sapphire. Furthermore, we show that the bistability effect may be suppressed by operating at a “magic temperature”, where a frequency-temperature turning point occurs. Thus, combining the low temperature operation with stabilisation at a millikelvin frequency-temperature turning point gives the potential to realise a sapphire based frequency standard at the quantum limit (rather than the thermal limit which has been previously reported [@Benmessai2008prl]).\
![\[sweeptime\]The WGH$_{20,0,0}$ mode in transmission for varying sweep time. Sweep direction is increasing in frequency in all cases. The linewidth of the mode narrows sharply with sweep time due to temperature dependence of permittivity of the sapphire. The governing equations for the lineshape are given in [@Vahala04]](fig2.jpg){width="86mm"}
The sapphire resonator, a cylinder 5cm diameter $\times$ 3cm height, was cleaned in acid and mounted in a silver plated copper cavity. The resonator is machined such that the anisotropy c-axis of the sapphire is aligned with the cylindrical z-axis. A radially oriented loop probe and axially oriented straight antenna were used to couple microwave radiation in and out of the crystal. The cavity was attached to the mixing chamber of a dilution refrigerator with a copper mount and cooled to 25mK. The fundamental quasi-transverse magnetic Whispering Gallery modes WGH$_{m,0,0}$, with azimuthal mode number $m$ from 13 to 20, were characterised over a range of temperatures using a vector network analyser. We observed that particularly high-$Q$ WG modes exhibited a hysteretic behaviour, which was thermal in nature. The frequencies of the WG modes supported in the resonator depend on both the physical dimensions of the crystal and its permittivity, the latter effect being more than an order of magnitude stronger [@systemssmalldissipation]. As the network analyser sweeps in frequency, heating occurs as power is deposited into the sapphire on resonance. The change in permittivity due to temperature causes a shift in the resonant frequency of the mode in the opposite direction to the sweep. The result is an astoundingly narrow, yet artificial linewidth with a sharp threshold. If the frequency were swept in the opposite direction, the mode frequency would be shifted in the same direction as the sweep, and an artificially broadened linewidth would be observed. A similar effect in such dielectric resonators has only been observed at optical frequencies, in fused silica microspheres [@europhys], where it was shown that the “thermal bistability” caused either narrowing or broadening of the line resonance depending on the direction of the frequency sweep during measurement.
Examples of optical bistability are numerous in the literature [@Braginsky; @Vahala04; @VahalaAPL], but are normally attributed to a $\chi^{(3)}$ Kerr nonlinearity, which results in a threshold power for optical bistability that scales like $Q^{-2}$. Collot et al. [@europhys] note that for mode Q-factors below $10^9$ the thermal bistability effect dominates over the Kerr effect due to significantly lower threshold power. For quality factors in the range of $10^9$, the effects can be distinguished by the observed dependence of threshold power on Q. We find an excellent fit using a thermal model (see Eqn.\[eq3\] and Fig.\[thresh\]) which has a threshold power that scales like $Q^{-1}$, showing that the effect is clearly thermal in nature. A measurement of the WGH$_{20,0,0}$ mode was made (see Fig. \[fig1\]) which shows the first observation of this thermal bistability effect in a millikelvin sapphire resonator. A sharp threshold was observed, giving an FWHM linewidth of only 0.00173 Hz, which was strongly dependent on input power. Our experimental apparatus was unable to sweep downwards in frequency, but the bistability effect is still observable by varying the sweep speed. Figure \[sweeptime\] shows the effect of the thermal bistability for a range of sweep times. Note that only the resonant peak moves; the longer sweep time results in more time spent per measurement point, depositing more power into the resonator and creating a larger apparent frequency shift and linewidth change. As sufficient heat is deposited into the resonator only on resonance, the off-resonance transmission does not depend on sweep time.
![\[fig3\]Temperature dependence of $Q$-factor (shaded circles) and frequency (empty circles) for the 20$^{\text{th}}$ azimuthal order WG mode. Note that due to paramagnetic impurities in the sapphire, ${df}/{dT}$ is annulled at 2.0897K. The $Q$-factor remains approximately constant over a wide range of temperature.](fig3.jpg){width="86mm"}
  $\boldsymbol{m}$   $\boldsymbol{T}$ **(mK)**   $\boldsymbol{Q}$     $\boldsymbol{\alpha}$
  ------------------ --------------------------- -------------------- -------------------------
  14                 100                         $7.84 \times 10^8$   $-6.66 \times 10^{-9}$
  14                 800                         $1.7 \times 10^9$    $-1.3 \times 10^{-11}$
  19                 100                         $6.17 \times 10^8$   $-6.85 \times 10^{-8}$
  19                 440                         $5.40 \times 10^8$   $-1.10 \times 10^{-7}$
  19                 2260                        $5.85 \times 10^8$   $-1.60 \times 10^{-11}$
  20                 200                         $2.21 \times 10^9$   $-1.19 \times 10^{-7}$
  20                 630                         $1.72 \times 10^9$   $-3.26 \times 10^{-8}$
  20                 2100                        $1.76 \times 10^9$   $-5.85 \times 10^{-11}$

  : \[alphaTable\]Measured $Q$-factors and thermal coefficients $\alpha$ of the WGH$_{m,0,0}$ modes at several temperatures
It is possible to model the threshold power at which bistable behaviour becomes apparent. Considering a temperature-dependent fractional frequency shift of the WG mode of interest, $\Delta{\nu}/\nu = -\alpha \Delta{T}$, where the temperature coefficient $\alpha$ is experimentally determined, the threshold is reached when the frequency shift equals one bandwidth, $\Delta\nu/\nu = 1/Q$, giving a threshold temperature rise of the resonator $\Delta{T_{th}}= \frac{1}{|\alpha|} \frac{\Delta{\nu}}{\nu} = \frac{1}{|\alpha| Q}$ K. An expression for the threshold power required to achieve this bistability is given by [@europhys]: $$\label{powerthreshold}
P_{th} = \frac{C_{p} \rho V_{eff} \Delta T_{th}}{\tau_{\text{T}}}$$ where $C_{p}$ is the heat capacity of sapphire, $\rho$ is its density (4.0 g/cm$^3$ [@handbookchemphys]), $\tau_{T}$ the characteristic heat diffusion time constant, and $V_{eff}$ the effective volume occupied by the Whispering Gallery mode. The heat diffusion time constant in turn may be expressed as: $$\label{timeconstant}
\tau_{\text{T}} = \frac{l m C_{p}}{A k}$$ where $l$ and $A$ are the length and cross-sectional area of the sapphire segment, $m$ is the mass of the sapphire, and $k$ the thermal conductivity [@tobarifcs1994]. Finally, an expression (independent of the heat capacity and thermal time constant) is derived by combining Equations \[powerthreshold\] and \[timeconstant\]: $$\label{eq3}
P_{th} = \frac{A k}{l |\alpha| Q} \frac{m_{eff}}{m}$$ where $m_{eff}$ is the mass of the effective volume occupied by the WG mode. The threshold power is now clearly only a function of the thermal coefficient $\alpha$, the $Q$-factor, the thermal conductivity of the sapphire, and its dimensions. The thermal conductivity $k$ was estimated by fitting an approximate $T^3$ power law to extrapolate below 1K from the data for sapphire in Touloukian et al. [@touloukian], giving $k=0.039T^{2.8924}$ W cm$^{-1}$K$^{-1}$. The thermal time constant of sapphire remains similar to that at liquid helium temperature because the heat capacity follows a similar cubic law to the thermal conductivity, leaving the ratio unchanged (Eqn. \[timeconstant\]). However, the threshold for bistability is substantially lowered with respect to liquid helium temperature due to the reduction in thermal conductivity of the sapphire.\
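The threshold model can be evaluated numerically from the fitted conductivity law; the geometry factors ($A$, $l$, $m_{eff}/m$) below are illustrative placeholders, not the values used for Fig. \[thresh\]:

```python
def thermal_conductivity(T):
    """Fitted power law for sapphire (W cm^-1 K^-1), extrapolated below
    ~1 K from the Touloukian et al. data."""
    return 0.039 * T ** 2.8924

def threshold_power(T, alpha, Q, A=7.0, l=5.0, mass_ratio=0.05):
    """Eqn. (3): P_th = (A k / (l |alpha| Q)) * (m_eff / m).
    A (cm^2), l (cm) and m_eff/m are made-up geometry placeholders."""
    return A * thermal_conductivity(T) / (l * abs(alpha) * Q) * mass_ratio

# WGH_20,0,0 at 200 mK, using alpha and Q from the measured table:
p_200mK = threshold_power(0.2, 1.19e-7, 2.21e9)
```

The sketch reproduces the two qualitative trends discussed in the text: the threshold scales as $Q^{-1}$ (distinguishing the thermal effect from a Kerr nonlinearity, which would scale as $Q^{-2}$), and it rises steeply with temperature through the near-cubic conductivity law.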
To experimentally determine the thermal coefficient $\alpha$, frequency measurements of several Whispering Gallery modes were made over a range of temperatures from 25mK to 5.5K. The modes examined were fundamental quasi-transverse magnetic modes WGH$_{14,0,0}$, WGH$_{19,0,0}$ and WGH$_{20,0,0}$. The modes were excited using a vector network analyser at low power, typically $-45$ dBm. In this way, saturation of residual paramagnetic spins in the sapphire was avoided. The temperature of the resonator was controlled at a number of points between 25-5500mK using a Lakeshore Model 370 AC Resistance Bridge, and custom data acquisition software recorded the temperature, $Q$-factor, and frequency of the modes in transmission. The base temperature of the dilution refrigerator was 23mK, and temperature control was stable to within several millikelvin. The temperature dependence of mode frequency was mapped to produce plots such as Fig. \[fig3\], and several temperatures were chosen to measure the threshold power at which thermal bistability became apparent. The temperatures were chosen to reflect a range of values for the thermal coefficient $\alpha$, ranging from nominally zero near the frequency-temperature turnover point, to a maximum at the largest slope. As previously, we define the threshold power to be the power incident on the resonator which is sufficient to shift the mode frequency by one bandwidth from the unperturbed low power measurement. Equation \[eq3\] is then used to calculate the theoretical threshold power. The experimentally determined thermal coefficients are summarised in Table \[alphaTable\], and a particular example of the measured and calculated threshold power is given in Fig. \[thresh\] for WGH$_{20,0,0}$.\
![\[thresh\]Predicted threshold power (using Eqn.\[eq3\]) as a function of temperature for WGH$_{20,0,0}$. Shaded circles are the measured threshold values for 200, 630, and 2100 mK.](fig4.jpg){width="86mm"}
Clearly, operation at the frequency temperature turning point is advantageous as the thermal coefficient $\alpha$ approaches zero and the threshold power required for bistability approaches infinity. Temperature turning points have been observed with no bistability at input powers up to 20 dBm from 4-9K in similar sapphire resonators, and are caused by residual paramagnetic impurities such as Ti$^{3+}$, Cr$^{3+}$, Mo$^{3+}$, V$^{3+}$, Mn$^{3+}$, and Ni$^{3+}$ present at concentrations of parts-per-billion to parts-per-million. The opposite sign effects of temperature-dependent Curie law paramagnetic susceptibility, and temperature dependence of permittivity[@DickWang1; @JonesBlair1988el; @Mann1992jpDap; @HartnettTi3; @TobarJPhysD] cause a turnover in the frequency-temperature dependence. Operating at this turning point or “magic temperature” allows frequency fluctuations due to temperature instability to be annulled to first order, and has been crucial to achieve state-of-the-art short term fractional frequency stability in CSOs in the past[@Hartnett2006apl; @Locke2006rsi; @DickWang1]. Our results are the first observation of temperature turning points below the boiling point of liquid helium. Table \[turningpoints\] lists the magic temperatures (turning points) measured for a range of WG modes. Operation of the WHIGMO at a millikelvin magic temperature rather than its current $\approx$8K would have the benefit of reduced thermal noise floor of the maser, with potential operation at the quantum limit. Operation at a millikelvin turning point, where the thermal coefficient $\alpha$ is vanishing, would minimise the effects from the considerable heating due to the large ($>$10 dBm) input power required to saturate the pump transition of the maser. 
This is similarly advantageous for CSOs [@Locke2006rsi], which typically circulate large amounts of power through the sapphire resonator, at least several milliwatts, while the cooling power at the base temperature of a dilution refrigerator is only on the order of several hundred $\mu$W. Additionally, high-power operation of quantum-limited transducers could be attained at these temperatures.\
***m***   **Magic Temperature (mK)**
--------- -------------------------------
13        89.75
14        96.25
15        Not trackable
16        Not trackable
17        Possible turnover below 80 mK
18        2749.35
19        2280.30
20        2089.75
: \[turningpoints\]Measured ‘magic temperatures’ for a range of quasi-transverse magnetic WG modes. Several modes close to the Fe$^{3+}$ centre frequency exhibited strong distortion and could not be accurately tracked to determine the turning point. The $m=17$ mode required large power to excite and could not be measured below 80mK due to heating effects.
In summary, we report the first characterisation of a single-crystal sapphire resonator at temperatures more than an order of magnitude lower than previously achieved, the first measurement of thermal bistability in a microwave sapphire resonator at these temperatures, and the first observation of millikelvin frequency-temperature turning points. We give a model for the thermal bistability threshold power, and show that it is closely dependent on the material properties of the sapphire resonator. We propose several reasons the effect has not been previously observed in this system. Typically the CSO/WHIGMO is operated very close to a frequency-temperature turning point where the temperature coefficient $\alpha$ is small, and the threshold power for bistability becomes significantly larger than normal operational power levels. In the experiments reported in this paper, the resonator was operated at low temperature, far from a turning point where the temperature coefficient was large, leading to a lower and readily observable threshold power. Frequency-temperature turning points as low as tens of millikelvin were observed for WG modes in the resonator, and we show that the thermal bistability effect can be suppressed by operating at these “magic temperatures”. Additionally, mode $Q$-factors remained high and were comparable to their usual values at higher temperature, ruling out the existence of extra loss mechanisms in sapphire in the millikelvin regime. We conclude that single-crystal sapphire is an excellent candidate for both clock applications, and quantum metrology of macroscopic systems at temperatures approaching absolute zero.
This work was supported by the Australian Research Council and a University of Western Australia collaboration grant.
---
abstract: 'As a flexible and scalable architecture, heterogeneous cloud radio access networks (H-CRANs) inject strong vigor into the green evolution of current wireless networks. But the brutal truth is that energy efficiency (EE) improves at the cost of other indexes such as spectral efficiency (SE), fairness, and delay. It is thus important to investigate performance tradeoffs for striking flexible balances between energy-efficient transmission and excellent quality-of-service (QoS) guarantees under this new architecture. In this article, we first propose some potential techniques to energy-efficiently operate H-CRANs by exploiting their features. We then elaborate the initial ideas of modeling three fundamental tradeoffs, namely EE-SE, EE-fairness, and EE-delay tradeoffs, when applying these green techniques, and present open issues and challenges for future investigations. These related results are expected to shed light on green operation of H-CRANs from adaptive resource allocation, intelligent network control, and scalable network planning.'
author:
- 'Yuzhou Li, Tao Jiang, Kai Luo, and Shiwen Mao[^1]'
bibliography:
- 'IEEEabrv.bib'
- 'F://ReferenceOfPaper//MyRef.bib'
title: 'Green Heterogeneous Cloud Radio Access Networks: Potential Techniques, Performance Tradeoffs, and Challenges'
---
Introduction
============
Background and Motivation
-------------------------
The dramatic increase in the number of smart phones and tablets with ubiquitous broadband connectivity has triggered an explosive growth in mobile data traffic [@CiscoMobileDataTrafficForecast_2016_2021]. Cisco forecasts that global mobile data traffic will increase 7-fold from 2016 to 2021, with the majority generated by energy-hungry applications such as mobile video [@CiscoMobileDataTrafficForecast_2016_2021]. This is also referred to as the well-known $1000\times$ data challenge in cellular networks. Meanwhile, the number of devices connected to global mobile communication networks will reach 100 billion in the future, and the number of mobile terminals will surpass 10 billion by 2020 [@5G_VisionAndRequirements].
Although the massive traffic volume and the proliferation of connected devices bring unprecedented opportunities for the development of wireless networks, a concomitant crux is that this growth simultaneously skyrockets energy consumption (EC) and greenhouse gas emissions. Statistically, the information and communication technology (ICT) industry is responsible for $2\%$ of world-wide $\text{CO}_2$ emissions and $2\%$-$10\%$ of global EC, of which more than $60\%$ is directly attributed to radio access networks (RANs) [@Footprint_of_EcologicalAndEconomic_CM2011]. In this regard, 5G wireless communication networks are anticipated to improve spectral and energy efficiency by a factor of at least 10 and to extend the battery life of connected devices by 10 times [@5G_VisionAndRequirements].
Concept of H-CRANs
------------------
To meet the $1000\times$ data challenge, heterogeneous networks (HetNets), composed of a diverse set of small cells (e.g., microcells, picocells, and femtocells) overlaying the conventional macrocells, have been introduced as one of the most promising solutions [@5G_VisionAndRequirements]. However, the ubiquitous deployment of HetNets is accompanied by the following shackles:
- **Severe interference**. The spectrum re-use among cells incurs severe mutual interference, which may significantly reduce the expected system spectral efficiency (SE) and also decrease the network energy efficiency (EE).
- **Unsatisfactory EE**. The densely-deployed small cells lead to escalated EC and thus reduced EE, and also increase capital expenditures (CAPEX) and operational expenditures (OPEX).
- **No computing-enhanced coordination centers**. There are no centralized units with strong computing abilities to globally coordinate multi-tier interference and execute cross-RAN optimization, which dramatically limits cooperative gains among cells.
- **Inflexibility and unscalability**. Fragmented base stations (BSs) result in inflexible and unscalable network control and operations, thus leading to redundant network planning and inconvenient network upgrade.
To overcome these challenges faced by HetNets, cloud RANs (C-RANs), new centralized cellular architectures armed with powerful cloud computing and virtualization techniques, have been put forward in parallel to coordinate interference and manage resources across cells and RANs [@CRAN_WhitePaper]. In C-RANs, a large number of low-cost, low-power remote radio heads (RRHs), connected to the baseband unit (BBU) pool through fronthaul links, are randomly deployed to enhance the wireless capacity in hot spots. Consequently, the combination of HetNets and C-RANs, known as heterogeneous C-RANs (H-CRANs), becomes a potential solution to support both spectral- and energy-efficient transmission.
Green H-CRANs
-------------
As mentioned above, one of the main missions of H-CRANs from their birth has been to construct eco-friendly and cost-efficient wireless communication systems. Benefiting from H-CRANs’ global coordination ability, many promising techniques, such as joint processing/allocation, traffic load offloading, energy balancing, self-organization, and adaptive network deployment, can be applied in these scenarios for energy-efficient transmission. Unfortunately, the network EE usually improves at the cost of other performance metrics, such as SE, fairness, and delay, which are, however, just as important as EE for guaranteeing users’ quality-of-service (QoS). That is, there are EE-SE, EE-fairness, and EE-delay tradeoffs. It is thus interesting to investigate these performance tradeoffs in H-CRANs to establish rules that flexibly balance the network EE and users’ QoS demands when greening H-CRANs.
Compared with existing works (e.g. [@RecentAdvances_CRAN_Survey2016]) on the system architecture or radio resource management (RRM) mainly in terms of EE and SE, this article focuses on the green evolution of H-CRANs, and particularly investigates it from the perspective of EE-SE, EE-fairness, and EE-delay tradeoffs instead of the indexes themselves. To reach our targets, we organize the remainder of this article as follows. In Section \[Section:Architecture\_HCRANs\], we first simply review the architecture of H-CRANs and then exploit their features to propose three potential techniques for green H-CRANs. Section \[Section:TradeoffTheories\] introduces the possible methods to depict these tradeoffs and also provides corresponding challenges and open problems when applying these proposed techniques. We conclude the article in Section \[Section:Conlusions\].
Architecture of H-CRANs and Potential Green Techniques {#Section:Architecture_HCRANs}
======================================================
In C-RANs, the idea of dividing conventional cellular BSs into two parts, BBUs and RRHs, is introduced. BBUs are integrated into centralized BBU pools, where cloud computing and virtualization techniques are implemented to enhance computational ability and virtualize network functions. BBUs are responsible for resource control and signal processing, while RRHs handle radio signal transmission and reception; the two are interconnected via dedicated transport networks. Thus, the cloud-computing-enhanced centralized BBU pools facilitate cross-cell and cross-RAN information sharing, which paves the path for global resource optimization adapting to network conditions (e.g., channel conditions, interference strength, traffic loads, and so on). H-CRANs absorb this C-RAN architecture and retain the macro BSs (MBSs) and small-cell BSs (SBSs) of HetNets to support both global control and seamless communications.
Architecture of H-CRANs
-----------------------
=3.2in
As shown in Fig. \[Fig:H\_CRAN\], H-CRANs are composed of three functional modules.
1. **Real-time virtualization and cloud-enhanced BBU pool**. Equipped with powerful virtualization techniques and strong real-time cloud computing ability, BBU pools integrate independent BBUs scattered in cells.
2. **High-reliability transport networks**. RRHs are connected to BBUs in the BBU pool via high-bandwidth low-latency fronthaul links such as optical transport networks. The data and control interfaces between the BBU pool and MBSs are S1 and X2, respectively [@HCRAN_EE_Perspective_IWC2014].
3. **MBSs, SBSs, and RRHs**. In H-CRANs, multiple access points (APs), e.g., MBSs, SBSs, and RRHs, coexist. MBSs are deployed mainly for network control and mobility performance improvement, e.g., decreasing handover times to avoid Ping-Pong effects for high-mobility users. SBSs and RRHs are geographically distributed within cells close to users to increase capacity and decrease transmit power in the meantime.
In H-CRANs, the function separation between BBUs and RRHs, the decoupling between control and data planes, and the cloud-computing-enhanced centralized integration of BBUs facilitate efficient management of densely-deployed mobile networks. For example, the operators only need to install new RRHs and connect them to the BBU pool to expand network coverage and improve network capacity. Moreover, flexible software solutions can be easily implemented under this architecture. For instance, the operators can upgrade RANs and support multi-standard operations only through software update by deploying software defined radio (SDR).
Potential Techniques for Green H-CRANs
--------------------------------------
The four revolutionary changes, i.e., function separation, control-data decoupling, centralized architecture, and cloud-computing-enhanced processing, make H-CRANs significantly different from existing 2G, 3G, and 4G wireless networks. By exploiting these features, it is possible to construct H-CRANs flexible in network management, adaptive in network control, and scalable in network planning. As a result, energy-efficient operation of H-CRANs without a significant loss in other indexes such as SE, fairness, and delay can be achieved.
**1) Joint Resource Optimization across RRHs and RANs**
In H-CRANs, each BBU first collects its individual network conditions and then shares this information within the BBU pool. As a result, this distributed-collection centralized-control architecture, further enhanced by virtualization techniques and cloud computing, enables efficient transmission/reception cooperation across RRHs and convenient global control across RANs. Consequently, the existing cooperative techniques, such as coordinated multi-point (CoMP) transmission, enhanced inter-cell interference coordination (eICIC), and interference alignment (IA), can be readily implemented in H-CRANs. All these techniques are self-contained in theory but have rarely been applied to conventional cellular networks because of difficulties in sharing and handling global network information.
=3.5in
As introduced above, multiple RANs and multiple APs with different coverage and functions are deployed in H-CRANs. As a result, unlike traditional single-mode terminals communicating only through one RAN’s AP, multi-mode terminals can send and receive data concurrently through several of them. This endows H-CRANs with a new characteristic, network diversity, which can be exploited to design user association strategies. In this way, traffic load distributions among RANs and APs can be well balanced, which in turn affects the working states of RANs and resource optimization, and thus network interference and EE.
Moreover, under this new centralized architecture, the network EE can be further improved by incorporating more resource allocation dimensions (e.g., power allocation, subcarrier assignment, user association, and RRH operation) into the formulations. Fig. \[Fig:EE\_VS\_CircuitPower\] shows that joint optimization of RRH operation and power allocation improves EE by up to 84% compared with the power-allocation-only algorithm in downlink H-CRANs. Thus, through the aforementioned joint resource optimization and network-diversity-aware user association, significant improvement in EE and reduction in EC can be achieved.
**2) Large-scale MBS and SBS Deployment**
Compared to the transmit power, the overall static power consumption of MBSs and SBSs, composed of cooling and circuit power, is usually much larger [@GreenCommunications_ConceptReality_IWC2012]. For example, a typical UMTS base station consumes 800–1500W for an RF output power of 20–40W. As a result, under the constraints of basic coverage requirements, the deployment of MBSs and SBSs, characterized by the distance between two MBS sites and the number of SBSs per site, significantly affects the area power consumption (APC) and the area SE (ASE) in H-CRANs. The general purpose of large-scale MBS and SBS deployment is to macroscopically plan an appropriate number of BSs to support users’ demands while saving energy by avoiding unnecessary static power consumption.
Intuitively, the APC will sharply decrease if we reduce the number of MBSs, i.e., increase the inter-site distance. Meanwhile, the ASE will also decrease, because the increased inter-site distance reduces spectrum re-use. Similarly, the number of SBSs deployed at each site will also affect the APC and the ASE. As an example, Fig. \[Fig:APC\_ASE\_Versus\_ISD\] clearly shows the significant impact of the configuration of MBSs and SBSs on the APC and ASE under practical parameter settings. Therefore, careful network planning from a large-scale perspective is needed to flexibly balance these two metrics and to conveniently upgrade the system.
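The APC side of this intuition can be made concrete with a back-of-the-envelope sketch. The geometry (hexagonal MBS grid) and the static-power figures below are illustrative assumptions, not the parameter settings behind Fig. \[Fig:APC\_ASE\_Versus\_ISD\]:

```python
import math

def area_power_consumption(isd_m, n_sbs_per_site, p_mbs_w=1200.0, p_sbs_w=30.0):
    """APC in W/km^2 for a regular hexagonal MBS grid.

    Hypothetical static-power figures; one MBS site covers a hexagonal
    cell of area (sqrt(3)/2) * ISD^2 and hosts n_sbs_per_site SBSs.
    """
    area_km2 = (math.sqrt(3) / 2.0) * (isd_m / 1000.0) ** 2
    return (p_mbs_w + n_sbs_per_site * p_sbs_w) / area_km2

# Doubling the inter-site distance quarters the site density, so the APC
# drops sharply -- at the cost of ASE, since spectrum re-use also drops.
apc_dense = area_power_consumption(500.0, 4)
apc_sparse = area_power_consumption(1000.0, 4)
```

With static power dominating, the APC scales as $1/\text{ISD}^2$ for a fixed per-site configuration, which is why the deployment knobs (ISD and SBSs per site) have such leverage over the APC-ASE balance.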
=3.5in
**3) Load-Aware RRH Operations**
The so-called worst-case network planning philosophy has been widely adopted to guarantee users’ QoS even during peak traffic periods in conventional cellular networks. However, mobile traffic loads usually vary in both the spatial and temporal domains, which is referred to as the tidal phenomenon. Specifically, the fraction of time during a day when the traffic is below 10% of the peak is about 30% on weekdays and 45% on weekends [@DynamicTrafficLoads_CellularNetwork]. As a result, a large number of densely deployed RRHs in H-CRANs are extremely under-utilized during off-peak periods, yet RRHs still consume circuit power even with little or no activity. Consequently, a significant waste of energy and a sharp decrease in EE result if RRHs are under-utilized but still activated. Thus, apart from the aforementioned spatial deployment, energy conservation can also be achieved by exploiting temporal traffic variations. For a fixed deployment, we can adopt load-aware network control in H-CRANs to perform on/off operations of RRHs adapting to spatial and temporal traffic loads to improve EE.
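The simplest conceivable load-aware rule already illustrates the mechanism. The sketch below is a toy policy with hypothetical power and capacity figures, not the joint optimization algorithm evaluated later: it keeps just enough RRHs awake to carry the offered load and lets the rest sleep.

```python
import math

def active_rrhs(load, n_rrh, capacity_per_rrh, margin=1.2):
    """Load-aware on/off rule (illustrative, not the paper's algorithm):
    keep just enough RRHs awake to carry the offered load with a margin,
    but always at least one for coverage."""
    needed = math.ceil(margin * load / capacity_per_rrh)
    return min(max(needed, 1), n_rrh)

def network_power(load, n_rrh=20, capacity_per_rrh=10.0,
                  p_circuit_w=5.0, p_tx_w=2.0):
    """Total RRH power draw; sleeping RRHs are assumed to consume nothing."""
    n_on = active_rrhs(load, n_rrh, capacity_per_rrh)
    return n_on * (p_circuit_w + p_tx_w)
```

At 10% of peak load, most RRHs sleep and the circuit-power waste of worst-case planning is avoided; this is exactly the tidal-traffic headroom the quoted statistics promise.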
=3.5in
As an example, we consider a downlink H-CRAN to show the impacts of load-aware RRH on/off operations on energy expenditure. Specifically, we jointly optimize RRH operation and power allocation to maximize the network EE with stochastic and time-varying traffic arrivals taken into account. Two algorithms, denoted by the optimal and suboptimal, are developed to solve the problem. Fig. \[Fig:PowerConsumption\_VS\_CircuitPower\] shows that the proposed algorithms can dramatically reduce the energy consumption compared to the algorithm without RRH operation (i.e., only optimizing power allocation), denoted by ePower, especially in light and middle traffic states (up to a 58% gain in light traffic states when the traffic arrival rate $\boldsymbol \lambda = 1.5$ bits/slot/Hz).
Performance Tradeoffs and Challenges for Green H-CRANs {#Section:TradeoffTheories}
======================================================
Leveraging the proposed potential green techniques in H-CRANs, it is then of importance to explore the key theories that support ubiquitous energy-efficient transmission and meanwhile provision satisfactory QoS for users. Among them, performance tradeoffs deserve significant consideration [@Funda_Tradeoff_YeLi].
Apart from the widely studied deployment efficiency-EE, EE-SE, bandwidth-power, and delay-power tradeoffs [@Funda_Tradeoff_YeLi], there are two additional fundamental tradeoffs, EE-fairness and EE-delay tradeoffs. This section elaborates the ideas of modeling these two tradeoffs, analyzes challenges and open problems, and provides some possible solutions. Since H-CRANs originally are designed to enhance the network SE and thus the wireless capacity as well, we thus also review the key concepts and present challenges associated with the EE-SE tradeoff under this new architecture.
EE-SE Tradeoff
--------------
A vast body of existing research falls into this direction, for the following reasons. The traditional indexes EC and SE measure, respectively, how little energy is needed to satisfy users’ QoS and how efficiently a limited spectrum is utilized. However, both of them fail to quantify how efficiently the energy is consumed, i.e., EE. Moreover, the optima of EE and EC, and those of EE and SE, are not always achieved simultaneously and may even conflict with each other [@Funda_Tradeoff_YeLi]. As a consequence, existing results from EC minimization or SE maximization can hardly provide insights into EE-SE tradeoff problems.
The general idea of modeling the EE-SE tradeoff is that the system maximizes the network EE [@EE_OFDMA_YeLi] or a weighted EE-SE tradeoff index [@ResourceEfficiency_OFDMA_TWC2014] under the constraints of users’ QoS and resource allocation (e.g., power allocation and RRH operation). As a common feature, these works usually assume infinite backlog, i.e., there is always data for transmission in the buffer. Under this view, formulations are presented and algorithms are developed only based on the observation time, where the network EE is defined as the ratio of the instantaneous achievable sum rate $R_{\text{tot}}$ to the corresponding total power consumption $P_{\text{tot}}$ (cf. Eq. (5) or (6a) in [@EE_OFDMA_YeLi]). Note that $P_{\text{tot}}$ is usually modeled to include both transmit and circuit energy consumption, which is affected by the power amplifier inefficiency, transmit power, and circuit power. In the article, we call these formulations short-term (i.e., snapshot-based) models, since only short-term system performance is considered. Accordingly, we denote the network EE of this kind of definition by $\text{EE}_{\text{short-term}}$ for simplicity.
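As a concrete single-link, illustrative instance of $\text{EE}_{\text{short-term}}$ (all parameter values are assumptions, not taken from [@EE_OFDMA_YeLi]), the sketch below computes the rate-to-power ratio over a range of transmit powers:

```python
import math

def ee_short_term(p_tx_w, gain_over_noise, circuit_w=0.5, pa_eff=0.35, bw_hz=1.0):
    """Snapshot EE: instantaneous rate over total consumed power, with
    total power modelled as transmit power / PA efficiency + circuit
    power (illustrative single-link version of the ratio R_tot/P_tot)."""
    rate = bw_hz * math.log2(1.0 + p_tx_w * gain_over_noise)
    p_tot = p_tx_w / pa_eff + circuit_w
    return rate / p_tot

# For this toy link the EE first rises then falls with transmit power:
# the ratio is quasiconcave rather than concave, which is why
# fractional-programming tools (e.g. Dinkelbach-type methods) are
# typically used instead of plain convex solvers.
powers = [0.01 * k for k in range(1, 400)]
ees = [ee_short_term(p, 50.0) for p in powers]
best = max(ees)
```

The interior maximum of this ratio is the single-link face of the nonconvexity discussed next: once multiple dimensions (subcarriers, user association, RRH operation) enter the same ratio, the problem becomes far harder still.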
Although there have been a large number of works to address the EE-SE tradeoff based on the short-term models, lots of problems remain open in complex H-CRANs. First, jointly considering multi-dimensional resource optimization and multi-available signal processing techniques, it is challenging to formulate EE-SE tradeoff problems with network conditions and users’ requirements both taken into account in H-CRANs. Furthermore, due to the nonconvexity of $\text{EE}_{\text{short-term}}$ (cf. Eq. (5) or (6a) in [@EE_OFDMA_YeLi] or Eq. (26) in [@ResourceEfficiency_OFDMA_TWC2014]), EE-SE tradeoff problems are usually difficult to solve even if we only optimize power allocation in spectrum-sharing H-CRANs. As a result, these problems become much more complicated once we extend from one-dimensional to multi-dimensional resource optimization. Thus, how to develop joint resource allocation algorithms that reach the theoretical limits of the network EE and thus serve as benchmarks to evaluate performance of other heuristic algorithms is another challenge. Moreover, it is also necessary to develop cost-efficient and easy-to-implement algorithms with acceptable performance levels to solve these problems for practical applications.
EE-Fairness Tradeoff
--------------------
The widely studied network-EE-optimal problems (NEPs) in H-CRANs emphasize maximization of the network EE without considering EE fairness, i.e., ignoring the EE of individual links. By purely benefiting the links in good network conditions (e.g., excellent wireless channels, little interference, low traffic loads, or all of these), the NEPs improve the network EE at the cost of the EE of the links in poor conditions. As a result, the NEPs inevitably lead to severe unfairness among links in terms of EE. However, as with traditional concerns about individual links’ SE or EC, it is also important to guarantee the EE of each link from the users’ perception. It is therefore of interest to investigate the EE-fairness tradeoff in H-CRANs, but to the best of our knowledge, studies on this issue have so far been very scarce.
To intuitively show the EE-fairness tradeoff, we take max-min EE fairness in an uplink OFDMA-based cellular network (which can be seen as a special case of a single-cell H-CRAN) as an example. Specifically, we maximize the EE of the worst-case link subject to subcarrier assignment and power allocation constraints to ensure max-min EE fairness among links, which is referred to as the max-min EE-optimal problem (MEP). In Fig. \[Fig:EE\_Network\_Best\_Worst\_16\_128\], we compare the statistical performance of the NEP and the MEP from three aspects: the EE of the network, the best link, and the worst link. Observe that the EE of the best and worst links in the NEP differs significantly, while the EE of the network, the best link, and the worst link in the MEP is well-balanced. This is because the NEP maximizes the network EE at the cost of EE fairness among links, whereas the MEP sacrifices the network EE to guarantee max-min EE fairness.
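A two-link toy version of this NEP-versus-MEP comparison can be written in a few lines. The channel gains, circuit power, and grid search below are illustrative assumptions standing in for a real solver and for the full problem with subcarrier assignment:

```python
import math

def link_ee(p, g, circuit=0.5):
    """EE of a single link: rate over (transmit + circuit) power."""
    return math.log2(1.0 + p * g) / (p + circuit)

def search(objective, budget=1.0, steps=200):
    """Grid search over power splits between a strong link (g=100)
    and a weak link (g=2); a stand-in for a real optimizer."""
    best_val, best_split = None, None
    for k in range(1, steps):
        p1 = budget * k / steps
        e1 = link_ee(p1, 100.0)
        e2 = link_ee(budget - p1, 2.0)
        val = objective(e1, e2)
        if best_val is None or val > best_val:
            best_val, best_split = val, (e1, e2)
    return best_split

nep = search(lambda e1, e2: e1 + e2)  # network-EE-style objective
mep = search(min)                     # max-min EE fairness objective
```

By construction over the same feasible set, the NEP split attains the higher network (sum) EE while the MEP split attains the higher worst-link EE; with this crude two-link channel the gap is modest, but the ordering mirrors the tradeoff shown in Fig. \[Fig:EE\_Network\_Best\_Worst\_16\_128\].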
=3.5in
Fig. \[Fig:EE\_Network\_Best\_Worst\_16\_128\] exhibits the phenomenon of the EE-fairness tradeoff, but we are still at a very primary stage of revealing and tuning this tradeoff, limited by the following two challenges.
- Unified frameworks to quantify and formulate the EE-fairness tradeoff are currently not available.
- General techniques or analytical methods to tackle the EE-fairness tradeoff problems are still open.
It should be pointed out that the utility theory, originally used to investigate the rate-fairness tradeoff [@Fairness_Efficiency_Tradeoff_TON2013], is a possible method to demystify the quantitative EE-fairness tradeoff.
EE-Delay Tradeoff
-----------------
As far as we know, the concept of the EE-delay tradeoff was first proposed by H. V. Poor *et al.* in 2009 [@EE_Delay_Tradeoffs_Game_IT2009], where the authors showed that the delay constraints would lead to a loss in EE at equilibrium by a game-theoretical approach. However, to date, how to quantify and control the EE-delay tradeoff is still unresolved.
In our view, one possible reason that prevents the existing works including [@EE_Delay_Tradeoffs_Game_IT2009] from obtaining a quantitative tradeoff is the choice of adopting short-term models with the full buffer assumption, where $\text{EE}_{\text{short-term}}$ is used to characterize the network EE. However, different from the full buffer assumption, practical H-CRANs operate in the presence of time-varying wireless channels and stochastic traffic arrivals, both of which significantly affect the EE and delay and thus the EE-delay tradeoff. Hence, short-term formulations in general cannot reflect the delay due to their independence of time and without considering traffic arrivals. As a result, it is unlikely for such models to show the explicit EE-delay relationships.
We further illustrate the principles behind the EE-delay tradeoff with two extreme cases. Regarding stochastic traffic arrivals, in the case of aggressive emphasis on the EE, transmission decisions should be triggered only when network conditions are good enough, by which the delay performance degrades inevitably. Alternatively, to ensure a small delay, the network has to transmit data at the cost of energy expenditure even when network conditions are very poor, which undoubtedly decreases the EE. Thus, to model the EE-delay tradeoff, the following two issues need to be considered.
- How to decide whether to transmit data or defer a transmission in each slot in terms of the EE and delay and how to optimize resource allocation such as power allocation, subcarrier assignment, and RRH operation if transmission is chosen?
- How to ensure that deferring transmissions to anticipate more advantageous network conditions becoming available in the future would not result in an uncontrollable delay because of time-variant, stochastic, and unpredicted network conditions?
In what follows, we present a possible method to model and reveal the quantitative EE-delay tradeoff.
To formulate EE and delay in a framework, we first need to shift from previously short-term to long-term models. In long-term formulations, random traffic arrivals can be enfolded to obtain a dynamic arrival-departure queue for each user, given as ${Q_i}\left( {t + 1} \right) = \max [{Q_i}\left( t \right) - {R_i}\left( t \right),0] + {A_i}\left( t \right), \forall i$ [@EE_Delay_D2D_JSAC2013]. Here, ${A_i}\left( t \right)$ and ${Q_i}\left( t \right)$ denote the amount of newly arrived data and queue length of user $i$ at slot $t$, respectively. Note that the average delay can be characterized by queue length, as it is proportional to the queue length for a given traffic arrival rate from the Little’s Theorem.
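The queue recursion above is straightforward to simulate. The sketch below (all rates and the service policies are illustrative, not from any cited model) iterates it for a single user under random arrivals; by Little’s theorem the time-averaged queue length stands in for the average delay:

```python
import random

def simulate_queue(rate_policy, arrival_rate, slots=10000, seed=1):
    """Iterate Q(t+1) = max[Q(t) - R(t), 0] + A(t) for one user and
    return the time-averaged queue length, which is proportional to
    the average delay for a fixed arrival rate (Little's theorem)."""
    rng = random.Random(seed)
    q, total = 0.0, 0.0
    for _ in range(slots):
        a = rng.expovariate(1.0 / arrival_rate)  # stochastic arrivals
        r = rate_policy(q)                       # service decision R(t)
        q = max(q - r, 0.0) + a
        total += q
    return total / slots

# A lazier (more energy-conserving) service policy inflates the queue,
# i.e. the average delay -- the EE-delay tradeoff in miniature.
delay_eager = simulate_queue(lambda q: 2.0, arrival_rate=1.0)
delay_lazy = simulate_queue(lambda q: 1.2, arrival_rate=1.0)
```

Replacing the constant policies with a rule that serves only when channel conditions are favorable is precisely where the EE gain, and the delay cost, of long-term formulations comes from.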
Furthermore, it is also necessary to inject the concept of time into the EE definition $\text{EE}_{\text{short-term}}$ in order to bridge the EE and delay. One possible way to achieve this is to define the EE from a long-term average perspective, given by the ratio of the long-term aggregate data delivered to the corresponding long-term total power consumption (cf. Eq. (10) in [@EE_Delay_D2D_JSAC2013]). For simplicity, we denote this kind of the network EE definition by $\text{EE}_{\text{long-term}}$. From [@EE_OFDMA_YeLi] and [@EE_Delay_D2D_JSAC2013], we know that, $\text{EE}_{\text{long-term}}$ can also be seen as an extension of $\text{EE}_{\text{short-term}}$, because it degenerates to $\text{EE}_{\text{short-term}}$ if there are no time averages and expectations in $\text{EE}_{\text{long-term}}$. Then, by integrating the queue length control (i.e., delay control) and EE maximization into a framework, we can depict the EE and average delay simultaneously.
=3.5in
We utilize the above ideas to illustrate the EE-delay tradeoff in H-CRANs by formulating a stochastic optimization problem that maximizes the network EE $\text{EE}_{\text{long-term}}$ subject to a queue length control constraint, through joint optimization of RRH operation and power allocation. Two algorithms, referred to as the optimal and suboptimal, are developed to solve this problem. Fig. \[Fig:EE\_VS\_Delay\_ForAlpha\] intuitively shows the EE-delay tradeoff, where $V \ge 0$ and $\alpha \in [0,1]$ are two control parameters introduced in the model to adjust it. Specifically, from Fig. \[Fig:EE\_VS\_Delay\_ForAlpha\], for the same $V$, the smaller $\alpha$ is, the better the EE and the larger the average delay. In addition, for the same $\alpha$, the bigger $V$ is, the better the EE and the larger the average delay. These observations together exhibit the EE-delay tradeoff, which can be explicitly balanced by $V$ and $\alpha$; hence, the long-term model can be used to tune the tradeoff via adjusting them. More precisely, $\alpha$ confines the tradeoff range between the EE and average delay (a small $\alpha$ gives a large range and vice versa), while $V$ tunes the tradeoff point (a small $V$ yields a small delay but low EE and vice versa).
Although [@EE_Delay_Tradeoffs_Game_IT2009] found the EE-delay tradeoff and [@EE_Delay_D2D_JSAC2013] obtained an EE-delay tradeoff of $[O\left(1/V\right),O\left(V\right)]$, the optimal EE-delay tradeoff, i.e., the optimal order of the average delay in $V$ when the EE increases to the optimum by the law of $O\left(1/V\right)$, is still unknown. Moreover, [@EE_Delay_Tradeoffs_Game_IT2009; @EE_Delay_D2D_JSAC2013] focused on the average delay, so the results obtained therein are valid only for non-real-time traffic such as web browsing and file transfers. However, there are other, real-time applications in H-CRANs, e.g., voice and mobile video, which impose hard-deadline (or maximum delay) constraints. It is thus worthwhile to study how to provide deterministic delay guarantees while improving the EE. Moreover, in more realistic H-CRANs carrying both non-real-time and real-time traffic, it is also well worth investigating how to flexibly balance the EE-delay performance for each kind of traffic from a systematic design perspective, and to further devise control algorithms. Potential tools for settling these unresolved issues include stochastic optimization, dynamic programming, Markov decision processes, queueing theory, and stochastic analysis.
Conclusions {#Section:Conlusions}
===========
Driven jointly by the demands for capacity enhancement, EE improvement, and communication ubiquity, H-CRANs have emerged as a promising architecture for future wireless network design. In this article, we have first exploited the features of H-CRANs to propose three green techniques and then focused on three fundamental tradeoffs, namely the EE-SE, EE-fairness, and EE-delay tradeoffs. We have introduced methods to model and analyze these tradeoffs, presented open issues and challenges, and provided some potential solutions. However, these studies are still at an early stage, and further investigation into the high-dimensional, flexible, and scalable architecture of H-CRANs is needed for a green future.
Acknowledgement
===============
This work was supported in part by the National Science Foundation of China under Grant 61601192, 61601193, 61631015, 61471163, the U.S. NSF under Grant CNS-1320664, the Major State Basic Research Development Program of China (973 Program) under Grant 2013CB329006, the Major Program of National Natural Science Foundation of Hubei in China under Grant 2016CFA009, and the Fundamental Research Funds for the Central Universities under Grant 2016YXMS298.
Biographies {#biographies .unnumbered}
===========
[^1]: Y. Li, T. Jiang, and K. Luo are with Huazhong University of Science and Technology and S. Mao is with Auburn University.
---
abstract: 'Passive red galaxies frequently contain warm ionized gas and have spectra similar to low-ionization nuclear emission-line regions (LINERs). Here we investigate the nature of the ionizing sources powering this emission, by comparing nuclear spectroscopy from the Palomar survey with larger aperture data from the Sloan Digital Sky Survey. We find the line emission in the majority of passive red galaxies is spatially extended; the surface brightness profile depends on radius $r$ as $r^{-1.28}$. We detect strong line ratio gradients with radius in /, /, and /, requiring the ionization parameter to increase outwards. Combined with a realistic gas density profile, this outward increasing ionization parameter convincingly rules out AGN as the dominant ionizing source, and strongly favors distributed ionizing sources. Sources that follow the stellar density profile can additionally reproduce the observed luminosity-dependence of the line ratio gradient. Post-AGB stars provide a natural ionization source candidate, though they have an ionization parameter deficit. Velocity width differences among different emission lines disfavor shocks as the dominant ionization mechanism, and suggest that the interstellar medium in these galaxies contains multiple components. We conclude that the line emission in most LINER-like galaxies found in large aperture ($>100$ pc) spectroscopy is not primarily powered by AGN activity and thus does not trace the AGN bolometric luminosity. However, they can be used to trace warm gas in these red galaxies.'
author:
- 'Renbin Yan$^{1}$, Michael R. Blanton$^{1}$'
bibliography:
- 'apj-jour.bib'
- 'astro\_refs.bib'
title: 'The Nature of LINER-like Emission in Red Galaxies'
---
Introduction
============
Emission lines are important spectral features that can help us probe the gaseous component in galaxies. They are not unique to star-forming galaxies, but also exist in galaxies with only old stellar populations. Numerous studies [@Phillips86; @Goudfrooij94; @Yan06] have shown that line emission is present in more than 50% of passive red galaxies, with line ratios similar to those of the low ionization nuclear emission line regions (LINERs, @Heckman80). What powers this line emission has been an unsettled question for decades.
LINERs are identified by their particular pattern of line strength ratios, with strong low-ionization forbidden lines (e.g. , , , ) relative to recombination lines (e.g. , ) and high-ionization forbidden lines (e.g. ). Unlike galaxies dominated by star-forming HII regions or Seyferts, whose line ratio patterns clearly identify their ionizing sources as young massive stars or active galactic nuclei (AGN), respectively, LINERs can be produced by a wide array of ionization mechanisms, such as photo-ionization by an AGN [@FerlandN83; @HalpernS83; @GrovesDS04II], photoionization by post-AGB stars [@Binette94], fast radiative shocks [@DopitaS95], photoionization by the hot X-ray-emitting gas [@VoitD90; @DonahueV91], or thermal conduction from the hot gas [@SparksMG89]. Therefore, their exact ionization mechanism has been hotly debated.
The LINER puzzle is further complicated by the limited spatial resolution available in many samples, particularly in SDSS. Originally, “LINER” only referred to a class of galaxy nuclei. They were first identified in [*nuclear*]{} spectra of nearby galaxies [@Heckman80]. This is the case in most LINER studies of nearby galaxies [e.g. @HoFS97V]. For our discussion, we refer to these LINERs as “nuclear LINERs.” We should keep in mind that ground-based slit spectra, under typical seeing, usually cannot resolve better than the central few hundred parsecs even for nearby galaxies ($\lesssim40$ Mpc). This is the scale referred to by the word ’nuclear’. With narrow band imaging and/or long-slit spectroscopy surveys, [@Phillips86], [@Kim89], [@Buson93], [@Goudfrooij94], [@Macchetto96], [@Zeilinger96], and others found that the line emission in early-type galaxies is often extended to kpc scales and has LINER-like line ratios. We refer to these cases as “extended LINERs.” In surveys of much more distant galaxies, such as most galaxies in SDSS, or surveys at high redshifts, such as DEEP2 [@Davis03], zCOSMOS [@Lilly07], BOSS [@Eisenstein11], and distant cluster surveys (e.g. @Lubin09), the spectra obtained usually cover a much larger scale than the nuclei, and we are not always able to tell how the emission is distributed spatially. Still, a large number of galaxies show LINER-like spectra [@Yan06; @Lemaux10; @Bongiorno10; @Yan11]. The term “LINER” is often casually adopted in this case to refer to galaxies with LINER-like spectra. Here, we refer to these cases as “LINER-like galaxies.”
Although the name of LINER includes a morphological description — “nuclear”, there has been no quantitative definition of what line emission distribution would qualify as a nuclear LINER. All LINERs have been defined only spectroscopically based on their line ratio pattern. The distinction between nuclear LINERs and extended LINERs is very murky. Practically, it often refers to the scale over which the spectrum is taken — $\lesssim200~{\rm pc}$ for nuclear and $\gtrsim1~{\rm kpc}$ for extended, rather than a characteristic scale or morphological description of the line emission distribution. [@Masegosa11] tried to classify the different morphologies of nuclear line emission distribution based on narrow band HST images. In the large majority of LINERs (84%), they found an unresolved nuclear source surrounded by diffuse emission extending to a few hundreds of parsecs. However, it is unclear in those data whether the two components have different ionizing sources and which component dominates in total luminosity.
For a large fraction of nuclear LINERs, which are defined using spectra from the central few hundred parsecs, AGN activity clearly exists in the center, although some puzzles still remain. Evidence for AGN activity includes the detection of central hard X-ray point sources, compact radio cores, broad emission lines in direct or polarized light, and UV variability (see @Ho08 and references therein). On the other hand, there is growing evidence [@Ho01; @Eracleous10] that the weak AGN in most nuclear LINERs does not emit enough photoionizing photons to account for the observed intensity in optical emission lines. Based on narrowband images or slit spatial profiles observed from HST, the narrow line region in LINERs appears to be strongly concentrated in the center, with typical dimensions smaller than tens of parsecs [@Walsh08]. However, in some objects the profile can extend to a few hundred parsecs [@Shields07]. Thus, even for nuclear LINERs, whether the AGN is responsible for all of the narrow line emission within the central few hundred parsecs is unclear.
For the extended LINERs, it is totally unclear how they are related to nuclear LINERs and what mechanism powers their line emission. For LINER-like galaxies, we know even less. Are they very powerful nuclear LINERs or are they dominated by extended LINER emission? The nuclear-LINER fraction among early-type galaxies is fairly similar to the LINER-like galaxy fraction among red galaxies in SDSS [@HoFS97V; @Yan06]. Is this similarity fortuitous, or is there a physical connection? We try to answer these questions in this paper.
Solving these puzzles is important. Many people have used the line strength in LINER-like galaxies from SDSS as an indicator of AGN power [@KauffmannHT03; @Constantin06; @Kewley06; @Schawinski07; @KauffmannH09], while others argued they are not genuine AGNs [@Stasinska08; @Sarzi10; @CidFernandes11; @Capetti11] but more likely powered by hot evolved stars. Therefore, settling this puzzle is not only important for AGN demographics, but also for understanding the ISM in early-type galaxies. If the line strength is not an AGN indicator, it might instead reflect the amount of cool gas supply, or the cooling rate of the hot gas. To make the correct physical connections, we have to first find out what powers the line emission. Additionally, if we can pin down the ionization mechanism, we will be able to measure the gas-phase metallicity in these red galaxies, which has so far been impossible.
For the extended LINER emission, three types of evidence have been presented to argue that it is not powered by AGN. (a) Post-AGB stars could produce enough photo-ionizing photons [@dSA90; @Binette94; @Stasinska08]. (b) The line emission regions are spatially extended with a surface brightness profile that is shallower than $r^{-2}$ [@Sarzi10]. (c) The line luminosity correlates with stellar luminosity [@Sarzi10; @Capetti11]. This evidence led to the reasonable suspicion that a population of hot evolved stars might in fact dominate over the AGN photoionization in early-type galaxies. However, all of these arguments depend on inherent assumptions about observationally unknown factors.
For the first argument, [@dSA90], [@Binette94], and [@Stasinska08] all assumed that nearly all the photons produced by post-AGB stars are absorbed by the gas and that the gas is distributed near the ionizing stars to produce the correct ionization parameter. Both the gas covering factor and the relative distribution of gas to stars are unknown.
For the second argument, [@Sarzi06; @Sarzi10] showed that the line emission in nearly all line-emitting early-type galaxies is spatially extended, and has a surface brightness profile shallower than $r^{-2}$. Although [@Sarzi10] is very careful in not drawing conclusions based on this fact alone, such extended emission has been quoted by many others [e.g. @Kaviraj10; @Masters10; @Schawinski10] as evidence for stellar photoionization. However, a central point source can also produce extended line emission regions. The line emission surface brightness profile not only depends on the ionizing flux profile, but also depends on how the gas filling factor, spatial distribution of gas clouds, and the gas density vary with radius. Both the filling factor and the cloud spatial distribution are so poorly known that the surface brightness profile provides no practical constraint on the flux profile.
The third argument is based on the observed correlation in surface brightness between line emission and stellar continuum. [@Sarzi10] showed with IFU data that the equivalent width (EW) is fairly constant throughout each galaxy when excluding the nuclear region; [@Capetti11] argued a similar point based on integrated line emission from SDSS. This might seem like the strongest support for stellar photoionization. However, as mentioned above, the line flux depends on many other unconstrained factors besides the ionizing flux, such as the gas filling factor. [@Sarzi10] performed the calculation for a simple model of stellar photoionization and found that it did not produce the spatially-invariant EW they found throughout each galaxy in their sample. To make the model consistent with observations, certain fine tuning of the gas filling factor, density, and/or mean-free path of the ionizing photons is required, which has no independent observational support.
In fact, as long as both the line flux and stellar continuum are strong functions of radius in these galaxies, one would always find a tight correlation between the two, even in the case of photoionization by an AGN.
Thus, we need a simpler test that can distinguish different ionization mechanisms that is free of assumptions about unknown parameters. A central point source (e.g. AGN) and a system of spatially distributed ionizing sources will produce different ionizing flux profiles as a function of radius. Thus, for the same gas density profile, they would yield different ionization parameter profiles, leading to different spatial gradients in line ratios. Thus, looking at the line ratio gradients may provide a clue to differentiate the two scenarios. The only assumption involved here is the gas density profile, which can be derived from the hot gas density profile assuming pressure equilibrium or can be measured directly from line ratios.
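The argument can be made concrete with a back-of-the-envelope sketch. Assuming, purely for illustration, a gas density falling as $r^{-1.3}$, a stellar density falling as $r^{-2}$ (so the enclosed ionizing output of distributed sources grows linearly with $r$), and a total ionizing rate of $10^{52}$ photons s$^{-1}$, the ionization parameter $U = {\rm flux}/(n_{\rm H} c)$ behaves oppositely in the two scenarios:

```python
import math

def ionization_parameter(r, n_gas, q_flux):
    """U(r) = ionizing photon flux / (n_H * c); r in pc."""
    c = 2.998e10  # speed of light, cm/s
    return q_flux(r) / (n_gas(r) * c)

# Assumed profiles (illustrative, not fits to the paper's data):
n_gas = lambda r: 10.0 * (r / 100.0) ** -1.3          # cm^-3, r in pc

Q_total = 1e52                                        # photons/s (assumed)
pc = 3.086e18                                         # cm per parsec

def flux_point(r):          # central point source: all photons from r=0
    return Q_total / (4 * math.pi * (r * pc) ** 2)

def flux_distributed(r):    # sources tracing rho_* ~ r^-2 inside 10 kpc
    q_enclosed = Q_total * min(r / 1e4, 1.0)          # Q(<r) ~ r
    return q_enclosed / (4 * math.pi * (r * pc) ** 2)

for r in [100.0, 1000.0]:
    u_pt = ionization_parameter(r, n_gas, flux_point)
    u_ds = ionization_parameter(r, n_gas, flux_distributed)
    print(f"r={r:6.0f} pc  U_point={u_pt:.2e}  U_distributed={u_ds:.2e}")
```

With these assumed profiles, $U \propto r^{-0.7}$ for the point source but $U \propto r^{+0.3}$ for distributed sources, so an outward-increasing ionization parameter points away from a central AGN.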
Ideally, the line ratio gradient is best measured from integral field spectroscopy (IFS) data. However, current IFS data (SAURON, @Bacon01; ATLAS3D, @Cappellari11) have very limited wavelength coverage and do not probe enough strong emission lines to detect the ionization gradient in a large number of galaxies. The emission lines covered by SAURON are , , and in a few cases . Because / depends on the ionization parameter and to some extent also on the hardness of the ionizing spectra, it alone does not provide an unambiguous constraint on the ionization gradient. In addition, the weakness of makes it more difficult to detect small changes.
Our solution is to use the nuclear spectra from the Palomar survey [@HoFS95], and the fiber spectra from SDSS. By identifying the same population of galaxies at different redshifts, for which the fixed angular aperture corresponds to different physical radii, we can study statistically the spatial profile of emission line surface brightness and the line ratio gradient. The emission line surface brightness profile can also inform us about the relationship between nuclear LINERs and extended LINERs.
In addition, we will examine the widths of emission lines. Different forbidden lines will have different widths if there is line ratio variation within a galaxy and the variation is correlated with gas kinematics. On the other hand, shock ionization models also should produce width differences among multiple lines due to the dependence of line ratios on shock velocity.
Our investigation includes all line-emitting red galaxies except for those containing star formation; we do not specifically select for LINER-like galaxies. Based on line ratio diagnostics, these line-emitting red galaxies do have fairly uniform line ratios with the most typical case belonging to the LINER category, as shown by the line ratio diagnostic diagram in Figure \[fig:bpt\]. Avoiding the use of line ratios in sample selection is essential for our study of the line ratio gradient.
![Line ratio diagnostic diagram for SDSS galaxies (gray scale) at $0.09<z<0.1$ with all four emission lines detected at more than 3$\sigma$ significance, and the passive red galaxies (black points) among them. The latter is selected according to the criteria described in Section \[sec:sampleselect\]. The curves represent demarcations defined by [@KewleyDS01] (solid) and [@KauffmannHT03] (dashed). This illustrates that red galaxies have fairly uniform line ratios which puts most of them in the LINER-like galaxy category. []{data-label="fig:bpt"}](bpt_sloanpalomar.ps)
This paper is organized as follows. In Section 2, we describe the data, measurements, and sample selection. In Section 3, we investigate the relationship between nuclear LINERs and extended LINERs and derive an average emission line surface brightness profile. In Section 4, we show the line ratio gradient. In Section 5, we present the line width differences. In Section 6, we discuss the viability of each ionization mechanism in explaining the line ratio gradient and line width differences. We conclude in Section 7.
Throughout this paper, we use a flat $\Lambda$CDM cosmology with $\Omega_m=0.3$, and a Hubble constant of $H_0=75 h_{75}{\rm km s^{-1} Mpc^{-1}}$ with $h_{75}=1$. This Hubble constant is chosen to make the comparison with the data presented by [@HoFS97III] easier. All magnitudes used are in the AB system.
Data and Measurements
=====================
To investigate the luminosity and line ratio gradient, we compare line luminosity and line ratio measurements between different physical apertures for the same population of galaxies. The Palomar survey [@HoFS95] provides us the best sample for nuclear aperture measurements. The SDSS survey can provide consecutively larger aperture measurements if we select identical samples at consecutively higher redshifts.
Data
----
In the Palomar survey, nuclear spectra were taken for a sample of $\sim500$ local galaxies, selected from the Revised Shapley-Ames Catalog of Bright Galaxies (RSA; @SandageT81) and the Second Reference Catalogue of Bright Galaxies (RC2; @RC2) with the criteria of $B_T<12.5$ (Vega magnitudes) and $\delta >0$. The nuclear regions ($\sim200$ pc) of these galaxies are isolated using a $2\arcsec \times4\arcsec$ aperture. The details of data reduction, stellar continuum subtraction, and line measurements can be found in [@HoFS95].
We also employ data from the Sloan Digital Sky Survey [@York00; @Stoughton02] Data Release Seven [@SDSSDR7]. Using two fiber-fed spectrographs on a dedicated 2.5-m telescope, SDSS has obtained high quality spectra for roughly half a million galaxies with $r<17.77$ in the wavelength range of 3800-9200Å with a resolution of $R\sim2000$. The SDSS fibers have a fixed aperture of 3$''$ diameter, which corresponds to different physical scales at different distances.
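As a rough guide to the scales probed, the sketch below converts the 3$''$ fiber diameter into a physical diameter at a given redshift using the paper's cosmology ($\Omega_m=0.3$, $H_0=75~{\rm km~s^{-1}~Mpc^{-1}}$); the integral is a simple midpoint sum, not a library routine:

```python
import math

H0, Om, c = 75.0, 0.3, 2.998e5          # km/s/Mpc, --, km/s (paper's cosmology)

def angular_diameter_distance(z, steps=1000):
    """D_A in Mpc for a flat LCDM cosmology (simple midpoint integral)."""
    E = lambda zp: math.sqrt(Om * (1 + zp) ** 3 + (1 - Om))
    dz = z / steps
    d_c = sum(dz / E((i + 0.5) * dz) for i in range(steps)) * c / H0
    return d_c / (1 + z)

def fiber_aperture_pc(z, diameter_arcsec=3.0):
    """Physical diameter (pc) spanned by the SDSS 3-arcsec fiber at z."""
    theta = diameter_arcsec / 3600.0 * math.pi / 180.0   # radians
    return theta * angular_diameter_distance(z) * 1e6    # Mpc -> pc

for z in (0.01, 0.05, 0.10):
    print(f"z={z:.2f}: the 3-arcsec fiber covers ~{fiber_aperture_pc(z):.0f} pc")
```

At $z=0.01$ the fiber spans roughly 0.6 kpc; at $z=0.1$, about 5 kpc. This is why identical samples selected at successively higher redshifts trace successively larger physical apertures.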
The SDSS spectroscopic data used here have been reduced through the Princeton spectroscopic reduction pipeline, which produces the flux- and wavelength-calibrated spectra.[^1] The redshift catalog of galaxies used is from the NYU Value Added Galaxy Catalog (DR7) [^2] [@BlantonSS05]. K-corrections for SDSS were derived using [@BlantonR07]’s [*kcorrect*]{} package (v4\_2).
Emission line measurements
--------------------------
For emission line measurements in the Palomar sample, we adopt the tabulated values provided by [@HoFS97III].
For the SDSS sample, we measured the emission lines in the spectra after a careful subtraction of the stellar continua. The code used is an updated version of the code used by [@Yan06]. The major updates are:
1. We apply an additional flux calibration to all of SDSS spectra to fix small scale calibration residuals [@Yan11flux]. This is critically important for our results. We describe this correction in more detail below.
2. The absolute flux is calibrated for each spectrum by matching the synthetic $r$-band magnitude with the $r$-band fiber magnitude. The spectra are also corrected for Galactic extinction.
3. The stellar continuum is modelled as a non-negative linear combination of 7 templates. The templates are seven simple stellar population models with solar metallicity, with ages of 0.125, 0.25, 0.5, 1, 2, 7, and 13 Gyrs, made using the [@BC03] stellar population models.
As in [@Yan06], the emission line flux is measured from direct flux-summing in the continuum-subtracted spectra. The line windows and sidebands are unchanged.
We discovered that the flux calibration produced by the standard SDSS pipeline has percent-level small-scale residuals that can significantly impact the measurement of emission line flux when the equivalent width of the line is low (a few Angstroms; see @Yan11flux Fig. 1 for an example of the impact). For example, for an emission line EW of 1Å measured in a 20Å window, if the throughput differs by 1% between the central window and the sidebands where the continuum level is measured, the line flux measurement will be off by 20%. This will introduce systematic offsets in line flux and line ratios as a function of redshift, significantly hampering our investigation. Therefore, we need a much more accurate small-scale flux calibration than what the standard pipeline produces.
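The arithmetic of this example can be checked directly; the helper below simply divides the mis-subtracted continuum by the line flux (continuum level normalized to unity):

```python
def flux_error_fraction(ew, window, throughput_offset):
    """Fractional error on a flux-summed emission line when the continuum
    level in the line window is off by `throughput_offset` (fractional)
    relative to the sidebands.  `ew` and `window` are in Angstroms."""
    spurious = throughput_offset * window   # mis-subtracted continuum, in A
    return spurious / ew

# The example from the text: a 1 A EW line measured in a 20 A window
# with a 1% throughput residual is mismeasured by 20%.
print(flux_error_fraction(ew=1.0, window=20.0, throughput_offset=0.01))  # 0.2
```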
[@Yan11flux] solved this problem by comparing stacked red-sequence spectra between small redshift intervals to statistically determine the relative throughput as a function of wavelength, and achieved a flux calibration accuracy of 0.1% on scales of a few hundred Angstroms. We applied this small-scale flux calibration to the spectra. This calibration is essential for the result presented in this paper (see Fig. 6 in @Yan11flux).
In this paper, we also make use of the line width measurements for the SDSS sample. The line widths are measured by fitting Gaussians to each emission line. Different emission lines are allowed to have different widths, except that the two lines (6716Å and 6731Å) and the two lines (6548Å and 6584Å) are both forced to have the same width. The instrumental resolution of SDSS varies with wavelength, the position of the fiber on the focal plane, and the spectrograph. This varying resolution as a function of wavelength is given for each individual spectrum by the Princeton pipeline. We subtracted quadratically the instrumental broadening from the measured line width to derive the intrinsic width of each emission line for each galaxy. Our quoted uncertainty of the line width measurement is the formal uncertainty of the Gaussian fit.
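The quadratic correction for instrumental broadening amounts to the following one-liner; the example widths below are invented purely for illustration:

```python
import math

def intrinsic_width(sigma_obs, sigma_inst):
    """Remove instrumental broadening in quadrature.

    Returns None when the observed width does not exceed the
    instrumental width, i.e., the line is unresolved."""
    if sigma_obs <= sigma_inst:
        return None
    return math.sqrt(sigma_obs ** 2 - sigma_inst ** 2)

# e.g. an observed width of 180 km/s with a 70 km/s resolution element
print(intrinsic_width(180.0, 70.0))   # ~165.8 km/s
```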
Photometry
----------
To identify the same population of galaxies at different redshifts, we use a photometric selection. We intentionally avoid the use of line ratios in sample selection to avoid bias on the line ratio gradient estimates.
For the Palomar survey, we took the catalog provided by [@HoFS97III]. Photometric information is available from the Third Reference Catalogue of Bright Galaxies (RC3;@RC3) but is incomplete. To increase the sample size with available photometry and to put them on the same system as the SDSS galaxies, we re-measured photometry for those galaxies inside the SDSS footprint using the SDSS images. We employed an improved background subtraction technique [@Blanton11] to treat these nearby large galaxies properly. After proper background subtraction, mosaicking, and deblending, we measure the photometry by fitting a two-dimensional Sersic profile to the deblended galaxy image. In the end, we derive the Galactic extinction corrected restframe $B$ and $V$ magnitudes for these galaxies from the measured $g$ and $r$ magnitudes using the [*kcorrect*]{} software package (v4\_2, @BlantonR07).
For those galaxies outside the SDSS footprint and for certain Messier objects for which the new method does not yield satisfactory results, we take the photometric information from the RC3 catalog and convert them to the AB system, and then correct for Galactic extinction. We do not attempt to correct for internal extinction for these galaxies as such measurements are not available for the SDSS sample.
For the higher redshift SDSS spectroscopic sample we derived the $B$ and $V$ magnitudes from the SDSS magnitudes using the [*kcorrect*]{} package mentioned above.
Sample definition {#sec:sampleselect}
-----------------
Figure \[fig:mb\_bv\_sloanpalomar\] shows the color-magnitude distribution for the Palomar sample overlaid on a sample of SDSS galaxies between $0.09<z<0.1$. The two samples have consistent color-magnitude distributions. We select only the red-sequence galaxies in both samples using two stringent color cuts defined by $$\begin{aligned}
(B-V) &> -0.016(M_V-5\log h_{75})+0.415 \\
(B-V) &< -0.016(M_V-5\log h_{75})+0.475\end{aligned}$$.
![Color-magnitude distribution of the Palomar sample (dark crosses) and galaxies in SDSS with $0.09<z<0.1$ (contour and gray points). The lines indicate our color and magnitude cuts.[]{data-label="fig:mb_bv_sloanpalomar"}](mv_bv_sloanpalomar.ps)
These cuts are chosen to reduce contamination from dusty star-forming galaxies. We limit both samples to galaxies brighter than $M_V-5\log h_{75}=-20.4$ to match the magnitude limit of the SDSS survey at $z\sim0.10$.
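For concreteness, the two color cuts and the magnitude limit can be packaged as a single selection function (a sketch; the actual selection is of course applied to the Palomar and SDSS photometry described above):

```python
def in_red_sequence(M_V, B_V):
    """Apply the paper's red-sequence cuts.

    M_V is the absolute magnitude including the -5 log h75 term;
    the two color boundaries and the M_V < -20.4 limit follow the text."""
    lower = -0.016 * M_V + 0.415
    upper = -0.016 * M_V + 0.475
    return M_V < -20.4 and lower < B_V < upper

# e.g. a bright galaxy on the red sequence passes, a blue one does not
print(in_red_sequence(-21.0, 0.78))   # True
print(in_red_sequence(-21.0, 0.70))   # False (too blue)
```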
There are 86 red galaxies in the Palomar survey satisfying these two cuts. Based on the morphological type given by the RC3 catalog, there are 30 ellipticals, 30 lenticulars, 15 early spirals (S0/a, Sa, Sab), 9 late spirals (Sb, Sbc, Sc), and 2 irregular galaxies. With the classification scheme given in [@HoFS97III], there are 29 LINER nuclei, 19 transition objects, 11 Seyferts, and 3 HII regions. The remaining 24 objects have no line emission detectable in their nuclei.
In SDSS, we select a comparison sample with $0.01<z<0.1$ using the same color and absolute magnitude cuts. In most of our analysis, we bin the SDSS sample into 9 redshift bins with $\Delta z=0.01$.
Despite our selection on color, red galaxies can have sufficient star formation to contribute to the line emission in our apertures, especially for the more distant galaxies. The morphological distribution of the Palomar red galaxy sample suggests that this is occurring, given the presence of late-type spirals. We would like to exclude star-forming spectra in our analysis, since we want to understand the origin of line emission not associated with star formation. Therefore, in the Palomar sample, we exclude galaxies with Hubble types later than S0 and those spectroscopically classified as HII nuclei. This removes 31% of the Palomar red sample. The remaining sample includes 19 LINERs, 13 transition objects, 6 Seyferts, and 21 quiescent galaxies.
To achieve a similar selection in the SDSS sample, we exclude galaxies with any star formation using a stringent cut on $D_n(4000)$ (@Balogh99). This quantity is a proxy for the light weighted mean stellar age, and is thus sensitive to small levels of star formation. Measured over two 100Å windows separated by 50Å, it is less affected by dust reddening than rest frame colors, and more robustly measured than the H$\delta_A$ equivalent width. Figure \[fig:n2ha\_d4000\_z0.1\] shows the $D_n(4000)$ vs. $\log {\mbox{[\ion{N}{2}]}}/{\mbox{H$\alpha$}}$ for those galaxies in the SDSS sample with $0.09<z<0.1$. Galaxies that have small $D_n(4000)$ also tend to have lower /, suggesting that star formation could be contributing significantly among these. In the SDSS sample, we remove the 30% galaxies with the lowest $D_n(4000)$. This selection by $D_n(4000)$ rank is done separately in each redshift bin to take into account any potential aperture effects and redshift evolution. We choose a cut on $D_n(4000)$ rather than one based on line ratios to avoid biasing the comparison of emission line properties. In the rest of this paper, we refer to samples with possible star-forming galaxies removed as the Palomar red sample and the SDSS red sample.
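The rank-based cut can be sketched as follows; the field names and bin edges are hypothetical, but the logic — sorting by $D_n(4000)$ and discarding the lowest 30% separately within each redshift bin — follows the text:

```python
def passive_subsample(galaxies, zbins, cut_fraction=0.30):
    """Drop the `cut_fraction` of galaxies with the lowest Dn(4000)
    within each redshift bin.

    `galaxies` is a list of dicts with 'z' and 'dn4000' keys
    (hypothetical field names); `zbins` is a list of (zlo, zhi) tuples."""
    kept = []
    for zlo, zhi in zbins:
        in_bin = [g for g in galaxies if zlo <= g['z'] < zhi]
        in_bin.sort(key=lambda g: g['dn4000'])
        ncut = int(round(cut_fraction * len(in_bin)))
        kept.extend(in_bin[ncut:])   # keep the older, more passive 70%
    return kept
```

Applying the cut per redshift bin, rather than globally, keeps the selection insensitive to aperture effects and any redshift evolution in $D_n(4000)$, as the text notes.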
![$D_n(4000)$ vs. $\log {\mbox{[\ion{N}{2}]}}/{\mbox{H$\alpha$}}$ for red sequence galaxies in SDSS with $0.09<z<0.1$ and $M_V < -20.4$. We show only the brightest 50% of the sample in luminosity. We show our chosen threshold as the solid horizontal line; it is set at the 30-th percentile in the $D_n(4000)$ distribution of the whole sample. Those galaxies with low $D_n(4000)$ and low ${\mbox{[\ion{N}{2}]}}/{\mbox{H$\alpha$}}$ probably have significant contribution by young massive stars in the production of their line emission. []{data-label="fig:n2ha_d4000_z0.1"}](n2ha_d4000_z0.1.ps)
To summarize, from the Palomar survey and SDSS, we identified a volume-limited sample of passive red galaxies without any star formation at $0<z<0.1$.
Spatial distribution of line emission
=====================================
In this section, we investigate the spatial distribution of line emission. The spatial distribution alone does not distinguish between different ionization mechanisms, but it is essential for the interpretation of other measurements, such as line ratio gradients.
We do this in a two-step process. First, we compare the nuclear aperture measurements from the Palomar survey with the large aperture measurements from SDSS at $z\sim0.1$ to establish the relation between nuclear LINERs in Palomar and the LINER-like galaxies in SDSS. Then, we utilize all apertures available to us from $0<z<0.1$ to measure the average emission line surface brightness profile among passive red galaxies.
Nuclear Emission vs. Extended Emission
--------------------------------------
In this section, we will compare the emission line luminosity distributions between two identically-selected, volume-limited samples for which the line luminosities are measured from different physical apertures. The difference in their luminosity distributions demonstrates that the emission measured in the larger aperture has to be spatially extended.
In the left panel of Figure \[fig:n2ha\_hal\_sloanpalomar\], we compare $L({\mbox{H$\alpha$}})$ and / between the Palomar red sample and the SDSS red sample at $z\sim0.1$. Not all galaxies in either sample have detected (64.4% of the Palomar red sample and 52.0% in the SDSS red sample at $z\sim0.1$ have detections). Therefore, we only compare the brightest half of each volume-limited sample in luminosity.
The SDSS red sample shows much brighter luminosities and slightly lower / ratios than the Palomar sample. Since both samples are volume-limited to the same magnitude cut and selected identically, and since any evolution effect over a redshift difference of 0.1 should be tiny, this difference in luminosities must be caused mostly by the difference in physical aperture size between the Palomar survey and SDSS. The Palomar sample reflects the nuclear properties of red galaxies while the SDSS sample reflects their integrated properties on much larger scales.
The luminosity of the brightest 25th percentile for the $z\sim0.1$ SDSS sample (including non-detections) is $5.82\times10^{39} {\rm erg~s^{-1}~h_{75}^{-2}}$, nearly 7 times larger than that in the Palomar sample ($0.85\times10^{39} {\rm erg~s^{-1}~h_{75}^{-2}}$). In fact, even the median emitter in the SDSS red sample is brighter than the majority of nuclear emitters in the Palomar red sample. Nuclear emission in red galaxies is therefore only rarely as luminous as found in the SDSS galaxies. Furthermore, we expect no strong evolution in AGN activity between $z\sim0.1$ and $z\sim0$. Thus, in the SDSS galaxies, a substantial contribution to the emission must come from outside the nucleus, and therefore the emission observed by SDSS in these $z\sim0.1$ passive red galaxies has to be [*spatially extended*]{}.
One might wonder if the spatially extended emission found in large aperture measurements is due to low-level star formation in these galaxies. We can simulate the expected / ratio and $L({\mbox{H$\alpha$}})$ by adding a typical star-forming emission-line spectrum to a typical nuclear spectrum in the Palomar sample. We use the median / and median luminosity in the brighter half (in ) of the Palomar red sample to represent a typical nucleus. For star formation, we adopt an / ratio of 0.45, typical of a high metallicity star-forming galaxy, which yields a conservatively high / ratio. The result is shown by the curve in the left panel of Fig. \[fig:n2ha\_hal\_sloanpalomar\]. From the bottom end of the curve to the top, the luminosity contributed by the star formation goes from 0 to 100 times that of the typical nucleus. The curve misses the majority of the red galaxies at $z\sim0.1$. Clearly, the spatially extended line emission in red galaxies we selected at $z\sim0.1$ is not powered by star formation.
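The mixing curve is a simple luminosity-weighted average of the two components. The sketch below reproduces its logic, with a nuclear [N II]/H$\alpha$ ratio of 1.2 assumed purely for illustration (the text uses the Palomar median):

```python
def mixing_track(l_nuc, r_nuc, r_sf=0.45, factors=(0, 0.5, 1, 2, 5, 10, 100)):
    """H-alpha luminosity and [N II]/H-alpha of a nucleus plus star formation.

    r_sf = 0.45 is the high-metallicity star-forming ratio adopted in the
    text; `factors` scales the star-forming L(H-alpha) in units of the
    nuclear L(H-alpha).  Returns a list of (L_total, combined ratio)."""
    track = []
    for f in factors:
        l_tot = l_nuc * (1 + f)
        ratio = (r_nuc * l_nuc + r_sf * f * l_nuc) / l_tot
        track.append((l_tot, ratio))
    return track
```

As star formation is added, the combined ratio slides monotonically from the nuclear value toward 0.45, so points well below the track at high luminosity cannot be a nucleus diluted by star formation.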
As described above, in constructing this passively-evolving red galaxy sample, we removed potential star-forming contaminants by discarding the 30% of galaxies with the lowest $D_n(4000)$. The right panel of Figure \[fig:n2ha\_hal\_sloanpalomar\] shows the effect of this procedure, where we plot the low-$D_n(4000)$ galaxies plus those Palomar red galaxies with Hubble types later than S0 for comparison. In the SDSS, the red galaxies we removed generally have higher luminosities and lower / than those we kept. For the right panel, the curve indicates the track traced by adding star formation to a typical passive red galaxy in the SDSS sample from the left panel. From the bottom end of the curve to the top, the luminosity contributed by star formation goes from 0 to 10 times the median luminosity of passive red galaxies at this redshift. The curve traces the distribution fairly well, suggesting that red galaxies with low $D_n(4000)$ have more ongoing, low-level star formation than red galaxies with high $D_n(4000)$.
The right panel of Fig. \[fig:n2ha\_hal\_sloanpalomar\] also demonstrates that the Palomar red galaxies we removed from the sample have line luminosities and line ratios fairly similar to those of the Palomar red galaxies we kept (except for the three HII nuclei, which have the lowest / ratios). This result reflects the fact that the Palomar spectra have smaller physical apertures. Therefore, although the criteria used to remove star-forming contaminants differ slightly between the Palomar and SDSS samples, the Palomar results are insensitive to these differences.
We have now shown that the line-emitting regions in passive red galaxies in the SDSS have to be spatially extended, simply because few nuclear regions in red galaxies are luminous enough to explain the SDSS results. The next question is which galaxies host these extended emission regions at $z\sim0$. Are they the same galaxies that host the nuclear emission regions? The answer must be “yes.” Suppose they were not the same galaxies: then the galaxies hosting these extended emission regions would need to have undetectable line emission in their centers. No such galaxies are found by the SAURON survey. As shown by [@Sarzi06], in a representative sample of 48 early-type galaxies in the nearby universe, all galaxies with detectable emission lines have line flux peaking at the center, and the distribution is nearly always extended. Therefore, we conclude that most, if not all, red galaxies that have nuclear emission-line regions also have extended line-emitting regions, and vice versa. The host galaxies of nuclear line-emitting regions and those of extended line-emitting regions are largely the same population.
Emission line surface brightness profile {#sec:surfacebrightness}
----------------------------------------
In this section, we compare the emission-line luminosity distributions of passive red galaxies at a series of redshifts, which translates into a series of apertures, to investigate the average surface brightness profile of their line emission. We bin the SDSS sample with $0.01<z<0.1$ into 9 redshift bins with a bin size of 0.01. For the lowest redshift bin, we limit the Palomar sample to those galaxies at $D<40~{\rm Mpc}$, which corresponds to $z=0.01$.
Figure \[fig:halumdist\_all\] shows the luminosity distribution as a function of redshift. In each bin, we plot only the brighter half of the sample in . With increasing redshift, i.e., increasing aperture, the luminosities increase. Thus, the luminosities observed with larger apertures must have significant contributions from spatially extended emission-line regions. In each redshift bin, we sort all passive red galaxies by their luminosity. Figure \[fig:halum25\_z\] plots the brightest 25th-percentile luminosity as a function of the physical scale covered by the SDSS fiber. The 25th percentile is safely above the detection threshold at all redshifts. The luminosity increases with scale roughly as a power law with an index of $0.72$, as shown by the power-law fit in Fig. \[fig:halum25\_z\]. As this is the integrated luminosity within radius $r$, it indicates that the average surface brightness profile follows $r^{-1.28}$. This slope is fairly consistent with what [@Sarzi10] found in nearby early-type galaxies targeted by the SAURON survey (see their Figure 4). We also show the same measurement for the Palomar sample (with late-type galaxies removed) at the median effective radius probed by the $2\arcsec\times4\arcsec$ aperture in the Palomar survey, which we treat as equivalent to a circular aperture with a $3\arcsec$ diameter. The 25th percentile in the Palomar sample is fairly consistent with the power-law fit to the SDSS data points. This further strengthens the conclusion that the nuclear and the extended line-emitting regions exist in the same galaxy population.
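The power-law fit and the conversion from the integrated-luminosity slope to the surface-brightness slope can be sketched as follows; the $(r, L)$ points here are synthetic stand-ins placed on the quoted power law, not the measured percentiles.

```python
import numpy as np

# Sketch of the fit in Fig. [fig:halum25_z]; the (r, L) pairs are
# synthetic points lying on the quoted power law, not the real data.
r_pc = np.array([150., 300., 600., 1000., 1500., 2000., 2500.])
L_25 = 1.0e39 * (r_pc / 1000.0) ** 0.72

# fit log L(<r) = log L0 + alpha * log r
alpha, log_L0 = np.polyfit(np.log10(r_pc), np.log10(L_25), 1)

# L(<r) ~ r^alpha implies an average surface brightness
# Sigma(r) ~ dL/dA ~ r^(alpha - 2)
sb_slope = alpha - 2.0
```

With the quoted index of 0.72, the implied surface-brightness slope is $-1.28$, as stated in the text.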
![ luminosity distributions of passive red galaxies as a function of redshift. The first bin at $z<0.01$ is from the Palomar sample and the rest are from the SDSS sample. Only the brighter half of the sample in each redshift bin is plotted. The brightest 10 percent of galaxies in in each bin are plotted as points. The gray scales indicate the ranges populated by different percentiles in each bin: brightest 10-20th, 20-30th, 30-40th, and 40-50th, from top to bottom, respectively. The dashed line at the bottom indicates the 3$\sigma$ detection limit in the SDSS, which is 3 times the luminosity corresponding to the median flux error in each bin. Clearly, the luminosity increases with increasing redshift or aperture size.[]{data-label="fig:halumdist_all"}](halumdist_all.ps)
![The 25th percentile (counting from the brightest) luminosity (asterisks with error bars) measured with SDSS fibers in a sample of non-star-forming red sequence galaxies as a function of the physical scale probed by the fiber. The triangle represents the measurement in the Palomar sample, which is fairly consistent with the extrapolation of the power-law fit on small scales. The uncertainties of these measurements are estimated with bootstrap resampling. The ’+’ signs at the bottom indicate the 3$\sigma$ detection limits.[]{data-label="fig:halum25_z"}](halum25_z.ps)
However, the extended line emission alone does not constrain the source of the ionizing radiation. Contrary to intuition, the extended emission could also be produced by a central ionizing source, such as an AGN. The emission-line brightness profile depends on many factors: the ionizing flux profile, the density profile, the gas filling factor, and how the typical size of the gas clouds changes with radius. We leave the detailed calculations to §\[sec:sbprofile\].
Line Ratio Gradient
===================
With the above technique, we can also check whether the line ratio distribution in this population changes between redshifts/apertures. This check can only be done on those galaxies with detectable line emission. To ensure low uncertainty on the line ratio measurements, we choose only the brightest 25% in total emission-line luminosity at each redshift and compare their various line ratios. To avoid biasing the line ratios, instead of selecting the brightest 25% in luminosity, we base the selection on the total luminosity of the several strongest emission lines available in the spectra (+++). This combination is a better proxy for the total line-emission output than L().
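The brightest-quartile selection by summed line luminosity reduces to a one-line mask; the per-line luminosity array here is generic, not the paper's actual catalog columns.

```python
import numpy as np

# Brightest-25% selection by total line luminosity within a redshift bin.
# Summing several strong lines mirrors the combination used in the text.
def brightest_quartile_mask(line_lums):
    """line_lums: (N_gal, N_lines) array of line luminosities.
    Returns a boolean mask selecting the brightest 25% of galaxies
    by their summed line luminosity."""
    total = np.sum(line_lums, axis=1)
    return total >= np.percentile(total, 75)
```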


Figure \[fig:lineratio\_all\] shows the full distribution of several line ratios as a function of aperture radius. We also plot those galaxies in the Palomar red sample with $D<40~{\rm Mpc}$ to probe the smallest scales. As for the SDSS sample, we only select the brightest 25% galaxies in total observed emission line luminosity.
At small aperture radii, the scatter in Fig. \[fig:lineratio\_all\] is dominated by intrinsic variations in line ratios among galaxies. At the large-aperture end, the uncertainty in the line measurements starts to dominate the scatter, leading to a slight increase in the width of the distribution. In most line ratios, the intrinsic variation has a standard deviation of approximately 0.1 dex.
Interestingly, in some line ratios the median of the distribution changes systematically with aperture radius, most notably in /, /, and /. The trends also extend to the Palomar sample at the smallest scales. The changes in the / and / ratios are so large that most of the Palomar sample populates only one side of the median of the SDSS sample, even in its first bin.
Figure \[fig:lineratio\_scale\] shows how the median line ratios change with aperture size, with the error bars giving the uncertainties of the median estimates. The systematic changes in /, /, and / with radius are significant and the Palomar sample confirms the trend on small scales.
One might worry that the change in the / ratio could be due to dust extinction, because these two lines are widely separated in wavelength. However, the / ratio is nearly constant with radius, indicating that the average dust extinction does not vary much. The values of the median / ratio are also close to the Case B prediction of 2.85 (or 3.1 if collisional excitation is included), indicating that the overall level of dust extinction is low. Applying extinction corrections to each galaxy in the sample only shifts the median / ratio up by a nearly constant $\sim0.1$ dex in each bin.
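The per-galaxy extinction correction mentioned above can be sketched with the Balmer decrement. The extinction-curve coefficient below is an assumed Cardelli-like value for $R_V = 3.1$, not taken from the text; only the Case B ratio of 2.85 is.

```python
import numpy as np

# Minimal Balmer-decrement dust correction (a sketch, not the paper's
# pipeline).  K_DIFF = k(H-beta) - k(H-alpha) is an assumed value for
# an R_V = 3.1 extinction curve.
K_DIFF = 1.16
R_CASEB = 2.85   # Case B intrinsic H-alpha/H-beta, as quoted in the text

def ebv_from_balmer(ha_over_hb):
    """E(B-V) implied by an observed H-alpha/H-beta ratio,
    clipped at zero for ratios below Case B."""
    return np.maximum(0.0, 2.5 / K_DIFF * np.log10(ha_over_hb / R_CASEB))

def deredden_ratio(ratio, k1, k2, ebv):
    """Correct an observed line1/line2 flux ratio for reddening."""
    return ratio * 10.0 ** (0.4 * ebv * (k1 - k2))
```

For lines close in wavelength ($k_1 \approx k_2$), such as [N II] and Hα, the correction nearly cancels, which is why those ratios are robust to dust.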
The line ratios presented in Fig. \[fig:lineratio\_scale\] are cumulative measurements: they reflect the luminosity-weighted average line ratios within aperture radius $r$, rather than that in an annulus at radius $r$. We need to combine these median line ratios in integrated apertures with the median line luminosity of this sample in corresponding apertures, to derive the average line ratios in circular annuli at radius $r$.
In Fig. \[fig:linelum12\_z\], we show the median line luminosities of this sample in corresponding apertures for , , and . This plot is similar to Fig. \[fig:halum25\_z\], but uses only the 25% brightest galaxies in total emission line luminosity. From this figure, it is evident that the line has a very different surface brightness profile from and , consistent with the trends in integrated line ratios.
![The median integrated (asterisk), (triangle), and (square) luminosities as a function of aperture among the 25% passive red galaxies (in each bin) with the brightest total emission line luminosity. The smallest scale is probed by the Palomar sample and the larger scales are probed by the SDSS sample. The and points are slightly shifted in the horizontal direction for clarity. The uncertainties of these measurements are measured with bootstrap resampling.[]{data-label="fig:linelum12_z"}](linelum12_z.ps)
With the integrated line ratios and integrated luminosities, we can compute the line ratio in annuli. For example, the average / ratio between radius $r_i$ and $r_j$ can be computed by the following equations, $$\left\langle{{\mbox{[\ion{N}{2}]}}\over {\mbox{H$\alpha$}}}\right\rangle_{r_i<r<r_j} =
{\left\langle{{\mbox{[\ion{N}{2}]}}\over
{\mbox{H$\alpha$}}}\right\rangle_{r<r_j}L_j({\mbox{H$\alpha$}}) -
\left\langle{{\mbox{[\ion{N}{2}]}}\over{\mbox{H$\alpha$}}}\right\rangle_{r<r_i}L_i({\mbox{H$\alpha$}}) \over
L_j({\mbox{H$\alpha$}})-L_i({\mbox{H$\alpha$}}) },
\label{eqn:difflineratio}$$ where $L_i({\mbox{H$\alpha$}})$ and $L_j({\mbox{H$\alpha$}})$ are the median luminosities in apertures with radii $r_i$ and $r_j$, respectively. The /, /, and / ratios in annuli are computed analogously, by combining the corresponding integrated line ratios with the median line luminosities. Calculating this between every pair of consecutive bins results in large uncertainties, owing to the large fractional error in the luminosity differences. We therefore calculate the annulus line ratios using the aperture pairs $[i,j] = [1,3],[1,4],[2,5],[3,6],[4,7],[5,8],[6,9],[7,10]$, and $[8,10]$, where Aperture 1 is the Palomar aperture and Aperture 10 is the aperture for SDSS galaxies at $0.09<z<0.1$. The results are shown in Fig. \[fig:lineratio\_ring\_data\].
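Equation \[eqn:difflineratio\] is a straightforward luminosity-weighted difference of aperture quantities:

```python
# Eq. [eqn:difflineratio]: luminosity-weighted line ratio in the
# annulus r_in < r < r_out, from two integrated apertures.
def annulus_ratio(ratio_out, lum_out, ratio_in, lum_in):
    """ratio_* : integrated line ratio within each aperture
    lum_*   : integrated denominator-line luminosity within it."""
    return (ratio_out * lum_out - ratio_in * lum_in) / (lum_out - lum_in)
```

If the ratio is constant with radius, the annulus value equals the aperture value; a nucleus with an elevated ratio pulls the annulus value below the outer integrated value.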
Although the uncertainties become much larger, the differences between the innermost bin and the outer bins remain robust in /, /, and /. The second bin in each panel always shows a very large uncertainty; this is due to the dramatic change in line ratios between the first few bins and their larger measurement uncertainties. In Table \[tab:lineratio\_change\], we list the line ratios of the innermost bin and the median line ratios of the outer bins ($700<r<2500~{\rm pc}$).

The line ratio gradients we observe were in principle detectable in past long-slit spectroscopic surveys of early-type galaxies [e.g., @Phillips86; @Kim89; @Zeilinger96; @HoFS97III; @Caon00]. However, these authors either did not have data of sufficient quality or did not examine the line ratio gradients at all. The only exception is [@Zeilinger96], who showed that the / ratio decreases outwards in four galaxies. However, like many of these past surveys, their data did not have wide enough wavelength coverage to include multiple line ratios, which would be critical for identifying the cause of the gradients. Recently, [@Annibali10] investigated line ratio gradients using long-slit spectra with wide wavelength coverage for a sample of relatively gas-rich early-type galaxies. They found that the / ratios in most of them decrease with radius, consistent with our result. However, they did not look for gradients in other line ratios, except for /, which is very often too noisy to support firm conclusions.
We will discuss in §\[sec:discussion\] what changes in physical conditions are required to produce such changes in line ratios.
Location $\log$/ $\log$/ $\log$ /
---------------- --------------- ---------------- ----------------
Inner 300 pc $0.25\pm0.03$ $0.13\pm0.04$ $-0.40\pm0.04$
Outside 700 pc $0.01\pm0.01$ $-0.07\pm0.01$ $-0.07\pm0.01$
: Median line ratios within 300 pc radius and outside 700 pc
\[tab:lineratio\_change\]
Clues from Line Widths {#sec:linewidth}
======================
We initially thought that the line ratio gradient would produce different line widths for different emission lines, owing to the varying kinematics of the line-emitting clouds across each galaxy. If this were so, it could provide a complementary constraint on the radial distribution of the emission. The data do indeed show different widths for different lines. However, we have concluded that this variation is probably not due to line ratio and kinematic gradients, primarily because the line width differences are not a function of aperture size. We describe our investigation of the line width differences in this section.
Because the width measurement is noisy in SDSS spectra, we need a control sample to demonstrate that our line width measurement is robust. Star-forming galaxies provide such a comparison. In a pure star-forming galaxy, the line emission originates mostly from HII regions photoionized by O and B stars. The line width in the integrated spectrum reflects the rotation velocity of the galaxy. In the absence of a strong metallicity gradient, we would observe approximately the same line ratios in all HII regions. In this case, all lines should display the same width.
   
We choose a star forming galaxy sample using the line ratio criteria described by [@Kewley06]. Basically, the galaxies in this sample are selected to fall in the star-forming branch on all three diagnostic diagrams (/ vs. /, /, and /). We compare this star-forming galaxy sample with the sample we used for deriving the line ratio profile, namely the top 25% passive red galaxies that have the highest total emission line luminosities in each redshift bin. We combine all the redshift bins together. In addition, to ensure good measurements on the line width ratio, we require the uncertainty of the line width ratio to be smaller than 0.1 dex.
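The 0.1 dex cut on the width-ratio uncertainty follows from standard error propagation in log space; this helper assumes independent Gaussian errors on the two widths, which is an assumption about the measurement pipeline rather than something stated in the text.

```python
import math

# Uncertainty (in dex) of a log width ratio log10(w1/w2), assuming
# independent Gaussian errors e1 and e2 on the two measured widths.
def log_ratio_err_dex(w1, e1, w2, e2):
    return math.hypot(e1 / w1, e2 / w2) / math.log(10.0)
```

For two widths each measured to 10%, the ratio uncertainty is about 0.06 dex, comfortably inside the 0.1 dex cut.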
In Figure \[fig:widthratio\], we plot the distributions of the width ratio for four line pairs for galaxies in the star-forming sample (thin-lined histograms) and in the passive red galaxy sample (thick-lined histograms). For all line pairs, the star-forming galaxies show a fairly symmetric and narrow distribution peaking around zero in logarithmic space, meaning that all the lines have roughly the same width. This demonstrates that our line width measurement is robust. It also indicates that line ratios in star-forming HII regions do not correlate strongly with the velocities of the HII regions. The -to- pair may be an exception; for star-forming galaxies this width ratio has a wider distribution than the other line pairs. This broad distribution probably reflects intrinsic variation in the population. It might be caused by variation in the disturbed component of the diffuse ionized medium in star-forming galaxies, which produces strong and wide lines [@Wang97]. We leave this question for future investigations.
However, for line-emitting passive red galaxies, the distributions of the -to-, -to-, and -to- width ratios do not peak around zero in log space. Their offsets from zero are highly significant. On average, the and lines are wider than the lines by 8%, and the lines are wider than by 16% (Table \[tab:medianratio\]). This means the / ratios in the line wings are higher than in the line center, so the higher-velocity clouds that contribute to the wings must have higher / ratios. The same must be true for the / and / ratios. This indicates that, in these galaxies, the line-emitting clouds do not have uniform line ratios; the line ratios must correlate with the velocities of the clouds.
In the previous section, we found the line ratios have a systematic variation with radius. Could these line ratio variations produce the line width differences?
Given that the / ratio increases with radius, to obtain a wider line than , the line-of-sight velocity broadening must also increase with radius. The broadening could come either from random motions or from ordered rotation. As shown by previous long-slit [@Phillips86; @Kim89; @Zeilinger96] and IFU [@Sarzi06] observations, in most early-type galaxies the kinematics of the gas are largely consistent with disk-like rotation, with the rotation velocity increasing outwards (though see @HeckmanBvB89 for some exceptions). This is consistent with the requirement here. The -to- and -to- width ratios could also be consistent with their respective line ratio gradients (if the decrease in / towards the center is real, as indicated by the Palomar data point). However, the show the same width as , inconsistent with this expectation.
![Top panel: the -to- width ratio distribution as a function of aperture radius for the 25% passive red galaxies with the highest total line luminosity. The dark points with error bars indicate the median width ratio in each redshift bin. Bottom panel: same plot for the width ratio between and .[]{data-label="fig:lw_scale"}](lw_scale.ps)
On the other hand, if the width difference is indeed caused by the line ratio gradient and the rotation of the gas disk, the line width difference should get smaller in larger apertures, since we expect the flux to be increasingly dominated by the outskirts, where the line ratio profile flattens. Figure \[fig:lw\_scale\] shows the line width ratio distributions as a function of aperture radius. Evidently, the average line width ratios are roughly constant, independent of the aperture size. This is inconsistent with the expectation of the model.
We demonstrate this inconsistency quantitatively by simulating the expected line width ratios using a toy model of a rotating gas disk in a spherically symmetric galaxy with a stellar mass of $7.5\times10^{10}M_\odot$ and an effective radius of 4.8 kpc, the median values for our 25% of passive red galaxies with the brightest total line luminosity. The rotation curve of the gas disk is set by the stellar density profile, which is assumed to be a $\gamma$-model described by [@Dehnen93] with $\gamma=1.5$. This stellar density profile gives a stellar surface brightness profile closely resembling the de Vaucouleurs’ $R^{1/4}$ profile. The integrated luminosity profile in is fixed to be a power law with an index of 0.77, measured by fitting the data points in Fig. \[fig:linelum12\_z\]. The inclination of the disk is set at $60^{\circ}$. We assume that the line ratios at each point in the disk depend solely on radius. We model the logarithm of the line ratio profile as a broken power law of the form $$\log {{\text{[\ion{O}{3}]}}\ \over {\mbox{[\ion{S}{2}]}}} = \left\{ \begin{array}{rl}
A (r/r_0)^{\gamma_1}-0.5 &, r < r_0 \\
(A-0.4) (r/r_0)^{\gamma_2}-0.1 &, r \ge r_0
\end{array} \right.$$ Based on the trend seen in Fig. \[fig:lineratio\_ring\_data\], we fix the model to have $\log {\text{[\ion{O}{3}]}}/{\mbox{[\ion{S}{2}]}}$ equal to -0.5 (an arbitrary choice) at $r=0$ and asymptote to -0.1 as $r \rightarrow \infty$. We fit the model to the integrated line ratios rather than the differentiated line ratios, since the former have independent uncertainties.
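The broken power-law profile above can be written directly in code. The parameter values are taken from the first row of Table \[tab:modelpara\]; by construction the two branches meet at $r_0$ (both equal $A - 0.5$ there), the profile starts at $-0.5$ at $r=0$, and it asymptotes to $-0.1$ at large $r$, as described in the text.

```python
import numpy as np

# Toy-model log([O III]/[S II]) profile; parameters from the first row
# of Table [tab:modelpara] (r is in the same units as r0).
A, R0, G1, G2 = 1.86, 345.9, 3.69, -2.44

def log_o3_s2(r):
    r = np.asarray(r, dtype=float)
    inner = A * (r / R0) ** G1 - 0.5
    outer = (A - 0.4) * (r / R0) ** G2 - 0.1
    return np.where(r < R0, inner, outer)
```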
The velocity dispersion in the disk is assumed to be constant everywhere and equal to $50~{\rm km~s^{-1}}$. Many gas kinematic studies have shown that the velocity dispersion is likely to increase towards the center. Here we assume the extreme case of a flat dispersion profile, since an increasing velocity dispersion towards the center would erase the line width differences.
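The rotation curve implied by the toy model's stellar profile follows from the cumulative mass of the Dehnen $\gamma$-model, $M(<r) = M_{\rm tot}\,[r/(r+a)]^{3-\gamma}$. The mass and $\gamma$ below come from the text; the scale radius $a$ is an assumed illustrative value, not one derived from the quoted 4.8 kpc effective radius.

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

# Circular velocity of the Dehnen (1993) gamma-model potential.
# M and gamma from the text; the scale radius a is an assumption
# chosen here only for illustration.
def v_circ(r_kpc, M=7.5e10, a=3.6, gamma=1.5):
    m_enc = M * (r_kpc / (r_kpc + a)) ** (3.0 - gamma)
    return math.sqrt(G * m_enc / r_kpc)
```

For these parameters the curve rises steeply, peaks near $r = a/2$ at roughly $185~{\rm km~s^{-1}}$, and declines slowly outward, qualitatively matching the outward-rising inner rotation assumed in the text.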
We employ a Markov-Chain Monte Carlo technique to find a large number of models that fit the data well. Among the 10 data points, we ignored the second bin ($r\sim500~{\rm pc}$) in the fitting, as including it makes the fit difficult. We chose five typical but different models to illustrate the trend expected in the line width differences as the aperture radius increases. The model parameters are given in Table \[tab:modelpara\].
A $r_0$ (kpc) $\gamma_1$ $\gamma_2$
------- ------------- ------------ ------------
1.86 345.9 3.69 -2.44
1.56 309.2 3.82 -2.04
1.21 402.3 2.29 -1.98
0.982 565.1 1.30 -2.30
0.868 364.7 1.58 -1.46
: Line-ratio Profile Model Parameters
\[tab:modelpara\]
The left panel of Fig. \[fig:simuratio\] shows that these models give reasonably good fits to the integrated line ratio profiles. The right panel of Fig. \[fig:simuratio\] shows the line width ratio as a function of aperture radius for these models. The solid curves and the dashed curves show the results for two different methods of measuring the line width: because circular rotation produces boxier profiles than a Gaussian, the width measured from Gaussian fitting differs from that measured from the FWHM. No matter how the width is measured, the width ratio between and produced by all of the circular rotation models decreases strongly towards large apertures, inconsistent with what we observe. This decrease is expected in the model, since the line luminosity is increasingly dominated by flux from large radii, where the line ratio profile flattens. Therefore, the line width ratio is probably not caused by the line ratio gradient and circular rotation. So, what is the real cause of the line width difference?
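The sensitivity of the width measure to profile shape can be illustrated with a top-hat profile, the limiting boxy case: its second moment and its FWHM-equivalent Gaussian sigma differ by roughly 50%, whereas for a true Gaussian the two measures agree. This is a schematic illustration, not the paper's measurement procedure.

```python
import numpy as np

# Width of a top-hat (maximally boxy) line profile of half-width 1,
# measured two ways: flux-weighted second moment vs. FWHM / 2.355.
def moment_sigma(v, flux):
    mu = np.average(v, weights=flux)
    return np.sqrt(np.average((v - mu) ** 2, weights=flux))

v = np.linspace(-2.0, 2.0, 4001)
tophat = (np.abs(v) < 1.0).astype(float)
sig_moment = moment_sigma(v, tophat)   # -> 1/sqrt(3) ~ 0.577
sig_fwhm = 2.0 / 2.3548                # FWHM = 2   -> ~0.849
```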
 
The line width has two main contributors: thermal broadening and the bulk motion of the clouds. For gas at $T=10^4~{\rm K}$, the thermal broadening is approximately $15~{\rm km~s^{-1}}$ for . For the red galaxies in our sample, the lines are very wide, with Gaussian sigmas ranging between $100~{\rm km~s^{-1}}$ and $300~{\rm km~s^{-1}}$ and a median of $176~{\rm km~s^{-1}}$ in the lines. Therefore, thermal broadening is a minor contributor; the width is likely dominated by bulk motion or by turbulence in the clouds.
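The thermal contribution follows directly from the gas temperature; whether one quotes the 1-D dispersion, the Doppler $b$ parameter, or the FWHM changes the number by a factor of a few, and the $\sim15~{\rm km~s^{-1}}$ figure in the text lies between the sigma and FWHM computed here.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6735e-27     # hydrogen atom mass, kg

# Thermal width of a hydrogen line at T = 1e4 K.
def thermal_sigma_kms(T, m=M_H):
    """1-D thermal velocity dispersion in km/s."""
    return math.sqrt(K_B * T / m) / 1.0e3

sigma = thermal_sigma_kms(1.0e4)                  # ~9.1 km/s
fwhm = math.sqrt(8.0 * math.log(2.0)) * sigma     # ~21 km/s
```

Heavier species such as N, O, or S are thermally narrower by $\sqrt{m_{\rm H}/m}$, which is why thermal broadening cannot produce the observed metal-line widths either.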
To produce a width difference between different lines, there must be clouds with different line ratios and different line widths, and the line ratio must correlate with the line width. For example, for and to have different widths, we need a population of clouds with a high / flux ratio and a population of clouds with a low /. To make wider than , the high-/ clouds need to produce a wider line than the low-/ clouds. To produce a width ratio that is constant with radius, the width ratio has to be approximately the same everywhere in the galaxy. This requires at least two components of the ISM throughout the galaxy that have different line ratios and different line broadening. Because the total flux ratio has a gradient with radius, the two components need to have approximately parallel line ratio gradients. One component is kinematically more disturbed than the other and thus produces a wider line. Our observed width ratios indicate that the more disturbed component has higher / and / ratios, a lower / ratio, and a / ratio similar to that of the more quiescent component. Without knowing the ionization mechanism for the gas, the origin of these multiple components and the reason for their line ratio differences are difficult to analyze.
Line pair Star-forming galaxies Old red galaxies
----------------------------------------------------------------- ----------------------- ------------------
$\sigma_{{\mbox{[\ion{N}{2}]}}}/\sigma_{{\mbox{[\ion{S}{2}]}}}$ $0.991\pm0.0004$ $1.079\pm0.004$
$\sigma_{{\text{[\ion{O}{3}]}}}/\sigma_{{\mbox{[\ion{S}{2}]}}}$ $1.006\pm0.001$ $1.164\pm0.006$
$\sigma_{{\mbox{[\ion{S}{2}]}}}/\sigma_{{\mbox{H$\alpha$}}}$ $1.033\pm0.0004$ $0.933\pm0.004$
$\sigma_{{\mbox{[\ion{N}{2}]}}}/\sigma_{{\mbox{H$\alpha$}}}$ $1.022\pm0.0003$ $1.009\pm0.003$
: Median line width ratios
\[tab:medianratio\]
We check whether the line width difference changes with line strength. Figure \[fig:lw\_ew\] shows the width ratio distributions between emission lines as a function of EW. We use only the brightest 25% of galaxies in total emission-line luminosity and exclude those with an uncertainty on the line width ratio greater than 0.2 dex. The median -to- and -to- width ratios always stay above 1. The -to- width ratio seems to decrease slightly with increasing EW.
![Top panel: the width ratio distribution between and as a function of EW for line-emitting red galaxies. Bottom panel: same plot for the width ratio between and . []{data-label="fig:lw_ew"}](lw_ew.ps)
Figure \[fig:lw\_lum\] shows the width ratio distribution for the line-emitting red galaxies as a function of absolute luminosity. The same sample is used as in Fig. \[fig:lw\_ew\]. The -to- width ratio stays flat as a function of luminosity, but the -to- width ratio declines slowly towards fainter galaxies. The important point is that both always show a significant offset from zero in log space, suggesting that the cause of the width difference is universal in these galaxies.
![Top panel: the width ratio distribution between and as a function of luminosity for line-emitting red galaxies. Bottom panel: same plot for the width ratio between and . []{data-label="fig:lw_lum"}](lw_lum.ps)
Discussion {#sec:discussion}
==========
In this section, we first discuss what physical factors determine the emission line surface brightness profile, and show that the profile alone does not provide a discriminator between different ionization mechanisms. Then, we discuss the viability of the different ionization mechanisms in light of the observational results we presented above.
Surface Brightness Profile {#sec:sbprofile}
--------------------------
Except for shock heating and conductive heating by the hot gas, all of the major proposed ionization mechanisms involve photoionization. In this section, we consider a generic photoionization model and examine which of its parameters determine the emission-line surface brightness profile.
We assume that the ISM is filled with hot, ionized gas. Embedded in it are neutral dense clouds, each optically thick to the ionizing radiation. In photoionization equilibrium, the total emission-line luminosity of each cloud has to equal the total photoionizing luminosity it receives, which is the incoming flux times the projected area of the cloud. Therefore, the luminosity density profile depends on the photoionizing flux profile (as a function of radius) and the total projected cloud area per unit volume. For example, for a volume filling factor of $f_g$, assuming the clouds have an average volume of $\langle V \rangle$ and an average projected area of $\langle A\rangle$, the luminosity density of line emission would be $$j(r) = F(r) {f_g(r) \over \langle V \rangle} \langle A \rangle$$ Here, $F(r)$ is the ionizing flux profile. The second factor on the right-hand side gives the number density of clouds; multiplying it by the average projected area yields the total projected cloud area per unit volume. To obtain the final surface brightness profile, we also need to convolve the luminosity density profile with the spatial distribution of the clouds. If the clouds all reside in a disk of constant thickness, then the surface brightness scales with radius in the same way as the luminosity density. However, if the thickness of the disk increases with radius, like that of the Milky Way gas disk, then the surface brightness profile would be much shallower. Assuming the scale height of the disk is $H(r)$, the surface brightness of the line emission would be $$\Sigma (r) = F(r) f_g(r) H(r) {\langle A \rangle \over \langle V \rangle}$$ The typical cloud area and volume could also change with distance from the center. Because the gas density ($n$) depends on distance from the galaxy center, if the mass distribution of the clouds is independent of distance, then the typical $\langle A \rangle/\langle V \rangle$ will scale as $n^{1/3}$.
Therefore, the constraint from the surface brightness profile is $$\Sigma (r) = F(r) f_g(r) H(r) n^{1/3} \propto r^{-1.28}
\label{eqn:surfacebrightness}$$
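Equation \[eqn:surfacebrightness\] constrains only the product of the unknown profiles. As a sketch, if each factor is itself a power law of radius, $F \propto r^{a_F}$, $f_g \propto r^{a_f}$, $H \propto r^{a_H}$, $n \propto r^{a_n}$, the observation fixes just one combination of the four exponents; the example exponents below are illustrative choices, not fitted values.

```python
# Composite power-law slope of Sigma(r) = F * f_g * H * n^(1/3); the
# observational constraint from Eq. [eqn:surfacebrightness] is that
# this combination equal -1.28.
def sigma_slope(a_F, a_f, a_H, a_n):
    return a_F + a_f + a_H + a_n / 3.0

# e.g. a central point source (F ~ r^-2) with constant filling factor,
# constant scale height, and an isothermal-like density n ~ r^-2 gives
# Sigma ~ r^-2.67, steeper than observed -- so some other factor would
# have to rise outward (illustrative numbers only, not a fit).
example = sigma_slope(-2.0, 0.0, 0.0, -2.0)
```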
We do not have enough information about the geometry of the cloud distribution and how the typical cloud sizes change with radius to constrain the photoionizing flux profile. Therefore, the extended line emission only provides a partial constraint on the source of the ionization. We need more information from other methods.
Many ionization mechanisms have been proposed to explain the observed emission line ratios in these red galaxies, which mostly have LINER-like line ratios. In the following sections we consider these mechanisms, dividing them generically into three categories: a central photoionizing source, distributed photoionizing sources, and shocks.
Photoionization by an accreting SMBH
------------------------------------
An accreting supermassive black hole will emit X-rays and extreme UV radiation that photoionizes surrounding gas clouds and produces line emission. In this section we will examine the predictions of this model for the line ratio gradients and line width differences.
### Line ratio gradients for a SMBH {#sec:agn}
First, we will demonstrate that this model cannot explain the observed line ratio gradients. To do so, we use models calculated with the MAPPINGS III codes [@DopitaS96; @GrovesDS04I; @Allen08]. Figure \[fig:mappings\_agn\] shows two line ratio diagnostic diagrams with the grids representing the models presented by [@GrovesDS04I].
This standard photoionization model (with no dust or radiation pressure) assumes a metallicity of $2{\rm Z}_\odot$ and a hydrogen density of 1000 cm$^{-3}$. With the given range of parameters, this model cannot produce the / and / ratios observed in the center of these galaxies. In fact, none of the dust-free classical models in [@GrovesDS04I] can produce the central / ratio: they are all too low. It may be possible to fit the / ratios by tweaking the N/S abundance ratio, or adopting the dusty, radiation-pressure dominated photoionization model [@Dopita02]. However, it is unlikely that the latter model is applicable to the low intensity radiation fields in LINERs.
Since we are not yet certain about the ionization mechanism, it is premature to constrain the exact physical parameters using these measurements. However, the models should provide a guide as to the direction of change in the line ratios. Thus, we only use these models to investigate what the line ratio gradients tell us about the change in the physical parameters, not their precise values.

The line ratios are primarily determined by four parameters: gas density ($n$), metallicity ($Z$), ionization parameter ($\log
U$)[^3], and the spectral index ($\alpha$). From the four strong lines, , , , , we have only three independent line ratios at each position. Therefore, we cannot hope to determine the trend in all of these parameters and have to start by keeping some parameters fixed. In the six panels of Fig. \[fig:mapping\_n2ha\_all\], we fix two parameters at a time and look at the line ratio dependence on the other two parameters, to determine all possible scenarios for the observed line ratio variation. We look only at the / vs. / diagram, since the / data points are not well covered by the models.
There are three ways that the / ratio can increase outward: an increase in the density (panel c), a decrease in the metallicity (panel d), or an increase in the ionization parameter (panel a). For the first option, it is unphysical to expect the density to increase outwards, since that would require a higher gas pressure at larger radius (the temperature in the ionized gas is likely to remain near $10^4{\rm K}$).
The second option, a metallicity gradient, is also not a likely source for the change in the / ratio. At fixed density and ionization parameter, it requires a factor of $\sim3$ change in metallicity with radius, which is comparable to, though somewhat larger than, the observed stellar metallicity gradients (@Kuntschner10). However, the density is likely to decrease outwards, which acts in the opposite direction and would require an even larger metallicity gradient. Indeed, since at low metallicity the / ratio becomes insensitive to $Z$, this possibility can be ruled out.
The third option, an increasing ionization parameter, is much more promising. / is quite sensitive to $\log U$: a variation of more than 0.1 dex would dominate any other possible effect. Even before considering specific models, we should suspect that the ionization parameter in these objects increases outwards.
The ionization parameter is defined as the ratio between ionizing flux and gas density. Therefore, we now examine how gas density changes with radius. X-ray observations of giant ellipticals have shown that the hot gas density follows the square root of the stellar density profile, $n_e
\propto \rho_*^{1/2}$ (@MathewsB03 and references therein). This means that the gas density profile falls with radius as $r^{-p}$, with $0.5<p<1$ at the center and an increasing $p$ at large radii. This range of central density slopes is consistent with the more recent X-ray measurements by [@Allen06]. The temperature profile of the hot gas is much flatter, varying by at most 50% between the center and the outskirts [@MathewsB03]. The gas clouds that generate the observed optical line emission have temperatures near $10^4{\rm K}$. Therefore, under pressure equilibrium, with the nearly constant temperature profile in both the hot gas and the warm ionized gas, the density in the warm ionized gas clouds should fall with radius in roughly the same way as the hot gas density.
It is important to note that this density scaling has only been verified in giant ellipticals; in fainter early-type galaxies it may not hold. Nonetheless, we expect that the central density profile in faint ellipticals is also much shallower than $1/r^2$. Evidence for this comes from density profile measurements in the central regions ($\lesssim100{\rm pc}$) of a few fainter early-type galaxies (NGC 1052, NGC 3998, NGC 4579) from line ratios. [@Walsh08] showed that the power-law indices of their gas density profiles are around $-0.6$. Therefore, we use the $n_e \propto \rho_*^{1/2}$ scaling as a working assumption here and in the next section. Our main conclusion remains the same if one switches to a power-law density profile with $n_e \propto r^{-1}$ or shallower.
In the case of a central ionizing source, the flux decreases as $r^{-2}$. Since the gas density is at most decreasing as $r^{-1}$, the ionization parameter must decrease outwards by a large amount, at least 0.5 dex. No change in metallicity or spectral index could conceivably make up for this decrease: this model inevitably predicts a strongly decreasing / ratio with radius, the opposite of what we observe. Therefore, unless the gas density profile actually falls faster than $r^{-2}$, the AGN photoionization model cannot explain the observed line ratio gradients.
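The slope argument can be made explicit with a little arithmetic (a minimal sketch; the density-slope range $0.5<p<1$ is the X-ray constraint quoted above):

```python
# Central source: F ~ r**-2.  Gas density: n ~ r**-p with 0.5 < p < 1.
# Since U is proportional to F / n, d(log U) / d(log r) = p - 2.
for p in (0.5, 1.0):
    slope = p - 2.0  # logarithmic slope of the ionization parameter
    print(f"p = {p}: U falls by {-slope:.1f} dex per decade in radius")
```

So even with the shallowest allowed density profile, a central source forces $U$ to fall by a full dex per decade in radius, opposite to the observed outward rise in the / ratio.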
Meanwhile, there are also three ways for the / ratio to decrease outward: a softening of the ionizing spectrum (panel a), a decrease in the density (panel b), a decrease in the metallicity (panel d), or a combination of these. The outward-decreasing density provides a natural solution, though its decline with radius might be too slow to explain all of the change in the / ratio. Additional contributions from a metallicity gradient and a change in the spectral index might be needed as well. With the current modeling uncertainty, we cannot break the degeneracy among these possibilities.
### Comparison to line width differences observed in nearby Seyferts
Next, we consider the observed variations of line width between our various lines. We conclude here that the variations we observe are probably not related to those known to exist for Seyfert galaxies.
Velocity width variations among different emission lines have been observed in classical nearby Seyferts and LINER nuclei [@FilippenkoH84; @Filippenko85; @deRobertisO86; @HoFS96]. In most cases, the line widths correlate strongly with the critical density for collisional deexcitation: lines with higher critical densities tend to have larger widths. In a minority of Seyfert 2s, the line width correlates instead with the ionization potential of the ions.
The line width differences we observe for LINERs are broadly consistent with those seen in local Seyferts. In the sample of 18 Seyfert 2 galaxies presented by [@deRobertisO86], the median -to- width ratio is $1.11\pm0.06$ and the median -to- width ratio is $1.18\pm0.08$. Our width ratios are only slightly smaller.
[@FilippenkoH84] proposed the following picture to explain the line width differences. The ionizing flux decreases outward according to the inverse-square law. If the density also falls as $r^{-2}$, then the ionization parameter seen by each cloud will be the same. If all clouds are optically thick to the ionizing radiation, then they will all have the same ionization structure and produce the same set of emission lines. The relative line ratios will vary according to the density of each cloud. Lines with a high critical density will be contributed mainly by high-density clouds, which are closer to the nucleus and have higher velocities.
In this scenario, the / and / flux ratios should increase towards the center, the opposite of what we observe for LINERs. In addition, for most red galaxies the line emission is spatially extended, with an average surface brightness profile falling as $r^{-1.28}$. Thus, the line luminosity is not dominated by the central regions where the density gradient is steep. The kinematic structure on large scales is also different from the Keplerian rotation found near the SMBH. Therefore, while this scenario is applicable to the narrow-line regions of Seyferts, it cannot apply in our case. The similarity between our line width ratios and those of [@deRobertisO86] may simply be a coincidence.
Photo-ionization by Distributed Ionizing Sources
------------------------------------------------
The suggestion that the ionization parameter increases outwards naturally points to a slower decrease in the flux and thus to distributed ionizing sources rather than a central one. A number of models have been proposed along these lines, such as photoionization by hot evolved stars and by the hot X-ray emitting gas. In this section we first discuss the generic predictions of models with distributed sources, and then discuss particular models in more detail.
### Line ratio gradients from distributed sources
Although the ionizing spectra produced by these models differ from that produced by an AGN, the overall dependence of the line ratios on ionization parameter and metallicity is very similar. Thus, as in the AGN case, to explain the line ratio gradients we need the ionization parameter to increase outwards. Distributed sources can produce this trend.
Suppose the ionizing sources are distributed like the stars, i.e., their luminosity density profile follows the stellar density profile. Then, we can compute the ionizing flux profile using the latter. Assuming that the galaxy is spherically symmetric, that the stellar density profile is $\rho(r)$, and that the average number of output photoionizing photons per unit time per unit stellar mass is $Q_0$, the total integrated ionizing flux at distance $D$ from the center of the galaxy is:
$$\begin{aligned}
F(D) &= \int_0^{\infty} \mathrm{d}r \int_0^{2\pi} \mathrm{d}\phi
\int_0^{\pi} {Q_0 \rho(r) r^2 \sin\theta \over 4\pi (D^2+r^2 - 2Dr \cos\theta)} \mathrm{d}\theta \\
&= {Q_0 \over 2} \int_0^{\infty} \rho(r) {r\over D} \ln { D+r \over |D-r|} \mathrm{d}r \label{eqn:fluxint}\end{aligned}$$
The integral can be evaluated by substituting $r=D(1+e^u)$ for the $r>D$ part and $r=D(1-e^u)$ for the $r<D$ part, which removes the integrable logarithmic singularity at $r=D$.
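As a check on the angular reduction above, the $\theta$-integral can be compared with its closed form numerically (a standalone sketch using scipy):

```python
import numpy as np
from scipy.integrate import quad

# Verify: Int_0^pi sin(t) dt / (D^2 + r^2 - 2 D r cos(t)) = ln((D+r)/|D-r|) / (D r)
D, r = 1.3, 0.7
numeric, _ = quad(lambda t: np.sin(t) / (D**2 + r**2 - 2.0 * D * r * np.cos(t)),
                  0.0, np.pi)
analytic = np.log((D + r) / abs(D - r)) / (D * r)
print(numeric, analytic)  # the two agree
```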
For the stellar density profile, we adopt the $\gamma$-model described by [@Dehnen93]: $$\rho(r) = {(3-\gamma) M \over 4\pi} {a \over r^\gamma (r+a)^{4-\gamma} }
\label{eqn:dehnen}$$ where $M$ is the total mass, and $a$ is a scaling factor related to the effective radius $R_e$ in a way that depends on the inner slope, $\gamma$. The $\gamma=1$ model corresponds to the [@Hernquist90] profile, the $\gamma=2$ model corresponds to the [@Jaffe83] profile, and the $\gamma=1.5$ model yields the best approximation to the de Vaucouleurs’ $R^{1/4}$ model in surface brightness. Putting this model into Eqn. \[eqn:fluxint\], we integrate numerically to obtain the total ionizing flux as a function of radius for a model galaxy with an effective radius of 4.8 kpc and a stellar mass of $7.5\times10^{10}M_\odot$, the medians for the top 25% line-emitting passive red galaxies in our sample.
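A minimal numerical implementation of Eqn. \[eqn:fluxint\] for the $\gamma$-models might look like the following (a sketch in arbitrary units, with the scale radius $a$ set to unity rather than matched to $R_e$):

```python
import numpy as np
from scipy.integrate import quad

def dehnen_rho(r, gamma, a=1.0, M=1.0):
    """Dehnen (1993) gamma-model stellar density profile (Eqn. dehnen)."""
    return (3.0 - gamma) * M / (4.0 * np.pi) * a / (r**gamma * (r + a)**(4.0 - gamma))

def ionizing_flux(D, gamma, a=1.0, Q0=1.0, M=1.0):
    """Integrated ionizing flux at radius D from sources tracing the stars
    (Eqn. fluxint).  The integrand has an integrable logarithmic
    singularity at r = D, so the integral is split there."""
    f = lambda r: dehnen_rho(r, gamma, a, M) * (r / D) * np.log((D + r) / abs(D - r))
    parts = [quad(f, 0.0, D)[0],                    # inner part (r < D)
             quad(f, D, 10.0 * (D + a))[0],         # just outside the singularity
             quad(f, 10.0 * (D + a), np.inf)[0]]    # smooth decaying tail
    return 0.5 * Q0 * sum(parts)

# Inside the scale radius the profile is much flatter than inverse-square:
print(ionizing_flux(0.1, 1.5) / ionizing_flux(1.0, 1.5))  # far less than 100
```

For $\gamma=1.5$, the flux between $a$ and $0.1a$ rises by only roughly an order of magnitude, far less than the factor of 100 an inverse-square law would give over the same range.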
Figure \[fig:pagbflux\] shows the resulting ionizing flux profile for three models with different $\gamma$ values, along with the inverse square law expected from a central ionizing source. In the inner kpc, distributed ionizing sources will produce a much shallower ionizing profile than the inverse square law.
![The integrated ionizing flux from a system of ionizing sources following the stellar density profile, as a function of distance from the center of a model galaxy. The curves correspond to different $\gamma$-models for the stellar density profile. The solid line corresponds to the model with the best fit to the de Vaucouleurs’ profile in surface brightness. The long-dashed line corresponds to the inverse square law expected in the AGN model. All the distributed-source models show a flatter flux profile.[]{data-label="fig:pagbflux"}](pagbflux.ps)
![The ionization parameter produced at a cloud by a system of ionizing sources following the stellar density profile, as a function of distance from the galaxy center. We assume gas density profiles of $n(r) \propto n_*^{1/2}$, normalized to $100 {\rm cm}^{-3}$ at 1 kpc. The three curves correspond to three different $\gamma$-models as described by Eqn. \[eqn:dehnen\]. $M_*$ is the stellar mass of the galaxy. []{data-label="fig:pagblogu"}](pagblogu.ps)
In Figure \[fig:pagblogu\], we divide the ionizing flux profiles by a gas number density profile to see how the dimensionless ionization parameter varies with radius under these different models. We adopt a gas density profile that scales as the square root of the stellar density profile, $n_g \propto n_*^{1/2}$ (@MathewsB03 and references therein; see also the discussion in §\[sec:agn\]), and normalize it to $100{\rm cm}^{-3}$ at 1 kpc. This warm gas density is consistent with observations [@HeckmanBvB89; @DonahueV97] and with our assumption of pressure equilibrium between the warm gas ($T\sim10^4K$) and the hot gas ($T=10^6$ – $10^7{\rm K}$, $n=0.1$ – $1 {\rm cm^{-3}}$). Interestingly, for the $\gamma=1.5$ model, which gives the best fit to the de Vaucouleurs’ profile, the ionization parameter displays the same trend as we observe, as shown by the / ratio profile in Fig. \[fig:lineratio\_ring\_data\]. It reproduces not only the increase with radius in the central part but also the slow decline in the outskirts.
### Luminosity dependence
A prediction of the above model is that the line ratio gradient should have a luminosity dependence. Many studies [@Lauer95; @Faber97; @Rest01; @Ravindranath01; @Lauer05; @Ferrarese06; @Glass11] have shown that the inner power-law slope of the stellar luminosity density profiles changes from $-1$ for bright galaxies to $-2$ for faint galaxies. As shown by Figure \[fig:pagblogu\], these different stellar density profiles should generate different gradients in the ionization parameter. The transition point is approximately at $M_B =
-20.5$ (or around $M_V = -21.3$). Here, we investigate whether the gradients depend on luminosity in the expected manner.
We divide our passive red galaxy sample at $M_V=-21.3$ into bright and faint samples to look for the luminosity dependence. The bright sample has a median stellar mass of $1.0\times10^{11}M_\odot$ and a median effective radius of $5.8~{\rm kpc}$. For the faint sample, the corresponding numbers are $4.7\times10^{10}M_\odot$ and $3.4~{\rm kpc}$.
First, we show that [@Dehnen93] models with different $\gamma$ values can provide reasonable approximations to the density profiles of bright and faint galaxies in our sample. To demonstrate this, we compare the models to the surface brightness profile fits presented by [@Ferrarese06] for the 14 early-type galaxies in the Virgo cluster that satisfy our luminosity cut ($M_V < -20.4$). In Fig. \[fig:virgogals\], the gray solid curves show profiles of the brighter galaxies with $M_V<-21.3$ and the gray dashed curves show those of the fainter ones. The fainter galaxies generally have steeper profiles, despite their smaller Sersic indices as reported by [@Ferrarese06]. The difference can be reasonably approximated by the difference between the surface brightness profiles of two [@Dehnen93] models with $\gamma=1.3$, $R_e=5.8~{\rm kpc}$, and with $\gamma=1.7$, $R_e=3.4~{\rm kpc}$.
![Surface brightness profiles for the 14 early-type galaxies in the Virgo cluster that would satisfy our luminosity cut ($M_V<-20.4$), as fit by [@Ferrarese06]. The solid gray curves and the dashed gray curves represent galaxies brighter and fainter than $M_V$ of $-21.3$, respectively. The two thick curves represent the profiles computed for two [@Dehnen93] models, with parameters indicated in the legend. They provide reasonable approximations to the profile difference between bright and faint early-type galaxies. The two vertical dotted lines indicate the range of apertures probed in this paper.[]{data-label="fig:virgogals"}](virgogals.ps)
![Luminosity-weighted ionization parameter within an aperture as a function of aperture radius, for five models with different stellar density profiles. The density profiles are specified by their inner power-law slope, $\gamma$, and effective radius, $R_e$. The gas density is assumed to scale as the square root of the stellar density and is normalized to $100 {\rm cm}^{-3}$ at 1 kpc for all models. This figure shows that $\gamma$ controls the shape of the resulting ionization parameter profile while $R_e$ affects mainly the normalization. []{data-label="fig:logu_lumdep"}](logu_lumdep.ps)
Next, we demonstrate that the shape of integrated ionization parameter profile is not sensitive to the effective radius of the galaxy, but is primarily controlled by the $\gamma$ parameter. Bright galaxies not only have shallower inner density profiles, but also have larger effective radii than faint galaxies. In Figure \[fig:logu\_lumdep\], we plot models with various choices for $\gamma$ and $R_e$. The ionization parameter plotted is the luminosity-weighted average within an aperture. For the luminosity weighting, we assume a power-law surface brightness profile with an index of $-1.23$, as derived from a fit to the profile in Fig. \[fig:linelum12\_z\][^4]. Combined with the ionization parameter profile from the model, we compute the luminosity-weighted average ionization parameter as a function of aperture radius. In all models, the gas density is assumed to scale as the square root of the stellar density and they are all normalized to be $100 {\rm cm}^{-3}$ at 1 kpc.
The middle three curves in Fig. \[fig:logu\_lumdep\] are models with the same $R_e=4.8$ kpc but different $\gamma$. They have different slopes in both the outer and the inner parts, but differ little in overall normalization. The top and bottom curves have the same $\gamma$ as the solid curve, but differ in $R_e$. These three cases have nearly identical shapes, but differ significantly in their normalization. Other factors can affect the normalization, including $Q_0$, the stellar mass, and the normalization of the gas density. Without knowing $Q_0$ and the gas density normalization, the normalization of each curve is free to vary. In contrast, only $\gamma$ controls the shape of the ionization parameter profile. Since we know $\gamma$ varies with luminosity, a definite prediction of this model is that the bright and faint samples should differ in the shape of their integrated / profiles.
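The aperture averaging described above can be sketched as follows (standalone; the toy $\log U$ profile is illustrative only, not one of the actual model curves):

```python
import numpy as np
from scipy.integrate import quad

def aperture_average(logU_of_r, R, sb_slope=-1.23):
    """Luminosity-weighted mean of log U inside aperture radius R, with a
    power-law surface brightness Sigma(r) ~ r**sb_slope as the weight."""
    w = lambda r: 2.0 * np.pi * r * r**sb_slope   # Sigma(r) * 2*pi*r
    num, _ = quad(lambda r: logU_of_r(r) * w(r), 0.0, R)
    den, _ = quad(w, 0.0, R)
    return num / den

# Example: a log U profile that rises with radius (toy, in kpc) gives an
# aperture-averaged log U that also rises with aperture size.
toy = lambda r: -4.0 + 0.5 * np.log10(r + 0.1)
print(aperture_average(toy, 0.5), aperture_average(toy, 5.0))
```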
![The aperture / ratio as a function of aperture size for the whole sample (stars), bright sample (triangles, $M_V < -21.3$), and faint sample (squares, $M_V > -21.3$). The curves represent predictions of three simple models with different $\gamma$ parameter (Eqn. \[eqn:dehnen\]). They are [*not*]{} fits to the data, but a scaled and shifted version of the luminosity-weighted average ionization parameter. See text for detail. []{data-label="fig:o3s2_logu_all"}](o3s2_logu_all.ps)
Now we check this luminosity dependence in the data. We divide all passive red galaxies into bright and faint subsamples at $M_V=-21.3$. As in the whole sample, we select the top 25% of galaxies in each subsample with the brightest total emission line luminosity. Figure \[fig:o3s2\_logu\_all\] shows the / ratios as a function of aperture size for the bright sample and the faint sample separately, along with those for the whole sample. The curves represent the predictions of three models with different $\gamma$ parameters, $R_e$, and stellar mass, with the latter two parameters set to the median values in the data. The models are calculated in the same way as for Fig. \[fig:logu\_lumdep\].
To convert the luminosity-weighted average ionization parameter into a predicted / ratio, we used the median stellar mass and $R_e$ for each model, assumed that $Q_0$ is constant, and assumed that every 0.3 dex in $\log U$ translates into 0.5 dex in the / ratio. We then shifted the models vertically by varying the gas density normalization so that the $\gamma=1.3$, 1.5, and 1.7 models roughly match the normalization of the data points for the bright sample, the whole sample, and the faint sample, respectively. We did not perform an explicit fit using these data because there are still too many poorly known factors in the model. Under our assumptions, the gas density at 1 kpc must be $\sim13\%$ lower for the faint sample than in the full sample, and $\sim6\%$ higher for the bright sample. This normalization difference between the bright and faint samples does not have to be due to a density difference. It could also be due to the higher fraction of flat systems (lenticular galaxies) in the faint sample [@Bernardi10]: the stars in an intrinsically flat galaxy would be systematically closer to the gas and yield a larger ionization parameter, hence higher / ratios.
Although the normalizations match the data by design, the shapes of the models are set entirely by the $\gamma$ values. The bright galaxy sample has an aperture-integrated / ratio that increases with radius, matching the model prediction for a stellar density profile with a flatter inner slope (small $\gamma$). The faint galaxies have a much flatter outer slope, matching the prediction for a stellar profile with a steeper inner slope (large $\gamma$). In addition, the fainter sample displays a much steeper line ratio gradient at small radii ($<500~{\rm pc}$) than the brighter sample, matching the general trend predicted by the model.
The models on the smallest scales do not match the data. This mismatch could be due to the overly simplified assumptions we made: the scaling between the / ratio and $\log U$ may be nonlinear, and the inner gas density profile may be steeper than assumed. Either could change the shape of the curves.
It is remarkable that a simple single-parameter model is able to predict the overall luminosity dependence of the / profile shape in the data. It provides strong support for ionization mechanisms invoking sources that are distributed like the stars.
![Aperture / ratios for the bright (triangles) and faint (squares) passive red galaxies as a function of aperture radius. []{data-label="fig:n2ha_logu_all"}](n2ha_logu_all.ps)
In Fig. \[fig:n2ha\_logu\_all\], we show the / ratios as a function of aperture size for the bright and faint samples. The brighter sample always shows larger median / ratios than the fainter sample. This correlation between / and galaxy luminosity has been seen by [@Phillips86]. Our result shows that this correlation exists at all aperture scales. As we learned from Fig. \[fig:mapping\_n2ha\_all\], to increase / with photoionization, we have to either increase the metallicity, increase the density, or use harder ionizing spectra. The density is unlikely to vary by more than a factor of 10 between the bright galaxies and faint galaxies. And if all galaxies are powered by the same ionizing sources, the spectra should also be the same. Therefore, the difference in / is most likely due to the gas-phase metallicity difference between the two samples.
To summarize, photoionization by distributed ionizing sources following the stellar density profile can naturally produce the general variation of ionization parameter with radius, including the sharp rise at small radii and the gentle decline at large radii. It is also able to reproduce the overall direction of the luminosity dependence of the line ratio gradient. This strongly indicates that the spatial distribution of the true ionizing sources is similar to the stellar distribution.
### Post-AGB stars
[@Binette94] proposed that photo-ionization by post-AGB stars could explain the extended line emission in red galaxies. The spatial distribution of these post-AGB stars should be very similar to the overall stellar distribution. Therefore, based on the results of the previous sections, the diffuse ionizing field they form can produce the observed line ratio gradient and its luminosity dependence. In this section we discuss relevant aspects of post-AGB evolution and planetary nebulae, and evaluate whether they can produce enough ionizing photons and a sufficiently high ionization parameter.
These stars have left the asymptotic giant branch and are evolving horizontally on the H-R diagram towards very high temperatures ($\sim10^5$K) before cooling down to form white dwarfs. They are burning hydrogen or helium in a shell around a degenerate core. Because their temperatures are high enough to ionize the surrounding medium and plenty of material has been expelled from them in earlier evolutionary stages, they are often accompanied by a planetary nebula. After their planetary nebulae disperse into the interstellar medium, the long-lived post-AGB stars can produce a diffuse ionizing field.
Most of our knowledge about post-AGB stars comes from studies of planetary nebulae. Observations of planetary nebulae have shown that their dynamical ages are about 30000 years [@Schonberner83; @Phillips89]. The time spent by stars in the post-AGB phase is a very strong function of their core mass [@Renzini83]. For high-mass post-AGB stars, the evolution is very fast: a post-AGB star with a core mass of $1.0M_\sun$ has a nuclear burning time of only 25 yr [@Tylenda89] and fades by a factor of 10 in luminosity on a similar timescale. These stars evolve too fast for their planetary nebulae to remain visible for long. Thus most planetary nebulae have central stars with core masses less than $0.64M_\odot$ [@TylendaS89]. On the other hand, very low mass post-AGB stars ($M < 0.55M_\sun$) evolve so slowly that, before they reach the temperature of $3\times10^4K$ needed to ionize hydrogen, the material expelled in the AGB phase has completely dissipated into the interstellar medium. These stars, termed ‘lazy post-AGB’ stars [@Renzini81], will not appear as planetary nebulae. Therefore, considering the lifetimes of post-AGB stars and the dynamical ages of planetary nebulae, the central stars of planetary nebulae have to be post-AGB stars with core masses in the narrow range $0.55$—$0.64M_\sun$ (@TylendaS89; see also @Buzzoni06 Fig. 15 for a nice illustration of the mass-dependent PN visibility). This core mass range corresponds roughly to an initial mass of $1$—$3M_\sun$ [@Weidemann00]. Lazy post-AGB stars and post-AGB stars that live longer than 30000 years will form a diffuse ionizing field capable of ionizing neutral gas in the larger-scale interstellar medium.
However, our current understanding of the late stages of stellar evolution is fairly poor and we do not know the temperature and age distributions of these stars well. Most post-AGB stars observed are either hidden inside planetary nebulae or observed when they are not yet hot enough to ionize the nebula. Few hot naked post-AGB stars have been observationally identified [@Napiwotzki98; @Brown00; @Weston10], which may be due to a strong observational bias, since they will be very luminous in the extreme UV but very faint in the optical. Therefore, it is uncertain what fraction of post-AGB stars contribute to the large-scale photo-ionizing field and what fraction are hidden inside planetary nebulae.
Fortunately, there are two clues indicating that planetary nebulae do not dominate the line luminosity in most of our line-emitting galaxies. First, if the line emission is dominated by planetary nebulae, their kinematics should follow the stellar kinematics exactly. However, [@Sarzi06] showed that the ionized-gas kinematics is decoupled from the stellar kinematics in the majority of galaxies in their sample.
Second, we can estimate the total luminosity contributed by planetary nebulae by integrating the planetary nebula luminosity function. We take the double exponential function given by [@Ciardullo89], $$\log N(M) = 0.133M + \log [1 - e^{3(M^*-M)}] + const,$$ where M is defined as $M_{\rm [OIII]} = -2.5\log F_{\rm
[OIII]}-13.74$. The bright cut-off magnitude $M^*$ is $-4.47$. The faint cut-off magnitude is 8 mags fainter than $M^*$ [@Henize63]. The normalization is usually given as the total number of planetary nebulae within the two cut-off magnitudes divided by the total luminosity of the galaxy. We adopt the median value reported by [@Buzzoni06] for a sample of early-type galaxies, $N=1.65\times10^{-7}L_{gal}/L_\odot$. With these, we find that the total luminosity produced by planetary nebulae should be $L({\text{[\ion{O}{3}]}})= 1.35\times10^{28} L_{\rm gal}/L_\odot~{\rm erg~s}^{-1}$. To estimate the total PN light observed through the fiber, we should use the fiber magnitude to derive $L_{\rm gal}$. For the 25% passive red galaxies at $0.09<z<0.1$ with the brightest total line luminosity, the median V-band absolute magnitude within the fiber aperture (derived using the fiber mags) is $-20.09$. This yields a median luminosity of $1.21\times10^{38} {\rm erg~s^{-1}}$, much smaller than the median luminosity among these galaxies, which is $6.2\times10^{39} {\rm erg~s^{-1}}$. Therefore, we conclude that planetary nebulae are only a minor contributor to the total line emission in these galaxies.
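The luminosity integral over the PNLF can be sketched as follows (a standalone illustration of the shape of the integral; the absolute normalization and the conversion of $M_{\rm [OIII]}$ to physical units are omitted):

```python
import numpy as np
from scipy.integrate import quad

M_STAR = -4.47          # PNLF bright cut-off (Ciardullo et al. 1989)
M_FAINT = M_STAR + 8.0  # faint cut-off, 8 mag fainter

def pnlf(M):
    """Unnormalized double-exponential PNLF: 10**(0.133 M) * (1 - e**(3(M*-M)))."""
    return 10.0**(0.133 * M) * (1.0 - np.exp(3.0 * (M_STAR - M)))

def lum_integrand(M):
    # each nebula of magnitude M contributes a luminosity ~ 10**(-0.4 M)
    return pnlf(M) * 10.0**(-0.4 * M)

# The total luminosity integral is dominated by the bright end of the PNLF:
bright, _ = quad(lum_integrand, M_STAR, M_STAR + 4.0)
faint, _ = quad(lum_integrand, M_STAR + 4.0, M_FAINT)
print(bright / (bright + faint))
```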
Can the diffuse ionizing field produced by naked post-AGB stars explain what we see? In this case, the sources are distributed like the stars and will produce the observed ionization parameter gradient and its luminosity dependence. The question is whether there are enough ionizing photons and what ionization parameter they can produce.
[@Binette94] argue that the diffuse ionizing field contains many more ionizing photons than planetary nebulae can produce, especially when the stellar population is older than 3 Gyr. They estimate a total $Q_0\sim1\times10^{41} s^{-1} M_\odot^{-1}$. The median stellar mass within the fiber aperture for the 25% passive galaxies at $0.09<z<0.1$ with the brightest total line luminosity is $2.1\times10^{10}
M_\odot$. Assuming that all post-AGB ionizing photons are completely absorbed and that on average it takes 2.2 photoionizing photons to produce one photon, this yields a median luminosity of $2.9\times10^{39} {\rm erg s}^{-1}$, about 1/3 of the median observed luminosity of $8.3\times10^{39} {\rm erg s}^{-1}$ (before extinction correction, but we expect the extinction to be small). Considering the uncertainties in $Q_0$ and the other parameters involved in the calculation, this can be considered good agreement. This luminosity is much larger than the contribution from all planetary nebulae. More detailed calculations by [@Stasinska08] and [@CidFernandes11] yield similar results. Thus, post-AGB stars can produce enough ionizing photons to account for most of the line luminosity, as long as all these photons are trapped inside the galaxy. This latter question is related to how the gas clouds are distributed relative to the post-AGB stars.
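The photon budget works out as follows (a sketch; the line photon energy is our assumption, taken to be that of H$\alpha$ at 6563 Å, which reproduces the quoted $2.9\times10^{39}~{\rm erg~s^{-1}}$):

```python
Q0_PER_MSUN = 1.0e41      # ionizing photons / s / Msun (Binette et al. 1994)
M_STAR_FIBER = 2.1e10     # median stellar mass in the fiber aperture [Msun]
PHOTONS_PER_LINE = 2.2    # ionizing photons per emitted line photon
E_LINE = 3.03e-12         # assumed line photon energy [erg] (H-alpha, 6563 A)

Q_total = Q0_PER_MSUN * M_STAR_FIBER            # total ionizing rate [1/s]
L_line = Q_total / PHOTONS_PER_LINE * E_LINE    # predicted line luminosity
print(f"L_line = {L_line:.1e} erg/s")           # ~3e39, vs. 8.3e39 observed
```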
In light of the decoupling of the gas kinematics from the stellar kinematics, let us assume that the line-emitting gas clouds are randomly distributed with respect to the post-AGB stars. In this case, we can estimate the ionization parameter by rescaling Fig. \[fig:pagblogu\] with the $Q_0$ appropriate for post-AGB stars. This yields an ionization parameter of $\log U= -5.2$ at 1 kpc, more than an order of magnitude lower than what is required ($\log U \sim -3.5$ from Figure \[fig:mapping\_n2ha\_all\], or $-4$ according to @Binette94). Therefore, although there may be enough photons from post-AGB stars to produce the total line luminosity, in our model the light is deposited onto clouds far away from the ionizing sources. In consequence, the flux is significantly diluted, and the resulting ionization structure and line ratios are very different from those expected for clouds closer to the ionizing sources.
A possible solution might be that the clouds closest to individual post-AGB stars dominate the luminosity; because such clouds also have the highest ionization parameters, the luminosity-weighted average ionization parameter among all clouds would then be raised. Here we show that this solution is unlikely to work. To evaluate this possibility, we compute the average spacing between post-AGB stars using a rough number density, obtained by dividing the total ionizing output by that of an average star. We use an individual post-AGB luminosity of $10^4L_\odot$, significantly higher than average, which maximizes the contribution of individual stars in this calculation. With this luminosity, the mean spacing is around $85~{\rm
pc}$ at $r=1~{\rm kpc}$ and increases outwards. For gas clouds that are randomly distributed with regard to the post-AGB stars, the luminosity each cloud receives is Flux$\times$Area. The Flux consists of two components, the diffuse background $F_b$, and the flux from its nearest post-AGB star $Q_1/(4\pi r^2)$. Over a spherical volume with diameter equal to the inter-post-AGB spacing ($r_{\rm
max}$), the total luminosity due to the diffuse background is $$L_{bkgd} = F_b {4\pi \over 3} r_{\rm max}^3 n_c \langle A\rangle,$$ where $n_c$ is the number density of gas clouds and $\langle A\rangle$ is the average projected cloud area. The total luminosity due to an individual post-AGB star is $$\begin{aligned}
L_1 &= (\int_0^{r_{\rm max}} {Q_1 \over 4\pi r^2} 4\pi r^2 \mathrm{d}r ) n_c \langle A\rangle \\
&= Q_1 r_{\rm max} n_c \langle A\rangle\end{aligned}$$ where $Q_1$ is the total photoionizing photon output rate of the star.
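Since the cloud factors $n_c \langle A\rangle$ cancel, the ratio of the two luminosities can be evaluated directly. A minimal numerical sketch (the values of $F_b$ and $Q_1$ are illustrative assumptions, not outputs of the model, and $r_{\rm max}$ is taken as half the $\sim$85 pc spacing quoted above):

```python
import math

# Illustrative inputs (assumed, not taken from the paper's model): per-star
# ionizing photon rate Q1, diffuse ionizing background flux F_b, and r_max
# set to half the ~85 pc inter-star spacing quoted for r = 1 kpc.
pc = 3.086e18                 # cm
Q1 = 1e47                     # photons/s from one luminous post-AGB star
F_b = 1e7                     # photons/s/cm^2, diffuse background
r_max = 0.5 * 85.0 * pc       # cm

# L_bkgd = F_b * (4*pi/3) * r_max^3 * n_c * <A>
# L_1    = Q1  * r_max * n_c * <A>
# The cloud number density and area cancel in the ratio:
ratio = (4.0 * math.pi / 3.0) * F_b * r_max**2 / Q1
print(ratio)   # ratio > 1 means the diffuse background dominates
```

For these assumed numbers the background term dominates by a factor of several, illustrating the behavior shown in Figure \[fig:pagbbkgd\].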
![The ratio between emission line luminosity produced by diffuse pAGB ionizing background shining on randomly distributed clouds and that produced by a single nearby pAGB star as a function of radius. This indicates that the diffuse background is dominant in ionizing randomly distributed clouds, confirming the predicted ionization parameter.[]{data-label="fig:pagbbkgd"}](pagbbkgd.ps)
Figure \[fig:pagbbkgd\] shows the ratio $L_{bkgd}/L_1$ as a function of radius for the $\gamma=1.5$ model. Except for the very central part of the galaxy, the luminosity due to the background is significantly larger than that due to individual nearby post-AGB stars. This result suggests that the ionizing field produced by post-AGB stars is fairly smooth in most parts of these galaxies. The luminosity-weighted ionization parameter should therefore be fairly close to what we showed in Fig. \[fig:pagblogu\]. Granularity in the ionization field is thus unlikely to cause a substantially increased ionization parameter.
Increasing the ionization parameter requires the clouds to be closer to the post-AGB stars. A factor of 4 decrease in the average distance would probably be enough to bring the ionization parameter into the right ballpark, since $U$ scales as the inverse square of the distance to the ionizing source. To achieve this, either the clouds must originate from the progenitors of the post-AGB stars or the post-AGB stars must be preferentially distributed near the warm/cool gas. Since the gas is quite often kinematically decoupled from the stars, both scenarios require that the post-AGB stars share the same origin as the gas, rather than that of the main stellar population. Among all post-AGB stars, those with the largest core masses, which are also the youngest, dominate in luminosity. If both the gas and the dominant post-AGB stars are associated with the most recent star formation episode, the dominant post-AGB population might share a similar spatial distribution and kinematics with the gas. This scenario would help resolve the deficit in ionization parameter. However, because our sample selection involves a cut in $D_n(4000)$ which would exclude systems with more than a few percent[^5] of their stellar mass in a young stellar population ($<1 {\rm Gyr}$), we consider this scenario unlikely. Nonetheless, it can be tested with planetary nebula kinematics, to see whether they follow the stars or the gas.
Another possible scenario is that the cool gas responsible for the emission indeed originates from the progenitors of the post-AGB stars. The ejected envelopes have expanded so much that they no longer appear as planetary nebulae, yet they are still not as far from the stars as randomly positioned clouds would be. At this point, they are completely dispersed in the interstellar medium and are carried along by the motion of the hot gas, and thus they appear kinematically decoupled from the stars. In this picture, the cold gas has an internal origin, but its kinematics are driven by the hot gas, which is kinematically decoupled from the stars, perhaps due to mergers and the collisional nature of the gas.
Another possible solution is that the stellar distribution resembles a thick disk more than a sphere. Compared to the spherically symmetric distribution we assumed, a flatter, disky distribution would bring the stars closer to the gas, raising the ionization parameter. We can investigate this by checking whether the stronger line-emitting systems are preferentially more disky in morphology. We leave this for future investigation.
A final possible solution is that the abundance of post-AGB stars is much larger than predicted.
To summarize, the post-AGB star photoionization model can naturally produce the general variation of ionization parameter with radius, including the sharp rise at small radius and the gentle decline at large radius. It can also reproduce the overall direction of the luminosity dependence of the line ratio gradient. This result strongly indicates that the spatial distribution of the true ionizing source is similar to the stellar distribution. However, based on our current knowledge of post-AGB stars, the ionization parameter they produce would be too small, even though they may have sufficient total luminosity. The uncertainty in the number density of post-AGB stars is still large [@Brown00; @Brown08; @Weston10] and observations are scarce. Deeper observations and larger surveys of these stars are necessary to settle these questions.
### Other possible distributed photoionizing sources
Low-mass X-ray binaries and extreme horizontal branch stars are two other evolved populations that could provide some additional ionizing photons. However, [@Sarzi10] have argued that they would produce much fewer photoionizing photons than post-AGB stars. Thus they are unlikely to be responsible on their own for the observed line emission, or to make up the ionization parameter deficit found above.
Recently, high-mass X-ray binaries (HMXBs) and ultraluminous X-ray sources (ULXs) have also been invoked to explain the LINER-like emission [@McKernan11]. However, we do not think this population would solve the deficit either. First, in old galaxies like those in our sample, there would be very few high-mass X-ray binaries, because they are associated only with young stellar populations. Second, both HMXBs and ULXs are X-ray bright, so they should have been included in the accounting by [@Eracleous10], who showed that extrapolating the X-ray luminosity in the nuclear region of LINERs to the ultraviolet does not yield enough ionizing photons to produce the nuclear luminosity observed. Therefore, these components would make at most a minor contribution.
The hot X-ray-emitting gas is also a distributed ionizing source that can produce LINER-like emission [@VoitD90; @DonahueV91]. Because the hot gas density approximately follows the square root of the stellar density, and its emissivity scales as the density squared, the X-ray emission should have the same luminosity density profile as the stars. Therefore, it can also produce the expected trend in line ratio gradient and luminosity dependence.
However, the X-ray gas is unlikely to produce enough ionizing photons. For the typical galaxy in our sample (the 25% passive red galaxy at $0.09<z<0.1$ with the brightest total line luminosity), with a median stellar mass of $7.5\times10^{10}M_\odot$ and $L_B=2.8\times10^{10}L_\odot$, the X-ray luminosity from the hot gas is on the order of $10^{41} {\rm erg~s^{-1}}$ [@O'Sullivan01], much lower than the total ionizing luminosity of post-AGB stars ($\sim10^{42} {\rm erg~s^{-1}}$). Therefore, the hot gas should be subdominant to post-AGB stars and would make an even smaller contribution to the ionization parameter.
Fast radiative shocks
---------------------
Shocks are prevalent in many astrophysical phenomena, such as supernova explosions, stellar winds, AGN jets and outflows, and cloud collisions. Collisional excitation in the post-shock medium can produce line ratios similar to LINERs. A fast radiative shock can also photoionize the unshocked precursor region. When combined with the emission lines produced in the cooling zone of the shock, the resulting line ratios are similar to those of Seyferts [@GrovesDS04II]. In this section, we investigate whether the shock model can reproduce the trends we observed in the data.
Because the shock-only model produces a better match to the LINER-like line ratios observed in these passive red galaxies, we only consider this model. In this model, the line ratios are determined by four parameters: shock velocity, magnetic field strength, density, and metallicity. Because of the strong dependence of line ratios on shock velocity, if a wide range of shock velocities is present in a galaxy, we should observe different widths for different lines in the shock-only model. The data do show different widths for different emission lines. The issue is whether the velocity dependence of the line ratios can produce the observed width differences in the right direction.
In Section \[sec:linewidth\], we showed that the line is on average wider than lines by 16%. This means that the high-velocity line-emitting regions have higher / ratios than the low-velocity regions. Although the bulk-motion velocity of clouds is not the same as the shock velocity, we expect that, in general, faster-moving clouds in a galaxy generate faster shocks when they collide. If the lines are mostly produced in post-shock cooling zones resulting from cloud collisions, explaining the data would require a higher / ratio to be produced in faster shocks.
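The logic of this argument can be illustrated with a toy two-component mixture (all numbers below are assumed for illustration only): if a line is relatively brighter in the fast-moving gas, its flux-weighted width exceeds that of a line that is relatively brighter in the slow gas.

```python
import math

# Toy illustration with assumed numbers: a slow and a fast emitting
# component whose line ratios differ produce different observed widths
# for different lines.
sigma_slow, sigma_fast = 100.0, 200.0     # velocity widths in km/s
# Fluxes of two lines in each component; line A is boosted in the fast gas.
flux_A = {"slow": 1.0, "fast": 2.0}
flux_B = {"slow": 1.0, "fast": 0.5}

def mixed_width(flux):
    # The variance of a sum of zero-centered Gaussians is the
    # flux-weighted mean of the component variances.
    wtot = flux["slow"] + flux["fast"]
    var = (flux["slow"] * sigma_slow**2 + flux["fast"] * sigma_fast**2) / wtot
    return math.sqrt(var)

print(mixed_width(flux_A) > mixed_width(flux_B))  # line A appears wider
```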
![Line ratios produced by shocks as a function of shock velocity for different magnetic field strengths. The curves are plotted from dark black to light grey in order of increasing magnetic field strength.[]{data-label="fig:shock_velocity"}](shock_velocity.ps)
We examine the line ratio dependence on velocity in the fast shock models of [@Allen08], which were run with the MAPPINGS III code. Figure \[fig:shock\_velocity\] shows the / ratio as a function of shock velocity for different magnetic field strengths. This model is run with solar metallicity and a pre-shock gas density of $n=100 {\rm cm}^{-3}$. In most of the parameter space, / decreases with increasing velocity. Only for $B/n^{1/2} \ge 10 \mu{\rm G cm^{3/2}}$ and for shock velocities between 250 and 400[km/s]{} does the / ratio increase with velocity. However, these parameter ranges do not produce the right velocity dependence for the other line ratios. The lines in our sample are narrower than lines by 7% on average, and the lines have roughly the same width as the lines. This requires the / ratio to decrease with velocity and the / ratio to stay constant with velocity. However, the model predicts a strongly increasing / and a slightly increasing or flat / with velocity in those particular parameter ranges, inconsistent with what is required to explain the observations. Therefore, although shocks certainly exist in most galaxies, they are probably not the dominant source of the extended line emission in these passive red galaxies.
Our conclusion agrees with that of [@Sarzi10], who also argued against the shock scenario as the dominant ionizing source, based on the low circular velocities and velocity dispersions observed, the lack of morphological correlation between the line emission structure and the line ratio structure, and the flat EW distribution.
On the other hand, [@Annibali10] argued that shocks could be important in the central regions of some early-type galaxies, as AGN jet-driven outflows or accretion onto a massive black hole could possibly reach the high velocities (300-500 km/s) required by the shock models. Shocks certainly exist in these situations, but having the right conditions for shocks to occur does not mean that shocks are directly responsible for the ionization of the gas. Further evidence, such as a correlation between line ratio and velocity, is necessary.
In a totally different case, that of ultraluminous infrared galaxies (ULIRGs), shocks could indeed be responsible for producing the strong LINER-like emission found there [@Monreal-Ibero06; @Monreal-Ibero10; @Alonso-Herrero10]. As these galaxies are usually the results of major mergers, stronger and faster shocks are more prevalent. Star formation in these galaxies might also be partially responsible for the line emission.
Conclusions
===========
In this paper, we studied the spatial distribution of LINER-like line emission in passive red galaxies by comparing the nuclear emission luminosity measured from the Palomar survey with the larger aperture data from SDSS. We find strong evidence for line ratio gradients. We also find that different emission lines have different velocity widths, in contrast to the uniform velocity widths in star-forming galaxies. We have reached the following conclusions.
1. In the majority of line-emitting red galaxies, the line emission is spatially extended and its intensity peaks at the center. The average surface brightness profile can be well approximated by a power-law with an index of $-1.28$. Line-emitting red galaxies identified with nuclear aperture spectroscopy or those with extended aperture spectroscopy are essentially the same population.
2. Line ratio gradients exist in these line-emitting red galaxies, with the very center having generally larger /, /, and smaller / than the outskirts. The / gradient requires an increasing ionization parameter towards larger distances. Because the cool gas density is likely to fall with radius at a much slower rate than $r^{-2}$, an outward increasing ionization parameter strongly disfavors AGN as the dominant ionizing mechanism in these galaxies.
3. The line ratio gradient can be produced by ionizing sources that are distributed like the stars. This model also predicts different line ratio gradient trends in bright and faint galaxies, which are generally matched by observations.
4. The leading candidate for the ionizing source is the population of post-AGB stars. The majority of these stars cannot be central stars of planetary nebulae, but have to be naked post-AGB stars creating a diffuse ionizing field. However, the ionization parameter produced by post-AGB stars falls short of the required value by more than a factor of 10. Either the abundance of post-AGB stars is underpredicted or their spatial distribution has to be much closer to the gas clouds than assumed. The latter possibility would suggest a common origin of the gas and the post-AGB stars.
5. Different emission lines in passive red galaxies often have different widths. The is on average wider than by 16%; and are wider than by $\sim8\%$. The width ratios do not vary as a function of aperture size. This latter result strongly suggests that the width ratio is not produced by the combination of the line ratio gradient and rotation, but is more likely due to a multiphase ISM in these galaxies.
6. We considered shock models for producing these trends. Because line ratios produced in the cooling zone of the shocks depend strongly on the shock velocity, these models naturally produce width differences among different lines. However, their velocity dependence generates width differences opposite to what we observe. Therefore, shocks are strongly disfavored by our results as the dominant ionizing source in these passive red galaxies. They may, however, be responsible for producing the LINER-like emission found in ULIRGs.
7. The systematically different / ratio profiles between bright and faint galaxies (Fig. \[fig:n2ha\_logu\_all\]) suggest that the gas-phase metallicity is dependent on galaxy luminosity.
Our result strongly disfavors AGN as the dominant ionization mechanism for the line emission in passive red galaxies. However, this does not mean that all LINERs have nothing to do with AGN. For a large fraction of the nuclear LINERs identified in the Palomar survey, accretion activity probably does exist, as evidenced by the detection of compact radio cores [@Nagar00] and X-ray point sources in their centers [@Ho01]. Our result does mean that the optical line emission in most of them is probably powered by sources unrelated to AGN activity. The AGN is probably significantly less luminous in line emission than previously thought and dominates on scales much smaller than 100 [pc]{}. This result also helps to resolve the energy budget problem reported for most of these “nuclear LINERs.” The large X-ray and radio detection fraction may be a result of a greater fuel supply in these relatively “gas-rich" early-type galaxies.
For line emission found in apertures covering a much larger area, such as in the SDSS at $z>0.02$, the line emission is nearly always dominated by extended emission unrelated to AGN activity. Therefore, most studies that use line emission to derive AGN bolometric luminosities for LINER-like objects from SDSS data (e.g. @KauffmannHT03 [@Kewley06; @KauffmannH09; @Choi09]) or from higher-$z$ surveys [@Bongiorno10] probably need to have their results re-examined. The impact is probably most significant for objects with $L_{\rm [OIII]} \lesssim 10^6 L_\odot $, for which we have demonstrated that the ionizing sources are outside the nucleus. The exact threshold is also a function of the aperture size and galaxy luminosity.
Although we only focused on passive red galaxies in our investigation, the result should also apply to LINER-like objects among younger red-sequence galaxies. Those red LINER-like objects with smaller $D_n(4000)$ probably also have some weak star-forming activity contributing to their line emission, as demonstrated in Fig. \[fig:n2ha\_hal\_sloanpalomar\].
The extended line emission is present in more than half of red-sequence galaxies and is much more luminous than most low-ionization AGNs. Based on the Palomar results, only the brightest few percent of low-ionization AGNs have a chance of detection in large aperture spectroscopy data.
Our result favors a distribution of ionizing sources that follows the stars, but does not confirm post-AGB stars as the ionizing sources. Post-AGB stars are the only source that has sufficient total energy to produce the observed emission lines. However, they fall short in the ionization parameter. This mystery awaits future observations to resolve.
If the post-AGB stars are confirmed as the ionizing sources, then LINERs can provide a window onto the gas dynamics of passive red galaxies. The total flux from post-AGB stars stays fairly constant with the age of the stellar population, except during the first Gyr after a starburst. In this case, our results would indicate that the differing amounts of line emission in these galaxies mainly trace different amounts of cool/warm gas. We could therefore use the observed line strength to study the cooling and heating of warm gas in early-type galaxies.
This study shows how much information we can learn from well-calibrated, high-resolution spectroscopy with wide wavelength coverage. It also demonstrates the power of large statistical samples. More detailed, spatially resolved IFU studies of nearby early-type galaxies are obviously the next step to confirm our results. An important lesson from this work is that the inclusion of , , and in the resolved spectra was essential to constraining the ionizing sources; indeed, in our case the broad wavelength range is arguably more important than the spatial resolution available to IFU observations. This result motivates the use of IFU techniques with broad wavelength coverage to maximize the available information.
We would like to thank the referee for detailed and thorough comments, which helped us improve the paper. RY would like to thank Guangtun Zhu and Timothy Heckman for illuminating discussions that greatly improved this work. RY and MB acknowledge the support of the NSF Grant AST-0908354, NASA Grant 08-ADP08-0019m, NASA Grant 08-ADP08-0072, and a Google Research Award.
Funding for the Sloan Digital Sky Survey (SDSS) has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, the University of Pittsburgh, Princeton University, the United States Naval Observatory, and the University of Washington.
[^1]: http://spectro.princeton.edu/
[^2]: http://sdss.physics.nyu.edu/vagc/
[^3]: $U$ is the dimensionless ratio of the ionizing photon flux density to the electron density, $U\equiv q(H^0)/(cn_H)$.
[^4]: The data show that the line emission surface brightness profile has only a weak dependence on the galaxy luminosity. Thus, we adopt the same luminosity profile for all models.
[^5]: The exact mass fraction of the young population tolerable by our $D_n(4000)$ cut depends on the assumptions used in the stellar population modeling. Assuming an old simple stellar population with solar metallicity and an age of 4.6 Gyr, which yields the observed median $D_n(4000)$ of 1.9, our $D_n(4000)$ cut at $z\sim0.1$ can only tolerate at most $2\%$ of its stellar mass coming from a population younger than 1 Gyr.
---
abstract: 'The paper presents new method for calculating the low-temperature asymptotics of free energy of the $3D$ Ising model in external magnetic field $(H\neq 0)$. The results obtained are valid in the wide range of temperature and magnetic field values fulfilling the condition: $[1-\tanh(h/2)]\sim\varepsilon,$ for $\varepsilon\ll 1$, where $h=\beta H$, $\beta$ - the inverse temperature and $H$ - external magnetic field. For this purpose the method of transfer-matrix, and generalized Jordan-Wigner transformations, in the form introduced by the author in $\cite{mkoch95}$, are applied.'
address: |
Institute of Physics, Pedagogical University\
T.Rejtana 16 A , 35–310 Rzeszów, Poland\
e-mail: [email protected]
author:
- 'Martin S. Kochmański'
title: '[LOW-TEMPERATURE ASYMPTOTICS OF FREE ENERGY OF $3D$ ISING MODEL IN EXTERNAL MAGNETIC FIELD]{}'
---
Formulation of the problem {#sec: level1}
==========================
As is well known, no exact solution for the $2D$ Ising model in an external magnetic field $(H\neq 0)$ has been found to date. In the case of the $3D$ Ising model there does not exist an exact solution even for vanishing magnetic field $(H=0)$, let alone the case with a magnetic field. Despite the great successes in investigations of the Ising model achieved using the renormalization group method $\cite{wilson74}$ and other approximate methods $\cite{mccoy-wu73,ma76,sinaj80,curr91}$, the problem of calculating various asymptotics for the $2D$ and $3D$ Ising models in an external magnetic field $(H\neq 0)$ is still of great importance. In the paper $\cite{koch97}$ we calculated the low-temperature asymptotics for the $2D$ Ising model in an external magnetic field $(H\neq 0)$, and the free energy of this model in the limit of asymptotically vanishing magnetic field. In this paper we briefly discuss the problem of calculating the low-temperature asymptotics of the free energy of the $3D$ Ising model in an external magnetic field $(H\neq 0)$, following the approach and ideas introduced in $\cite{koch97}$.
Let us consider a cubic lattice built of $N$ rows, $M$ columns and $K$ planes, to whose vertices are assigned the numbers $\sigma_{nmk}$ from the two-element set $\pm 1$. These quantities will be called the Ising “spins” here and everywhere below. The multiindex $(nmk)$ numbers the vertices of the lattice, with $n$ numbering rows, $m$ numbering columns, and $k$ numbering planes. The Ising model with nearest-neighbor interaction in an external magnetic field is described by the Hamiltonian: $${\cal H}=-\sum^{NMK}_{(n,m,k)=1}\left(J_{1}\sigma_{nmk}\sigma_{n+1,mk}+
J_{2}\sigma_{nmk}\sigma_{n,m+1,k}+J_{3}\sigma_{nmk}\sigma_{nm,k+1}+
H\sigma_{nmk}\right),$$ taking into account the anisotropy of the interaction between nearest neighbors $(J_{1,2,3}>0)$ and the interaction of the spins $\sigma_{nmk}$ with the external magnetic field $H$, directed “up” $(\sigma_{nmk}=+1)$. The main problem consists in calculating the statistical sum of the system: $$\begin{aligned}
Z_3(h)=\sum_{\sigma_{111}=\pm 1} ... \sum_{\sigma_{NMK}=\pm 1}e^{-\beta\cal H}=\end{aligned}$$ $$\sum_{\{\sigma_{nmk}=\pm 1\}}\exp\left[\sum_{nmk}(K_1\sigma_{nmk}
\sigma_{n+1,mk}+K_2\sigma_{nmk}\sigma_{n,m+1,k}+K_3\sigma_{nmk}
\sigma_{nm,k+1}+h\sigma_{nmk})\right],$$ where $K_{1,2,3}={\beta}J_{1,2,3}, \;\;\; h={\beta}H, \;\;\;\beta=1/k_{B}T$. Typical boundary conditions for the variables $\sigma_{nmk}$ are the periodic ones. We follow this standard assumption everywhere below. Let us note here that the statistical sum $(1.2)$ is symmetric with respect to the change $(h\rightarrow -h)$.
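For intuition, the statistical sum $(1.2)$ can be evaluated by brute force on a tiny periodic lattice; the sketch below (with illustrative coupling values, not taken from the paper) also verifies the stated symmetry under $h\rightarrow -h$:

```python
import itertools
import math

# Brute-force evaluation of Eq. (1.2) on a tiny 2x2x2 periodic lattice
# (coupling values are illustrative assumptions).
N = M = K = 2
K1 = K2 = K3 = 0.5          # K_i = beta * J_i
sites = [(n, m, k) for n in range(N) for m in range(M) for k in range(K)]

def Z3(h):
    total = 0.0
    for cfg in itertools.product([1, -1], repeat=len(sites)):
        s = dict(zip(sites, cfg))
        e = 0.0
        for (n, m, k) in sites:
            # nearest-neighbor terms with periodic boundary conditions
            e += K1 * s[(n, m, k)] * s[((n + 1) % N, m, k)]
            e += K2 * s[(n, m, k)] * s[(n, (m + 1) % M, k)]
            e += K3 * s[(n, m, k)] * s[(n, m, (k + 1) % K)]
            e += h * s[(n, m, k)]
        total += math.exp(e)
    return total

# Z_3 is symmetric under h -> -h (flip all spins in the sum):
print(abs(Z3(0.3) - Z3(-0.3)) / Z3(0.3) < 1e-9)
```

Flipping every spin maps a configuration with field $h$ onto one with field $-h$ at the same interaction energy, which is exactly the symmetry noted in the text.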
In this letter we consider a limited version of the problem, namely, the calculation of the low-temperature asymptotics of the free energy of the $3D$ Ising model in an external magnetic field. More precisely, given the coupling constants $(J_{1,2,3}=const)$ and the external magnetic field $(H=const)$, we consider the region of temperatures for which $h$ is large, $h\sim\ln(1/\varepsilon), \;\;\;\;\; \varepsilon\ll 1$. To be more exact, we introduce a small parameter in the following way: $$1 - \tanh(h/2)\sim\varepsilon, \;\;\;\;\; \varepsilon\ll 1 .$$ Then we consider the problem of calculating the free energy per Ising spin in the thermodynamic limit, to within quantities of order $\sim{\varepsilon}^2$ in the expansions of the operators associated with the interactions of the spins both among themselves and with the external field (details of the approximation used are presented below). In our opinion the problem formulated above is of considerable importance and, as far as is known to the author, it has not been investigated in the existing literature.
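Since $1-\tanh(h/2)=2/(e^{h}+1)$, the condition $(1.3)$ fixes how large $h=\beta H$ must be for a given $\varepsilon$; a quick numerical check of this identity and its large-$h$ form:

```python
import math

# 1 - tanh(h/2) = 2/(exp(h)+1) ~ 2*exp(-h) for h >> 1,
# so epsilon << 1 corresponds to h ~ ln(2/epsilon).
for h in (3.0, 5.0, 8.0):
    eps = 1.0 - math.tanh(h / 2.0)
    exact = 2.0 / (math.exp(h) + 1.0)
    asymptotic = 2.0 * math.exp(-h)
    print(h, eps, exact, asymptotic)
```

Already at $h=5$ the asymptotic form agrees with the exact value to better than one percent.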
Partition function {#sec: level2}
==================
Let us consider an auxiliary $4D$ Ising model in an external magnetic field $H$ on a simple $4D$ lattice $(N\times M\times K\times L)$. We write the Hamiltonian for the $4D$ Ising model with nearest-neighbor interaction in the form: $${\cal H}=-\sum_{n,m,k,l}\left(J_{1}\sigma_{nmkl}\sigma_{n+1,mkl}+
J_{2}\sigma_{nmkl}\sigma_{n,m+1,kl}+J_{3}\sigma_{nmkl}\sigma_{nm,k+1,l}+
J_{4}\sigma_{nmkl}\sigma_{nmk,l+1}+H\sigma_{nmkl}\right),$$ taking into account the anisotropy of the interaction between nearest neighbors $(J_{1,2,3,4}>0)$ and the interaction of the spins $\sigma_{nmkl}$ with the external magnetic field $H$, directed “up” $(\sigma_{nmkl}=+1)$. Here in $(2.1)$ the multiindex $(nmkl)$ numbers the vertices of the $4D$ lattice, and the indices $(n,m,k,l)$ take on values from $1$ to $(N,M,K,L)$, respectively. As in the case of the $3D$ Ising model, we introduce periodic boundary conditions for the variables $\sigma_{nmkl}$. Then we write the partition function $Z_4(h)$ in the form: $$\begin{aligned}
Z_4(h)=\sum_{\sigma_{1111}=\pm 1} ... \sum_{\sigma_{NMKL}=\pm 1}e^{-\beta\cal
H}=\sum_{\{\sigma_{nmkl}=\pm 1\}}\exp\left[\sum_{nmkl}(K_1\sigma_{nmkl}
\sigma_{n+1,mkl} + \right. \end{aligned}$$ $$\left.K_2\sigma_{nmkl}\sigma_{n,m+1,kl} + K_3\sigma_{nmkl}\sigma_{nm,k+1,l}
+ K_4\sigma_{nmkl}\sigma_{nmk,l+1} + h\sigma_{nmkl})\right],$$ where the quantities $K_{i}$ and $h$ are defined as above $(1.2)$ $\cite{baxter82,izyum87}$. Using the well-known transfer-matrix method, we can write the expression $(2.2)$ as the trace of the $L$-th power of the operator $\hat{T}$: $$Z_4(h)=Tr(\hat{T})^L, \;\;\;\; \hat{T}=T_4T_h^{1/2}T_3T_2T_1T_h^{1/2},$$ where the operators $T_{1,2,3,4,h}$ are defined by the formulas: $$T_1=\exp\left(K_1\sum_{nmk}\tau^{z}_{nmk}\tau^{z}_{n+1,mk}\right), \;\;\;\;
T_2=\exp\left(K_2\sum_{nmk}\tau^{z}_{nmk}\tau^{z}_{n,m+1,k}\right),$$ $$T_3=\exp\left(K_3\sum_{nmk}\tau^{z}_{nmk}\tau^{z}_{nm,k+1}\right), \;\;\;\;
T_4=(2\sinh 2K_4)^{NMK/2}\exp\left(K^{*}_{4}\sum_{nmk}\tau^{x}_{nmk}\right),$$ $$T_h=\exp\left(h\sum_{nmk}\tau^{z}_{nmk}\right),$$ and the quantities $K_4$ and $K^{*}_4$ are coupled by the following relations: $$\tanh(K_{4})=\exp(-2K_{4}^{*}), \;\;\; or \;\;\; \sinh2K_{4}\sinh2K_{4}^{*}=1.$$ The Pauli spin matrices $\tau^{x,y,z}_{nmk}$ commute for $(nmk)\neq (n'm'k')$, and for a given $(nmk)$ these matrices satisfy the usual relations $\cite{huang63}$. It is easy to see that the matrices $T_{1,2,3,h}$ commute among themselves, but do not commute with the matrix $T_4$. In the case in which one of the quantities $K_i=0,\;\; (i=1,2,3)$, we obviously recover the known expressions describing the $3D$ Ising model on a simple cubic lattice. Namely, the transition to the $3D$ Ising model with respect to the coupling constant $K_1$, $K_2$, or $K_3$ is realized by taking $(K_1=0)$, or $(K_2=0)$, or $(K_3=0)$, and removing the summation over $n$, $(N=1)$, or over $m$, $(M=1)$, or over $k$, $(K=1)$, respectively. As a result we get the standard expressions $\cite{baxter82}$ for the $3D$ Ising model in an external magnetic field. In each of these cases the corresponding operator $T_i, \;\;(i=1,2,3)$, is identically equal to the unit operator $(T_i\equiv \hat{1})$. A slightly different situation arises in the case of the transition to the $3D$ Ising model with respect to the coupling constant $K_4$. In this case we take $(K_4=0,\;\; L=1)$, i.e. we remove the summation over $l$. As a consequence we get the following expression for the operator $T_4$, $(2.5)$: $$T^{*}_{4}\equiv T_4(K_4=0)=\prod_{nmk}(1+\tau^{x}_{nmk}) ,$$ where we used the relation $(2.7)$. Then, after passing to the limit $(K_4=0, \;\; L=1)$ in $(2.3)$, we can write the following expression for the partition function of the $3D$ Ising model: $$Z_3(h)=Tr(T_4^*T_h^{1/2}T_3T_2T_1T_h^{1/2}),$$ where the matrices $T_i$ are defined as above $(2.4-6,8)$. Now we pass to the fermionic representation.
To this end one should write the matrices $T_i$ in terms of the Pauli operators $\tau_{nmk}^{\pm}$ $\cite{izyum87}$: $$\tau^{\pm}_{nmk}=\frac{1}{2}(\tau^{z}_{nmk}\pm i\tau^{y}_{nmk}),$$ which satisfy anticommutation relations at a single vertex and commute at different vertices.
As the next step one should pass from the representation in terms of Pauli operators $(2.10)$ to the representation in terms of Fermi creation and annihilation operators $\cite{mkoch95}$. In the paper $\cite{mkoch95}$ appropriate transformations (generalized transformations of the Jordan-Wigner type) were introduced, enabling the transition to the fermionic representation: $$\begin{aligned}
\tau^+_{nmk}=\exp \left[ i\pi\left(\sum^{N}_{s=1}
\sum^{M}_{p=1}\sum^{k-1}_{q=1}\alpha^{+}_{spq}\alpha_{spq}+
\sum^{N}_{s=1}\sum^{m-1}_{p=1}\alpha^+_{spk}\alpha_{spk}+
\sum^{n-1}_{s=1}\alpha^+_{smk}\alpha_{smk}\right)\right]\alpha^{+}_{nmk}
\nonumber\\
\tau^+_{nmk}=\exp \left[ i\pi\left(\sum^{N}_{s=1}
\sum^{M}_{p=1}\sum^{k-1}_{q=1}\beta^{+}_{spq}\beta_{spq}+
\sum^{ n-1}_{s=1}\sum^{M}_{p=1}\beta^+_{spk}\beta_{spk}+
\sum^{m-1}_{p=1}\beta^+_{npk}\beta_{npk}\right)\right]\beta^+_{nmk}
\nonumber\\
\tau^+_{nmk}=\exp \left[ i\pi\left(\sum^{N}_{s=1}
\sum^{m-1}_{p=1}\sum^{K}_{q=1}\gamma^{+}_{spq}\gamma_{spq}+
\sum^{N}_{s=1}\sum^{k-1}_{q=1}\gamma^+_{smq}\gamma_{smq}+
\sum^{n-1}_{s=1}\gamma^+_{smk}\gamma_{smk}\right)\right]\gamma^+_{nmk}
\nonumber\\
\tau^+_{nmk}=\exp \left[ i\pi\left(\sum^{N}_{s=1}
\sum^{m-1}_{p=1}\sum^{K}_{q=1}\eta^{+}_{spq}\eta_{spq}+
\sum^{n-1}_{s=1}\sum^{K}_{q=1}\eta^+_{smq}\eta_{smq}+
\sum^{k-1}_{q=1}\eta^+_{nmq}\eta_{nmq}\right)\right]\eta^+_{nmk}
\nonumber\\
\tau^+_{nmk}=\exp \left[ i\pi\left(\sum^{n-1}_{s=1}
\sum^{M}_{p=1}\sum^{K}_{q=1}\omega^{+}_{spq}\omega_{spq}+
\sum^{M}_{p=1}\sum^{k-1}_{q=1}\omega^+_{npq}\omega_{npq}+
\sum^{m-1}_{p=1}\omega^+_{npk}\omega_{npk}\right)\right]\omega^+_{nmk}
\nonumber\\
\tau^+_{nmk}=\exp \left[ i\pi\left(\sum^{n-1}_{s=1}
\sum^{M}_{p=1}\sum^{K}_{q=1}\theta^{+}_{spq}\theta_{spq}+
\sum^{m-1}_{p=1}\sum^{K}_{q=1}\theta^+_{npq}\theta_{npq}+
\sum^{k-1}_{q=1}\theta^+_{nmq}\theta_{nmq}\right)\right]\theta^+_{nmk}\end{aligned}$$ and analogously for the operators $\tau^{-}_{nmk}$. In the paper $\cite{mkoch95}$ we obtained formulas for relations between various Fermi operators, and commutation relations for them. Further in this paper we will use the fact that the following equality of local occupation numbers is valid: $$\begin{aligned}
\tau^+_{nmk}\tau^-_{nmk}&=&\alpha^+_{nmk}\alpha_{nmk}=\beta^+_{nmk}\beta_{nmk}=
\gamma^+_{nmk}\gamma_{nmk}=\eta^+_{nmk}\eta_{nmk}=
\omega^+_{nmk}\omega_{nmk}=\theta^+_{nmk}\theta_{nmk}.\end{aligned}$$ Then, applying the expressions $(2.10)$–$(2.12)$ and the considerations of the paper $\cite{koch97}$, we can write the partition function $(2.9)$ in the form: $$Z_3(h)=(2\cosh^2h/2)^{NMK}<0|T^*|0>=A<0|U+{\mu}^2CUD|0>, \;\;
U\equiv T_h^lT_3T_2T_1T_h^r,$$ where $A=(2\cosh^2h/2)^{NMK}$ and $\mu=\tanh(h/2)$, and the operators $T_{1,2,3}$, $T^{l,r}_h$ and $C,D$ are of the form: $$\begin{aligned}
T_1=\exp\left[K_{1}\sum_{n,m,k=1}^{N,M,K}(\alpha^{+}_{nmk}-\alpha_{nmk})
(\alpha^{+}_{n+1,mk}+\alpha_{n+1,mk})\right] , \nonumber\\
T_2=\exp\left[K_{2}\sum_{n,m,k=1}^{N,M,K}(\beta_{nmk}^{+}-\beta_{nmk})
(\beta^{+}_{n,m+1,k}+\beta_{n,m+1,k})\right] , \nonumber\\
T_3=\exp\left[K_{3}\sum_{n,m,k=1}^{N,M,K}(\theta_{nmk}^{+}-\theta_{nmk})
(\theta^{+}_{nm,k+1}+\theta_{nm,k+1})\right] ,\end{aligned}$$ and $$\begin{aligned}
T^r_h=\exp\!\left\{\!\mu^2\left[\sum_{nmk}\sum^{N-n}_{s=1}
\alpha^{+}_{nmk}\alpha^{+}_{n+s,mk}+\sum_{nn'mk}\sum^{M-m}_{t=1}
\alpha^{+}_{nmk}\alpha^{+}_{n',m+t,k}+\sum_{nn'mm'k}\sum^{K-k}_{l=1}
\alpha^+_{nmk}\alpha^+_{n'm',k+l}\right]\!\right\},\nonumber\\
T^l_h=\exp\!\left\{\!\mu^2\left[\sum_{nmk}\sum^{K-k}_{l=1}
\theta_{nm,k+l}\theta_{nmk}+\sum_{nmkk'}\sum^{M-m}_{t=1}
\theta_{n,m+t,k}\theta_{nmk'}+\sum_{nmm'kk'}\sum^{N-n}_{s=1}
\theta_{n+s,mk}\theta_{nm'k'}\right]\!\right\}, \end{aligned}$$ $$\begin{aligned}
C=\sum_{nmk}\theta_{nmk}, \;\;\;\;\;\;\;\;\; D=\sum_{nmk}\alpha^+_{nmk}.\end{aligned}$$ Here and below $\sum_{n,m,...}$ means summation over the complete set of indices $(n=1,\ldots,N; \;\; m=1,\ldots,M;$ etc.$)$. It is obvious that the operator $\hat{G}$: $$\hat{G}=(-1)^{\hat{S}}, \;\;\;\;\;\;\;\; \hat{S}=\sum_{nmk}\alpha^+_{nmk}\alpha_{nmk},$$ where $\hat{S}$ is the operator of the total number of particles, commutes with the operator $T^*$, $(2.13)$. Therefore, we can divide all states of the operator $T^*$ into states with an even $(\lambda_{\hat{G}}=+1)$ or odd $(\lambda_{\hat{G}}=-1)$ number of particles with respect to the operator $\hat{G}$, $(2.16)$. The form of the operators $T_{1,2,3}$ does not change in this procedure; only the boundary conditions for the operators $(\alpha_{nmk}, \ldots)$ do. For even states $(\lambda_{\hat{G}}=+1)$ antiperiodic boundary conditions are chosen, and for odd states periodic ones $\cite{koch97}$.
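Because the strings in the transformations $(2.11)$ are diagonal phase factors, the algebra above can be verified by brute force on a small lattice. The following sketch (our own illustration, not part of the paper) builds, in pure Python, the Pauli raising and lowering operators for four sites taken in the linear order of the $\alpha$-string (a $2\times2\times1$ lattice), attaches the string $\exp[i\pi\sum_{i<j}n_i]=\prod_{i<j}(1-2n_i)$, and checks the canonical anticommutation relations together with the occupation-number equality $(2.12)$:

```python
def kron(A, B):
    """Kronecker product of square matrices stored as nested lists."""
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def close(A, B, tol=1e-12):
    n = len(A)
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(n) for j in range(n))

I2   = [[1.0, 0.0], [0.0, 1.0]]
TAUP = [[0.0, 0.0], [1.0, 0.0]]   # tau^+ maps |0> to |1>
TAUM = [[0.0, 1.0], [0.0, 0.0]]   # tau^-
Z    = [[1.0, 0.0], [0.0, -1.0]]  # exp(i*pi*n) = 1 - 2n

def site_op(n_sites, j, local, string=False):
    """`local` acting on site j, optionally with the Jordan-Wigner
    string Z_0 ... Z_{j-1} attached on the preceding sites."""
    out = [[1.0]]
    for i in range(n_sites):
        if i < j:
            out = kron(out, Z if string else I2)
        elif i == j:
            out = kron(out, local)
        else:
            out = kron(out, I2)
    return out

N_SITES = 4                        # a 2x2x1 lattice in the alpha-ordering
dim = 2 ** N_SITES
eye = [[float(i == j) for j in range(dim)] for i in range(dim)]
zero = [[0.0] * dim for _ in range(dim)]

taup = [site_op(N_SITES, j, TAUP) for j in range(N_SITES)]
taum = [site_op(N_SITES, j, TAUM) for j in range(N_SITES)]
adag = [site_op(N_SITES, j, TAUP, string=True) for j in range(N_SITES)]
a    = [site_op(N_SITES, j, TAUM, string=True) for j in range(N_SITES)]

def anti(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(dim)] for i in range(dim)]

for i in range(N_SITES):
    # local occupation numbers agree, as in (2.12)
    assert close(matmul(adag[i], a[i]), matmul(taup[i], taum[i]))
    for j in range(N_SITES):
        assert close(anti(a[i], adag[j]), eye if i == j else zero)  # {a_i, a_j^+}
        assert close(anti(a[i], a[j]), zero)                        # {a_i, a_j}
print("Jordan-Wigner checks passed")
```

The string phases cancel in $\alpha^+_{nmk}\alpha_{nmk}$, which is why the occupation numbers of the Pauli and Fermi operators coincide site by site.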
The next step is the transition to the momentum representation: $$\begin{aligned}
\alpha^+_{nmk}=\frac{\exp(i\pi/4)}{(NMK)^{1/2}}\sum_{qp\nu}e^{-i(nq+mp+k\nu)}
\xi^+_{qp\nu}, \;\;\;\;\; \beta^+_{nmk}\rightarrow\eta^+_{qp\nu}, \;\;\;\;\;\;
\theta^+_{nmk}\rightarrow\zeta^+_{qp\nu},\end{aligned}$$ and the introduction, for fixed $(qp\nu)$, of corresponding bases for the $\xi$-, $\eta$- and $\zeta$-operators of Fermi creation and annihilation (in the occupation-number representation, in a finite-dimensional Fock space of dimension $2^8=256$). Then, after a series of transformations and calculations, we arrive at the following formula for the partition function $(2.13)$: $$Z^{+}_{3D}(h)=A\left(\prod_{0<{q,p,\nu}<\pi}A^4_1(q)\right)\left(\prod_
{0<{q,p,\nu}<\pi}A^4_3(\nu)\right)<0|T^*_3(h)T_2T^*_1(h)|0>,$$ where the operators $T_1^*(h), \;\;T_2, \;\;T^*_3(h)$ are of the form $$\begin{aligned}
T^*_1(h)=\exp\left[\sum_{0<q,p,\nu <\pi}B_1(q)(\xi^+_{-q-p-\nu}\xi^+_{qp\nu}+
\xi^+_{-q-p\nu}\xi^+_{qp-\nu}+ \xi^+_{-qp-\nu}\xi^+_{q-p\nu}+
\xi^+_{-qp\nu}\xi^+_{q-p-\nu})\right], \nonumber\\
T_2=\exp\left\{2K_2\sum_{0<q,p,\nu <\pi}[\cos p(\eta^+_{qp\nu}\eta_{qp\nu} +
...)+\sin p(\eta^+_{-q-p-\nu}\eta^+_{qp\nu} +
... + \eta_{qp\nu}\eta_{-q-p-\nu} + ...)]\right\}, \nonumber\\
T^*_3(h)=\exp\left[\sum_{0<q,p,\nu <\pi}B_3(\nu)(\zeta_{qp\nu}\zeta_{-q-p-\nu} +
\zeta_{-qp\nu}\zeta_{q-p-\nu} + \zeta_{q-p\nu}\zeta_{-qp-\nu} +
\zeta_{-q-p\nu}\zeta_{qp-\nu})\right],\end{aligned}$$ and $A_1(q,h), ...$ are defined by the expressions: $$\begin{aligned}
A_1(q,h)=\cosh 2K_1-\sinh 2K_1\cos q+\alpha(h,q)\sinh 2K_1\sin q, \nonumber\\
A_3(\nu,h)=\cosh 2K_3-\sinh 2K_3\cos\nu+\alpha(h,\nu)\sinh 2K_3\sin\nu, \nonumber\\
B_1(q,h)=\frac{\alpha(h,q)[\cosh 2K_1+\sinh 2K_1\cos q]+\sinh 2K_1\sin q}
{A_1(q,h)},\nonumber\\
B_3(\nu,h)=\frac{\alpha(h,\nu)[\cosh 2K_3+\sinh 2K_3\cos\nu]+\sinh 2K_3\sin\nu}
{A_3(\nu,h)},\nonumber\\
\alpha(h,q)=\tanh^2(h/2)\frac{1+\cos q}{\sin q},\;\;\;\;
\alpha(h,\nu)=\tanh^2(h/2)\frac{1+\cos\nu}{\sin\nu}.\end{aligned}$$ In the formula for $Z^+_{3D}(h)$, the sign $(+)$ indicates that we consider the case of states even $(\lambda_{\hat{G}}=+1)$ with respect to the operator $\hat{G}$, $(2.16)$. It is obvious that for $h=0$ we recover the $3D$ Ising model in vanishing magnetic field. Furthermore, for $K_1=0$ (or $K_2=0$, or $K_3=0$) the expression $(2.17)$ for the partition function describes the $2D$ Ising model in an external magnetic field $\cite{koch97}$.
Solution
=============
Let us consider the calculation of the free energy per Ising spin in an external magnetic field, in the approximation briefly described in the introduction. To this end, consider the operators $T^*_1(h)$ and $T^*_3(h)$ in the “coordinate” representation: $$\begin{aligned}
T^*_1(h)=\exp\left[\sum_{nmk}\sum^{N-n}_{s=1}a(s)
\alpha^{+}_{nmk}\alpha^{+}_{n+s,mk}\right],\nonumber\\
T^*_3(h)=\exp\left[\sum_{nmk}\sum^{K-k}_{l=1}c(l)
\theta_{nm,k+l}\theta_{nmk}\right],\end{aligned}$$ where the “weights” $a(s)$ and $c(l)$ are defined by the formulas: $$\begin{aligned}
a(s)=\frac{1}{N}\sum_{0<{q}<\pi}2B_1(q)\sin(sq)=
{z^*_1}^s+\tanh^2h^*_1\frac{1-{z^*_1}^s}{(1-z^*_1)^2},\;\;\;\;s=1,2,3,...\nonumber\\
c(l)=\frac{1}{K}\sum_{0<{\nu}<\pi}2B_3(\nu)\sin(l\nu)=
{z^*_3}^l+\tanh^2h^*_3\frac{1-{z^*_3}^l}{(1-z^*_3)^2},\;\;\;\;l=1,2,3, ...\end{aligned}$$ Here we have introduced renormalized quantities $(K^*_{1,3},\;\;h^*_{1,3})$ defined as follows: $$\begin{aligned}
\sinh2K^*_{1,3}=\beta_{1,3}[\sinh2K_{1,3}(1-\tanh^2(h/2))],\nonumber \\
\cosh(2K^*_{1,3})=\beta_{1,3}[\cosh2K_{1,3}+\tanh^2(h/2)\sinh2K_{1,3}],
\nonumber \\
\beta_{1,3}=[1+2\tanh^2(h/2)\sinh2K_{1,3}e^{2K_{1,3}}]^{-1/2}, \;\;\;
\tanh^2h^*_{1,3}=\tanh^2(h/2)\frac{\beta_{1,3}\exp(2K_{1,3})}{\cosh^2K^*_{1,3}},\end{aligned}$$ These formulas are valid for $(K_{1,3}\geq 0)$. As in the case of the $2D$ Ising model $\cite{koch97,999r.97}$, one can introduce a diagrammatic representation for the vacuum matrix element $S\equiv<0|T^*_3(h)T_2T^*_1(h)|0>$. Computation of the vacuum matrix element $S$, which enters the formula $(2.17)$ for $Z^+_{3D}(h)$, is, at least at present, impossible in the general case of arbitrary “weights” $(3.2)$. Nevertheless, there exists a special case in which the quantity $S$ can be calculated in $3D$, namely the case where the “weights” $(3.2)$ are independent of $s$ and $l$. In this case one should, as in the $2D$ case $\cite{koch96}$, set the parameters $K_{1,3}$ equal to zero $(K_{1,3}=0)$ in the formula $(2.13)$, and then express the operators $T^{l,r}_h$ in terms of the Fermi $\beta$-operators $(2.11)$ of creation and annihilation, in order to calculate $S$. After the transition to the momentum representation, one should calculate the vacuum matrix element $S^*(y_1,y_3,z_2)$: $$\begin{aligned}
S^*(y_1,y_3,z_2)\equiv<0|T^l(y_3)T_2T^r(y_1)|0>,
\;\;\;\;y_{1,3}\equiv\tanh^2h_{1,3},\end{aligned}$$ where $z_2=\tanh K_2$; in this special case the calculation becomes trivial. (Here we introduced the following change of notation: $h/2\rightarrow h_1$ for the operator $T^r_h$, and $h/2\rightarrow h_3$ for the operator $T^l_h$.) We can write the result for $S^*(y_1,y_3,z_2)$ in the following form: $$\begin{aligned}
S^*(y_1,y_3,z_2)=(2\cosh^2K_2)^\frac{NMK}{2}\prod_{0<qp\nu<\pi}\left[(1-2z_2\cos p
+ z^2_2)(1-cos p)+ 2z_2(y_1+y_3)\sin^2p + \right.\nonumber\\
\left.y_1y_3(1+2z_2\cos p + z^2_2)(1 + \cos p)\right]^4 .$$ This result can be used further to calculate the free energy in the approximation discussed above $(1.3)$. To this end, note that the conditions $[\tanh^2h^*_{1,3}/(1-z^*_{1,3})^2]\rightarrow 1$ are equivalent, according to $(3.3)$, to the conditions $(\exp(-2K_{1,3})(1-\tanh^2{h/2})\rightarrow 0)$. It follows that for fixed $(J_{1,3}=const,\;\; H=const)$ these conditions are satisfied in the region of temperatures $T$, in which $(h/2)\sim{\varepsilon}^{-1}, \;\;\;\varepsilon\ll 1$. In this case we can use the result $(3.4)$. Namely, let us consider the formulas $(2.19)$ for $B_{1,3}$, written in terms of the renormalized parameters $(h^*_{1,3},\;\;K^*_{1,3})$: $$B_{1,3}=\frac{\tanh^2h^*_{1,3}\frac{\sin q(\nu)}{1-\cos q(\nu)}+2z_{1,3}^*\sin
q(\nu)}{1-2z_{1,3}^*\cos q(\nu) +{z_{1,3}^*}^2},$$ where $z^*_{1,3}=\tanh K^*_{1,3}$. Next, since the following equalities are satisfied: $$\begin{aligned}
\frac{z_{1,3}^*}{1+{z_{1,3}^*}^2}=\frac{z_{1,3}(1-\tanh^2{h/2})}{1+
2z_{1,3}\tanh^2{h/2}+z^2_{1,3}},\end{aligned}$$ then, if we introduce a small parameter $[1-\tanh(h/2)]\sim\varepsilon, \;\;\; (\varepsilon\ll 1)$, and expand $B_{1,3}$ into a series in powers of $\varepsilon$ $(z^*_{1,3}\sim\varepsilon)$, we obtain $$\begin{aligned}
B_{1,3}=\frac{(\tanh^2h^*_{1,3}+2z^*_{1,3})\sin q(\nu)}{1-\cos q(\nu)}+
O({\varepsilon}^2).\end{aligned}$$ This formula gives the following expressions for the “weights” $a(s)$ and $c(l)$, $(3.2)$, in this approximation: $$a(s)= \tanh^2h^*_1+2z^*_1,\;\;\;\;\; c(l)=\tanh^2h^*_3+2z^*_3 ,$$ up to corrections of order $\sim{\varepsilon}^2$. As a result, in this approximation the “weights” $a(s), \;\;c(l)$ do not depend on $(s,l)$. Finally, if we substitute into the expression $(3.4)$ for $S^*(y_1,y_3,z_2)$ the parameters $y_1\rightarrow a(s)$ and $y_3\rightarrow c(l)$, $(3.6)$, we arrive at the following formula for the free energy per Ising spin $F_{3D}(h)$ in the thermodynamic limit: $$\begin{aligned}
-\beta F_{3D}(h)\asymp\ln(2^{3/2}\cosh K^*_1\cosh K_2\cosh K^*_3\cosh^2{h/2})+
\frac{1}{2\pi}\int^{\pi}_0\ln\left[(1-2z_2\cos p+z^2_2)\times\right.\nonumber\\
\left.(1-\cos p)+2z_2(\tanh^2h^*_1+\tanh^2h^*_3+2z^*_1+2z^*_3)\sin^2p+(\tanh^2h^*_1+
2z^*_1)(\tanh^2h^*_3+\right. \nonumber\\
\left.2z^*_3)(1 + 2z_2\cos p + z^2_2)(1 + \cos p)\right]dp,\end{aligned}$$ where $\beta=1/k_BT$, $z_2=\tanh K_2$, and $h^*_{1,3}$ and $K^*_{1,3}$ are related to $h$ and $K_{1,3}$ by the relations $(3.3)$. One can show, as was done for the $1D$ and $2D$ Ising models $\cite{koch97,koch96}$, that in the case of states odd $(\lambda_{\hat{G}}=-1)$ with respect to the operator $\hat{G}$, $(2.16)$, the formula for $F_{3D}(h)$ in the thermodynamic limit is also given by $(3.7)$. Let us note that the asymptotics $(3.7)$ obtained above can also be applied in the case of rather strong magnetic fields $(H)$, as long as the condition $(1-\tanh h)\sim\varepsilon, \;\;\; \varepsilon\ll 1, \;\; (T=const)$ is satisfied.
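The renormalization $(3.3)$ and the rewriting $(3.5)$ of the coefficients $(2.19)$ can be checked numerically. The sketch below (our own consistency check; the couplings are arbitrary sample values) verifies that $(3.3)$ preserves the hyperbolic identity $\cosh^2 2K^*-\sinh^2 2K^*=1$, and that $(3.5)$, evaluated with the renormalized parameters, reproduces $(2.19)$ evaluated with the original ones:

```python
import math

def renorm(K, h):
    """Renormalized (K*, tanh^2 h*) from the relations (3.3)."""
    t = math.tanh(h / 2.0) ** 2
    beta = (1.0 + 2.0 * t * math.sinh(2 * K) * math.exp(2 * K)) ** -0.5
    sh = beta * math.sinh(2 * K) * (1.0 - t)
    ch = beta * (math.cosh(2 * K) + t * math.sinh(2 * K))
    # hyperbolic consistency of (3.3): cosh^2 2K* - sinh^2 2K* = 1
    assert abs(ch * ch - sh * sh - 1.0) < 1e-12
    Kstar = 0.5 * math.asinh(sh)
    t_star = t * beta * math.exp(2 * K) / math.cosh(Kstar) ** 2
    return Kstar, t_star

def B_original(q, K, h):
    """B_1(q, h) from eq. (2.19)."""
    t = math.tanh(h / 2.0) ** 2
    alpha = t * (1.0 + math.cos(q)) / math.sin(q)
    A = (math.cosh(2 * K) - math.sinh(2 * K) * math.cos(q)
         + alpha * math.sinh(2 * K) * math.sin(q))
    return (alpha * (math.cosh(2 * K) + math.sinh(2 * K) * math.cos(q))
            + math.sinh(2 * K) * math.sin(q)) / A

def B_renormalized(q, K, h):
    """The same coefficient in the renormalized form (3.5)."""
    Kstar, t_star = renorm(K, h)
    z = math.tanh(Kstar)
    return (t_star * math.sin(q) / (1.0 - math.cos(q))
            + 2.0 * z * math.sin(q)) / (1.0 - 2.0 * z * math.cos(q) + z * z)

K, h = 0.3, 1.2
for j in range(1, 40):
    q = math.pi * j / 40.0
    assert abs(B_original(q, K, h) - B_renormalized(q, K, h)) < 1e-10
print("renormalization checks passed")
```

Both checks hold to machine precision for generic $(K,h)$, so $(3.5)$ is an exact rewriting of $(2.19)$, not an approximation; the approximation enters only afterwards, when $B_{1,3}$ is expanded in $\varepsilon$.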
Final remarks
=============
The result $(3.7)$ derived above can be applied to the analysis of the equilibrium thermodynamics of the three-dimensional Ising magnet, the lattice gas, and also three-dimensional models of binary alloys $\cite{ziman79,thompson88}$ in the region of temperatures and magnetic fields $(1.3)$ specified above. Such an analysis, as well as the construction of the corresponding phase diagrams for these models, is, in our opinion, of great interest and deserves presentation in a separate publication. Therefore we deliberately do not compare our result $(3.7)$ here with the existing papers devoted to this problem. Another important feature of the presented method is the possibility of deriving expressions for the free energy of the $3D$ Ising model in the limiting case of the magnetic field tending to zero $(H\rightarrow 0, \;\;\; N,M,K\rightarrow\infty)$, provided the exact solution of the $3D$ Ising model in the absence of an external magnetic field $(H=0)$ is known. This possibility results from equations $(3.2)$–$(3.3)$ describing the renormalized interaction constants $K^*_{1,3}$, and corresponds, as was shown in the paper $\cite{koch97}$, to the results obtained by C.N. Yang $\cite{yang52}$ for the $2D$ Ising model.
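As a first step toward such an analysis, the asymptotic formula $(3.7)$ is easy to evaluate numerically. The sketch below (our own illustration; the couplings $K_{1,2,3}=0.3$ and field $h=3$ are arbitrary sample values chosen in the regime of validity $(1-\tanh h)\ll 1$) computes $-\beta F_{3D}(h)$ using the renormalization $(3.3)$ and a simple trapezoidal rule:

```python
import math

def renormalize(K, h):
    """Renormalized K* and tanh^2(h*) from the relations (3.3)."""
    t = math.tanh(h / 2.0) ** 2
    beta = (1.0 + 2.0 * t * math.sinh(2 * K) * math.exp(2 * K)) ** -0.5
    Kstar = 0.5 * math.asinh(beta * math.sinh(2 * K) * (1.0 - t))
    t_star = t * beta * math.exp(2 * K) / math.cosh(Kstar) ** 2
    return Kstar, t_star

def minus_beta_F3D(K1, K2, K3, h, n=2000):
    """-beta*F_3D(h) per spin, i.e. the right-hand side of (3.7),
    with the p-integral done by the trapezoidal rule on (0, pi)."""
    K1s, t1 = renormalize(K1, h)
    K3s, t3 = renormalize(K3, h)
    z1, z3, z2 = math.tanh(K1s), math.tanh(K3s), math.tanh(K2)

    def integrand(p):
        return math.log(
            (1 - 2 * z2 * math.cos(p) + z2 ** 2) * (1 - math.cos(p))
            + 2 * z2 * (t1 + t3 + 2 * z1 + 2 * z3) * math.sin(p) ** 2
            + (t1 + 2 * z1) * (t3 + 2 * z3)
              * (1 + 2 * z2 * math.cos(p) + z2 ** 2) * (1 + math.cos(p)))

    # the integrand stays finite at both endpoints for h > 0, since the
    # last term does not vanish at p = 0 and the first does not at p = pi
    vals = [integrand(i * math.pi / n) for i in range(n + 1)]
    integral = (sum(vals) - 0.5 * (vals[0] + vals[-1])) * math.pi / n

    return (math.log(2 ** 1.5 * math.cosh(K1s) * math.cosh(K2)
                     * math.cosh(K3s) * math.cosh(h / 2.0) ** 2)
            + integral / (2 * math.pi))

print(minus_beta_F3D(0.3, 0.3, 0.3, h=3.0))
```

With this in hand, magnetization and susceptibility curves in the strong-field region can be obtained by numerical differentiation in $h$.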
I am grateful to H. Makaruk and R. Owczarek for their assistance in preparing the final form of this paper.
M.S. Kochmański, J.Tech.Phys., [**36**]{}, 485 (1995).
K.G. Wilson and J. Kogut, The Renormalization Group and the $\varepsilon$-Expansion, Phys. Reports C [**12**]{}, N 2 (1974).
B. McCoy and T.T. Wu, [*Two Dimensional Ising Models*]{}, Harvard U. Press, Cambridge, Mass. (1973).
Sh. Ma, [*Modern Theory of Critical Phenomena*]{}, Univ. California, U.A. Ben., Inc. (1976).
Ya.G. Sinay, Theory of Phase Transitions, “Nauka”, Moscow (in Russian) (1980).
Current Problems in Statistical Mechanics, Physica A: Stat. and Theor. Physics [**177**]{}, nn 1-3 (1991).
M.S. Kochmański, Phys.Rev.E [**56**]{}, ... (1997).
R.J. Baxter, [*Exactly Solved Models in Statistical Mechanics*]{}, Ac. Press, Inc. (1982).
Yu.A. Izyumov and Yu.N. Skryabin, [*Statistical Mechanics of Magnetically Ordered Systems*]{}, “Nauka”, Moscow (in Russian) (1987).
K. Huang, [*Statistical Mechanics*]{}, J. Wiley and Sons, Inc., New York - London (1963).
M.S. Kochmański, Zh.Eksp.Teor.Fiz. [**111**]{}, 1717 (1997) \[JETP [**84**]{}, 940 (1997)\].
M.S. Kochmański, J.Tech.Phys., [**37**]{}, 67 (1996).
J.M. Ziman, [*Models of Disorder*]{}, Cambridge Univ. Press, Cambridge (1979).
C.J. Thompson, [*Classical Equilibrium Statistical Mechanics*]{}, Cl. Press - Oxford (1988).
C.N. Yang, Phys.Rev. [**85**]{}, 809 (1952).
---
address: 'Department of Mathematics, University of Toronto, Toronto, ON M5S 2E4, CANADA'
author:
- Yael Karshon
- 'Christina Bjorndahl ${}^*$'
title: |
Revisiting Tietze-Nakajima –\
local and global convexity for maps
---
[^1]
Introduction
============
A theorem of Tietze and Nakajima, from 1928, asserts that if a subset $X$ of $\R^n$ is closed, connected, and locally convex, then it is convex [@T; @N]. There are many generalizations of this “local to global convexity" phenomenon in the literature; a partial list is [@BF; @cel; @kay; @KW; @klee; @SSV; @S; @tam].
This paper contains an analogous “local to global convexity" theorem when the inclusion map of $X$ to $\R^n$ is replaced by a map from a topological space $X$ to $\R^n$ that satisfies certain local properties: We define a map $\Psi \colon X \to \R^n$ to be convex if any two points in $X$ can be connected by a path $\gamma$ whose composition with $\Psi$ parametrizes a straight line segment in $\R^n$ and this parametrization is monotone along the segment. See Definition \[def:convex\]. We show that, if $X$ is connected and Hausdorff, $\Psi$ is proper, and each point has a neighbourhood $U$ such that $\Psi|_U$ is convex and open as a map to its image, then $\Psi$ is convex and open as a map to its image. We deduce that the image of $\Psi$ is convex and the level sets of $\Psi$ are connected. See Theorems \[maintheorem\] and \[Theorem\].
Our motivation comes from the Condevaux-Dazord-Molino proof [@CDM; @HNP] of the Atiyah-Guillemin-Sternberg convexity theorem in symplectic geometry [@A; @GS]. See section \[sec:moment\].
This paper is the result of an undergraduate research project that spanned over the years 2004–2006. The senior author takes the blame for the delay in publication after posting our arXiv eprint. While preparing this paper we learned of the paper [@BOR1] by Birtea, Ortega, and Ratiu, which achieves similar goals. In section \[sec:ratiu\] we discuss relationships between our results and theirs. After [@BOR1], our results are not essentially new, but our notion of “convex map" gives elegant statements, and our proofs are so elementary that they are accessible to undergraduate students with basic topology background.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The first author is partially supported by an NSERC Discovery grant. The second author was partially funded by an NSERC USRA grant in the summers of 2004 and 2005. The authors are grateful to River Chiang and to Tudor Ratiu for helpful comments on the manuscript.
The Tietze-Nakajima Theorem
===========================
Let $B(x,r)$ \[$\overline{B}(x,r)$\] denote the open \[closed\] ball in $\R^n$ of radius $r$, centered at $x$. A closed subset $X$ of $\R^n$ is *locally convex* if for every $x \in X$ there exists $\delta_x > 0$ such that $B(x, \delta_x) \cap X$ is convex. The Tietze-Nakajima theorem [@T; @N] asserts that “local convexity implies global convexity":
Let $X$ be a closed, connected, and locally convex subset of $\R^n$. Then $X$ is convex.
A disjoint union of two closed balls is closed and locally convex but is not connected. A punctured disk is connected and satisfies the local convexity condition, but it is not closed.
A closed subset $X \subset \R^n$ is *uniformly locally convex* on a subset $A \subset X$ if there exists $\delta > 0$ such that $B(x, \delta) \cap X$ is convex for all $x \in A$.
\[Uniform local convexity on compact sets\] [\[Lemma 1\]]{} Let $X$ be a closed subset of $\R^n$. If $X$ is locally convex and $A \subset X$ is compact, then $X$ is uniformly locally convex on $A$.
Since $X$ is locally convex, for every $x \in X$ there exists a $\delta_x > 0$ such that $B(x, \delta_x) \cap X$ is convex. By compactness there exist points $x_1, \ldots, x_k$ such that $A \subset \bigcup_{i=1}^{k} B(x_i,\frac{1}{2}\delta_{x_i})$. Let $\delta = \min_i\{\frac{1}{2}\delta_{x_i}\}$. Then for every $x \in A$ there exists $i$ such that $x \in B(x_i,\frac{1}{2}\delta_{x_i})$; by the triangle inequality, $B(x,\delta) \subset B(x_i,\delta_{x_i})$. It follows that $B(x,\delta) \cap X$ is convex.
Let $X$ be a closed, connected, locally convex subset of $\R^n$. For two points $x_0$ and $x_1$ in $X$, define their *distance in $X$*, denoted $d_X(x_0,x_1)$, as follows: $$d_X(x_0,x_1) = \inf\{l(\gamma) \,|\, \gamma \colon \left[0,1\right]
\rightarrow X, \ \gamma(0)=x_0, \ \gamma(1)=x_1 \},$$ where $l(\gamma)$ is the length of the path $\gamma$.
In this definition it doesn’t matter if we take the infimum over continuous paths or polygonal paths: let $\gamma \colon [0,1] \to X$ be a continuous path in $X$. Let $\delta$ be the radius associated with uniform local convexity on the compact set $\{\gamma(t), 0 \leq t \leq 1 \}$. By uniform continuity of $\gamma$ on the compact interval $[0,1]$, there exist $0 = t_0 < t_1 < \ldots < t_k = 1$ such that $\|\gamma(t_{i-1}) - \gamma(t_i)\| < \delta$ for $i=1, \ldots, k$. The polygonal path through the points $\gamma(t_0), \ldots, \gamma(t_{k})$ is contained in $X$ and has length $\leq l(\gamma)$.
Also note that $d_X(x_0,x_1) \geq \| x_1 - x_0 \|$, with equality if and only if the segment $[x_0,x_1]$ is contained in $X$.
[\[Lemma 3\]]{} Let $X$ be a closed, connected, and locally convex subset of $\R^n$. Let $x_0$ and $x_1$ be in $X$. Then there exists a point $x_{1/2}$ in $X$ such that $${\label{midpoint}{}}
d_X(x_0, x_{1/2}) = d_X(x_{1/2}, x_1) = \frac{1}{2}d_X(x_0,x_1).$$
Let $\gamma_j$ be paths in $X$ connecting $x_0$ and $x_1$ such that $\{l(\gamma_j)\}$ converges to $d_X(x_0, x_1)$. Let $t_j \in [0,1]$ be such that $\gamma_j(t_j)$ is the midpoint of the path $\gamma_j$: $$l(\gamma_j \arrowvert_{\scriptscriptstyle{\left[0,t_j\right]}}) =
l(\gamma_j \arrowvert_{\scriptscriptstyle{\left[t_j,1\right]}}) =
\frac{1}{2}l(\gamma_j).$$ Since the sequence of midpoints $\{\gamma_j(t_j)\}$ is bounded and $X$ is closed, this sequence has an accumulation point ${x_{1/2}}\in X$. We will show that the point ${x_{1/2}}$ satisfies equation \[midpoint\].
We first show that for every $\vareps > 0$ there exists a path $\gamma$ connecting $x_0$ and $x_\half$ such that $l(\gamma) < \half d_X(x_0,x_1) + \vareps$.
Let $\delta > 0$ be such that $B(x_\half,\delta) \cap X$ is convex. Let $j$ be such that $ \| \gamma_j(t_j) - x_\half \| < \min ( \delta, \frac{\vareps}{2}) $ and such that $ l(\gamma_j) < d_X(x_0,x_1) + \vareps $. The segment $[\gamma_j(t_j),x_\half]$ is contained in $X$. Let $\gamma$ be the concatenation of $\gamma_j|_{[0,t_j]}$ with this segment. Then $\gamma$ is a path in $X$ that connects $x_0$ and $x_\half$, and $l(\gamma) < \half d_X(x_0,x_1) + \vareps$.
Thus, $d_X(x_0,x_\half) \leq \half d_X (x_0,x_1)$. By the same argument, $d_X(x_\half,x_1) \leq \half d_X(x_0,x_1)$. If either of these were a strict inequality, then it would be possible to construct a path in $X$ from $x_0$ to $x_1$ whose length is less than $d_X(x_0,x_1)$, which contradicts the definition of $d_X(x_0,x_1)$.
Fix $x_0$ and $x_1$ in $X$.
By Lemma \[Lemma 3\], there exists a point $x_{1/2}$ such that $$d_X(x_0, x_{1/2}) = d_X(x_{1/2}, x_1) = \frac{1}{2}d_X(x_0, x_1).$$ Likewise, there exists a point $x_{1/4}$ that satisfies $$d_X(x_0, x_{1/4}) = d_X(x_{1/4}, x_{1/2}) = \frac{1}{2}d_X(x_0,
x_{1/2}).$$ By iteration, we get a map $\frac{j}{2^m} \mapsto x_{\frac{j}{2^m}}$, for nonnegative integers $j$ and $m$ where $0 \leq j \leq 2^m$, such that $${\label{iteratedmidpoint}{}}
d_X(x_\frac{j-1}{2^m}, x_\frac{j}{2^m}) = d_X(x_\frac{j}{2^m},
x_\frac{j+1}{2^m}) = \frac{1}{2}d_X(x_\frac{j-1}{2^m},
x_\frac{j+1}{2^m}).$$
Let $$r > d_X(x_0,x_1).$$ For all $0 \leq j \leq 2^m$, the following is true: $$\|x_{\frac{j}{2^m}} - x_0\| \leq d_X(x_{\frac{j}{2^m}}, x_0) \leq
\sum_{i=1}^j d_X(x_{\frac{i-1}{2^m}}, x_{\frac{i}{2^m}}) =
\frac{j}{2^m}d_X(x_0, x_1) < r.$$ Thus $x_{\frac{j}{2^m}}$ belongs to the compact set $$\overline{B}(x_0,r)\cap X.$$ Let $\delta$ denote the radius associated with uniform local convexity on this compact set. Choose $m$ large enough such that $\frac{1}{2^m}d_X(x_0, x_1) < \delta$. Since the intersection $B(x_{\frac{j}{2^m}}, \delta) \cap X$ is convex and $x_{\frac{j-1}{2^m}} \in B(x_{\frac{j}{2^m}},\delta) \cap X$,
$${\label{intInX}{}}
\left[x_{\frac{j-1}{2^m}}, x_{\frac{j}{2^m}}\right] \subset X
\quad \text{for each} \quad 1 \leq j \leq 2^m.$$
Since also $x_{\frac{j+1}{2^m}} \in B(x_{\frac{j}{2^m}},\delta)$, $$\left[x_{\frac{j-1}{2^m}}, x_{\frac{j+1}{2^m}}\right] \subset X
\quad \text{for each} \quad 1 \leq j < 2^m.$$ It follows that $$d_X(x_{\frac{j-1}{2^m}}, x_{\frac{j}{2^m}}) = \|x_{\frac{j-1}{2^m}} -
x_{\frac{j}{2^m}}\| \quad \text{ and } \quad d_X(x_{\frac{j-1}{2^m}},
x_{\frac{j+1}{2^m}}) = \|x_{\frac{j-1}{2^m}} - x_{\frac{j+1}{2^m}}\|.$$
Thus equation \[iteratedmidpoint\] can be rewritten as
$$\|x_\frac{j-1}{2^m} - x_\frac{j}{2^m}\| = \|x_\frac{j}{2^m} -
x_\frac{j+1}{2^m}\| = \frac{1}{2}\|x_{\frac{j-1}{2^m}} -
x_{\frac{j+1}{2^m}}\|,$$
which implies, by the triangle inequality, that the points $x_\frac{j-1}{2^m}$, $x_\frac{j}{2^m}$, $x_\frac{j+1}{2^m}$ are collinear. This and \[intInX\] imply that $[x_0, x_1] \subset X$.
Local and global convexity of maps
==================================
[\[sec:local-global\]]{}
The Tietze-Nakajima theorem involves subsets of $\R^n$. We will now consider spaces with maps to $\R^n$ that are not necessarily inclusion maps.
Consider a continuous path $\gammaRn \colon [0,1] \to \R^n$. Its length, which is denoted $l(\gammaRn)$, is the supremum, over all natural numbers $N$ and all partitions $0 = t_0 < t_1 < \ldots < t_N = 1$, of $\sum_{i=1}^N \| \gammaRn(t_i) - \gammaRn(t_{i-1}) \|$. We have $l(\gammaRn) \geq \| \gammaRn(1) - \gammaRn(0) \|$ with equality if and only if one of two cases occurs:
1. The path $\gammaRn$ is constant.
2. The image of $\gammaRn$ is the segment $[\gammaRn(0),\gammaRn(1)]$, and $\gammaRn$ is a weakly monotone parametrization of this segment: if $0 \leq t_1 < t_2 < t_3 \leq 1$, then the point $\gammaRn(t_2)$ lies on the segment $[\gammaRn(t_1),\gammaRn(t_3)]$.
The path $\gammaRn \colon [0,1] \to \R^n$ is *monotone straight* if it satisfies (a) or (b).
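In computations it is convenient to test this condition on a sampled path. The following utility (ours, not from the paper) checks, up to a tolerance, that a finite sequence of points in $\R^n$ is either constant or lies on the segment between its endpoints with weakly monotone projection parameters:

```python
import math

def is_monotone_straight(pts, tol=1e-9):
    """Check that a sampled path in R^n is monotone straight:
    either (a) constant, or (b) a weakly monotone parametrization
    of the segment between its endpoints (up to `tol`)."""
    def sub(u, v): return [a - b for a, b in zip(u, v)]
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def norm(u):   return dot(u, u) ** 0.5

    p0, p1 = pts[0], pts[-1]
    d = sub(p1, p0)
    L2 = dot(d, d)
    if L2 < tol ** 2:                       # case (a): constant path
        return all(norm(sub(p, p0)) < tol for p in pts)
    prev_t = 0.0
    for p in pts:                           # case (b): on-segment and monotone
        t = dot(sub(p, p0), d) / L2         # projection parameter along d
        off = sub(sub(p, p0), [t * c for c in d])
        if norm(off) > tol or t < -tol or t > 1 + tol or t < prev_t - tol:
            return False
        prev_t = max(prev_t, t)
    return True

seg = [[t * t, 2 * t * t] for t in [i / 100 for i in range(101)]]  # monotone
wig = [[math.sin(math.pi * i / 100)] * 2 for i in range(101)]      # back-and-forth
arc = [[math.cos(math.pi * i / 100), math.sin(math.pi * i / 100)]
       for i in range(101)]                                        # off the segment
print(is_monotone_straight(seg), is_monotone_straight(wig),
      is_monotone_straight(arc))   # True False False
```

A monotone reparametrized segment passes, while a back-and-forth path along a segment and a circular arc both fail, matching cases (a) and (b) above.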
[\[def:convex\]]{} Let $X$ be a Hausdorff topological space. A continuous map $\Psi$ from $X$ to $\R^n$, or to a subset of $\R^n$, is called *convex* if every two points $x_0$ and $x_1$ in $X$ can be connected by a continuous path $\gamma \colon [0,1] \to X$ such that $${\label{condition}{}}
\gamma(0) = x_0, \quad \gamma(1) = x_1, \quad
\text{and} \quad \Psi \circ \gamma \text{ is monotone straight.}$$
For a function $\psi$ from $\R$ to $\R$, the condition in Definition \[def:convex\] is equivalent to $\psi \colon \R \to \R$ being weakly monotone. This is different from the usual notion of a convex function (that $\psi(ta + (1-t)b) \leq t\psi(a) + (1-t)\psi(b)$ for all $a,b$ and for all $0 \leq t \leq 1$). In the usual notion of a convex function, the domain $X$ must be an affine space, and the target space must be $\R$. In Definition \[def:convex\], the domain $X$ is only a topological space, and the target space can be $\R^n$. In this paper, “convex map" is always in the sense of Definition \[def:convex\].
[\[weakly monotone\]]{} If $\Psi(x_0) = \Psi(x_1)$, condition means that the path $\gamma$ lies entirely within a level set of $\Psi$. If $\Psi(x_0) \neq \Psi(x_1)$, the condition implies that the image of $\Psi \circ \gamma$ is the segment $[\Psi(x_0),\Psi(x_1)]$.
Consider the two-sphere $S^2 = \{ x \in \R^3 \ | \ \|x\|^2 = 1 \}$. The height function $\Psi \colon S^2 \to \R$, given by $(x_1,x_2,x_3) \mapsto x_3$, is convex. The projection $\Psi \colon S^2 \to \R^2$, given by $(x_1,x_2,x_3) \mapsto (x_1,x_2)$, is not convex.
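A witness path for the convexity of the height function can be written down explicitly: interpolate the height linearly and the azimuth continuously; the path then stays on the sphere while $\Psi\circ\gamma$ is affine, hence monotone straight. A small numerical sketch (our own illustration; the endpoints are arbitrary non-pole sample points):

```python
import math

def height_path(x0, x1, samples=200):
    """Path on S^2 from x0 to x1 along which the height x3 varies
    linearly in t, so Psi(gamma(t)) = (1-t)*x0[2] + t*x1[2] is a
    monotone straight curve. Assumes neither endpoint is a pole."""
    h0, h1 = x0[2], x1[2]
    a0 = math.atan2(x0[1], x0[0])
    a1 = math.atan2(x1[1], x1[0])
    pts = []
    for i in range(samples + 1):
        t = i / samples
        h = (1 - t) * h0 + t * h1              # linear height
        a = (1 - t) * a0 + t * a1              # interpolated azimuth
        r = math.sqrt(max(0.0, 1.0 - h * h))
        pts.append((r * math.cos(a), r * math.sin(a), h))
    return pts

x0 = (0.8 * math.cos(0.3), 0.8 * math.sin(0.3), -0.6)  # 0.8^2 + 0.6^2 = 1
x1 = (0.6 * math.cos(2.1), 0.6 * math.sin(2.1), 0.8)
path = height_path(x0, x1)

for p, x in ((path[0], x0), (path[-1], x1)):            # endpoints reproduced
    assert all(abs(pc - xc) < 1e-9 for pc, xc in zip(p, x))
assert all(abs(p[0]**2 + p[1]**2 + p[2]**2 - 1.0) < 1e-9 for p in path)
heights = [p[2] for p in path]
assert all(heights[i] <= heights[i + 1] + 1e-12 for i in range(len(heights) - 1))
print("height path ok")
```

No such construction exists for the projection $(x_1,x_2,x_3)\mapsto(x_1,x_2)$: away from the equator its level sets consist of two antipodal-height points, so no path within a level set can join them.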
We shall prove the following generalization of the Tietze-Nakajima theorem:
[\[Theorem\]]{} Let $X$ be a connected Hausdorff topological space, let $\calT \subset \R^n$ be a convex subset, and let $$\Psi \colon X \rightarrow \calT$$ be a continuous and proper map. Suppose that for every point $x \in X$ there exists an open neighbourhood $U \subset X$ of $x$ such that the map $\Psi|_U \colon U \to \Psi(U)$ is convex and open. Then
1. The image of $\Psi$ is convex.
2. The level sets of $\Psi$ are connected.
3. The map $\Psi \colon X \to \Psi(X)$ is open.
[\[Tietze follows\]]{} The Tietze-Nakajima theorem is the special case of Theorem \[Theorem\] in which $\calT = \R^n$, the space $X$ is a subset of $\R^n$, and the map $\Psi \colon X \to \R^n$ is the inclusion map.
The convexity of a map has the following immediate consequences:
[\[restrict\]]{} If $\Psi \colon X \to \R^n$ is a convex map then, for any convex subset $A \subset \R^n$, the restriction of $\Psi$ to the preimage $\Psi\inv(A)$ is also a convex map.
Let $A \subset \R^n$ be convex, and let $x_0, x_1 \in \Psi\inv(A)$. Let $\gamma \colon [0,1] \rightarrow X$ be a path from $x_0$ to $x_1$ whose composition with $\Psi$ is monotone straight. The image of $\Psi \circ \gamma$ is the (possibly degenerate) segment $[\Psi(x_0), \Psi(x_1)]$. Because $A$ is convex and contains the endpoints of this segment, it contains the entire segment, so $\gamma$ is a path in $\Psi\inv(A)$. Thus, $x_0$ and $x_1$ are connected by a path in $\Psi\inv(A)$ whose composition with $\Psi$ is monotone straight.
[\[properties imply convexity\]]{} If $\Psi \colon X \to \R^n$ is a convex map, then its image, $\Psi(X)$, is convex, and its level sets, $\Psi\inv(w)$, for $w \in \Psi(X)$, are connected.
Take any two points in $\Psi(X)$; write them as $\Psi(x_0)$ and $\Psi(x_1)$ where $x_0$ and $x_1$ are in $X$. Because the map $\Psi$ is convex, there exists a path $\gamma$ in $X$ that connects $x_0$ and $x_1$ and such that the image of $\Psi \circ \gamma$ is the segment $[\Psi(x_0), \Psi(x_1)]$. In particular, the segment $[\Psi(x_0),\Psi(x_1)]$ is contained in the image of $\Psi$. This shows that the image of $\Psi$ is convex.
Now let $x_0$ and $x_1$ be any two points in $\Psi\inv(w)$. Because the map $\Psi$ is convex, there exists a path $\gamma$ that connects $x_0$ and $x_1$ and such that the curve $\Psi \circ \gamma$ is constant. Thus, this curve is entirely contained in the level set $\Psi\inv(w)$. This shows that the level set $\Psi\inv(w)$ is connected.
[\[path lifting\]]{} Suppose that the map $\Psi \colon X \to \R^n$ has the *path lifting property*, i.e., for every path $\ol{\gamma} \colon [0,1] \to \R^n$ and every point $x \in \Psi\inv(\ol{\gamma}(0))$ there exists a path $\gamma \colon [0,1] \to X$ such that $\gamma(0) = x$ and $\Psi \circ \gamma = \ol{\gamma}$. Then the converse of Lemma \[properties imply convexity\] holds: if the image $\Psi(X)$ is convex and the level sets $\Psi\inv(w)$, $w \in \Psi(X)$, are path connected, then the map $\Psi \colon X \to \R^n$ is convex.
The main ingredient in the proof of Theorem \[Theorem\] is the following theorem, which we shall prove in section \[sec:last proof\]:
\[Local convexity and openness imply global convexity and openness\] [\[maintheorem\]]{} Let $X$ be a connected Hausdorff topological space, let $\calT$ be a convex subset of $\R^n$, and let $\Psi \colon X \to \calT$ be a continuous proper map. Suppose that for every point $x \in X$ there exists an open neighbourhood $U$ of $x$ such that the map $ \Psi|_U \colon U \to \Psi(U)$ is convex and open.
Then the map $ \Psi \colon X \rightarrow \Psi(X)$ is convex and open.
Following [@HNP], one may call Theorem \[maintheorem\] a *Lokal-Global-Prinzip*.
[\[U not small\]]{} In Theorem \[maintheorem\], we assume that each point is contained in an open set on which the map is convex and is open as a map to its image, but we do not insist that these open sets form a basis to the topology. This requirement would be too restrictive, as is illustrated in the following two examples.
1. Consider the map $(x,y) \mapsto -y + \sqrt{x^2 + y^2} $ from $\R^2$ to $\R$. One level set is the non-negative $y$-axis $\{ (0,y) \ | \ y \geq 0 \}$; the other level sets are the parabolas $y = \frac{1}{2\alpha} x^2 - \frac{\alpha}{2}$ for $\alpha > 0$. This map is convex, but its restrictions to small neighborhoods of individual points on the positive $y$-axis are not convex. (These restrictions have disconnected fibres.)
2. Consider the map $(t,e^{i\theta}) \mapsto t e^{i\theta}$ from $\R \times S^1$ to $\C \cong \R^2$. This map is convex, but its restrictions to small neighborhoods of individual points on the zero section $\{ 0 \} \times S^1$ are not convex. (These restrictions have a non-convex image.)
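The level sets stated in the first example can be confirmed by direct computation: if $y = \frac{1}{2\alpha}x^2 - \frac{\alpha}{2}$, then $x^2+y^2=(y+\alpha)^2$ with $y+\alpha>0$, so $\Psi(x,y)=\alpha$. A quick numerical check:

```python
import math

def psi(x, y):
    """The map (x, y) -> -y + sqrt(x^2 + y^2) of the first example."""
    return -y + math.sqrt(x * x + y * y)

# points on the parabola y = x^2/(2*alpha) - alpha/2 all map to alpha
for alpha in (0.5, 1.0, 2.5):
    for x in (-3.0, -0.7, 0.0, 1.2, 4.0):
        y = x * x / (2 * alpha) - alpha / 2
        assert abs(psi(x, y) - alpha) < 1e-12
# the non-negative y-axis is the zero level set
for y in (0.0, 0.5, 3.0):
    assert abs(psi(0.0, y)) < 1e-12
print("level sets confirmed")
```

Restricting $\Psi$ to a small ball around a point $(0,y_0)$ with $y_0>0$ cuts each nearby parabola into two arcs, which is exactly the disconnected-fibre phenomenon described above.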
\[Proof of Theorem \[Theorem\], assuming Theorem \[maintheorem\]\] By Theorem \[maintheorem\], the map $\Psi$ is convex, and it is open as a map to its image. By Lemma \[properties imply convexity\], the level sets of $\Psi$ are connected and the image of $\Psi$ is convex.
The bulk of this paper is devoted to proving Theorem \[maintheorem\].
Convexity for components of preimages of neighbourhoods
=======================================================
We first set some notation.
Let $X$ be a Hausdorff topological space and $\Psi \colon X \to \R^n$ a continuous map. For $x \in X$ with $\Psi(x) = w$, we denote by $[x]$ the path connected component of $x$ in $\Psi^{-1}(w)$, and, for $\eps > 0$, we denote by $U_{[x], \varepsilon}$ the path connected component of $x$ in $\Psi^{-1}(B(w, \varepsilon))$. Note that $U_{[x], \varepsilon}$ does not depend on the particular choice of $x$ in $[x]$.
Suppose that every point in $X$ has an open neighbourhood $U$ on which the restriction $\Psi|_U$ is convex. Then, in the definitions of $[x]$ and $U_{[x],\vareps}$, the term *path connected component* can be replaced by *connected component*. Indeed, let $Y = \Psi\inv(B(w,\eps))$ or $Y = \Psi\inv(w)$. If $\Psi|_U$ is convex, so is $\Psi|_{U \cap Y}$; in particular, $U \cap Y$ is path connected. Thus, every point in $Y$ has a path connected open neighborhood with respect to the relative topology on $Y$. So the connected components of $Y$ coincide with its path connected components.
A crucial step in the proof of Theorem \[maintheorem\] is that the neighbourhoods $U$ such that $\Psi|_U \colon U \to \Psi(U)$ is convex and open can be taken to be the entire connected components $U_{[x],\vareps}$:
[\[intgoalB\]]{} Let $X$ be a Hausdorff topological space, $\calT \subset \R^n$ a convex subset, and $\Psi \colon X \to \calT$ a continuous proper map. Suppose that for every point $x \in X$ there exists an open neighbourhood $U$ of $x$ such that the map $\Psi|_U \colon U \to \Psi(U)$ is convex and open.
Then for every point $x \in X$ there exists an $\vareps > 0$ such that the map $\Psi|_{U_{[x],\vareps}} \colon U_{[x],\vareps} \to \Psi(U_{[x],\vareps})$ is convex and open.
We digress to recall standard consequences of the properness of a map.
[\[consequences of proper\]]{} Let $X$ be a Hausdorff topological space, $\calT \subset \R^n$ a subset, and $\Psi \colon X \to \calT$ a continuous proper map. Let $w_0 \in \calT$.
1. Let $U$ be an open subset of $X$ that contains the level set $\Psi\inv(w_0)$. Then there exists $\eps > 0$ such that the pre-image $\Psi\inv(B(w_0,\eps))$ is contained in $U$.
2. Suppose that every point of $\Psi\inv(w_0)$ has a connected open neighborhood in $\Psi\inv(w_0)$ with respect to the relative topology. Then there exists $\eps > 0$ such that whenever $[x]$ and $[y]$ are distinct connected components of $\Psi\inv(w_0)$ the sets $U_{[x],\eps}$ and $U_{[y],\eps}$ are disjoint.
We first prove part (1). Suppose otherwise: then for every $\varepsilon > 0$ there exists $x_\varepsilon \in X \smallsetminus U$ such that $\| \Psi(x_\varepsilon) - w_0 \| < \varepsilon$.
Let $\varepsilon_j$ be a sequence such that $\varepsilon_j \rightarrow 0$ as $j \rightarrow \infty$. Then $x_{\varepsilon_j} \in X \smallsetminus U$ for all $j$, and $\Psi(x_{\varepsilon_j}) \rightarrow w_0$ as $j \rightarrow \infty$.
The set $\{\Psi(x_{\varepsilon_j})\}_{j=1}^\infty \cup \{w_0\}$ is compact. By properness, its preimage, $\cup_{j=1}^\infty \Psi^{-1}(\Psi(x_{\varepsilon_j})) \cup \Psi^{-1}(w_0)$, is compact. The sequence $\{x_{\varepsilon_j}\}_{j=1}^\infty$ is in this preimage. So there exists a point $x_\infty$ such that every neighborhood of $x_\infty$ contains $x_{\eps_j}$ for infinitely many values of $j$.
By continuity, $\Psi(x_\infty) = w_0$. Since $U$ contains $\Psi^{-1}(w_0)$ and is open, $U$ is a neighborhood of $x_\infty$, so there exist arbitrarily large values of $j$ such that $x_{\varepsilon_j} \in U$. This contradicts the assumption $x_{\eps_{j}} \in X \ssminus U$.
We now prove part (2). Because $\Psi$ is proper, the level set $\Psi\inv(w_0)$ is compact. Because $\Psi\inv(w_0)$ is compact and is covered by connected open subsets with respect to the relative topology, it has only finitely many components $[x]$. Because these components are compact and disjoint and $X$ is Hausdorff, there exist open subsets $\mathcal{O}_{[x]}$ in $X$ such that $[x] \subset \mathcal{O}_{[x]}$ for each component $[x]$ of $\Psi\inv(w_0)$ and such that, for components $[x]$ and $[y]$ of $\Psi^{-1}(w_0)$, if $[x] \neq [y]$ then $\mathcal{O}_{[x]} \cap \mathcal{O}_{[y]} = \emptyset$. The union of the sets $\mathcal{O}_{[x]}$ is an open subset of $X$ that contains the fiber $\Psi\inv(w_0)$. By part (1), this open subset contains $\Psi\inv(B(w_0,\vareps))$ for every sufficiently small $\vareps$. For such an $\vareps$, because each $U_{[x],\vareps}$ is contained in $\mathcal{O}_{[x]}$ and the sets $\mathcal{O}_{[x]}$ are disjoint, the sets $U_{[x],\vareps}$ are disjoint.
We now prepare for the proof of Proposition \[intgoalB\]. In the remainder of this section, let $X$ be a Hausdorff topological space, $\calT \subset \R^n$ a subset, and $\Psi \colon X \to \calT$ a continuous map. Fix a point $w_0 \in \calT$. Let $\{ U_i \}$ be a collection of open subsets of $X$ whose union contains $\Psi\inv(w_0)$.
[\[sequence\]]{} Let $[x]$ be a connected component of $\Psi\inv(w_0)$. If $U_k \cap [x] \neq \emptyset$ and $U_l \cap [x] \neq \emptyset$, then there exists a sequence $k= i_0, i_1, \ldots, i_s=l$ such that $${\label{adjacent}{}}
U_{i_{q-1}} \cap U_{i_q} \cap [x] \neq \emptyset
\quad \text{ for } \quad q=1,\ldots,s.$$
Let $I_k$ denote the set of indices $j$ for which one can get from $U_k$ to $U_j$ through a sequence of sets $U_{i_0}, U_{i_1}, \ldots, U_{i_s}$ with property \eqref{adjacent}. If $j \in I_k$ and $j' \not \in I_k$ then $U_j \cap [x]$ and $U_{j'} \cap [x]$ are disjoint. Thus $$[x] = \left( \bigcup_{j \in I_k} U_j \cap [x] \right)
\cup \left( \bigcup_{j' \not\in I_k} U_{j'} \cap [x] \right)$$ expresses $[x]$ as the union of two disjoint open subsets, of which the first is non-empty. Because $[x]$ is connected, the second set in this union must be empty. So $U_l \cap [x] \neq \emptyset$ implies $l \in I_k$.
Now assume, additionally, that the covering $\{ U_i \}$ is finite and that, for each $i$, the map $\Psi|_{U_i} \colon U_i \to \Psi(U_i)$ is open. Let $${\label{def of Wi}{}}
W_i := \Psi(U_i).$$
[\[closelythesame\]]{} Let $[x]$ be a connected component of $\Psi\inv(w_0)$. For sufficiently small $\varepsilon > 0$, the following is true.
1. For any $i$ and $j$, if $U_i \cap U_j \cap [x] \neq \emptyset$, then $$W_i \cap B(w_0, \varepsilon) = \Psi(U_i \cap U_j) \cap B(w_0, \varepsilon) .$$
2. For any $k$ and $l$, if $U_k \cap [x]$ and $U_l \cap [x]$ are non-empty, then $$W_k \cap B(w_0, \varepsilon) = W_l \cap B(w_0, \varepsilon) .$$
Suppose that $U_i \cap U_j \cap [x] \neq \emptyset$. Then the set $\Psi(U_i \cap U_j)$ contains $w_0$. Since $U_i \cap U_j$ is open in $U_i$, and since the restriction of $\Psi$ to $U_i$ is an open map to its image, the set $\Psi(U_i \cap U_j)$ is open in $W_i$. Let $\varepsilon_{ij} > 0$ be such that the set $\Psi(U_i \cap U_j)$ contains $W_i \cap B(w_0, \varepsilon_{ij})$. Because we also have $\Psi(U_i \cap U_j) \subset \Psi(U_i) = W_i$, $$W_i \cap B(w_0, \varepsilon_{ij})
= \Psi(U_i \cap U_j) \cap B(w_0, \varepsilon_{ij}).$$
Let $\vareps$ be any positive number that is smaller than or equal to $\vareps_{ij}$ for all the pairs $U_i$, $U_j$ for which $U_i \cap U_j \cap [x]
\neq \emptyset$. Then, for every such pair $U_i$, $U_j$, $$W_i \cap B(w_0,\vareps) = \Psi(U_i \cap U_j) \cap B(w_0,\vareps).$$ This proves (1).
Now suppose that $U_k \cap [x] \neq \emptyset$ and $U_l \cap [x] \neq \emptyset$. By Lemma \[sequence\], one can get from $U_k$ to $U_l$ by a sequence of sets $U_k = U_{i_0}, \ldots, U_{i_s} = U_l$ such that $U_{i_{q-1}} \cap U_{i_q} \cap [x] \neq \emptyset$ for $q=1,\ldots,s$. Part (1) then implies that the intersections $W_{i_q} \cap B(w_0,\eps)$ are the same for all the elements in the sequence. Because the sequence begins with $U_k$ and ends with $U_l$, it follows that $$W_k \cap B(w_0,\vareps) = W_l \cap B(w_0,\vareps).$$ This proves (2).
Let $[x]$ be a connected component of $\Psi\inv(w_0)$. Fix an $\varepsilon > 0$ that satisfies the conditions of Lemma \[closelythesame\]. Let $${\label{Wx}{}}
{W_{[x],\varepsilon}}:= W_i \cap B(w_0, \varepsilon)
\qquad \text{ when } U_i \cap [x] \neq \emptyset.$$ By part (2) of Lemma \[closelythesame\], this is independent of the choice of such $i$. Also, define $${\label{Uxeps}{}}
\tU_{[x],\vareps} := \bigcup_{\substack{U_i \cap [x] \neq \emptyset}}
U_i \cap \Psi^{-1}(B(w_0, \varepsilon)).$$
We have $${\label{image is Wi}{}}
\begin{aligned}
\Psi({\tU_{[x],\varepsilon}}) &= \bigcup_{\substack{U_i \cap [x] \neq \emptyset}}
\Psi(U_i) \cap B(w_0, \varepsilon)
\qquad \text{by \eqref{Uxeps}} \\
&= W_{[x], \varepsilon} \qquad \text{by \eqref{def of Wi} and \eqref{Wx}} .
\end{aligned}$$
[\[components1\]]{} Suppose that, for each $i$, the level sets of $ \Psi|_{U_i} \colon U_i \to W_i $ are path connected. Then the level sets of $ \Psi|_{\tU_{[x],\vareps}} \colon
\tU_{[x],\vareps} \to W_{[x],\vareps} $ are path connected.
Let $w \in {W_{[x],\varepsilon}}$ and let $x_0, x_1 \in {\tU_{[x],\varepsilon}}\cap \Psi^{-1}(w)$. By \eqref{Uxeps}, there exist $i$ and $k$ such that $x_0 \in U_i$, $x_1 \in U_k$, $U_i \cap [x] \neq \emptyset$, and $U_k \cap [x] \neq \emptyset$. Fix such $i$ and $k$. By Lemma \[sequence\], there exists a sequence $i = i_0, i_1, \ldots, i_s = k$ such that $U_{i_{l-1}} \cap U_{i_l} \cap [x] \neq \emptyset$ for $l = 1, \ldots, s$. Part (1) of Lemma \[closelythesame\], and the definition of ${W_{[x],\varepsilon}}$, imply that $\Psi(U_{i_{l-1}} \cap U_{i_l} ) \cap B(w_0,\eps) = {W_{[x],\varepsilon}}$, and thus $U_{i_{l-1}} \cap U_{i_l} \cap \Psi^{-1}(w)$ is non-empty for each $1 \leq l \leq s$. Since each $U_{i_l} \cap \Psi^{-1}(w)$ is path connected, this implies that $x_0$ and $x_1$ can be connected by a path in ${\tU_{[x],\varepsilon}}\cap \Psi^{-1}(w)$.
[\[components2\]]{} Suppose that, for each $i$, the restriction of $\Psi$ to $U_i$ is a convex map. Then the map $${\label{rest}{}}
\Psi|_{{\tU_{[x],\varepsilon}}} \colon {\tU_{[x],\varepsilon}}\to \Psi({\tU_{[x],\varepsilon}})$$ is convex and open.
Let $x_0$ and $x_1$ be in ${\tU_{[x],\varepsilon}}$. Let $i$ be such that $x_0 \in U_i$ and $U_i \cap [x] \neq \emptyset$. By \eqref{image is Wi}, $\Psi(x_1) \in {W_{[x],\varepsilon}}$. By \eqref{Wx} and \eqref{def of Wi}, there exists $y \in U_i$ such that $\Psi(y) = \Psi(x_1)$.
By assumption, the restriction of $\Psi$ to $U_i$ is a convex map. By Lemma \[restrict\], the restriction of $\Psi$ to $U_i \cap \Psi\inv(B(w_0,\eps))$ is also convex. Let $\gamma'$ be a path in $U_i \cap \Psi\inv(B(w_0,\eps))$ from $x_0$ to $y$ such that $\Psi \circ \gamma'$ is monotone straight. By Lemma \[components1\] there exists a path $\gamma''$ in $\tU_{[x],\vareps}$ from $y$ to $x_1$ whose composition with $\Psi$ is constant. Let $\gamma$ be the concatenation of $\gamma'$ with $\gamma''$; then $\gamma$ is a path from $x_0$ to $x_1$ and $\Psi \circ \gamma$ is monotone straight.
Thus, the map \eqref{rest} is convex. To show that this map is open, we want to show that given any open set $\Omega \subset {\tU_{[x],\varepsilon}}$, its image $\Psi(\Omega)$ is open in ${W_{[x],\varepsilon}}$. By \eqref{Uxeps}, $\Psi(\Omega) = \cup_i \Psi(\Omega \cap U_i)$ for $i$ such that $U_i \cap [x] \neq \emptyset$, and each $\Psi(\Omega \cap U_i)$ is contained in $B(w_0, \varepsilon)$. Since $\Psi|_{U_i} \colon U_i \rightarrow W_i$ is open, $\Psi(\Omega \cap U_i)$ is open in $W_i$. By \eqref{Wx}, each $\Psi(\Omega \cap U_i)$ is open in ${W_{[x],\varepsilon}}$.
Let $w_0 = \Psi(x)$. For each $x' \in \Psi\inv(w_0)$, let $U_{x'}$ be an open neighbourhood of $x'$ such that the map $\Psi|_{U_{x'}} \colon U_{x'} \to \Psi(U_{x'})$ is convex and open. The sets $U_{x'}$, for $x' \in \Psi\inv(w_0)$, cover $\Psi\inv(w_0)$. Because $\Psi\inv(w_0)$ is compact, there exists a finite subcovering; denote it $\{U_i\}$.
Because $\Psi\inv(w_0)$ is compact and each point has a connected neighborhood with respect to the relative topology, $\Psi\inv(w_0)$ has only finitely many components $[x]$. Let $\eps > 0$ satisfy the conditions of Lemma \[closelythesame\] for all these components. By Lemma \[consequences of proper\], after possibly shrinking $\eps$, we may assume that $\Psi\inv(B(w_0,\eps)) \subset \cup_i U_i$ and that whenever $[x]$ and $[y]$ are distinct connected components of $\Psi\inv(w_0)$ the sets ${U_{[x],\varepsilon}}$ and ${U_{[y],\varepsilon}}$ are disjoint.
Let ${\tU_{[x],\varepsilon}}$ and ${W_{[x],\varepsilon}}$ be the sets defined in \eqref{Uxeps} and \eqref{Wx}. Then the preimage $\Psi\inv(B(w_0,\eps))$ is the union of the sets ${\tU_{[x],\varepsilon}}$, for components $[x]$ of $\Psi\inv(w_0)$. Because each ${\tU_{[x],\varepsilon}}$ is connected and contains $[x]$, it is contained in the connected component ${U_{[x],\varepsilon}}$ of $x$ in $\Psi\inv(B(w_0,\eps))$. Because the sets ${U_{[x],\varepsilon}}$ are disjoint and the union of the sets ${\tU_{[x],\varepsilon}}$ is the entire preimage $\Psi\inv(B(w_0,\eps))$, each ${\tU_{[x],\varepsilon}}$ is *equal* to ${U_{[x],\varepsilon}}$. This and Lemma \[components2\] give Proposition \[intgoalB\].
Distance with respect to a locally convex map
=============================================
Let $X$ be a Hausdorff topological space and $\Psi \colon X \to \R^n$ a continuous map. Let $x_0$ and $x_1$ be two points in $X$. We define their $\Psi$-distance to be $$d_{\Psi}(x_0,x_1) =
\inf\{l(\Psi \circ \gamma) \ | \ \gamma \colon \left[0,1\right] \rightarrow X, \
\gamma(0)=x_0, \ \gamma(1)=x_1 \}.$$ Note that the $\Psi$-distance can take any value in $[0,\infty]$. Also note that $d_\Psi(x_0,x_1) = 0$ if and only if $x_0$ and $x_1$ are in the same path-component of a level set of $\Psi$.
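As a concrete illustration of the $\Psi$-distance (a numerical sketch of our own, not part of the formal development), take $\Psi \colon \R^2 \to \R$, $(a,b) \mapsto a^2 + b^2$, and approximate $l(\Psi \circ \gamma)$ for two discretized paths; every path satisfies $l(\Psi\circ\gamma) \geq \|\Psi(x_0) - \Psi(x_1)\|$, and different paths can give different lengths:

```python
def psi(p):
    # Sample map Psi: R^2 -> R; this particular choice is ours, for illustration.
    return p[0] ** 2 + p[1] ** 2

def length_of_image(points):
    # Length of the curve Psi(p_0), Psi(p_1), ... in the target R:
    # the total variation of the sampled values.
    values = [psi(p) for p in points]
    return sum(abs(values[i + 1] - values[i]) for i in range(len(values) - 1))

def sample_segment(p, q, n=200):
    # n + 1 evenly spaced sample points on the segment from p to q.
    return [(p[0] + (i / n) * (q[0] - p[0]), p[1] + (i / n) * (q[1] - p[1]))
            for i in range(n + 1)]

x0, x1 = (1.0, 0.0), (0.0, 2.0)
lower_bound = abs(psi(x1) - psi(x0))   # = |4 - 1| = 3

# Straight segment: Psi decreases from 1 to 0.8, then increases to 4,
# so l(Psi o gamma) = 0.2 + 3.2 = 3.4 > 3.
straight = length_of_image(sample_segment(x0, x1))

# Detour through the origin: Psi decreases from 1 to 0, then increases to 4,
# so l(Psi o gamma) = 1 + 4 = 5.
detour = length_of_image(sample_segment(x0, (0.0, 0.0))
                         + sample_segment((0.0, 0.0), x1))
```

The infimum over all paths here is $3$, attained in the limit by paths whose image under $\Psi$ is monotone.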
In practice, we will work with a space $X$ which is connected and in which each point has a neighbourhood $U$ such that the restriction of $\Psi$ to $U$ is a convex map. For such a space, in the above definition of $\Psi$-distance, we may take the infimum to be over the set of paths $\gamma$ such that $\Psi \circ \gamma$ is polygonal:
Indeed, let $\gamma \colon [0,1] \rightarrow X$ be any path such that $\gamma(0) = x_0$ and $\gamma(1) = x_1$.
By our assumption on $X$, for every $\tau \in [0,1]$ there exists an open interval $J_\tau$ containing $\tau$ and an open subset $U_\tau \subset X$ such that the restriction of $\Psi$ to $U_\tau$ is a convex map and such that $\gamma(J_\tau \cap [0,1]) \subset U_\tau$.
The open intervals $\{ J_\tau \}$ form an open covering of $[0,1]$. Because the interval $[0,1]$ is compact, there exists a finite subcovering; denote it $J_1,\ldots,J_s$. Let $$\eps = \min \{ \, \text{length}(J_i \cap J_k) \ | \
i,k \in \{1,\ldots,s\} \text{ and } J_i \cap J_k \neq \emptyset \, \}.$$
Any subinterval $[\alpha,\beta] \subset [0,1]$ of length $< \eps$ is contained in one of the $J_i$s. Indeed, given such a subinterval, among the intervals of $J_1,\ldots,J_s$ that contain $\alpha$, let $J_i$ be one whose upper endpoint $b_i$ is maximal. If $\beta \geq b_i$, pick an interval $J_k$ of the covering that contains $b_i$; by the maximality of $b_i$, the interval $J_k$ does not contain $\alpha$, so its lower endpoint lies in $(\alpha, b_i)$, and then $\eps \leq \text{length}(J_i \cap J_k) < b_i - \alpha \leq \beta - \alpha < \eps$, a contradiction. Hence $\beta < b_i$, so $J_i$ also contains $\beta$.
Thus, for any subinterval $[\alpha,\beta] \subset [0,1]$ of length $< \eps$ there exists an open subset $U \subset X$ such that the restriction of $\Psi$ to $U$ is a convex map and such that $\gamma(\alpha)$ and $\gamma(\beta)$ are both contained in $U$.
Partition $[0,1]$ into $m$ intervals $0 = t_0 < \ldots < t_m = 1$ such that $|t_j - t_{j-1}| < \varepsilon$ for each $j$. By the previous paragraph, for every $1 \leq j \leq m$ there exists an open subset $U \subset X$ such that the restriction of $\Psi$ to $U$ is a convex map and such that $\gamma(t_{j-1})$ and $\gamma(t_j)$ are both contained in $U$. Because the restriction of $\Psi$ to $U$ is convex, there exists a path $\gamma_j$ in $X$ connecting $\gamma(t_{j-1})$ and $\gamma(t_j)$ such that the image of $\Psi \circ \gamma_j$ is a (possibly degenerate) segment with a weakly monotone parametrization. The path $\gamma'$ that is formed by concatenating $\gamma_1, \ldots, \gamma_m$ connects $x_0$ and $x_1$, the composition $\Psi \circ \gamma'$ is polygonal, and $l(\Psi\circ\gamma') \leq l(\Psi\circ\gamma)$, because $l(\Psi\circ\gamma_j) = \|\Psi(\gamma(t_j)) - \Psi(\gamma(t_{j-1}))\| \leq l(\Psi\circ\gamma|_{[t_{j-1},t_j]})$ for each $j$.
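The Lebesgue-number step in the argument above can be checked numerically. In this sketch (our own illustration; the cover, the grid resolution, and the margin $0.9\,\eps$ are arbitrary choices), $\eps$ is the minimal length of a nonempty pairwise intersection, and every subinterval shorter than $\eps$ lies in a single interval of the cover:

```python
# A finite cover of [0,1] by open intervals -- our own sample choice.
intervals = [(-0.1, 0.35), (0.25, 0.6), (0.5, 0.85), (0.7, 1.1)]

def overlap(a, b):
    # Length of the intersection of two open intervals (negative if disjoint).
    return min(a[1], b[1]) - max(a[0], b[0])

# eps = minimal length of a nonempty pairwise intersection; here eps = 0.1.
eps = min(overlap(a, b) for a in intervals for b in intervals if overlap(a, b) > 0)

def contained_in_one(alpha, beta):
    # Is [alpha, beta] contained in a single open interval of the cover?
    return any(a < alpha and beta < b for (a, b) in intervals)

# Every subinterval of length 0.9 * eps < eps lies in one interval of the cover.
grid = 500
lebesgue_ok = all(
    contained_in_one(i / grid, min(i / grid + 0.9 * eps, 1.0))
    for i in range(grid + 1)
)
```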
Proof that local convexity and openness imply global convexity and openness
===========================================================================
[\[sec:last proof\]]{}
[\[Midpoint\]]{} Let $X$ be a connected Hausdorff topological space, $\calT \subset \R^n$ a convex subset, and $\Psi \colon X \rightarrow \calT$ a continuous and proper map. Suppose that for every point $x \in X$ there exists an open neighbourhood $U$ such that the restriction of $\Psi$ to $U$ is a convex map.
Let $x_0$ and $x_1$ be in $X$. Then there exists a point $x_{1/2} \in X$ such that $${\label{xhalf}{}}
d_{\Psi}(x_0, x_{1/2}) =
d_{\Psi}(x_{1/2}, x_1) = \frac{1}{2}d_{\Psi}(x_0, x_1).$$
Choose a sequence of paths $\gamma_j$ connecting $x_0$ and $x_1$ such that $l(\Psi \circ \gamma_j) \rightarrow d_{\Psi}(x_0, x_1)$ as $j \rightarrow \infty$.
Let $t_j \in [0,1]$ be such that $\gamma_j(t_j)$ is a midpoint of the path $\gamma_j$ with respect to the length of $\Psi \circ \gamma_j$:
$$l(\Psi \circ \gamma_j \arrowvert_{\scriptscriptstyle{\left[0,t_j\right]}}) =
l(\Psi \circ \gamma_j \arrowvert_{\scriptscriptstyle{\left[t_j,1\right]}}) =
\frac{1}{2}l(\Psi \circ \gamma_j).$$
Such a $t_j$ exists because the function $t \mapsto l(\Psi \circ \gamma_j|_{[0,t]})$ is continuous and non-decreasing.
Let $r > \half d_{\Psi}(x_0,x_1)$. Then all but finitely many of the midpoints $\gamma_j(t_j)$ lie in the set $$A = \Psi^{-1} ( \overline{B}(\Psi(x_0),r) ) ;$$ indeed, $\| \Psi(\gamma_j(t_j)) - \Psi(x_0) \| \leq \frac{1}{2} l(\Psi \circ \gamma_j) \rightarrow \frac{1}{2} d_\Psi(x_0,x_1) < r$. This set is compact because $\Psi$ is proper. So there exists a point ${x_{1/2}}$ such that every neighbourhood of ${x_{1/2}}$ contains $\gamma_j(t_j)$ for infinitely many values of $j$. We will show that the point ${x_{1/2}}$ satisfies equation \eqref{xhalf}.
We first show that $d_{\Psi}(x_0,x_\frac{1}{2}) \leq \frac{1}{2}d_{\Psi}(x_0,x_1)$, or, equivalently, that for every $\varepsilon > 0$ there exists a path $\gamma$ connecting $x_0$ and $x_\frac{1}{2}$ such that $l(\Psi \circ \gamma) < \frac{1}{2}d_{\Psi}(x_0,x_1) + \varepsilon$.
Let $U$ be a neighbourhood of $x_{1/2}$ such that the restriction of $\Psi$ to $U$ is a convex map. Because ${x_{1/2}}$ is a cluster point of the midpoints $\gamma_j(t_j)$ and $l(\Psi \circ \gamma_j) \rightarrow d_\Psi(x_0,x_1)$, there exists $j$ such that the following facts are true:
1. $\gamma_j(t_j) \in U$ and $\| \Psi(\gamma_{j}(t_{j})) - \Psi({x_{1/2}}) \|
< \frac{\varepsilon}{2}$.
2. $l(\Psi \circ \gamma_j) < d_{\Psi}(x_0,x_1) + \varepsilon$.
By (i) and since $\Psi|_U$ is a convex map, there exists a path $\mu$ connecting $\gamma_j(t_j)$ and ${x_{1/2}}$ such that $l(\Psi \circ \mu) < \frac{\varepsilon}{2}$. Let $\gamma$ be the concatenation of $\gamma_j|_{[0,t_j]}$ and $\mu$. Then $\gamma$ is a path connecting $x_0$ and ${x_{1/2}}$, and $l(\Psi \circ \gamma) < \frac{1}{2}d_{\Psi}(x_0, x_1) + \varepsilon$.
Thus, $d_{\Psi}(x_0,x_{1/2}) \leq \frac{1}{2}d_{\Psi}(x_0,x_1)$. By the same argument, $d_{\Psi}(x_{1/2},x_1) \leq \frac{1}{2}d_{\Psi}(x_0,x_1)$. If either of these were a strict inequality, then by concatenating paths that nearly realize these two distances we could obtain a path $\gamma$ from $x_0$ to $x_1$ with $l(\Psi \circ \gamma) < d_{\Psi}(x_0, x_1)$, contradicting the definition of $d_\Psi(x_0,x_1)$. Thus, $d_{\Psi}(x_0, x_{1/2}) = d_{\Psi}(x_{1/2}, x_1) =
\frac{1}{2}d_{\Psi}(x_0, x_1)$.
To prove Theorem \[maintheorem\], we need some uniform control on the sizes of the $\vareps$ for which the restrictions of $\Psi$ to the connected components $U_{[x],\vareps}$ of $\Psi\inv(B(\Psi(x),\vareps))$ are convex. The precise result that we will use is established in the following proposition:
[\[intgoalA\]]{} Let $X$ be a Hausdorff topological space and let $\Psi \colon X \to \R^n$ be a continuous map. Suppose that for each $x \in X$ there exists an $\varepsilon > 0$ such that the restriction of $\Psi$ to the set $U_{[x],\vareps}$ is a convex map.
Then for every compact subset $A \subset X$ there exists $\varepsilon > 0$ such that for every $x \in A$ and $x' \in X$, if $d_{\Psi}(x,x') < \varepsilon$, then there exists a path $\gamma \colon [0,1] \to X$ such that $\gamma(0) = x$, $\gamma(1) = x'$, and $\Psi \circ \gamma$ is monotone straight.
For each $x \in X$, let $\vareps_x > 0$ be such that the restriction of $\Psi$ to the set $U_{[x],\vareps_x}$ is a convex map. The sets $U_{[x],\vareps_x/2}$, for $x \in A$, form an open covering of the compact set $A$. Choose a finite subcovering: let $x_1,\ldots,x_k$ be points of $A$ and $\vareps_1,\ldots,\vareps_k$ be positive numbers such that, for each $1 \leq i \leq k$, the restriction of $\Psi$ to the set $U_{[x_i],\vareps_i}$ is a convex map, and such that the sets $U_{[x_i],\vareps_i/2}$ cover $A$.
Let $\vareps = \min\limits_{1 \leq i \leq k } \frac{\vareps_i}{2} $.
Let $x \in A$. Let $1 \leq i \leq k$ be such that $x \in U_{[x_i],\vareps_i/2}$.
Because $U_{[x_i],\vareps_i/2}$, by its definition, is contained in $\Psi\inv(B(\Psi(x_i),\vareps_i/2))$, we have $\| \Psi(x) - \Psi(x_i) \| < \vareps_i/2$.
Because $x$ and $x_i$ are also contained in the larger set $U_{[x_i],\vareps_i}$, and the restriction of $\Psi$ to this set is a convex map, there exists a path $\gamma'$ from $x_i$ to $x$ such that $\Psi \circ \gamma'$ is monotone straight; in particular, $l(\Psi \circ \gamma') = \| \Psi(x) - \Psi(x_i) \|$, so $l(\Psi \circ \gamma') < \vareps_i / 2$.
Let $x' \in X$ be such that $d_\Psi(x,x') < \vareps$. Then, by the definition of $d_\Psi$, there exists a path $\gamma''$ from $x$ to $x'$ such that $l(\Psi \circ \gamma'') < \vareps$.
Let $\hat{\gamma}$ be the concatenation of $\gamma'$ and $\gamma''$. Then $\hat{\gamma}$ is a path from $x_i$ to $x'$, and $l(\Psi \circ \hat{\gamma}) \leq l(\Psi \circ \gamma') + l(\Psi \circ \gamma'')
< \vareps_i/2 + \vareps \leq \vareps_i$. Therefore the image of $\Psi \circ \hat{\gamma}$ is contained in $B(\Psi(x_i),\vareps_i)$. Thus, $x'$ and $x_i$ are in the same connected component of $\Psi\inv(B(\Psi(x_i),\vareps_i))$; that is, $x'$ is in the set $U_{[x_i],\vareps_i}$. Because $x$ is also in the set $U_{[x_i],\vareps_i}$, and because the restriction of $\Psi$ to this set is a convex map, there exists a path $\gamma$ from $x$ to $x'$ such that $\Psi \circ \gamma$ is monotone straight.
[\[tietzepsi\]]{} Let $X$ be a connected Hausdorff topological space, let $\calT \subset \R^n$ be a convex subset, and let $\Psi \colon X \to \calT$ be a continuous proper map. Suppose that for every compact subset $A \subset X$ there exists $\varepsilon > 0$ such that, for every $x \in A$ and $x' \in X$, if $d_{\Psi}(x,x') < \varepsilon$, then there exists a path $\gamma \colon [0,1] \to X$ such that $\gamma(0) = x$, $\gamma(1) = x'$, and $\Psi \circ \gamma$ is monotone straight.
Then $\Psi \colon X \to \R^n$ is a convex map.
Fix $x_0$ and $x_1$ in $X$.
By Lemma \[Midpoint\], there exists a point $x_{1/2}$ such that $$d_{\Psi}(x_0, x_{1/2})
= d_{\Psi}(x_{1/2}, x_1)
= \frac{1}{2}d_{\Psi}(x_0, x_1).$$
Likewise, there exists a point $x_{1/4}$ which satisfies $$d_{\Psi}(x_0, x_{1/4})
= d_{\Psi}(x_{1/4}, x_{1/2})
= \frac{1}{2}d_{\Psi}(x_0, x_{1/2}).$$
By iteration, we get a map $\frac{j}{2^m} \mapsto x_{\frac{j}{2^m}}$, for nonnegative integers $j$ and $m$ with $0 \leq j \leq 2^m$, such that $${\label{eq1}{}}
d_{\Psi}(x_\frac{j-1}{2^m}, x_\frac{j}{2^m}) = d_{\Psi}(x_\frac{j}{2^m},
x_\frac{j+1}{2^m}) = \frac{1}{2}d_{\Psi}(x_\frac{j-1}{2^m},
x_\frac{j+1}{2^m}).$$
Let $r > d_{\Psi}(x_0, x_1)$. Let $\vareps > 0$ be associated with the compact set $$A = \Psi^{-1}(\overline{B}(\Psi(x_0), r))$$ as in the assumption of the proposition.
Note that each $x_{\frac{j}{2^m}}$ lies in $A$: indeed, $\|\Psi(x_0) - \Psi(x_{\frac{j}{2^m}})\| \leq d_\Psi(x_0, x_{\frac{j}{2^m}}) \leq d_{\Psi}(x_0, x_1) < r$. Choose $m$ large enough that for every $1 \leq j \leq 2^m$, $$d_{\Psi}(x_{\frac{j-1}{2^m}}, x_{\frac{j}{2^m}}) < \frac{\varepsilon}{2}.$$
By the assumption, there exists a path $\gamma_j$ from $x_{(j-1)/{2^m}}$ to $x_{{j}/{2^m}}$ such that $\Psi \circ \gamma_j$ is monotone straight. Thus, $${\label{colin}{}}
d_\Psi(x_{\frac{j-1}{2^m}}, x_{\frac{j}{2^m}}) =
\|\Psi(x_{\frac{j-1}{2^m}}) - \Psi(x_{\frac{j}{2^m}})\|
\quad \text{for each } 1 \leq j \leq 2^m .$$
Similarly, $$d_\Psi(x_{\frac{j-1}{2^m}}, x_{\frac{j+1}{2^m}}) =
\|\Psi(x_{\frac{j-1}{2^m}}) - \Psi(x_{\frac{j+1}{2^m}})\|
\quad \text{for each } 1 \leq j < 2^m.$$
Thus equation \eqref{eq1} can be rewritten as $$\|\Psi(x_\frac{j-1}{2^m}) - \Psi(x_\frac{j}{2^m})\|
= \|\Psi(x_\frac{j}{2^m}) - \Psi(x_\frac{j+1}{2^m})\|
= \frac{1}{2}\|\Psi(x_{\frac{j-1}{2^m}}) - \Psi(x_{\frac{j+1}{2^m}})\|,$$ which implies, by the triangle inequality, that the points $\Psi(x_\frac{j-1}{2^m})$, $\Psi(x_\frac{j}{2^m})$, $\Psi(x_\frac{j+1}{2^m})$ are collinear. The concatenation of the paths $\gamma_j$, for $1 \leq j \leq 2^m$, is a path from $x_0$ to $x_1$ whose composition with $\Psi$ is monotone straight.
Let $X$ be a connected Hausdorff topological space, $\calT \subset \R^n$ a convex subset, and $\Psi \colon X \to \calT$ a continuous proper map. Suppose that for every point $x \in X$ there exists an open neighbourhood $U$ such that the map $\Psi|_U \colon U \to \Psi(U)$ is convex and open.
By Proposition \[intgoalB\], for every point $x$ there exists an $\vareps > 0$ such that the map $$\Psi|_{U_{[x],\vareps}} \colon U_{[x],\vareps} \to \Psi(U_{[x],\vareps})$$ is convex and open.
By Proposition \[intgoalA\], for every compact subset $A \subset X$ there exists an $\vareps > 0$ such that for every $x \in A$ and $x' \in X$, if $d_\Psi(x,x') < \vareps$, then there exists a path $\gamma \colon [0,1] \to X$ such that $\gamma(0) = x$, $\gamma(1) = x'$, and $\Psi \circ \gamma$ is monotone straight.
By Proposition \[tietzepsi\], the map $\Psi \colon X \to \R^n$ is convex.
To show that the map $\Psi \colon X \to \Psi(X)$ is open, it is enough to show that for each $w_0 \in \R^n$ there exists $\vareps > 0$ such that the restriction of $\Psi$ to $\Psi\inv(B(w_0,\vareps))$ is open as a map to its image.
Fix $w_0 \in \R^n$.
Because the map $\Psi \colon X \to \R^n$ is convex, the level set $\Psi\inv(w_0)$ is connected. Thus, this level set consists of a single connected component, $[x]$.
By Proposition \[intgoalB\], for sufficiently small $\vareps$, the restriction of $\Psi$ to the set $U_{[x],\vareps}$ is open as a map to its image. The set $U_{[x],\vareps}$ is an open set that contains $\Psi\inv(w_0)$. Because $\Psi$ is proper, there exists an $\vareps' > 0$ such that the set $U_{[x],\vareps}$ contains the preimage $\Psi\inv(B(w_0,\vareps'))$; see Lemma \[consequences of proper\]. Thus, the restriction of $\Psi$ to the preimage $\Psi\inv(B(w_0,\vareps'))$ is open as a map to its image.
Examples
========
[\[sec:moment\]]{}
[\[Cn\]]{} The map from $\C^n$ to $\R^n$ given by $${\label{PhiCn}{}}
(z_1,\ldots,z_n) \mapsto (|z_1|^2,\ldots,|z_n|^2)$$ is convex, and it is open as a map from $\C^n$ to the positive orthant $\R_+^n$.
Moreover, the restriction of the map to any ball $B_\rho = \{ z \in \C^n \ | \ \| z \| < \rho \}$ is convex, and it is open as a map to its image.
Consider the following commuting diagram of continuous maps: $${\label{diagramCn}{}}
\xymatrix{ \R_+^n \times (S^1)^n \ar[rrrd]^{\text{projection}}
\ar[d]_{(s_1,\ldots,s_n,e^{i\theta_1},\ldots,e^{i\theta_n})
\mapsto
(s_1^{1/2}e^{i\theta_1},\ldots,s_n^{1/2}e^{i\theta_n})} &&& \\
\C^n \ar[rrr]_{(z_1,\ldots,z_n) \mapsto (|z_1|^2,\ldots,|z_n|^2)}
&&& \R_+^n .}$$ Because the projection map is convex and open and the map on the left is onto, the bottom map is convex and open.
Because the ball $B_\rho$ is the pre-image of a convex set (namely, it is the preimage of the set $\{ (s_1,\ldots,s_n) \ | \ s_1 + \ldots + s_n < \rho^2 \}$), the restriction of the map to $B_\rho$ is also convex and open as a map to its image.
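One can also sample the map numerically and confirm that the image of the ball $B_\rho$ behaves like the convex region $\{ s \in \R_+^n \ | \ s_1 + \ldots + s_n < \rho^2 \}$. This sketch (our own illustration; $\rho$, $n$, and the sample sizes are arbitrary) checks membership of sampled image points and of their midpoints:

```python
import random

random.seed(0)
rho = 1.5   # arbitrary radius for the illustration
n = 3

def Psi(z):
    # z |-> (|z_1|^2, ..., |z_n|^2), the map of the lemma.
    return tuple(abs(w) ** 2 for w in z)

def random_point_in_ball():
    # Rejection sampling of a point of the open ball B_rho in C^n.
    while True:
        z = tuple(complex(random.uniform(-rho, rho), random.uniform(-rho, rho))
                  for _ in range(n))
        if sum(abs(w) ** 2 for w in z) < rho ** 2:
            return z

def in_image(s):
    # The image of B_rho: the convex region {s_i >= 0, sum_i s_i < rho^2}.
    return all(t >= 0 for t in s) and sum(s) < rho ** 2

samples = [Psi(random_point_in_ball()) for _ in range(200)]

# Every sampled image point lies in the region, and midpoints of pairs of
# image points lie in it as well, as convexity of the image predicts.
midpoints_ok = all(
    in_image(tuple(0.5 * (u + v) for u, v in zip(s, t)))
    for s in samples[:20] for t in samples[:20]
)
```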
Let $\alpha_1,\ldots,\alpha_n$ be any vectors. Then the map $${\label{PhiH}{}}
\Phi_H \colon (z_1,\ldots,z_n)
\mapsto \sum_{j=1}^n |z_j|^2 \alpha_j$$ is convex, and it is open as a map to its image.
Moreover, the restriction of $\Phi_H$ to any ball $B_\rho = \{ z \in \C^n \ | \ \| z \| < \rho \}$ is convex, and it is open as a map to its image.
Because the map \[PhiCn\] is convex, so is its composition with the linear map $(s_1,\ldots,s_n) \mapsto
(s_1 \alpha_1 + \ldots + s_n \alpha_n)$. Because the restriction of a linear map to the positive orthant $\R_+^n$ is open as a map to its image[^2], so is this composition. Because the map is open as a map to its image, so is its restriction to the open ball $B_\rho$. Because this restriction is the composition of a convex map with a linear projection, it is convex.
We proceed with applications to symplectic geometry. Relevant definitions can be found e.g. in the original paper [@GS] of Guillemin and Sternberg. We first describe local models for Hamiltonian torus actions.
[\[modelY\]]{} Let $T \cong (S^1)^k$ be a torus, $\t \cong \R^k$ its Lie algebra, and $\t^* \cong \R^k$ the dual space. Let $H \subset T$ be a closed subgroup, $\h \subset \t$ its Lie algebra, and $\h^0 \subset \t^*$ the annihilator of $\h$ in $\t^*$. Let $H$ act on $\C^n$ through a group homomorphism $H \to (S^1)^n$ followed by coordinatewise multiplication. The corresponding quadratic moment map, $\Phi_H \colon \C^n \to \h^*$, has the form $z \mapsto \sum_{j=1}^n |z_j|^2 \alpha_j$ where $\alpha_1,\ldots,\alpha_n$ are elements of $\h^*$ (namely, they are the weights of the $H$ action on $\C^n$, times $\frac{1}{2}$).
Consider the model $$Y = T \times_H \C^n \times \h^0;$$ its elements are represented by triples $[a,z,\nu]$ with $a \in T, z \in \C^n$, and $\nu \in \h^0$, with $[ab,z,\nu] = [a, b\cdot z, \nu]$ for all $b \in H$. Fix a splitting $\t^* = \h^* \oplus \h^0$, and consider the map $$\Phi_Y \colon T \times_H \C^n \times \h^0 \to \t^*
\quad , \quad
[a,z,\nu] \mapsto \Phi_H(z) + \nu .$$
The map $\Phi_Y$ is convex and is open as a map to its image. This follows from the commuting diagram $$\begin{CD}
T \times \C^n \times \h^0 @> (a,z,\nu) \mapsto (\Phi_H(z),\nu) >>
\h^* \times \h^0 \\
@VVV @V \cong VV \\
T \times_H \C^n \times \h^0 @> \Phi_Y >> \t^*,
\end{CD}$$ in which the top map is convex and is open as a map to its image, the map on the left is onto, and the map on the right is a linear isomorphism.
Similarly, if $D \subset \C^n$ and $D' \subset \h^0$ are disks centered at the origin, the restriction of $\Phi_Y$ to the subset $T \times_H D \times D'$ of $T \times_H \C^n \times \h^0$ is convex and is open as a map to its image. This follows from the diagram $$\begin{CD}
T \times D \times D' @> (a,z,\nu) \mapsto (\Phi_H(z),\nu) >>
\h^* \times \h^0 \\
@VVV @V \cong VV \\
T \times_H D \times D' @> \Phi_Y >> \t^*.
\end{CD}$$
[\[GS local convexity\]]{} Let $T$ act on a symplectic manifold with a moment map $\Phi \colon M \to \t^*$. Then each point of $M$ is contained in an open set $U \subset M$ such that the restriction of $\Phi$ to $U$ is convex and is open as a map to its image, $\Phi(U)$.
Fix a point $x \in M$.
There exists a $T$-invariant neighbourhood $U$ of $x$ and an equivariant diffeomorphism $f \colon U \to T \times_H D \times D'$ that carries $\Phi|_U$ to a map that differs from $\Phi_Y$ by a constant in $\t^*$, where the model $T \times_H D \times D'$ and the map $\Phi_Y$ are as in §\[modelY\]. This follows from the local normal form theorem for Hamiltonian torus actions [@GS2]. Because the restriction of $\Phi_Y$ to $T \times_H D \times D'$ is convex and is open as a map to its image, so is $\Phi|_U$.
We can now recover the convexity theorem of Atiyah, Guillemin, and Sternberg along the lines given by Condevaux-Dazord-Molino.
Let $M$ be a manifold equipped with a symplectic form and a torus action, and let $\Phi \colon M \to \t^*$ be a corresponding moment map. Suppose that $\Phi$ is proper as a map to some convex subset of $\t^*$. Then the image of $\Phi$ is convex, its level sets are connected, and the moment map is open as a map to its image.
By Proposition \[GS local convexity\], every point in $M$ is contained in an open set $U$ such that the map $\Phi|_U$ is convex and is open as a map to its image, $\Phi(U)$. The conclusion then follows from Theorem \[Theorem\].
The results of Birtea-Ortega-Ratiu
==================================
[\[sec:ratiu\]]{}
The paper [@BOR1] of Birtea, Ortega, and Ratiu contains results that are similar to ours. For the benefit of the reader, we present their results here.
Let $X$ be a topological space that is connected, locally connected, first countable, and normal. Let $V$ be a finite dimensional vector space. Let $f \colon X \to V$ be a map that satisfies the following conditions.
1. The map $f$ is continuous and is closed.
2. The map $f$ has *local convexity data*: for each $x \in X$ and each sufficiently small neighborhood $U$ of $x$ there exists a convex cone $C \subset V$ with vertex at $f(x)$ such that the restriction $f|_U \colon U \to C$ is an open map with respect to the subset topology on $C \subset V$.
3. The map $f$ is *locally fiber connected*: each open neighborhood of a point $x \in X$ contains a neighborhood $U$ of $x$ that meets at most one connected component of the fiber $f\inv(f(x'))$ for every $x' \in U$.
Then the fibers of $f$ are connected, the map $f$ is open onto its image, and the image $f(X)$ is a closed convex set.
The paper [@BOR2] contains a more general convexity result; in particular, it contains a more liberal definition of having local convexity data: for each $x \in X$ there exist arbitrarily small neighborhoods $U$ of $x$ such that $f(U)$ is convex [@BOR2 Def.2.8]. Here, openness of the maps $f|_U \colon U \to f(U)$ is not part of the definition of “local convexity data”, but it is assumed separately.
Birtea-Ortega-Ratiu also sketch a proof of the following infinite dimensional version:
Let $X$ be a topological space that is connected, locally connected, first countable, and normal. Let $(V,\| \ \|)$ be a Banach space that is the dual of another Banach space. Let $f \colon X \to V$ be a map that satisfies the following conditions.
1. The map $f$ is continuous with respect to the norm topology on $V$ and is closed with respect to the weak-star topology on $V$.
2. The map $f$ has *local convexity data* (see above).
3. The map $f$ is *locally fiber connected* (see above).
Then the fibers of $f$ are connected, the map $f$ is open onto its image with respect to the weak-star topology, and the image $f(X) \subset V$ is convex and is closed in the weak-star topology.
- We work with a convex subset of $V$; they similarly note that their theorem remains true with $V$ replaced by a convex subset of $V$ [@BOR1 remark 2.29].
- In [@BOR2] they allow more general target spaces, which are not vector spaces.
- We assume that the domain is Hausdorff and the map is proper (in the sense that the preimage of a compact set is compact); they assume that the domain is first countable and normal and that the map is closed. We are not aware of non-artificial examples where one of these assumptions holds and the other doesn’t.
- We assume that each point is contained in an open set on which the map is a *convex map*, a condition that we define in Definition \[def:convex\], section \[sec:local-global\]. They assume that the map *has local convexity data* (defined in [@BOR1 Def.2.7] and re-defined in [@BOR2 Def.2.8]) and satisfies the *locally fiber connected condition* (defined in [@BOR1 Def.2.15] as a slight generalization of [@benoist §3.4, after Def.3.6]).
The inclusion map of a closed ball into $\R^n$ is a convex map in our sense. It does not have local convexity data in the sense of [@BOR1], but it does have local convexity data in the sense of [@BOR2].
- If a map is convex, then it has local convexity data (in the broader sense, of [@BOR2]) and it is locally fiber connected. Thus, our “convexity/connectedness” assumptions are stricter than those of [@BOR1], but our conclusion is stronger.
- Both we and [@BOR1] allow a broad interpretation of “local”:
- In [@BOR1], the “locally fiber connected condition" on a subset $A$ of $X$ with respect to a map $f \colon X \to V$ depends not only on the restriction of the map $f$ to the set $A$ but also on the information of which points in $A$ belong to the same fiber *in $X$*. (This is where the definition of [@BOR1] differs from that of Benoist.)
- In our paper, we assume that each point is contained in an open set on which the map is convex and is open as a map to its image, but we do not insist that these open sets form a basis for the topology (cf. Remark \[U not small\]). (For example, in the presence of a group action, it is enough to check neighborhoods of orbits rather than neighborhoods of individual points.)
[CDM]{}
M. Atiyah, “[Convexity and commuting hamiltonians]{}”, Bull. London Math. Soc. **14** (1982), 1–15.
Y. Benoist, “[Actions symplectiques de groupes compacts]{}”, Geometriae Dedicata **89** (2002), 181–245.
P. Birtea, J-P. Ortega, and T. S. Ratiu, “[Openness and convexity for momentum maps]{}”, Trans. Amer. Math. Soc. **361** (2009), no. 2, 603–630.
P. Birtea, J-P. Ortega, and T. S. Ratiu, “[Metric convexity in the symplectic category]{}", arXiv:math/0609491, appeared as “A Local-to-Global Principle for Convexity in Metric Spaces", J. Lie Th. **18** (2008), no. 2, 445–469.
L. M. Blumenthal and R. W. Freese, “[Local convexity in metric spaces]{}” (Spanish), Math. Notae **18** (1962), 15–22.
J. Cel, “[A generalization of Tietze’s theorem on local convexity for open sets]{}”, Bull. Soc. Roy. Sci. Liège 67 (1998), no. 1–2, 31–33.
M. Condevaux, P. Dazord, and P. Molino, “[Geometrie du moment]{}”, in Sem. Sud-Rhodanien 1988.
V. Guillemin and S. Sternberg, “[Convexity properties of the moment mapping]{}”, Invent. Math. **67** (1982), 491–513.
V. Guillemin and S. Sternberg, “[A normal form for the moment map]{}", in: Differential geometric methods in mathematical physics (Jerusalem, 1982), 161–175, Math. Phys. Stud., 6, Reidel, Dordrecht, 1984.
J. Hilgert, K.H. Neeb, and W. Plank, “Symplectic convexity theorems”, in Sem. Sophus Lie 3 (1993) 123–135.
David C. Kay, “[Generalizations of Tietze’s theorem on local convexity for sets in $R^d$]{}", Discrete geometry and convexity (New York, 1982), 179–191, Ann. New York Acad. Sci., 440, New York Acad. Sci., New York, 1985.
P. J. Kelly and M. L. Weiss, “Geometry and Convexity: a study of mathematical methods”, Pure and applied mathematics, John Wiley and Sons, Wiley-Interscience, New York, 1979.
V. L. Klee, Jr., “[Convex sets in linear spaces]{}”, Duke Math. J. **18** (1951), 443–466.
F. Knop, “Convexity of Hamiltonian manifolds”, J. Lie Theory **12** (2002), no. 2, 571–582.
S. Nakajima, “[Über konvexe Kurven und Flächen]{}”, Tohoku Math. J. **29** (1928), 227–230.
R. Sacksteder, E. G. Straus, and F. A. Valentine, “[A generalization of a theorem of Tietze and Nakajima on local convexity]{}”, J. London Math. Soc. **36** (1961), 52–56.
I. J. Schoenberg, “[On local convexity in Hilbert space]{}”, Bull. Amer. Math. Soc. **48** (1942), 432–436.
T. Tamura, “[On a relation between local convexity and entire convexity]{}”, J. Sci. Gakugei Fac. Tokushima Univ. **1** (1950), 25–30.
H. Tietze, “[Über Konvexheit im kleinen und im großen und über gewisse den Punkten einer Menge zugeordnete Dimensionszahlen]{}”, Math. Z. **28** (1928), 697–707.
[^1]: $^*$ formerly Christina Marshall
[^2]: This is a consequence of the following lemma:
> For any vectors $\alpha_1,\ldots,\alpha_n \in \R^{\ell}$ there exists $\eps > 0$ such that for every $\beta = \sum s_j \alpha_j$ with all $s_j \geq 0$, if $\| \beta \| < \eps$ then there exists $s' = (s'_1,\ldots,s'_n)$ such that $\beta = \sum s'_j \alpha_j$ and $\| s' \| < 1$.
Let $\beta = \sum s_j \alpha_j$ with all $s_j \geq 0$. Then there exist $s'_j$ such that $\beta = \sum s'_j \alpha_j$, all $s'_j \geq 0$, and the vectors $\{ \alpha_j \ | \ s'_j \neq 0 \}$ are linearly independent; cf. Carathéodory’s theorem in convex geometry. Let $J = \{ j \ | \ s'_j \neq 0 \}$. The map $s \mapsto \sum s_j \alpha_j$ from $\R^J$ to $\text{span} \{ \alpha_j \ | \ j \in J \}$ is a linear isomorphism; denote its inverse by $L_J$. Then $s' = L_J(\beta)$, so $\| s' \| \leq \| L_J \| \| \beta \|$ where $\| L_J \|$ is the operator norm. The lemma holds with any $\eps < \min\limits_J
\left\{ \frac{1}{\| L_J \|} \right\}$ where $J$ runs over the subsets of $\{ 1,\ldots,n \}$ for which $\{ \alpha_j \ | \ j \in J \}$ are linearly independent.
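A quick numerical sanity check of this lemma (our sketch; the vectors and the brute-force search are our choices). We compute $\eps$ as in the proof, using $\| L_J \| = 1/\sigma_{\min}(A_J)$ where $A_J$ is the matrix with columns $\{\alpha_j\}_{j\in J}$, and then verify that every sampled $\beta$ with $\|\beta\| < \eps$ admits a representation with $\|s'\| < 1$:

```python
import math, random, itertools

# Vectors in R^2 chosen for illustration only.
alphas = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def smallest_singular_value(cols):
    # sigma_min of the matrix whose columns are `cols` (1 or 2 columns).
    if len(cols) == 1:
        (x, y), = cols
        return math.hypot(x, y)
    (a, c), (b, d) = cols                         # A = [[a, b], [c, d]]
    p, q, r = a*a + c*c, a*b + c*d, b*b + d*d     # entries of A^T A
    lam_min = ((p + r) - math.sqrt((p - r)**2 + 4*q*q)) / 2
    return math.sqrt(max(lam_min, 0.0))

# eps < min over independent subsets J of 1/||L_J|| = sigma_min(A_J).
sigmas = []
for k in (1, 2):
    for J in itertools.combinations(range(3), k):
        s = smallest_singular_value([alphas[j] for j in J])
        if s > 1e-12:                 # keep only independent subsets
            sigmas.append(s)
eps = 0.99 * min(sigmas)

def representable_with_small_norm(beta):
    # Search pairs of alphas for s' with A_J s' = beta and ||s'|| < 1.
    bx, by = beta
    for (a, c), (b, d) in itertools.combinations(alphas, 2):
        det = a*d - b*c
        if abs(det) < 1e-12:
            continue
        s1, s2 = (bx*d - b*by) / det, (a*by - c*bx) / det   # Cramer's rule
        if math.hypot(s1, s2) < 1.0:
            return True
    return False

random.seed(0)
ok = True
for _ in range(1000):
    s = [random.random() for _ in alphas]          # coefficients s_j >= 0
    beta = (sum(si*a[0] for si, a in zip(s, alphas)),
            sum(si*a[1] for si, a in zip(s, alphas)))
    norm = math.hypot(*beta)
    if norm == 0.0:
        continue
    scale = 0.9 * eps / norm                       # force ||beta|| < eps
    beta = (beta[0]*scale, beta[1]*scale)
    ok = ok and representable_with_small_norm(beta)
print(ok)   # every sampled beta admits a representation with ||s'|| < 1
```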
---
abstract: |
We ask the question “when will natural selection on a gene in a spatially structured population cause a detectable trace in the patterns of genetic variation observed in the contemporary population?”. We focus on the situation in which ‘neighbourhood size’, that is the effective local population density, is small. The genealogy relating individuals in a sample from the population is embedded in a spatial version of the ancestral selection graph and through applying a diffusive scaling to this object we show that whereas in dimensions at least three, selection is barely impeded by the spatial structure, in the most relevant dimension, $d=2$, selection must be stronger (by a factor of $\log(1/\mu)$ where $\mu$ is the neutral mutation rate) if we are to have a chance of detecting it. The case $d=1$ was handled in [@EFS2015].
The mathematical interest is that although the system of branching and coalescing lineages that forms the ancestral selection graph converges to a branching Brownian motion, this reflects a delicate balance of a branching rate that grows to infinity and the instant annihilation of almost all branches through coalescence caused by the strong local competition in the population.
address:
- |
Alison Etheridge\
Department of Statistics\
University of Oxford\
24-29 St Giles\
Oxford\
England\
- |
Nic Freeman\
School of Mathematics and Statistics\
University of Sheffield\
Hounsfield Road\
Sheffield\
England\
- |
Sarah Penington\
Department of Statistics\
University of Oxford\
24-29 St Giles\
Oxford\
England\
- |
Daniel Straulino\
Department of Statistics\
University of Oxford\
24-29 St Giles\
Oxford\
England\
author:
- Alison Etheridge
- Nic Freeman
- Sarah Penington
- Daniel Straulino
bibliography:
- 'confirmation.bib'
title: 'Branching Brownian motion and Selection in the Spatial $\Lambda$-Fleming-Viot Process'
---
Introduction {#intro}
============
Our aims in this work are two-fold. On the one hand, we address a question of interest in population genetics: when will the action of natural selection on a gene in a spatially structured population cause a detectable trace in the patterns of genetic variation observed in the contemporary population? On the other hand, we investigate some of the rich structure underlying mathematical models for spatially evolving populations and, in particular, the systems of interacting random walks that, as dual processes (corresponding to ancestral lineages of the model), describe the genetic relationships between individuals sampled from those populations.
Since the seminal work of [@fisher:1937], a large literature has developed that investigates the interaction of natural selection with the spatial structure of a population. Traditionally, the deterministic action of migration and selection is approximated by what we now call the Fisher-KPP equation and predictions from that equation are compared to data. However, many important questions depend on how selection and migration interact with a third force, the stochastic fluctuations known as random genetic drift, and this poses significant new mathematical challenges.
For the most part, random drift is modelled through Wright-Fisher noise, resulting in a stochastic PDE as a model for the evolution of gene frequencies $w$: $$\frac{\partial w}{\partial t}
=m\Delta w- s w (1-w)+\sqrt{\gamma w(1-w)}\dot{\mathcal W}$$ (for suitable constants $m$, $s$ and $\gamma$), where $\mathcal W$ is space-time white noise. This stochastic Fisher-KPP equation has been extensively studied, see, for example, [@mueller/mytnik/quastel:2008] and references therein. However, from a modelling perspective it has two immediate shortcomings. First, it only makes sense in one spatial dimension. This is generally overcome by artificially subdividing the population, and thus replacing the stochastic PDE by a system of stochastic ordinary differential equations, coupled through migration. The second problem is that, in deriving the equation, one allows the ‘neighbourhood size’ to tend to infinity. We shall give a precise definition of neighbourhood size in Section \[model\]. Loosely, it is inversely proportional to the probability that two individuals sampled from sufficiently close to one another had a common parent in the previous generation and small neighbourhood size corresponds to strong genetic drift. It is understanding the implications of dropping this (usually implicit) assumption of unbounded neighbourhood size that motivated the work presented here.
Our starting point will be the Spatial $\Lambda$-Fleming-Viot process with selection (SLFVS), which (along with its dual) was introduced and constructed in [@EVY2014]. The dynamics of both the SLFVS and its dual are driven by a Poisson Point Process of ‘events’ (which model reproduction or extinction and recolonisation in the population) and will be described in detail in Section \[model\]. The advantage of this model is that it circumvents the need to subdivide the population in higher dimensions. However, since our proof is based on an analysis of the branching and coalescing system of random walkers that describes the ancestry of a sample from the population, it would be straightforward to modify it to apply to, for example, an individual based model in which a fixed number of individuals reside at each point of a $d$-dimensional lattice.
In classical models of population genetics, in which there is no spatial structure, we generally think of population size as setting the timescale of evolution of frequencies of different genetic types. Evidently that makes no sense in our setting. However (even in the classical setting), as we explain in more detail in Section \[biology\], if natural selection is to leave a distinguishable trace in contemporary patterns of genetic variation, then a sufficiency of neutral mutations must fall on the genealogical trees relating individuals in a sample. Thus, in fact, it is the neutral mutation rate which sets the timescale and, since mutation rates are very low, this leads us to consider scaling limits.
In [@EVY2014], scaling limits of the SLFVS (forwards in time) were considered in which the neighbourhood size tends to infinity. In that case, the classical Fisher-KPP equation and, in one spatial dimension, its stochastic analogue are recovered. The dual process of branching and coalescing lineages converges to branching Brownian motion, with coalescence of lineages (in one dimension) at a rate determined by the local time that they spend together. In this article we consider scaling limits in the (very different) regime in which neighbourhood size remains finite. In this context the interaction between genetic drift and spatial structure becomes much more important and, in contrast to [@EVY2014], it is the dual process which proves to be the more analytically tractable object.
We shall focus on the most biologically relevant case of two spatial dimensions. The case of one dimension was discussed in [@EFS2015]. The main interest there is mathematical: the dual process of branching and coalescing ancestral lineages, suitably scaled, converges to the Brownian net. However, the scaling required to obtain a non-trivial limit reveals a strong effect of the spatial structure. Here we shall identify the corresponding scalings in dimensions $d\geq 2$. Whereas in [@EVY2014], the scaling of the selection coefficient is independent of spatial dimension and, indeed, mirrors that for unstructured populations, for bounded neighbourhood size this is no longer the case. In $d=1$ and $d=2$ the scaling of the selection coefficient required to obtain a non-trivial limit reflects strong local competition.
Our main result, Theorem \[result d>1\], is that under these (dimension-dependent) scalings, the scaled dual process converges to a branching Brownian motion. For $d\geq 3$ this is rather straightforward, but in two dimensions things are much more delicate. The mathematical interest of our result is that in $d=2$, under our scaling, the rate of branching of ancestral lineages explodes to infinity but, crucially, all except finitely many branches are instantaneously annulled through coalescence. That this finely balanced picture produces a non-degenerate limit results from a combination of the failure of two dimensional Brownian motion to hit points and the strong (local) interactions of the approximating random walks, which cause coalescence.
From a biological perspective, the main interest is that, in contrast to the infinite neighbourhood size limit, here we see a strong effect of spatial dimension in our results. When neighbourhood size is very big, the probability of fixation for an advantageous genetic type, i.e. the probability that the genetic type establishes and sweeps through the entire population, is not affected by spatial structure. When neighbourhood size is small, in (one and) two spatial dimensions, selection has to be much stronger to leave a detectable trace than in a population with no spatial structure. Indeed, local establishment is no longer a guarantee of eventual fixation.
The rest of the paper is laid out as follows. In Section \[model\] we describe the and the dual process of branching and coalescing random walks, state our main result and provide a heuristic argument that explains our choice of scalings. In Section \[biology\] we place our findings in the context of previous work on selective sweeps in spatially structured populations and in Section \[proof\] we prove our result.
[**Acknowledgements**]{}
Our results (with different proofs) form part of the DPhil thesis of the last author. We would like to thank the examiners, Christina Goldschmidt and Anton Wakolbinger, for their careful reading of the thesis and detailed feedback. We would also like to thank the two anonymous referees for their careful reading of the paper and valuable comments.
The model and main result {#model}
=========================
The model
---------
To motivate the definition of the SLFVS, it is convenient to recall (a very special case of) the model without selection, introduced in [@E2008; @BEV2010]. We shall call it the SLFV to emphasize that selection is not acting. We proceed informally, only carefully specifying the state space and conditions that are sufficient to guarantee existence of the process when we define the SLFVS itself in Definition \[slfvdefn\]. The interested reader can find much more general conditions under which the SLFV exists in [@EK2014].
We restrict ourselves to the case of just two genetic types, which we denote $a$ and $A$, and we suppose that the population is evolving in $\R^d$. It is convenient to index time by the whole real line. At each time $t$, the random function $\{w_t(x),\, x\in \R^d\}$ is defined, up to a Lebesgue null set of $\R^d$, by $$\label{defn of w}
w_t(x):= \hbox{ proportion of type }a\hbox{ at spatial position }x\hbox{ at time }t.$$ The dynamics are driven by a Poisson point process $\Pi$ on $\R\times \R^d\times \R_+\times (0,1]$. Each point $(t,x,r,u)\in\Pi$ specifies a reproduction event which will affect that part of the population at time $t$ which lies within the closed ball $\mc{B}_r(x)$ of radius $r$ centred on the point $x$. First the location $z$ of the parent of the event is chosen uniformly at random from $\mc{B}_r(x)$. All offspring inherit the type $\alpha$ of the parent which is determined by $w_{t-}(z)$; that is, with probability $w_{t-}(z)$ all offspring will be type $a$, otherwise they will be $A$. A portion $u$ of the population within the ball is then replaced by offspring so that $$w_t(y)=(1-u)w_{t-}(y)+u \1_{\{\alpha=a\}},\qquad\forall y\in \mc{B}_r(x).$$ The population outside the ball is unaffected by the event. We sometimes call $u$ the impact of the event.
Under this model, the time reversal of the same Poisson Point Process of events governs the ancestry of a sample from the population. Each ancestral lineage that lies in the region affected by an event has a probability $u$ of being among the offspring of the event, in which case, as we trace backwards in time, it jumps to the location of the parent, which is sampled uniformly from the region. In this way, ancestral lineages evolve according to (dependent) compound Poisson processes and lineages can coalesce when affected by the same event. All lineages affected by an event inherit the type of the parent of that event.
In [@EK2014], the SLFV and its dual are constructed simultaneously on the same probability space, through a lookdown construction, as the limit of an individual based model, and so the dual process just described really can be interpreted as tracing the ancestry of individuals in a sample from the population.
We are now in a position to define the neighbourhood size.
Write $\sigma^2$ for the variance of the first coordinate of the location of a single ancestral lineage after one unit of time and $\eta(x)$ for the instantaneous rate of coalescence of two lineages that are currently at a separation $x\in\R^d$. Then the [*neighbourhood size*]{}, ${\cal N}$ is given by $${\cal N}=\frac{2dC_d\sigma^2}{\int_{\R^d}\eta(x)dx},$$ where $C_d$ is the volume of the unit ball in $\R^d$.
Neighbourhood size is used in biology to quantify the local number of breeding individuals in a continuous population; see [@barton/etheridge/kelleher/veber:2013a] for a derivation of this formula. If the impact is the same for all events, then it is inversely proportional to the neighbourhood size [@barton/etheridge/kelleher/veber:2013a].
There are very many different ways in which to introduce selection into the SLFV. Our approach here is a simple adaptation of that adopted in classical models of population genetics. The parental type in the SLFV is a uniform pick from the types in the region affected by the event. We can introduce a small advantage to individuals of type $A$ by choosing the parent in a weighted way. Thus if, immediately before reproduction, the proportion of type $a$ individuals in the region affected by the event is $\overline{w}$, then the offspring will be type $a$ with probability $\overline{w}/(1+\v{s}(1-\overline{w}))$. We say that the relative fitnesses of types $a$ and $A$ are $1$ and $1+\v{s}$ respectively and refer to $\v{s}$ as the selection coefficient. We are interested only in small values of $\v{s}$ and so we expand $$\frac{\overline{w}}{1+\v{s}(1-\overline{w})}=\overline{w} \{1-\v{s}(1-\overline{w})\}
+{\mathcal O}(\v{s}^2)
=(1-\v{s})\overline{w}+\v{s}\overline{w}^2 +{\mathcal O}(\v{s}^2).$$ We shall regard $\v{s}^2$ as being negligible. We can then think of each event, independently, as being a ‘neutral’ event with probability $(1-\v{s})$ and a ‘selective’ event with probability $\v{s}$. Reproduction during neutral events is exactly as before, but during selective events, we sample two [*potential*]{} parents; only if both are type $a$ will the offspring be of type $a$.
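The expansion above is easy to confirm numerically; a minimal check (our sketch, with an arbitrary choice of $\overline{w}$):

```python
# Numerical check (our sketch, not from the paper) of the expansion
#   w / (1 + s(1-w)) = (1-s) w + s w^2 + O(s^2),
# whose leading error term is w s^2 (1-w)^2.
def exact(w, s):
    return w / (1.0 + s * (1.0 - w))

def first_order(w, s):
    return (1.0 - s) * w + s * w * w

w = 0.3
err = lambda s: abs(exact(w, s) - first_order(w, s))
e1, e2 = err(1e-2), err(1e-3)
print(e1 < (1e-2) ** 2)      # the error is dominated by s^2
print(90 < e1 / e2 < 110)    # shrinking s by 10 shrinks the error ~100x
```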
Let us now give a more precise definition of the SLFVS. We retain the notation of (\[defn of w\]). A construction of an appropriate state space for $x\mapsto w_t(x)$ can be found in [@veber/wakolbinger:2013]. Using the identification $$\int_{\R^d\times \{a,A\}} f(x,\kappa) M(dx,d\kappa) = \int_{\R^d} \big\{w(x)f(x,a)+ (1-w(x))f(x,A)\big\}\, dx,$$ this state space is in one-to-one correspondence with the space ${\cal M}_\lambda$ of measures on $\R^d\times\{a,A\}$ with ‘spatial marginal’ Lebesgue measure, which we endow with the topology of vague convergence. By a slight abuse of notation, we also denote the state space of the process $(w_t)_{t\in\R}$ by ${\cal M}_\lambda$.
\[slfvdefn\] Fix $\mc{R}\in(0,\infty)$. Let $\mu$ be a finite measure on $(0,\mc{R}]$ and, for each $r\in (0,\mc{R}]$, let $\nu_r$ be a probability measure on $(0,1]$. Further, let $\Pi$ be a Poisson point process on $\R\times \R^d\times (0,\mc{R}]\times (0,1]$ with intensity measure $$\label{slfvdrive}
dt\otimes dx\otimes \mu(dr)\nu_r(du).$$ The [*spatial $\Lambda$-Fleming-Viot process with selection*]{} (SLFVS) driven by $\Pi$ is the ${\cal M}_\lambda$-valued process $(w_t)_{t\in\R}$ with dynamics given as follows.
If $(t,x,r,u)\in \Pi$, a reproduction event occurs at time $t$ within the closed ball $\mc{B}_r(x)$ of radius $r$ centred on $x$. With probability $1-\v{s}$ the event is [*neutral*]{}, in which case:
1. Choose a parental location $z$ uniformly at random within $\mc{B}_r(x)$, and a parental type, $\alpha$, according to $w_{t-}(z)$, that is $\alpha=a$ with probability $w_{t-}(z)$ and $\alpha=A$ with probability $1-w_{t-}(z)$.
2. For every $y\in \mc{B}_r(x)$, set $w_t(y) = (1-u)w_{t-}(y) + u{\mathbf{1}}_{\{\alpha=a\}}$.
With the complementary probability $\v{s}$ the event is [*selective*]{}, in which case:
1. Choose two ‘potential’ parental locations $z,z'$ independently and uniformly at random within $\mc{B}_r(x)$, and at each of these sites ‘potential’ parental types $\alpha$, $\alpha'$, according to $w_{t-}(z), w_{t-}(z')$ respectively.
2. For every $y\in \mc{B}_r(x)$ set $w_t(y) = (1-u)w_{t-}(y) + u{\mathbf{1}}_{\{\alpha =\alpha'=a\}}$. Declare the parental location to be $z$ if $\alpha=\alpha'=a$ or $\alpha=\alpha'=A$ and to be $z$ (resp. $z'$) if $\alpha=A,\alpha'=a$ (resp. $\alpha=a, \alpha'=A$).
This is a very special case of the SLFVS introduced in [@EVY2014].
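The event mechanics of Definition \[slfvdefn\] can be sketched in code. The following is our toy illustration only: the grid of sample points, the parameter values, and the Bernoulli draws of parental types are our choices, not the paper's measure-valued construction.

```python
import random

# Sketch of one SLFVS reproduction event acting on a discretized state:
# w is stored on a finite grid of sample points in R^2.
random.seed(1)
u, sel = 0.8, 0.05                  # impact and selection coefficient

points = [(i * 0.1, j * 0.1) for i in range(20) for j in range(20)]
w = {p: 0.5 for p in points}        # proportion of type a at each point

def in_ball(p, centre, r):
    return (p[0]-centre[0])**2 + (p[1]-centre[1])**2 <= r*r

def reproduction_event(w, centre, r):
    ball = [p for p in points if in_ball(p, centre, r)]
    if not ball:
        return
    if random.random() < 1.0 - sel:          # neutral event
        z = random.choice(ball)              # parental location
        alpha_is_a = random.random() < w[z]  # parental type from w(z)
        offspring = 1.0 if alpha_is_a else 0.0
    else:                                    # selective event
        z1, z2 = random.choice(ball), random.choice(ball)
        both_a = (random.random() < w[z1]) and (random.random() < w[z2])
        offspring = 1.0 if both_a else 0.0   # type a only if both parents a
    for y in ball:                           # replace a fraction u in the ball
        w[y] = (1.0 - u) * w[y] + u * offspring

reproduction_event(w, centre=(1.0, 1.0), r=0.3)
inside = [w[p] for p in points if in_ball(p, (1.0, 1.0), 0.3)]
# after one event every point in the ball equals (1-u)*0.5 + u*{0 or 1}
print(all(abs(v - 0.1) < 1e-12 or abs(v - 0.9) < 1e-12 for v in inside))
```

The population outside the ball is untouched, exactly as in the definition.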
We are especially concerned with the dual process of the SLFVS. Whereas in the neutral case we can always identify the distribution of the location of the parent of each event, without any additional information on the distribution of types in the region, now, at a selective event, we are unable to identify which of the ‘potential parents’ is the true parent of the event without knowing their types. These can only be established by tracing further into the past. The resolution is to follow all [*potential*]{} ancestral lineages backwards in time. This results in a system of branching and coalescing walks.
As in the neutral case, the dynamics of the dual are driven by the same Poisson point process of events, $\Pi$, that drove the forwards in time process. The distribution of this Poisson point process is invariant under time reversal and so we shall abuse notation by reversing the direction of time when discussing the dual.
We suppose that at time $0$ (which we think of as ‘the present’), we sample $k$ individuals from locations $x_1,\ldots ,x_k$ and we write $\xi_s^1,\ldots ,\xi_s^{N_s}$ for the locations of the $N_s$ [*potential ancestors*]{} that make up our dual at time $s$ before the present.
\[dualprocessdefn\] The branching and coalescing dual process $(\Xi_t)_{t\geq 0}$ driven by $\Pi$ is the $\bigcup_{m\geq 1}(\R^d)^m$-valued Markov process with dynamics defined as follows: at each event $(t,x,r,u)\in \Pi$, with probability $1-\v{s}$, the event is neutral:
1. For each $\xi_{t-}^i\in \mc{B}_r(x)$, independently mark the corresponding lineage with probability $u$;
2. if at least one lineage is marked, all marked lineages disappear and are replaced by a single lineage, whose location at time $t$ is drawn uniformly at random from within $\mc{B}_r(x)$.
With the complementary probability $\v{s}$, the event is selective:
1. For each $\xi_{t-}^i\in \mc{B}_r(x)$, independently mark the corresponding lineage with probability $u$;
2. if at least one lineage is marked, all marked lineages disappear and are replaced by [*two*]{} lineages, whose locations at time $t$ are drawn independently and uniformly from within $\mc{B}_r(x)$.
In both cases, if no lineages are marked, then nothing happens.
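The marking mechanism of Definition \[dualprocessdefn\] can be sketched as follows (our toy code; the rejection sampler and parameter values are our choices):

```python
import random

# Sketch of one event acting on the branching/coalescing dual: lineages
# in the ball are marked independently with probability u; marked
# lineages are replaced by 1 (neutral) or 2 (selective) fresh uniform
# locations in the ball.
random.seed(2)

def uniform_in_ball(centre, r):
    while True:                      # rejection sampling from the ball
        x = centre[0] + random.uniform(-r, r)
        y = centre[1] + random.uniform(-r, r)
        if (x - centre[0])**2 + (y - centre[1])**2 <= r*r:
            return (x, y)

def dual_event(lineages, centre, r, u, selective):
    inside = lambda p: (p[0]-centre[0])**2 + (p[1]-centre[1])**2 <= r*r
    kept, n_marked = [], 0
    for p in lineages:
        if inside(p) and random.random() < u:
            n_marked += 1            # marked lineages disappear
        else:
            kept.append(p)
    if n_marked > 0:                 # replaced by 1 or 2 new lineages
        births = 2 if selective else 1
        kept += [uniform_in_ball(centre, r) for _ in range(births)]
    return kept

lineages = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]   # third one far away
# with u = 1 every lineage in the ball is marked, so counts are exact:
after_neutral = dual_event(lineages, (0.0, 0.0), 1.0, u=1.0, selective=False)
after_selective = dual_event(lineages, (0.0, 0.0), 1.0, u=1.0, selective=True)
print(len(after_neutral), len(after_selective))   # 2 3
```

With $u = 1$ the two nearby lineages coalesce at a neutral event (three lineages become two) and branch at a selective event (three become three, one removed and two born), matching the counts printed above.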
Since we only consider finitely many initial individuals in the sample, and the jump rate of the dual is bounded by a linear function of the number of potential ancestors, this description gives rise to a well-defined process.
This dual process is the analogue for the SLFVS of the Ancestral Selection Graph (ASG), introduced in the companion papers [@krone/neuhauser:1997; @neuhauser/krone:1997], which describes all the potential ancestors of a sample from a population evolving according to the Wright-Fisher diffusion with selection. Perhaps the simplest way of expressing the duality between the SLFVS and the branching and coalescing dual process is to observe that all the individuals in our sample are of type $a$ if and only if all potential ancestral lineages are of type $a$ at any time $t$ in the past. This is analogous to the [*moment duality*]{} between the ASG and the Wright-Fisher diffusion with selection. However, to state this formally for the SLFVS, we would need to be able to identify $\E[\prod_{i=1}^n w_t(x_i)]$ for any choice of points $x_1,\ldots ,x_n\in\R^d$. The difficulty is that, just as in the neutral case, $w_t(x)$ is only defined at Lebesgue almost every point $x$ and so we have to be satisfied with a ‘weak’ moment duality.
\[prop: dual\]\[[@EVY2014]\] The spatial $\Lambda$-Fleming-Viot process with selection is dual to the process $(\Xi_t)_{t\geq 0}$ in the sense that for every $k\in \N$ and $\psi\in C_c((\R^d)^k)$, we have $$\begin{aligned}
\E_{w_0}\bigg[\int_{(\R^d)^k} & \psi(x_1,\ldots,x_k)\bigg\{\prod_{j=1}^k w_t(x_j)\bigg\}\, dx_1\ldots dx_k\bigg] \nonumber\\
& = \int_{(\R^d)^k} \psi(x_1,\ldots,x_k)\E_{\{x_1,\ldots,x_k\}}\bigg[\prod_{j=1}^{N_t} w_0\big( \xi_t^j\big)\bigg]\, dx_1 \ldots dx_k. \label{dual formula}\end{aligned}$$
The main result {#results}
---------------
Our main result concerns a diffusive rescaling of the dual process of Definition \[dualprocessdefn\] and so from now on it will be convenient if
*forwards in time refers to forwards for the dual process.*
We shall take the impact parameter, $u$, to be a fixed number in $(0,1]$ (i.e. $\nu_r=\delta_u$ for all $r$). In fact, the same arguments work when $u$ is allowed to be random, as long as $\int_{\mc R'}^{\mc R}\int_0^1 u \nu_r (du)\mu(dr)>0$ for some $0<\mc R'< \mc R$, but this would make our proofs notationally cumbersome.
Let us describe the scaling more precisely. Suppose that $\mu$ is a finite measure on $(0,\mc{R}]$. We shall assume for convenience that $\mc R$ is defined in such a way that for any $\delta>0$, $\mu((\mc R -\delta , \mc R])>0$. For each $n\in\N$, define the measure $\mu^n$ by $\mu^n(B)=\mu(n^{1/2}B),$ for all Borel subsets $B$ of $\R_{+}$. It will be convenient to write $\mc{R}_n=\mc{R}/\sqrt{n}$. At the $n$th stage of the rescaling, our rescaled dual is driven by the Poisson point process $\Pi^n$ on $\R\times \R^d \times (0,\mc{R}_n]$ with intensity $$\label{rescalingeq}
n\,dt \otimes n^{d/2}\,dx \otimes \mu^n(dr).$$ This corresponds to rescaling space and time from $(t,x)$ to $(n^{-1}t,n^{-1/2}x)$. Importantly, we do not scale the impact $u$. Each event of $\Pi^n$, independently, is neutral with probability $1-\v{s}_n$ and selective with probability $\v{s}_n$, where $$\label{sdef}
\v{s}_n=
\begin{cases}
\frac{\log n}{n} & d=2, \\
\frac{1}{n} & d\geq 3.
\end{cases}$$ In [@EFS2015] it was shown that in $d=1$, one should take $\v{s}_n=1/\sqrt{n}$.
Although not obvious for the SLFVS itself, when considering the dual process it is not hard to understand why the scalings (\[rescalingeq\]) and (\[sdef\]) should lead to a non-trivial limit.
If we ignore the selective events, then a single ancestral lineage evolves as a pure jump process which is homogeneous in both space and time. Write $V_r$ for the volume of $\mc{B}_r(0)$. The rate at which the lineage jumps from $y$ to $y+z$ can be written $$\label{jump of size z}
m_n(dz)=nu\int_0^{\mc{R}_n}n^{d/2}\frac{V_r(0,z)}{V_r}\mu^n(dr)\,dz,$$ where $V_r(0,z)$ is the volume of ${\mc B}_r(0)\cap {\mc B}_r(z)$. To see this, by spatial homogeneity, we may take the lineage to be at the origin in $\R^d$ before the jump, and then, in order for it to jump to $z$, it must be affected by an event that covers both $0$ and $z$. If the event has radius $r$, then the volume of possible centres, $x$, of such events is $V_r(0,z)$ and so the intensity with which such a centre is selected is $n\,n^{d/2}V_r(0,z)\mu^n(dr)$. The parental location is chosen uniformly from the ball $\mc{B}_r(x)$, so the probability that $z$ is chosen as the parental location is $dz/V_r$ and the probability that our lineage is actually affected by the event is $u$. Combining these yields .
The total rate of jumps is $$\begin{aligned}
\int_{\R^d}m_n(dz)&=&\int_0^{\mc{R}_n}nu\,n^{d/2}\frac{1}{V_r}
\int_{\R^d}\int_{\R^d}\1_{|x|<r}\1_{|x-z|<r}dx\,dz\,\mu^n(dr)
\nonumber\\
&=&\int_0^{\mc{R}_n}nu\,n^{d/2}V_r\mu^n(dr)\nonumber \\
&=&n u V_1\int_0^{\mc{R}}r^d\mu(dr)=\Theta(n),\label{jump rate}\end{aligned}$$ and the size of each jump is $\Theta(n^{-1/2})$ and so in the limit a single lineage will evolve according to a (time-changed) Brownian motion.
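The middle step of the computation above uses the identity $\int_{\R^d} V_r(0,z)\,dz = V_r^2$. A quick Monte Carlo confirmation in $d=2$ with $r=1$ (our sketch):

```python
import math, random

# Monte Carlo check (our sketch) of the identity used in the total-rate
# computation: in d = 2, with r = 1,
#   int_{R^2} V_r(0,z) dz = int int 1_{|x|<r} 1_{|x-z|<r} dx dz = V_r^2.
random.seed(3)
r, N = 1.0, 200_000
hits = 0
for _ in range(N):
    x = (random.uniform(-r, r), random.uniform(-r, r))          # area 4
    z = (random.uniform(-2*r, 2*r), random.uniform(-2*r, 2*r))  # area 16
    # the integrand is supported in |x| < r, |z| < 2r, inside these boxes
    if x[0]**2 + x[1]**2 < r*r and (x[0]-z[0])**2 + (x[1]-z[1])**2 < r*r:
        hits += 1
estimate = 64.0 * hits / N          # 4 * 16 = 64 is the sampling volume
print(abs(estimate - math.pi**2) < 0.5)   # true value is V_1^2 = pi^2
```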
Now, consider what happens at a selective event. The two new lineages are created at a separation of order $1/\sqrt{n}$. If we are to see both lineages in the limit then they must move apart to a separation of order $1$ (before, possibly, coalescing back together). Ignoring possible interactions with other lineages, the probability that a pair of lineages makes such an excursion is of order $1$ in $d\geq 3$, order $1/\log n$ in $d=2$ and order $1/\sqrt{n}$ in $d=1$. Therefore, in order to have a positive probability of seeing branching in the scaling limit, in $d\geq 3$ we only need that there are a positive number of selective events in unit (rescaled) time, and, for this, it is enough that $\v{s}_n$ is order $1/n$. However, for $d=2$, we need order $\log n$ branches before we expect to find one that is visible to us, hence the choice $\v{s}_n=\log n/n$.
Our scaling mirrors that described in [@durrett/zahle:2007] for a model of a [*hybrid zone*]{} (by which we mean a region in which we see both genetic types) which develops around a boundary between two regions, in one of which type $a$ individuals are selectively favoured and in the other of which type $A$ individuals are selectively favoured. In contrast to our continuum setting, their model is a spin system in which exactly one individual lives at each point of $\Z^d$.
Before formally stating our main result, we need some notation. We shall denote by $\text{BBM}(p, V)$ binary branching Brownian motion started from the point $p\in\R^d$, with branching rate $V$ and diffusion constant given by $$\label{first sigma squared}
\sigma ^2 =
\tfrac{1}{d}\int_{\R^d}|z|^2 m^n(dz)
= \tfrac{1}{d}\int_{\R^d}\int_0^\infty |z|^2 u \frac{V_r (0,z)}{V_r}\mu(dr)\,dz
$$ where $m^n(dz)$ is defined in . In other words, during their lifetime, which is exponentially distributed with parameter $V$, individuals follow $d$-dimensional Brownian motion with diffusion constant $\sigma^2$, at the end of which they die, leaving behind at the location where they died exactly two offspring. We view $\text{BBM}(p,V)$ as a set of (continuous) paths, each starting at $p$, with precisely one path following each possible distinct sequence of branches.
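To make the limit object concrete, $\text{BBM}(p,V)$ is straightforward to simulate. The sketch below is purely illustrative and is not part of any construction in the paper: the Euler time step `dt` and the parameter values are assumptions of the illustration only.

```python
import math
import random

def simulate_bbm(p, V, sigma2, T, rng, dt=0.01):
    """Illustrative sketch of BBM(p, V): each particle follows a
    d-dimensional Brownian motion with diffusion constant sigma2 for an
    Exp(V) lifetime, then dies, leaving two offspring at its death
    location.  Returns the list of particle positions alive at time T."""
    sd = math.sqrt(sigma2)
    alive = [(list(p), 0.0)]   # (position, time at which particle starts)
    final = []
    while alive:
        pos, t = alive.pop()
        life = rng.expovariate(V)        # exponential lifetime, rate V
        t_end = min(t + life, T)
        s = t
        while s < t_end:                 # diffuse until death or time T
            h = min(dt, t_end - s)
            pos = [x + sd * math.sqrt(h) * rng.gauss(0.0, 1.0) for x in pos]
            s += h
        if t + life >= T:
            final.append(pos)            # particle survives to time T
        else:
            alive.append((list(pos), t_end))   # two offspring at the
            alive.append((list(pos), t_end))   # death location
    return final

rng = random.Random(1)
positions = simulate_bbm(p=(0.0, 0.0), V=1.0, sigma2=1.0, T=2.0, rng=rng)
print(len(positions))   # population size at time T; its mean is exp(V*T)
```

Since the branching mechanism does not depend on position, the population size at time $T$ is a Yule process with mean $e^{VT}$, which gives a simple sanity check on the simulation.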
Similarly, we write $\mc{P}^{(n)}(p)$ for the dual process of Definition \[dualprocessdefn\], rescaled as in and , started from a single individual at the point $p\in\R^d$ and viewed as a collection of paths. Each path traces out a ‘potential ancestral lineage’, defined exactly as the ancestral lineages in the neutral case except that at each selective event, if a lineage is affected then it jumps to the location of (either) one of the ‘potential parents’. Precisely one potential ancestral lineage follows each possible route through the branching and coalescing dual process.
We define the events $$\begin{aligned}
\mc{D}_n(\epsilon, T)=&\l\{\forall l\in \mc{P}^{(n)}(p),\;\exists l'\in \text{BBM}(p,V):\;\sup\limits_{t\in[0,T]}|l(t)-l'(t)|\leq\epsilon\r\},\notag\\
\mc{D}'_n(\epsilon, T)=&\l\{\forall l\in \text{BBM}(p,V),\;\exists l'\in \mc{P}^{(n)}(p):\;\sup\limits_{t\in[0,T]}|l(t)-l'(t)|\leq\epsilon\r\}.\label{Devents}\end{aligned}$$
\[result d>1\] Let $d\geq 2$. There exists $V\in (0,\infty)$ such that the following holds. Let $T<\infty$, $p\in \R^d$; then given $\epsilon >0$, there exists $N\in\N$ such that, for all $n\geq N$ there is a coupling between $\text{BBM}(p,V)$ and $\mc{P}^{(n)}(p)$ with $\P\l[\mc{D}_n(\epsilon,T)\cap\mc{D}'_n(\epsilon,T)\r]\geq 1-\epsilon.$
We will give a proof of Theorem \[result d>1\] only for $d=2$. The case $d\geq 3$ follows from a simplified version of the $2$-dimensional proof presented here.
Sketch of proof {#sketch of proof}
---------------
Consider a pair of potential ancestral lineages, $\xi^{n,1}$ and $\xi^{n,2}$, created in some selective event which, without loss of generality, we suppose happens at time zero. Suppose that we forget about further branches and when $\xi^{n,i}$ is affected by a neutral event it jumps to the location of the parent; when it is affected by a selective event it jumps to the location of one of the potential parents (picked at random). Thus $\xi^{n,1}$ and $\xi^{n,2}$ are compound Poisson processes which interact when (and only when) $|\xi^{n,1}-\xi^{n,2}|\leq 2\mc{R}_n$.
We choose a large constant $c>0$. We begin by showing that $\xi^{n,1}$ and $\xi^{n,2}$ have probability $\Theta(1/\log n)$ of reaching a distance $1/(\log n)^c$ from each other without coalescing (we then say they have diverged). We also show that the probability that $\xi^{n,1}$ and $\xi^{n,2}$ have not diverged or coalesced by time $1/(\log n)^c$ is $o(1/(\log n))$, so coalescence will be instantaneous in the limit. Moreover, once they are $1/(\log n)^c$ apart, they won’t get within distance $2\mc R_n$ of each other again on a timescale of $\mc{O}(1)$. Hence from the point of view of our scaling they stay apart and evolve essentially independently of each other.
We exploit this observation by coupling the whole rescaled dual process with a process in which diverged lineages move independently. We use an object that we call a caterpillar which is defined in the same way as the rescaled dual process, except that selective events only result in branching if at least time $1/(\log n)^c$ has elapsed since the previous branching. We stop the caterpillar at the first time a pair of lineages has either diverged or failed to coalesce in time $1/(\log n)^c$ after branching. We then start two new independent caterpillars at the positions of the pair of lineages, and continue in the same way, giving a ‘branching caterpillar’.
The branching caterpillar can be coupled with the rescaled dual process by piecing together the independent Poisson point processes of events which drive each caterpillar into a single driving Poisson point process. We show that under the coupling, the branching caterpillar and the rescaled dual process coincide with high probability, using the result that lineages at a separation of at least $1/(\log n)^c$ are unlikely to interact again. Each individual caterpillar converges in an appropriate sense to a segment of a Brownian path run for an exponentially distributed lifetime, so we can couple the branching caterpillar with the limiting branching Brownian motion.
This programme is carried out in Section \[proof\].
Biological background {#biology}
=====================
In this section, we shall set our work in the context of the substantial biological literature. The reader concerned only with the mathematics can safely skip to Section \[proof\].
The interplay between natural selection and the spatial structure of a population is a question of longstanding interest in population genetics. [@fisher:1937] studied the advance of selectively advantageous genetic types through a one-dimensional population using the deterministic differential equation now known as the Fisher-KPP equation. This equation also makes sense in higher dimensions, but ignores [*genetic drift*]{} (the randomness due to reproduction in a finite population). Work incorporating genetic drift has been restricted to either one spatial dimension (see [@barton/etheridge/kelleher/veber:2013b] and references therein) or, more commonly, to subdivided populations. [@maruyama:1970] studied the probability of [*fixation*]{} of an advantageous genetic type (the probability that eventually the whole population carries this genetic type) in a subdivided population. The assumptions made in that article are rather strong: if we think of the population as living on islands (or in colonies), then each island has constant total population size and its contribution to the next generation is in proportion to that size. Under these assumptions, the probability of fixation is not affected by the population structure: it is the same as for a gene of the same selective advantage in an unstructured population of the same total size. Much subsequent work retained Maruyama’s assumptions, and so it is often assumed that spatial structure has no influence on the accumulation of favourable genes. However, [@barton:1993] showed that the extra stochasticity produced by the introduction of local extinctions and colonisations could significantly change the fixation probability. This work was extended in, for example, [@cherry:2003] and [@whitlock:2003].
A fundamental problem in genetics is to identify which parts of the genome have been the target of natural selection. The random nature of reproduction in finite populations means that some genetic types (alleles) will be carried by everyone in the population, even though they convey no particular selective advantage. However, if a favourable mutation arises in a population and ‘sweeps’ to fixation (i.e. increases in frequency until everybody carries it), we expect the genealogical trees (that is the trees of ancestral lineages) relating individuals in a sample from the population to differ from those that we observe in the absence of selection. In particular, they will be more ‘star-shaped’. Of course we cannot observe the genealogical trees directly, and so, instead, geneticists exploit the fact that genes are arranged on chromosomes: the ancestry at another position on the same chromosome will be correlated with that at the part of the genome that is the target of selection. In order to detect selection one therefore examines the patterns of variation at other points on the same chromosome, so-called linked loci.
In order for this approach to work, we require sufficient variability at the linked loci that we see a signal of the distortion in the genealogical tree. This means that we must consider the genealogy of a sample from the population on the timescale set by the neutral mutation rate. If selection is too strong, the genealogy will be very short and we see no mutations and so we can recover no information about the genealogical trees; if selection is too weak, we won’t be able to distinguish the patterns from those seen under neutral evolution.
Since neutral mutation rates are rather small, this means that we are interested in long timescales. Without selection, ancestral lineages in our model follow symmetric random walks with bounded variance jumps and so we expect a diffusive scaling to capture patterns of neutral variation. Since we are looking for deviations from those patterns due to the action of selection, it makes sense to consider a diffusive rescaling in the selective case too. Thus, if the neutral mutation rate is $\mu$, then we look at the rescaled dual process with $n=1/\mu$. If the branches produced by selection persist long enough to be visible at this scale, then there is positive probability that the pattern of (neutral) variation we see in a sample from the population will look different from the pattern we’d expect without selection.
Our results in this paper are relevant to populations evolving in spatial continua. The question they address is ‘When can we hope to detect a signal of natural selection in data?’. Whereas in the classical models of subdivided populations it is typically assumed that the population in each ‘island’ is large, so that neighbourhood size is big, by fixing the ‘impact’ parameter $u$ in our model, we are assuming that neighbourhood size is small. As a result, reproduction events are somewhat akin to local extinction and recolonisation events, in which a significant proportion of the local population is replaced in a single event. Our main result shows that our ability to detect selection is then critically dependent on spatial dimension. For populations living in at least three spatial dimensions (of which there are very few), spatial structure has a rather weak effect. However, in two spatial dimensions, selection must be stronger and in one spatial dimension (as appropriate for example for populations living in intertidal zones) much stronger, before we can expect to be able to detect it. The explanation is that in low dimensions, it is harder for individuals carrying the favoured gene to escape the competition posed by close relatives who carry the same gene. In our mathematical work, this is reflected in the vast majority of branches in our dual process being cancelled by a coalescence event on a timescale which is negligible compared to the timescale set by the neutral mutation rate so that no evidence of these branches having occurred will be seen in the pattern of neutral mutation.
Proof of Theorem \[result d>1\] {#proof}
==================================
Our proof is broken into two steps. First, in Subsection \[excursionsec\], we consider how the pair of potential ancestral lineages created during a selective event interact with each other. In particular, we find asymptotics for the probability that they diverge in a short time. This will allow us to identify the branching rate in the limiting Brownian motion. Then, in Subsection \[convtobbmsec\], we define the caterpillar and show how to couple the rescaled dual process to a system of branching caterpillars. With this construction in hand, Theorem \[result d>1\] follows easily.
Pairs of paths {#excursionsec}
--------------
In this subsection we are interested in the behaviour of a pair of potential ancestral lineages in the rescaled dual. In order that they be uniquely defined, if either is hit by a selective event then we (arbitrarily) declare that it jumps to the location of the first potential parent sampled in that event. In particular, if they are both affected by the same event, then they will necessarily coalesce. We write $\xi^{n,1}$ and $\xi^{n,2}$ for the resulting potential ancestral lineages and $$\eta^n=\xi^{n,1}-\xi^{n,2}$$ for their separation.
Throughout this subsection, we use the notation $\P_{[r,r']}$ to mean that $|\eta^n_0|\in [r,r']$ and we adopt the convention that estimates of $\P_{[r,r']}[B]$ hold uniformly for all initial laws with mass concentrated on $[r,r']$. We extend this notation to open intervals in the obvious manner. We will also write $\P_r=\P_{[r,r]}$.
We are concerned with the behaviour of two potential ancestral lineages created during a selective event which, without loss of generality, we suppose to happen at time $0$. We shall then refer to $\eta^n$ as an excursion. In this case $|\eta^n_0|\leq 2\mc{R}_n$ and we wish to establish whether or not $|\eta_t^n|$ ever exceeds $$\label{gammadef}
\gamma_n=\frac{1}{(\log n)^{c}},$$ where, in this section, we suppose that $c\geq 3$.
We will, eventually, set $c=4$, although any larger constant $c$ would give the same result; for now we keep the dependence on $c$ visible in our estimates.
For reasons that will soon become apparent, it is convenient to assume that $n$ is large enough that $7\mc{R}_n<\gamma_n$.
The picture of an excursion $\eta^n$ that we would like to build up is, loosely speaking, as follows.
1. With probability $\kappa_n=\Theta(\frac{1}{\log n})$, $|\eta^n|$ reaches displacement $\gamma_n$ within time $1/(\log n)^c$ and then $\xi^{n,1}$ and $\xi^{n,2}$ will not interact again before a fixed time $T>0$. Consequently the displacement between them becomes macroscopic and we see two distinct paths in the limit. Moreover, $\kappa_n\log n \to \kappa\in(0,\infty)$ as $n\to\infty$.
2. With probability $1-\Theta(\frac{1}{\log n})$, $|\eta^n|$ does not reach displacement $\gamma_n$, and $\xi^{n,1}$ and $\xi^{n,2}$ coalesce within time $1/(\log n)^c$. In this case the difference between them is microscopic and we see only one path in the limit.
3. All other outcomes have probability $\mc O\big(\frac{1}{(\log n)^{c-3/2}}\big)$, which means that we won’t see them in the limit.
Much of the work in making this rigorous results from the fact that $\xi^{n,1}$, $\xi^{n,2}$ only evolve independently when their separation is greater than $2\mc{R}_n$. Our strategy is similar to that in the proof of Lemma 4.2 in [@etheridge/veber:2012], but here we require a stronger result: rather than an estimate of the form $\kappa_n\geq C/\log n$ we need convergence of $ \kappa_n\log n$.
### Inner and outer excursions {#inoutexc}
We shall characterise the behaviour of $\eta^n$ using several stopping times. Set $\tau^{out}_{0}=0$ and define inductively, for $i\geq 0$, $$\begin{aligned}
\tau^{in}_i&=\inf\{s>\tau^{out}_i\-|\eta^n_s|\geq 5\mc{R}_n\},\label{intimes}\\
\tau^{out}_{i+1}&=\inf\{s>\tau^{in}_{i}\-|\eta^n_s|\leq 4\mc{R}_n\}.\notag\end{aligned}$$ We refer to the interval $[\tau^{out}_{i},\tau^{in}_i)$ (and also to the path of $\eta^n$ during it) as the $i^{th}$ inner excursion and similarly to $[\tau^{in}_{i-1},\tau^{out}_i)$ (and corresponding path) as the $i^{th}$ outer excursion.
Since a jump of $\eta^n$ has displacement at most $2\mc{R}_n$, although the initial ($0^{th}$) inner excursion starts in $(0,2\mc{R}_n]$, for $i\geq 1$ we have $|\eta^n_{\tau^{in}_i}|\in [5\mc{R}_n,7\mc{R}_n]$ and $|\eta^n_{\tau^{out}_i}|\in[2\mc{R}_n,4\mc{R}_n]$.
\[ioexctypedef\] We define the stopping times $$\begin{aligned}
\tau^{coal}&=\inf\{s>0\-|\eta^n_s|=0\},\\
\tau^{div}&=\inf\{s>0\-|\eta^n_s|\geq \gamma_n\},\\
\tau^{over}&=\frac{1}{(\log n)^c}.\end{aligned}$$ We shall say that the $i$th inner excursion coalesces if $\tau^{coal}\in [\tau^{out}_{i},\tau^{in}_i)$. Similarly, the $i$th outer excursion diverges if $\tau^{div}\in [\tau^{in}_{i-1},\tau^{out}_i)$.
We define $\tau^{type}=\min(\tau^{coal},\tau^{div},\tau^{over})$ and say that $\eta^n$
1. coalesces if $\tau^{type}=\tau^{coal}$,
2. diverges if $\tau^{type}=\tau^{div}$,
3. overshoots if $\tau^{type}=\tau^{over}$.
Since almost surely $\eta^n$ only jumps a finite number of times before time $(\log n)^{-c}$, almost surely $\tau^{type}$ occurs during either an inner or an outer excursion, whose index we denote by $i^*$.
We use $\zeta^n$ to denote the distribution of the distance between the two potential parents sampled during a selective event.
\[istar\] There exists $\alpha\in(0,1)$ such that, uniformly in $n$, $\P_{\zeta^n}\l[i^*> m\r]\leq \alpha^m$.
\[Pover\] As $n\to \infty$, $\P_{\zeta^n}\l[\eta^n\text{ overshoots}\r]=\mc{O}\l(\frac{1}{(\log n)^{c- 3/2}}\r)$.
\[Pdiv\] As $n\to \infty$, $\P_{\zeta^n}\l[\eta^n\text{ diverges}\r]=\Theta\l(\frac{1}{\log n}\r)$.
\[Pcoal\] As $n\to \infty$, $\P_{\zeta^n}\l[\eta^n\text{ coalesces}\r]=1-\Theta\l(\frac{1}{\log n}\r)$.
Thus, overshoots are relatively unlikely, and typically $\eta^n$ consists of a finite number of inner/outer excursions until either (1) it coalesces, with probability $1-\Theta(\frac{1}{\log n})$, or (2) the two lineages separate to distance $\gamma_n$, with probability $\Theta(\frac{1}{\log n})$.
The remainder of Section \[inoutexc\] is devoted to the proofs of Lemmas \[istar\]-\[Pdiv\]. Lemma \[Pcoal\] then follows immediately, since $c\geq 3$.
We will need two more stopping times: $$\begin{aligned}
\label{taurr}
\tau_r&=\inf\{s>0\-|\eta^n_s|\leq r\}, \notag \\
\tau^r&=\inf\{s>0\-|\eta^n_s|\geq r\}.\end{aligned}$$ Note that $\tau_0=\tau^{coal}$.
Note that the random variables $\tau^{type}$, $\tau^r$ and so on depend implicitly on $n$; throughout this section these random variables refer to the stopping times for the process $\eta^n$.
(Of Lemma \[istar\].) First consider a single inner excursion of $\eta^n$. It is easily seen that there exists some $\alpha'>0$ such that, for all $n$:
- $(\dagger)$ For any $x\in(0,5\mc{R}_n)$, if $|\eta^n_t|=x$ then the probability that $\eta^n$ will hit $0$ but not exit $\mc{B}_{5\mc{R}_n}(0)$ within its next three jumps is at least $\alpha'$.
In particular, the probability that the first three jumps of an inner excursion result in a coalescence is bounded away from $0$ uniformly for any $|\eta^n_{\tau^{out}_i}|\in [2\mc R_n, 4 \mc R_n]$. If $i^*>m$ then at least $m$ inner excursions must occur without a coalescence. The strong Markov property applied at the time $\tau^{out}_i$ means that, conditionally given $\eta^n_{\tau^{out}_i}$, the $i^{th}$ inner excursion is independent of $(\eta^n_t)_{t<\tau^{out}_i}$. Repeated application of this fact, coupled with $(\dagger)$, shows that the probability of seeing at least $m$ inner excursions without a single coalescence is at most $(1-\alpha')^{m}$. This completes the proof.
We will shortly require a tail estimate on the supremum of the modulus of two-dimensional Brownian motion $W$, which we record first for clarity. We write $W_t=(W^1_t,W^2_t)$ and note $$\begin{aligned}
\P\l[\sup\limits_{s\in[0,t]}|W_s-W_0|\geq x\r]
&\leq 2\P\l[\sup\limits_{s\in[0,t]}|W^1_s-W^1_0|\geq x/2\r]\notag\\
&\leq 4\P\l[\sup\limits_{s\in[0,t]}(W^1_s-W^1_0)\geq x/2\r]\notag\\
&\leq 4e^{-x^2/8t}.\label{eq:2d_BM_sup}\end{aligned}$$ In the first line of the above we use the triangle inequality and the fact that $W^1$ and $W^2$ have the same distribution. To deduce the second line, we note that $W^{1}$ and $-W^1$ have the same distribution. For the final line, we use the (standard) tail estimate $\P[\sup_{s\in[0,t]}(B_s-B_0)\geq x]\leq e^{-x^2/2t}$ for a one-dimensional Brownian motion $B$, which can be deduced via Doob’s martingale inequality applied to the submartingale $(\exp (xB_s /t))_{s\geq 0}$.
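The bound is easy to probe numerically. The following Monte Carlo sketch is only an illustration (arbitrary values of $x$ and $t$, crude Euler discretisation) and plays no role in the argument; since discrete monitoring can only miss excursions, the empirical frequency should sit below the analytic bound $4e^{-x^2/8t}$.

```python
import math
import random

def sup_modulus_tail(x, t, n_paths=2000, n_steps=400, seed=0):
    """Monte Carlo estimate of P[sup_{s<=t} |W_s - W_0| >= x] for a
    two-dimensional Brownian motion W, via Euler discretisation."""
    rng = random.Random(seed)
    sd = math.sqrt(t / n_steps)
    hits = 0
    for _ in range(n_paths):
        w1 = w2 = 0.0
        for _ in range(n_steps):
            w1 += sd * rng.gauss(0.0, 1.0)
            w2 += sd * rng.gauss(0.0, 1.0)
            if w1 * w1 + w2 * w2 >= x * x:
                hits += 1
                break
    return hits / n_paths

x, t = 4.0, 1.0
estimate = sup_modulus_tail(x, t)
bound = 4 * math.exp(-x * x / (8 * t))
print(estimate, bound)   # empirical frequency vs the analytic bound
```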
During an outer excursion, $\eta^n$ is the difference between two independent walkers and so we can use Skorohod embedding to approximate its behaviour using elementary calculations for two-dimensional Brownian motion. The next lemma exploits this to bound the duration of the outer excursion and the probability that it diverges.
\[exclengths\] As $n\to \infty$, $$\P_{[5\mc{R}_n,7\mc{R}_n]}\l[\tau^{\gamma_n}\wedge\tau_{4\mc{R}_n}>(\log n)^{-c-1}\r]=\mc{O}\l(\frac{1}{(\log n)^{c-1}}\r),\label{Eouterbound}$$ and $$\P_{[5\mc{R}_n,7\mc{R}_n]}\l[\tau^{\gamma_n}<\tau_{4\mc{R}_n}\r]=\Theta\l(\frac{1}{\log n}\r).\label{excsucprob}$$
For $i=1,2$ let $\hat{\xi}^{n,i}$ be a pair of independent processes such that $\hat{\xi}^{n,1}$ has the same distribution as $\xi^{n,1}$ and $\hat{\xi}^{n,2}$ has the same distribution as $\xi^{n,2}$. The process $
\hat{\xi}^{n,1}-\hat{\xi}^{n,2}
$ is a compound Poisson process with a rotationally symmetric jump distribution and a maximum displacement of $2\mc{R}_n$ on each jump. Moreover (essentially by Skorohod’s Embedding Theorem, see e.g. [@billingsley:1995]), we can construct a process $\hat{\eta}^n$ with the same distribution as $\hat{\xi}^{n,1}-\hat{\xi}^{n,2}$ as follows.
Let $(r_m,J_m)_{m\geq 1}$ denote a sequence distributed as the jump magnitudes and jump times of $\hat{\xi}^{n,1}-\hat{\xi}^{n,2}$. Let $W$ be a two-dimensional Brownian motion with $W_0=\hat{\xi}^{n,1}_0-\hat{\xi}^{n,2}_0$, independent of $(r_m,J_m)_{m\geq 1}$. Now set $$\begin{aligned}
\label{skembed}
\hat{\eta}^n_t&=W_{T^{(S(t))}}\mbox{ where } T^{(0)}=0, J_0=0,\\
T^{(m+1)}&=\inf\{s>T^{(m)}\-|W_s-W_{T^{(m)}}|\geq r_m\},\notag\\
S(t)&=\sup\l\{i\geq 0\-J_i\leq t\r\}.\notag\end{aligned}$$ We may then couple the processes so that $$\hat{\eta}^n=\hat{\xi}^{n,1}-\hat{\xi}^{n,2}.$$ We define $\hat{\tau}^r$ and $\hat{\tau}_r$ analogously to $\tau^r$ and $\tau_r$, as stopping times of the process $\hat{\eta}^n$.
Note that since $(\xi^{n,1}_t,\xi^{n,2}_t)_{t\leq \tau_{4\mc{R}_n}}$ has the same distribution as $(\hat\xi^{n,1}_t,\hat\xi^{n,2}_t)_{t\leq \tau_{4\mc{R}_n}}$, we may couple them so that they are almost surely equal during this time. Thus $$\{\hat\tau^{\gamma_n}<\hat\tau_{4\mc{R}_n}\}=\{\tau^{\gamma_n}<\tau_{4\mc{R}_n}\}.$$
Let $T^r$ and $T_r$ be the analogues of $\tau^r$ and $\tau_r$ for $W$ (not to be confused with $T^{(m)}$ in ). By the definition of the Skorohod embedding in we have $$\begin{aligned}
\P_{[5\mc{R}_n, 7\mc{R}_n]}\l[\hat\tau^{\gamma_n}<\hat\tau_{4\mc{R}_n}\r]
&\geq \P_{[5\mc{R}_n,7\mc{R}_n]}\l[T^{\gamma_n+2\mc{R}_n}<T_{4\mc{R}_n}\r]\notag\\
&\geq \P_{5\mc{R}_n}\l[T^{\gamma_n+2\mc{R}_n}<T_{4\mc{R}_n}\r].\label{skusenotimechange}\end{aligned}$$ The right hand side concerns only the modulus of two-dimensional Brownian motion and so can be expressed in terms of the scale function for a two-dimensional Bessel process: $$\begin{aligned}
\P_{5\mc{R}_n}\l[T^{\gamma_n+2\mc{R}_n}<T_{4\mc{R}_n}\r]=\frac{\log(5\mc{R}_n)-\log(4\mc{R}_n)}{\log(\gamma_n+2\mc{R}_n)-\log(4\mc{R}_n)}=\Theta\l(\frac{1}{\log n}\r),\label{exit1}\end{aligned}$$ which proves the lower bound in . Similarly, to see the upper bound we note that $$\begin{aligned}
\P_{[5\mc{R}_n,7\mc{R}_n]}\l[\hat\tau^{\gamma_n} < \hat\tau_{4\mc{R}_n}\r] &\leq \P_{[5\mc{R}_n,7\mc{R}_n]}[T^{\gamma_n}<T_{2\mc{R}_n}]\\
&\leq \P_{7\mc{R}_n}\l[T^{\gamma_n}<T_{2\mc{R}_n}\r]\\
&=\frac{\log(7\mc{R}_n)-\log(2\mc{R}_n)}{\log(\gamma_n)-\log(2\mc{R}_n)}\\
&=\Theta\l(\frac{1}{\log n}\r).\end{aligned}$$ It remains to prove . We have $$\tau^{\gamma_n}\wedge \tau_{4\mc{R}_n}=\hat\tau^{\gamma_n}\wedge \hat\tau_{4\mc{R}_n}\leq \hat\tau^{\gamma_n}.$$
The above inequality is a very crude estimate, but will be enough to prove , which in turn will be enough to give useful bounds on the duration of excursions due to the freedom in the choice of $c$.
Hence $$\label{eq:excursion_time}
\P_{[5\mc{R}_n,7\mc{R}_n]}\l[\tau^{\gamma_n}\wedge\tau_{4\mc{R}_n}>(\log n)^{-c-1}\r]\leq
\P_{[5\mc{R}_n,7\mc{R}_n]}\l[|\hat{\eta}^n_{(\log n)^{-c-1}}|\leq \gamma_n \r].$$ The remainder of the proof focuses on bounding the right side of . To do so, we must relate our compound Poisson process to another Brownian motion.
For $j\geq 1$, let $X_j=\hat{\eta}^n_{j/n}-\hat{\eta}^n_{(j-1)/n}$. Then $(X_j)_{j\geq 1}$ are i.i.d. and since $\hat{\xi}^{n,1}$ and $\hat{\xi}^{n,2}$ are independent, $\E \l[|X_1|^2 \r]=2\E\l[|\hat{\xi}^{n,1}_{1/n}-\hat{\xi}^{n,1}_{0}|^2 \r]$.
Recall from that the rate at which $\hat{\xi}^{n,1}$ jumps from $y$ to $y+z$ is determined by the intensity measure $m^n(dz)$ so that $$\label{sigma squared}
\E \l[|X_1|^2 \r]= \frac{2}{n}\int_{\R^2} |z|^2 m^n (dz)
= \frac{4 \sigma ^2}{n},$$ where $\sigma^2$ was defined in . Now recall the definition of $S(t)$ in ; the rate at which $\hat{\xi}^{n,1}$ jumps is $\int_{\R^2}m^n(dz)=\Theta(n)$ by , so $S(n^{-1})$ is bounded by the sum of two Poisson$(\Theta(1))$ random variables. Hence since each jump of $\hat{\eta}^n$ is bounded by $2\mc R_n$, $$\begin{aligned}
\E \l[ |X_1 |^4 \r]&\leq (2\mc R_n)^4 \E \l[ S(n^{-1})^4\r]=\mc O (n^{-2}).
\label{bound of fourth moment of X}\end{aligned}$$ Once again (since the distribution of $X_1$ is rotationally symmetric) we may use Skorohod’s Embedding Theorem to couple $(X_i)_{i\geq 1}$ to a two-dimensional Brownian motion $B$ started at $\eta^n_0$ and a sequence $\upsilon_1, \upsilon_2, \ldots $ of stopping times for $B$ such that setting $\upsilon_0=0$, $(\upsilon_i - \upsilon_{i-1})_{i\geq 1}$ are i.i.d. and $$\begin{aligned}
\label{skembed2}
B_{\upsilon_i}-B_{\upsilon_{i-1}} &= X_i,\\
\E[\upsilon_i - \upsilon_{i-1}]&=\tfrac{1}{2}\E[|X_1|^2] =\tfrac{2 \sigma^2 }{n} \notag\\
\text{ and }\E[(\upsilon_i - \upsilon_{i-1})^2 ] &= \mc{O}(n^{-2}). \notag\end{aligned}$$ It follows that $\E[\upsilon_{\lfloor tn \rfloor}]=\tfrac{2 \sigma^2\lfloor tn \rfloor}{n}$ and $\text{Var} (\upsilon_{\lfloor tn \rfloor})= \mc{O}(t n^{-1})$. Hence by Chebychev’s inequality, $$\P[|\upsilon_{\lfloor tn \rfloor}-2 \sigma ^2 t|\geq n^{-1/3}]\leq \mc{O}(t n^{-1/3}).$$ Applying this result with $t=t_n:=(\log n)^{-c-1}$, since $\hat{\eta}^n_{\lfloor t_n n \rfloor /n}=B_{\upsilon_{\lfloor t_n n \rfloor}}$ we have $$\begin{aligned}
&\P_{[5\mc{R}_n,7\mc{R}_n]}\l[|\hat{\eta}^n_{t_n}|\leq \gamma_n \r]\notag\\
&\hspace{2pc}\leq \P\Bigg[\inf\l\{|B_t-B_0|\-t\in[2\sigma ^2 t_n-n^{-1/3},2\sigma ^2 t_n+n^{-1/3}]\r\}\\
&\hspace{20pc}\leq \gamma_n +n^{-1/8}+7\mc R_n\Bigg]\notag\\
&\hspace{4pc} + \P \l[\big|\hat{\eta}^n_{t_n}- \hat{\eta}^n_{\lfloor t_n n\rfloor /n }\big|\geq n^{-1/8}\r] + \mc{O}(t_n n^{-1/3}).\label{eq:excursion_time_2}\end{aligned}$$ For the first term on the right hand side we have for $n$ sufficiently large $$\begin{aligned}
&\P\l[\inf\l\{|B_t-B_0|\-t\in[2\sigma ^2 t_n-n^{-1/3},2\sigma ^2 t_n+n^{-1/3}]\r\}\leq \gamma_n +n^{-1/8}+7\mc R_n\r]\notag\\
&\hspace{2pc}\leq\P \l[ |B_{2\sigma ^2 t_n}-B_0|\leq \gamma_n +3n^{-1/8} \r]+\P \l[ \sup_{t\in [0,2n^{-1/3}]} |B_t - B_0| \geq \tfrac{1}{2} n^{-1/8} \r]\notag\\
&\hspace{2pc}=\mc{O} (\gamma_n ^2 t_n^{-1} ) +\mc{O} (e^{-\frac{1}{64}n^{1/12}})\notag\\
&\hspace{2pc}=\mc O ((\log n)^{1-c}).\label{eq:excursion_time_3}\end{aligned}$$ For the second inequality, we use that the density of $B_t$ is bounded by $(2\pi t)^{-1}$ for the first term and we apply for the second term.
Moving on to the second term on the right hand side of , since from (\[sigma squared\]) we have $\E \l[ |\hat{\eta}^n_{t_n}- \hat{\eta}^n_{\lfloor t_n n\rfloor /n }|^2 \r]=\mc O (n^{-1})$, by Markov’s inequality $$\label{eq:excursion_time_4}
\P \l[|\hat{\eta}^n_{t_n}- \hat{\eta}^n_{\lfloor t_n n\rfloor /n }|\geq n^{-1/8}\r]=\mc O (n^{-3/4}).$$ Putting and into we have $$\P_{[5\mc{R}_n,7\mc{R}_n]}\l[|\hat{\eta}^n_{t_n}|\leq \gamma_n \r]=\mc{O}((\log n)^{1-c}).$$ In view of , this completes the proof.
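Both exit probabilities in the proof above reduce to the scale function $\log r$ of the two-dimensional Bessel process: for a planar Brownian motion started at radius $x\in(a,b)$, $\P_x[T^b<T_a]=\frac{\log x-\log a}{\log b-\log a}$. The following Monte Carlo sketch (arbitrary radii, crude Euler discretisation; purely illustrative and not part of the argument) checks this formula.

```python
import math
import random

def exit_up_prob(x0, a, b, n_paths=1000, dt=2.5e-4, seed=2):
    """Monte Carlo estimate of the probability that the modulus of a
    planar Brownian motion started at radius x0 reaches b before a."""
    rng = random.Random(seed)
    sd = math.sqrt(dt)
    up = 0
    for _ in range(n_paths):
        w1, w2 = x0, 0.0
        while True:
            r2 = w1 * w1 + w2 * w2
            if r2 >= b * b:
                up += 1          # reached outer radius b first
                break
            if r2 <= a * a:
                break            # hit inner radius a first
            w1 += sd * rng.gauss(0.0, 1.0)
            w2 += sd * rng.gauss(0.0, 1.0)
    return up / n_paths

a, x0, b = 0.1, 0.2, 1.0
theory = (math.log(x0) - math.log(a)) / (math.log(b) - math.log(a))
mc = exit_up_prob(x0, a, b)
print(mc, theory)   # should agree up to Monte Carlo and discretisation error
```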
(Of Lemma \[Pover\].) First consider a single inner excursion. Evidently there exists $\beta>0$ such that, for all $n$:
- $(\ddagger)$ For any $x\in(0,5\mc{R}_n)$, if $|\eta^n_t|=x$ then the probability that $\eta^n$ will either exit $\mc{B}_{5\mc{R}_n}(0)$ or hit $0$ within its next three jumps is at least $\beta$.
Let $(J_l)_{l\geq 0}$ be the (a.s. finite) sequence of jump times of our inner excursion, and let $B_k$ be the event that the excursion either coalesces or exits $\mc{B}_{5\mc{R}_n}(0)$ at one of $\{J_{3k+1},J_{3k+2},J_{3k+3}\}$. By the strong Markov property (applied at $J_{3k}$) and $(\ddagger)$, $\inf\{k\geq 0\-\1 _{B_k}=1\}$ is stochastically bounded above by a geometric random variable $G$ with success probability $\beta $.
Moreover, for as long as $\eta^n$ is not at $0$, the rate at which it jumps is bounded below by the rate at which $\xi^{n,1}$ jumps, which is $\int_{\R^2}m^n(dz)=\Theta(n)$ where $m^n$ is given by . Hence for each $l\geq 0$, $J_{l+1}-J_l$ is stochastically bounded above by $E_l$ where the $(E_i)_{i\geq 0}$ are i.i.d. exponential random variables of this rate.
Combining these observations, $$\begin{aligned}
\label{innerlength}
\P_{(0,5\mc{R}_n)}\l[\tau^{5\mc{R}_n}\wedge \tau_0 > n^{-1/2}\r]&\leq
\P \l[ J_{\lceil 3n^{1/3}+3\rceil}\geq n^{-1/2} \r]+\P\l[ G> n^{1/3}\r]\\ \nonumber
&=\mc{O}(n^{-1/6})+(1- \beta)^{n^{1/3}}=\mc O (n^{-1/6})\end{aligned}$$ where the last line follows by Markov’s inequality.
We are now in a position to complete the proof. Recall that $\eta^n$ overshoots if it has neither coalesced nor diverged by time $(\log n)^{-c}$. Let $n$ be sufficiently large that $$(\log n)^{1/2}(n^{-1/2}+(\log n)^{-c-1})\leq (\log n)^{-c}.$$ Thus, if $\eta^n$ overshoots and $i^*<(\log n)^{1/2}$, then at least one inner excursion must have lasted longer than $n^{-1/2}$ or at least one outer excursion must have lasted longer than $(\log n)^{-c-1}$. Hence, $$\begin{aligned}
\P_{\zeta^n}\l[\eta^n\text{ overshoots}\r]&\leq
(\log n)^{1/2}\bigg(\P_{(0,5\mc{R}_n)}\l[\tau^{5\mc{R}_n}\wedge \tau_0 > n^{-1/2}\r]\\
&\hspace{2.5cm}+\P_{[5\mc{R}_n,7\mc{R}_n]}\l[\tau^{\gamma_n}\wedge\tau_{4\mc{R}_n}>(\log n)^{-c-1}\r] \bigg)
\\
& \hspace{2cm} +\P_{\zeta^n}\l[i^*>(\log n)^{1/2}\r].\end{aligned}$$ Using , and Lemma \[istar\] to bound the right hand side of the above equation, we obtain $$\begin{aligned}
\P_{\zeta^n}\l[\eta^n\text{ overshoots}\r]
&\leq (\log n)^{1/2} (\mc O (n^{-1/6})+\mc O ((\log n)^{1-c}))+\alpha^{(\log n)^{1/2}}\\
&=\mc{O}((\log n)^{3/2 -c}),\end{aligned}$$ which completes the proof.
(Of Lemma \[Pdiv\].) We note that the probability that $\eta^n$ diverges is bounded above by the probability that a divergent outer excursion occurs before a coalescing inner excursion. Let us write $\eta^{n,i,in}$ for the $i^{th}$ inner excursion and $\eta^{n,i,out}$ for the $i^{th}$ outer excursion, and write $\tau^{r,i,in},\tau_{r,i,in}$ and $\tau^{r,i,out},\tau_{r,i,out}$ for the associated equivalents of $\tau^r$ and $\tau_r$. Thus, $$\begin{aligned}
&\P_{\zeta^n}\l[\eta^n\text{ diverges}\r] \\
&\hspace{1pc}\leq \P_{\zeta^n}\l[\inf\l\{i\geq 1\-\tau^{\gamma_n,i,out}<\tau_{4\mc{R}_n,i,out}\}
\leq \inf\{i\geq 0\-\tau_{0,i,in}<\tau^{5\mc{R}_n,i,in}\r\}\r].\end{aligned}$$ By the strong Markov property (applied successively at times $\tau^{out}_i$ and $\tau^{in}_i$), along with and $(\dagger)$, the right hand side of the above equation is bounded above by the probability that a geometric random variable with success probability $\Theta(\frac{1}{\log n})$ is smaller than an (independent) geometric random variable with success probability $\alpha '>0$. With this in hand, an elementary calculation shows that $$\P_{\zeta^n}\l[\eta^n\text{ diverges}\r]=\mc{O}\l(\frac{1}{\log n}\r).$$ It remains to prove a lower bound of the same order.
In similar style to $(\dagger)$ and $(\ddagger)$, it is easily seen that there exists $\delta>0$ such that for all $n$:
- $(\star)$ For any $x\in [\mc{R}_n,4\mc{R}_n]$, if $|\eta^n_0|=x$, the probability that $\eta^n$ will exit $\mc{B}_{5\mc{R}_n}(0)$ without coalescing is at least $\delta$.
We note also that $\zeta^n$ is equal to $n^{-1/2}\zeta^1$ in distribution, so since we assumed that $\mu((\tfrac{3}{4}\mc R, \mc R])>0$, there exists $\epsilon>0$ such that $\P[\zeta^n\geq \mc{R}_n]\geq \epsilon$ for all $n$. Thus, applying the strong Markov Property at time $\tau^{in}_0$ and using $(\star)$, we obtain $$\begin{aligned}
\P_{\zeta^n}\l[\eta^n\text{ diverges}\r]
&\geq \epsilon \delta \P _{[5 \mc R _n, 7\mc R_n]}\l[\tau^{\gamma_n}<\tau_{4\mc R_n} \r]-\P_{\zeta^n} \l[\eta^n \text{ overshoots} \r]\\
&=\Theta \l( \frac{1}{\log n}\r)\end{aligned}$$ as required, where the final statement follows from Lemma \[exclengths\] and Lemma \[Pover\] (since $c\geq 3$).
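The ‘elementary calculation’ invoked for the upper bound is a race between two independent geometric random variables: if $G_1\sim\text{Geom}(p)$ and $G_2\sim\text{Geom}(q)$ are independent and supported on $\{1,2,\dots\}$, then summing $\P[G_1=k]\,\P[G_2\geq k]$ over $k$ gives $\P[G_1\leq G_2]=\frac{p}{p+q-pq}=\Theta(p)$ for fixed $q$, which is $\mc O(1/\log n)$ when $p=\Theta(1/\log n)$. A purely illustrative numerical check (the values of $p$ and $q$ below are arbitrary):

```python
import random

def p_first_leq(p, q, n_samples=200_000, seed=3):
    """Monte Carlo estimate of P[G1 <= G2] for independent geometric
    random variables G1 ~ Geom(p), G2 ~ Geom(q) on {1, 2, ...}."""
    rng = random.Random(seed)

    def geom(r):
        # number of Bernoulli(r) trials up to and including first success
        k = 1
        while rng.random() >= r:
            k += 1
        return k

    wins = sum(geom(p) <= geom(q) for _ in range(n_samples))
    return wins / n_samples

p, q = 0.05, 0.4   # p plays the role of the Theta(1/log n) divergence probability
exact = p / (p + q - p * q)
mc_geo = p_first_leq(p, q)
print(mc_geo, exact)
```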
### Production of branches {#homogsec}
The next step of the proof of Theorem \[result d>1\] involves further analysis of pairs of potential ancestral lineages: first we need to check that once a pair has separated to a distance $\gamma_n$ they won’t come back together again before a fixed time $K$; second we need to see that $\log n$ times the divergence probability actually converges (cf. Lemma \[Pdiv\]) as $n\rightarrow\infty$, since this will determine the branching rate in our branching Brownian motion limit. These two statements are the objects of the next two lemmas.
\[Pinteract\] Fix $K\in(0,\infty)$. Then $$\P_{[(\log n)^{-c},\infty)}\l[\tau_{4 \mc R_n}\leq K\r]=\mc{O}\l(\frac{\log \log n}{\log n}\r).$$
\[Pdiv2\] There exists $\kappa\in(0,\infty)$ such that $(\log n)\P_{\zeta^n}\l[\eta^n\text{ diverges}\r]\to\kappa$ as $n\to\infty$.
The remainder of this subsection is occupied with proving Lemmas \[Pinteract\] and \[Pdiv2\].
(Of Lemma \[Pinteract\].) We use the Skorohod embedding of $\hat\eta$ into the Brownian motion $W$, as defined in , to reduce the claim to an equivalent statement about a two-dimensional Bessel process.
Recall that $\eta^n_0=\hat\eta^n_0=W_0$ and recall $\tau_r$ from , and that $\hat{\tau}_r$ and $T_r$ are the analogues of $\tau_r$ for $\hat{\eta}$ and $W$ respectively. We have that $\eta^n_s=\hat\eta^n_s$ for all $s\leq \tau_{4\mc R_n }$ so $$\begin{aligned}
\label{div_prob_time}
\P_{[(\log n)^{-c},\infty)}\l[\tau_{4 \mc R_n}\leq K\r]
&=\P_{[(\log n)^{-c},\infty)}\l[\hat\tau_{4 \mc R_n}\leq K\r] \nonumber \\
&\leq \P_{[(\log n)^{-c},\infty)}\l[T_{4 \mc R_n}\leq T^{(S(K))}\r],\end{aligned}$$ where we used the Skorohod embedding given in in the last line. For all $\tilde K,C>0$, since $T^{(k)}$ is increasing in $k$ we have $$\label{time_change_union}
\P\l[T^{(S(K))}\geq \tilde K \r]\leq \P \l[ S(K)\geq Cn \r] + \P \l[ T^{(Cn)} \geq \tilde K \r].$$ By its definition in , $S(K)$ is bounded by the sum of two Poisson random variables with parameter $\chi=K\int_{\R^2}m^n (dz)$, where $m^n$ is given by (\[jump of size z\]). In particular, $\chi=\Theta(n)$. Recall that if $Z'$ is Poisson with parameter $\chi$, then (using a Chernoff bound argument) for $k>\chi$, $$\label{poisson tail}
\P[Z'>k]\leq \frac{e^{-\chi}(e\chi)^k}{k^k}.$$ Hence, for $C$ sufficiently large, there exists $\delta_1>0$ such that $$\label{delta_1_exp}
\P\l[S(K)\geq Cn \r]\leq \mc O (e^{-\delta_1 n}).$$ Now by the definition of $(T^{(m)})_{m\geq 1}$ in , and since $r_m\leq 2 \mc R_n$ for each $m$, $$\P \l[ T^{(Cn)} \geq \tilde K \r] \leq \P\l[\sum_{i=1}^{Cn} R_i \geq \tilde K n \r],$$ where $(R_i)_{i\geq 1}$ is an i.i.d. sequence with $R_1 \stackrel{d}{=}\inf \{ t\geq 0 :|W_t|\geq 2\mc R\}.$ Since $$\P\l[ R_1\geq k\r]\leq \P \l[ R_1 \geq k-1 \r] \P \l[ |W_k -W_{k-1}|\leq 4\mc R\r]\leq
\P \l[ |W_1 -W_{0}|\leq 4\mc R\r]^k,$$ there exists $\lambda>0$ such that $\E\l[ e^{\lambda R_1}\r] <\infty$. Hence by Cramér’s theorem, for $\tilde K$ a sufficiently large constant, there exists $\delta_2>0$ such that $$\label{delta_2_exp}
\P\l[ T^{(Cn)}\geq \tilde K\r] =\mc O (e^{-\delta_2 n}).$$ By and together with and , we now have for $\tilde K$ sufficiently large $$\label{time_change_div}
\P_{[(\log n)^{-c},\infty)}\l[\tau_{4 \mc R_n}\leq K\r]\leq \P_{[(\log n)^{-c},\infty)}
\l[T_{4 \mc R_n}\leq \tilde K\r]+\mc O (e^{-\delta_1 n})+\mc O (e^{-\delta_2 n}).$$ To finish, we note that $$\begin{aligned}
&\P_{[(\log n)^{-c},\infty)}\l[ T_{4\mc R_n}\leq \tilde K \r] \\
&\hspace{2pc}\leq \sup\limits_{x\geq(\log n)^{-c}}\l(\P_{x} \l[ T_{4\mc R_n}\leq T^{x +\log n}\r]+\P_{x} \l[ T^{x+\log n}\leq \tilde K \r]\r)\\
&\hspace{2pc}\leq \sup\limits_{x\geq(\log n)^{-c}}\l(\frac{\log (x+\log n) -\log x}{\log(x+\log n)-\log (4\mc R_n)}\r)+\P\l[\sup_{t\leq \tilde K}|W_t -W_0|\geq \log n\r]\\
&\hspace{2pc}=\mc O \l( \frac{\log \log n}{\log n}\r)+\mc O (e^{-(8\tilde K)^{-1}(\log n)^2}),\end{aligned}$$ where the second line uses the scale function for a two-dimensional Bessel process, and the third line uses . Substituting this into , we have the required result.
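The Chernoff-type Poisson tail bound quoted in the proof above can be checked numerically. The following is a minimal sketch (illustrative parameter values only), comparing the bound $\P[Z'>k]\leq e^{-\chi}(e\chi)^k/k^k$ against the exact tail computed from the pmf:

```python
import math

def poisson_tail(chi, k):
    """Exact P[Z' > k] for Z' ~ Poisson(chi), via the pmf."""
    pmf, cdf = math.exp(-chi), 0.0
    for j in range(k + 1):
        cdf += pmf
        pmf *= chi / (j + 1)
    return 1.0 - cdf

def chernoff_bound(chi, k):
    """The bound e^{-chi} (e chi)^k / k^k, valid for k > chi."""
    return math.exp(-chi) * (math.e * chi) ** k / k ** k

for chi, k in [(2.0, 8), (5.0, 10), (10.0, 25)]:
    assert chi < k
    assert poisson_tail(chi, k) <= chernoff_bound(chi, k)
```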
(Of Lemma \[Pdiv2\].) Let $p_n:= \P_{\zeta^n} \l[ \tau^{\gamma_n}<\tau_0 \r]$. Note that by Lemma \[Pover\], $$\label{Pdiffoverdiv}
|p_n - \P_{\zeta^n} \l [ \eta^n \text{ diverges} \r] |=\mc O \l( \frac{1}{(\log n)^{c-3/2}}\r).$$ Hence by Lemma \[Pdiv\], there exist $0<d\leq D<\infty$ such that for all $n\geq 2$, $$d\leq (\log n) p_n \leq D.$$ It follows that $(p_n)_{n\geq 1}$ has a subsequence $(p_{n_k})_{k\geq 1}$ such that $(\log n_k ) p_{n_k}\to \kappa \in (0,\infty). $ Let $\epsilon>0$ and let $N\in \N$ be such that $N\geq 1/\epsilon$ and $|(\log N)p_N - \kappa | \leq \epsilon .$ By rescaling, noting that $\zeta^n\stackrel{d}{=}\zeta^N(\frac{N}{n})^{1/2}$, and similarly for $\eta^n$, we have $$\label{rescaled_pN}
p_N = \P_{\zeta^n}\l[\tau^{\gamma_N (Nn^{-1})^{1/2}}<\tau_0 \r].$$ Recall, for clarity, that here (as throughout this section) $\tau^r$ and $\tau_0$ refer to the stopping times for the process $\eta^n$.
Define $X^{n,N} := |\eta^n_{\tau^{\gamma_N (N n^{-1})^{1/2}}}|$. By increasing $N$ if necessary, we may assume that $7\mc R_n<\gamma_N (Nn^{-1})^{1/2}\leq \gamma_n$ for $n\geq N$. Thus, $$\begin{aligned}
\label{SMPpn}
p_n &= \P_{\zeta^n}\l[\tau^{\gamma_N (Nn^{-1})^{1/2}}\leq \tau^{\gamma_n}<\tau_0\r] \notag \\
&= \E_{\zeta^n} \l[ \1 _{\tau^{\gamma_N (N n^{-1})^{1/2}}<\tau_0} \P _{X^{n,N}}\l[\tau^{\gamma_n}<\tau_0 \r] \r].\end{aligned}$$ Here, the first line holds since $\zeta^n <\gamma_N (Nn^{-1})^{1/2}\leq \gamma_n$, and the second line follows from the first by applying the strong Markov property at time $\tau^{\gamma_N (N n^{-1})^{1/2}}$.
To estimate (\[SMPpn\]), note that $$X^{n,N} \in [l^{n,N},r^{n,N}]:=[\gamma_N(Nn^{-1})^{1/2},\gamma_N(Nn^{-1})^{1/2}+2\mc R_n].$$ Using the Skorohod embedding defined in , $$\begin{aligned}
\label{pnpNlower}
\P_{[l^{n,N},r^{n,N}]} \l[ \tau^{\gamma_n} <\tau_0\r]& \geq \inf_{x\geq \gamma_N(Nn^{-1})^{1/2}}\P_x \l[ \tau^{\gamma_n} <\tau_{7\mc R_n}\r]\nonumber \\
&\geq \inf_{x\geq \gamma_N(Nn^{-1})^{1/2}}\P_x \l[ T^{\gamma_n + 2\mc R_n}<T_{7\mc R_n}\r] \nonumber \\
&= \frac{\log (\gamma_N (Nn^{-1})^{1/2})-\log (7\mc R_n)}{\log (\gamma_n +2\mc R_n)-\log (7\mc R_n)} \nonumber \\
&= \frac{\frac{1}{2} \log N +\mc O (\log \log N)}{\frac{1}{2} \log n +\mc O (\log \log n)}.\end{aligned}$$ Note that, in the above, we (again) use the scale function for a two-dimensional Bessel process to deduce the third line.
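For reference, the scale-function identity for the two-dimensional Bessel process invoked here (and earlier, in the proof of Lemma \[Pinteract\]) is the standard one; a brief sketch of the computation:

```latex
% log r is harmonic for the Laplacian in two dimensions away from the
% origin, so s(r) = log r is a scale function for |W|. For 0 < a < x < b,
% optional stopping applied to log|W_t| at time T_a \wedge T^b gives
\[
  \P_x\l[\,T_a < T^b\,\r] = \frac{\log b - \log x}{\log b - \log a},
  \qquad
  \P_x\l[\,T^b < T_a\,\r] = \frac{\log x - \log a}{\log b - \log a}.
\]
% With a = 7\mc R_n and b = \gamma_n + 2\mc R_n the second identity
% yields the third line of the display above.
```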
We require slightly more work to establish an upper bound. We have $$\label{pnpNupper_1}
\P_{[l^{n,N},r^{n,N}]} \l[ \tau^{\gamma_n} < \tau_0\r] \leq \P_{[l^{n,N},r^{n,N}]} \l[ \tau^{\gamma_n}<\tau_{7 \mc R_n }\r] + \P_{[l^{n,N},r^{n,N}]} \l[ \tau_{7\mc R_n} <\tau^{\gamma_n} <\tau_0 \r].$$ We begin by controlling the second term on the right hand side of . By the strong Markov property at time $\tau_{7 \mc R_n}$, $$\begin{aligned}
\P_{[l^{n,N},r^{n,N}]} \l[ \tau_{7\mc R_n}<\tau^{\gamma_n} <\tau_0 \r]&= \E_{[l^{n,N},r^{n,N}]} \l[\1 _{\tau_{7\mc R_n}<\tau^{\gamma_n}} \P_{|\eta^n_{\tau_{7\mc R_n}}|} \l[ \tau^{\gamma_n} <\tau_0 \r] \r]\\
&\leq \E_{[l^{n,N},r^{n,N}]} \l[ \P_{|\eta^n_{\tau_{7\mc R_n}}|} \l[ \tau^{\gamma_n} <\tau_0 \r] \r]. \end{aligned}$$ Since $\big|\eta^n_{\tau_{7\mc R_n}}\big|\in [5 \mc R_n, 7 \mc R_n]$, using in the same way as in the proof of Lemma \[Pdiv\], $$\label{upper_logn}
\P_{[l^{n,N},r^{n,N}]} \l[ \tau_{7\mc R_n}<\tau^{\gamma_n} <\tau_0 \r]=\mc O \l(\frac{1}{\log n}\r).$$ Next, we control the first term on the right hand side of , again using the Skorohod embedding : $$\begin{aligned}
\label{pnpNupper_2}
\P_{[l^{n,N},r^{n,N}]} \l[ \tau^{\gamma_n} <\tau_{7 \mc R_n}\r]& \leq \P_{[l^{n,N},r^{n,N}]} \l[ T^{\gamma_n}<T_{5\mc R_n}\r] \nonumber \\
&\leq \frac{\log (\gamma_N (Nn^{-1})^{1/2}+2 \mc R_n)-\log (5\mc R_n)}{\log (\gamma_n )-\log (5\mc R_n)} \nonumber \\
&= \frac{\frac{1}{2} \log N +\mc O (\log \log N)}{\frac{1}{2} \log n +\mc O (\log \log n)}.\end{aligned}$$ Combining , , and , $$\P_{[l^{n,N},r^{n,N}]} \l[ \tau^{\gamma_n} < \tau_0\r]=\frac{\log N +\mc O (\log \log N)}{ \log n +\mc O (\log \log n)}+\mc O \l(\frac{1}{\log n} \r).$$ Hence by , $$\begin{aligned}
p_n&=\P_{\zeta^n} \l[ \tau^{\gamma_N (N n^{-1})^{1/2}}<\tau_0 \r]\l(\frac{\log N +\mc O (\log \log N)}{ \log n +\mc O (\log \log n)}+\mc O \l(\frac{1}{\log n} \r)\r)\\
&=\frac{(\log N)p_N}{\log n}\l(\frac{1 +\mc O \big(\frac{\log \log N}{\log N}\big)}{ 1 +\mc O \big(\tfrac{\log \log n}{\log n}\big)}+\mc O \l(\frac{1}{\log N} \r)\r),\end{aligned}$$ where we used in the last line. Since $ |(\log N)p_N - \kappa | \leq \epsilon $ we obtain for $n\geq N$ $$\begin{aligned}
(\log n)p_n &\geq (\kappa -\epsilon )\l(\frac{1 +\mc O (\frac{\log \log N}{\log N})}{ 1 +\mc O (\frac{\log \log n}{\log n})}+\mc O \l(\frac{1}{\log N} \r)\r)\\
\text{ and }\quad (\log n)p_n &\leq (\kappa +\epsilon )\l(\frac{1 +\mc O \big(\frac{\log \log N}{\log N}\big)}{ 1 +\mc O \big(\frac{\log \log n}{\log n}\big)}+\mc O \l(\frac{1}{\log N} \r)\r).\end{aligned}$$ Letting $\epsilon \to 0$ and hence $N\to \infty$, $\lim_{n \to \infty} (\log n)p_n = \kappa$. The result follows by .
Convergence to branching Brownian motion {#convtobbmsec}
----------------------------------------
In this subsection we identify particular subsets of the dual process that we couple with objects that we call ‘caterpillars’. The caterpillars play the rôle of individual branches in the limiting branching Brownian motion. Our (eventual) goal is to write down a system of ‘branching caterpillars’ and couple it to the dual. Establishing these couplings is greatly simplified by viewing the branching and coalescing dual as a deterministic function of an augmented driving Poisson point process and so our first task is to recast the dual in this way.
Recall that we have a fixed impact parameter $u\in (0,1]$. We define, recursively, a sequence of subsets of $[0,1]$ as follows: $$A^1_u=[0,u],\mbox{ and for }k\geq 1, \, A_u^{k+1}=uA^k_u\cup (u+(1-u)A_u^k).$$ Then if $U\sim \text{Unif}[0,1]$, $(\1_{A^k_u}(U))_{k\geq 1}$ is an i.i.d. sequence of Bernoulli$(u)$ random variables (see Lemma 3.20 in [@kallenberg:2006] for a proof in the case $u=\frac{1}{2}$, where $(\1_{A^k_u}(U))_{k\geq 1}$ is the binary expansion of $U$; the general case is an easy extension of this).
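Membership in the recursive sets $A_u^k$ can be evaluated by peeling off one "$u$-ary digit" at a time, using $A_u^{k+1}=uA^k_u\cup (u+(1-u)A_u^k)$. The following is a hedged sketch (function names are ours) that also checks empirically that the digits behave like i.i.d. Bernoulli$(u)$ variables:

```python
def digits(U, u, k):
    """Return the first k indicators (1_{A_u^1}(U), ..., 1_{A_u^k}(U))."""
    out = []
    for _ in range(k):
        if U <= u:                 # U in [0, u], the u*A_u^k part: digit 1
            out.append(1)
            U = U / u              # rescale back to [0, 1]
        else:                      # U in (u, 1], the u + (1-u)*A_u^k part: digit 0
            out.append(0)
            U = (U - u) / (1 - u)
    return tuple(out)

# For u = 1/2 the digits recover the (complemented) binary expansion of U.
assert digits(0.3, 0.5, 3) == (1, 0, 1)

# Over a fine grid of U values, each digit has mean ~ u and a pair of
# digits has mean ~ u^2, consistent with i.i.d. Bernoulli(u).
u, N = 0.3, 200_000
ds = [digits((i + 0.5) / N, u, 2) for i in range(N)]
m1 = sum(d[0] for d in ds) / N
m2 = sum(d[1] for d in ds) / N
m12 = sum(d[0] * d[1] for d in ds) / N
assert abs(m1 - u) < 0.01 and abs(m2 - u) < 0.01 and abs(m12 - u * u) < 0.01
```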
Let $$\mathscr{X}=\R \times \R^2 \times \R_{+} \times \mathcal B_1(0)^2\times [0,1]^2.$$
\[slfvs\_dual\_determ\]
Given a simple point process $\Pi$ on $\mathscr{X}$, and some $p\in \R^2$, we define $(\mathcal P _t (p,\Pi))_{t\geq 0}$ as a process on $\cup_{k=1}^\infty (\R^2)^k$ as follows.
For each $t\geq 0$, $\mathcal P _t (p,\Pi)=(\xi _t ^1,\ldots ,\xi^{N_t}_t)$ for some $N_t \geq 1$. We refer to $i$ as the index of the ancestor $\xi^i_t$. We begin at time $t=0$ from a single ancestor $\mathcal P_0(p,\Pi)=\xi_0^1=p$ and proceed as follows.
At each $(t,x,r,z_1,z_2,q,v)\in \Pi$ with $v\geq \v {s_n}$, a neutral event occurs:
1. Let $\xi ^{n_1}_{t-},\ldots , \xi ^{n_m}_{t-}$ denote the ancestors in $\mathcal B _r(x)$ which have not yet coalesced with an ancestor of lower index, with $n_1<\ldots < n_m$. For $1\leq i\leq m$, mark the ancestor $\xi ^{n_i}_{t-}$ iff $q\in A_u^i$. Let $\xi ^{r_1}_{t-},\ldots \xi ^{r_l}_{t-}$ denote the marked ancestors.
2. If at least one ancestor is marked, we set $\xi ^{r_i}_{t}=x+rz_1$ for each $i$ and call this the parental location for the event. We say that the ancestor $\xi ^{r_i}_t$ has coalesced with the ancestor $\xi^{r_1}_t$, for each $i\geq 2$.
At each $(t,x,r,z_1,z_2,q,v)\in \Pi$ with $v<\v {s_n}$, a selective event occurs:
1. Let $\xi ^{n_1}_{t-},\ldots , \xi ^{n_m}_{t-}$ denote the ancestors in $\mathcal B _r(x)$ which have not yet coalesced with an ancestor of lower index, with $n_1<\ldots < n_m$. For $1\leq i\leq m$, mark the ancestor $\xi ^{n_i}_{t-}$ iff $q\in A_u^i$. Let $\xi ^{r_1}_{t-},\ldots \xi ^{r_l}_{t-}$ denote the marked ancestors.
2. If at least one ancestor is marked, we set $\xi ^{r_i}_{t}=x+rz_1$ for each $i$ and add an ancestor $\xi ^{N_{t-}+1}_{t}=x+rz_2$. We call $x+rz_1$ and $x+rz_2$ the parental locations of the event. We say that the ancestor $\xi ^{r_i}_t$ has coalesced with the ancestor $\xi^{r_1}_t$, for each $i\geq 2$.
For each $l\in \N$, if $\xi^l_{\tau}$ has coalesced with an ancestor $\xi^k_{\tau}$ of lower index at time $\tau$, we set $\xi^l_t = \xi_t^{k}$ for all $t\geq \tau$.
In the same way as for the definition of $\mc P^{(n)}(p)$ before the statement of Theorem \[result d>1\], we shall view $(\mathcal P _t (p,\Pi))_{t\geq 0}$ as a collection of potential ancestral lineages. Given a realization of $\Pi$, we say that a path that begins at $p$ is a potential ancestral lineage if (1) at each neutral event that it encounters, it moves to the (single) parent and (2) at each selective event it encounters, it moves to one of the parents of that event.
Note that if $\Pi$ is a Poisson point process on $\mathscr{X}$ with intensity measure $$\label{eq:ppp_intensity}
n\,dt\otimes n\,dx
\otimes \mu^n(dr)\otimes \pi^{-1} dz_1 \otimes \pi^{-1} dz_2 \otimes dq \otimes dv$$ then as a collection of potential ancestral lineages, $(\mathcal P _t (p,\Pi))_{t\geq 0}$ has the same distribution as $\mc P^{(n)}(p)$.
When $\Pi$ takes this form, the result is that the driving Poisson Point Process in has been augmented by components that determine the nature of each event (neutral or selective), the parental locations of each event and which lineages in the region of the event are affected by it. We have abused notation by retaining the notation $\Pi$ for this augmented process.
### The caterpillar
We now introduce the notion of a caterpillar, which involves following a pair of potential ancestral lineages in the dual. We stop the caterpillar if the pair of lineages reaches a displacement of $(\log n)^{-c}$, or if the pair does not coalesce within time $(\log n)^{-c}$ after the last branching. While doing so, we suppress the creation of the second potential parent at any selective events that occur within time $(\log n)^{-c}$ of the previous (unsuppressed) selective event.
Let $\Pi$ be a Poisson point process on $\mathscr{X}$ with intensity measure . We write $(\mc P_t(p,\Pi))_{t\geq 0}=(\xi _t ^1,\ldots ,\xi^{N_t}_t)_{t \geq 0}$ as defined in Definition \[slfvs\_dual\_determ\].
\[caterpillar\_defn\] For $p\in \R^2$, we define a lifetime $h(p,\Pi)>0$, and a process $(c_t (p,\Pi))_{0 \leq t \leq h(p,\Pi)}$ on $(\R^2)^2$, which we shall refer to as a caterpillar. For each $t\geq 0$, we write $$c _t (p,\Pi)=\l(c_t ^1(p,\Pi),c^{2}_t(p,\Pi)\r),$$ dropping the dependence on $(p,\Pi)$ from our notation, when convenient. As part of the definition, we will also define $k^*(p,\Pi)\in \N$ and a sequence $(\tau^{\text{br}}_k)_{k\leq k^*}$ of stopping times.
Set $\tau^{\text{br}}_0=0$ and let $\tau^{\text{br}}_{1}$ be the time of the first selective event after time $(\log n)^{-c}$ to affect $\xi^1$. For $t \leq \tau^{\text{br}}_{1}$, let $c_t^1=c_t^2=\xi^1_t$.
Then, for $k\geq 1$, suppose we have defined $(\tau^{\text{br}}_l)_{l \leq k}$; let $m(k)=N_{\tau^{\text{br}}_k}$.
For $t\in[\tau^{\text{br}}_k, \tau^{\text{br}}_k+(\log n)^{-c}]$, define $c_t^1(p,\Pi)=\xi^1_t$ and $c_t^2(p,\Pi)=\xi^{m(k)}_t$.
In analogy with Definition \[ioexctypedef\], define $$\begin{aligned}
\label{tau_type_cat}
\tau_k^{\text{div}}&=\inf \{t\geq \tau^{\text{br}}_k :|c^1_t -c^2_t| \geq (\log n)^{-c}\}, \notag\\
\tau_k^{\text{coal}}&=\inf \{t\geq \tau^{\text{br}}_k :c^1_t=c^2_t\}, \notag \\
\tau_k^{\text{over}}&=\tau^{\text{br}}_k +(\log n)^{-c},\end{aligned}$$ and let $\tau_k^{\text{type}}=\min(\tau_k^{\text{div}}, \tau_k^{\text{coal}},\tau_k^{\text{over}})$. If $\tau_k^{\text{type}}\neq \tau_k^{\text{coal}}$ then set $k^*(p,\Pi)=k$ and $h(p,\Pi)=\tau_{k^*}^{\text{type}}$. The definition is then complete. If not, we proceed as follows.
Let $\tau^{\text{br}}_{k+1}$ be the time of the first selective event occurring strictly after $\tau^{\text{br}}_k+(\log n)^{-c}$ to affect $\xi^1$. For $t\in[\tau^{\text{br}}_k+(\log n)^{-c}, \tau^{\text{br}}_{k+1})$, let $c_t^1(p,\Pi)=c_t^2(p,\Pi)=\xi^1_t$.
We then continue iteratively for each $k\leq k^*(p,\Pi)$.
We refer to $(\tau^{\text{br}}_k)_{k\leq k^*}$, the times at which a selective event results in branching, as branching events. We shall abuse our previous terminology and say that a branching event diverges, coalesces or overshoots when the same is true of the excursion corresponding to the pair $(c^1, c^2)$.
Note that $(c_t)_{t \geq 0}$ is not a Markov process with respect to its natural filtration, since $c^1$ and $c^2$ are not allowed to branch off from each other within $(\log n)^{-c}$ of the previous branching event. However, for $i=1,2$, $(c^i_t (p,\Pi))_{0 \leq t \leq h(p,\Pi)}$ is a Markov process with the same jump rate and jump distribution as a single potential ancestral lineage in the rescaled dual. Moreover for each $1\leq k \leq k^*$, $(c^1_t, c^2_t)_{\tau^{\text{br}}_k \leq t \leq \tau^{\text{type}}_k}$ is an excursion as defined in Section \[excursionsec\].
Recall the definition of $m^n(dz)$ from and let $$\label{kappa_n_lambda}
\kappa_n = (\log n)\P \l[ \tau_1^{\text{type}} \neq \tau_1^{\text{coal}}\r]\quad \text{ and }\quad \lambda = n^{-1} \int_{\R^2}m^n(dz)=\Theta (1).$$ By combining Lemma \[Pdiv2\] and Lemma \[Pover\], $$\label{kappaconv}
\kappa_n \to \kappa$$ as $n\to \infty$.
By the strong Markov property of $\Pi$, and since $\tau_k^{\text{type}}\leq \tau^{\text{br}}_{k}+(\log n)^{-c}\leq \tau^{\text{br}}_{k+1}$ for each $k$, the types of the selective events, $(\{\tau_k^{\text{type}}=\tau_k^{\text{div}}\})_{k\geq 1}$, $(\{\tau_k^{\text{type}}=\tau_k^{\text{coal}}\})_{k\geq 1}$ and $(\{\tau_k^{\text{type}}=\tau_k^{\text{over}}\})_{k\geq 1}$ are each i.i.d. sequences. Thus, $$\label{k_star_distn}
k^*(p,\Pi)\sim \text{Geom}(\kappa _n (\log n)^{-1}).$$ By , there exist constants $0<a\leq A <\infty$ such that $\kappa_n \in [a,A]$ for all $n$ sufficiently large, so $$\label{k_star_exp_bound}
\P[k^*\geq (\log n)^{9/8}]=(1-\tfrac{\kappa_n}{\log n})^{(\log n)^{9/8}}=\mc O(e^{-\delta (\log n)^{1/8}})$$ for some $\delta>0$.
\[lifetime\] We can couple $h(p,\Pi)$ with $H\sim \text{Exp}(\kappa_n \lambda)$ in such a way that for some $\delta>0$, with probability at least $1-\mc O(e^{-\delta (\log n)^{1/8}})$, $$|h(p,\Pi)-H|\leq 3(\log n)^{-1/4}.$$
Recall the definition of $\lambda$ in . Since the total rate at which $c^1$ jumps is given by $\lambda n$, and each jump is from a selective event independently with probability $\v{s_n}=\frac{\log n}{n}$, by the strong Markov property of $\Pi$ we have that $$\label{eq:Ek_distn}
E_k:=\tau^{\text{br}}_{k}-(\tau^{\text{br}}_{k-1}+(\log n)^{-c})\sim \text{Exp}(\lambda \log n)$$ and $(E_k , \1_{\{\tau_k^{\text{type}}\neq \tau_k^{\text{coal}}\}})_{k\geq 1}$ is an i.i.d. sequence.
Since (for example) $\{\tau_k^{\text{type}}\neq \tau_k^{\text{coal}}\}$ is not independent of the radius of the event at $\tau^{\text{br}}_k$, we note that $E_k$ and $\1_{\{\tau_k^{\text{type}}\neq\tau_k^{\text{coal}}\}}$ are not independent; therefore $(E_k)_{k\geq 1}$ is not independent of $k^*$. However, we can couple $(E_k , \1_{\{\tau_k^{\text{type}}\neq\tau_k^{\text{coal}}\}})_{k\geq 1}$ with a sequence $(E'_k)_{k\geq 1}$ which is independent of $k^*$ as follows.
First sample the sequence $(\1_{\{\tau_k^{\text{type}}\neq\tau_k^{\text{coal}}\}})_{k\geq 1}$, and then independently sample a sequence $(E'_k , A_k)_{k\geq 1}$ with the same distribution as $(E_k , \1_{\{\tau_k^{\text{type}}\neq\tau_k^{\text{coal}}\}})_{k\geq 1}$. Then, for each $k\geq 1$, if $A_k=\1_{\{\tau_k^{\text{type}}\neq\tau_k^{\text{coal}}\}}$ set $E_k = E'_k$, and if not sample $E_k$ according to its conditional distribution given $\1_{\{\tau_k^{\text{type}}\neq\tau_k^{\text{coal}}\}}$.
We now have a coupling of $(E_k , \1_{\{\tau_k^{\text{type}}\neq\tau_k^{\text{coal}}\}})_{k\geq 1}$ and $(E'_k)_{k\geq 1}$ such that $(E'_k)_{k\geq 1}$ is an i.i.d. sequence, independent of $k^*$, with $E'_1 \sim \text{Exp}(\lambda \log n)$. Also, since $\P[\tau_k^{\text{type}}\neq \tau_k^{\text{coal}}]=\Theta((\log n)^{-1})$, we have that independently for each $k$, $E_k=E'_k$ with probability at least $1-\Theta((\log n)^{-1})$.
We write $$\sum_{k=1}^{k^*}E_k = \sum_{k=1}^{k^*}E'_k + \sum_{k=1}^{k^*}D_k,$$ where $D_k=E_k-E'_k$ and, by , $\sum_{k=1}^{k^*}E'_k \sim \text{Exp}(\lambda \kappa_n)$.
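The distributional identity used here — a geometric number of i.i.d. exponential summands is again exponential — can be illustrated by a hedged Monte Carlo sketch (all parameter values below are illustrative stand-ins, with $p$ playing the role of $\kappa_n/\log n$ and $\mu$ of $\lambda \log n$, so that the sum is $\text{Exp}(\lambda\kappa_n)$):

```python
import random

# Sketch: if K ~ Geom(p) on {1,2,...} and (E'_k) are i.i.d. Exp(mu),
# independent of K, then sum_{k=1}^K E'_k ~ Exp(p*mu).
random.seed(0)

p, mu = 0.1, 5.0      # illustrative; Exp(p*mu) = Exp(0.5), mean 2.0
N = 100_000
samples = []
for _ in range(N):
    K = 1
    while random.random() >= p:            # geometric number of summands
        K += 1
    samples.append(sum(random.expovariate(mu) for _ in range(K)))

mean = sum(samples) / N
second = sum(x * x for x in samples) / N
assert abs(mean - 2.0) < 0.05              # Exp(0.5) has mean 1/0.5 = 2
assert abs(second - 8.0) < 0.5             # and second moment 2/0.5^2 = 8
```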
Our next step is to bound $\sum_{k=1}^{k^*}D_k$. Firstly, applying a Chernoff bound to the binomial distribution yields $$\begin{aligned}
&\P\l[\l|\big\{k<(\log n)^{9/8} :D_k\neq 0\big\}\r|\geq (\log n)^{1/4}\r] \notag\\
&\hspace{4pc} = \P\l[ \text{Bin}\big((\log n)^{9/8}, \Theta ((\log n)^{-1})\big)\geq (\log n)^{1/4}\r] \notag\\
&\hspace{4pc} = \mc O \big(\exp (-\delta ' (\log n)^{1/4})\big)\label{chernoff_zk}\end{aligned}$$ for some $\delta'>0$. Secondly, $$\begin{aligned}
\label{bound_zk}
\P \l[ |D_1 | \geq (\log n)^{-1/2} \r]&\leq \P \l[ E_1 \geq \tfrac{1}{2}(\log n)^{-1/2} \r]+\P \l[ E'_1 \geq \tfrac{1}{2}(\log n)^{-1/2} \r] \notag \\
&= 2 \exp(-\lambda (\log n)^{1/2}/2).\end{aligned}$$ Combining , and , we have that $$\label{bound_zk_sum}
\P\l[\sum_{k=1}^{k^*}D_k\geq (\log n)^{-1/4}\r]=\mc O \l(e^{-\delta '' (\log n)^{1/8}}\r),$$ for some $\delta''\in (0,\delta)$.
Note that $$\sum_{k=1}^{k^*}E_k=\tau^{\text{br}}_{k^*}-k^*(\log n)^{-c}=h-k^*(\log n)^{-c}-(\tau_{k^*}^{\text{type}}-\tau_{k^*}^{\text{br}}),$$ with $0\leq \tau_{k^*}^{\text{type}}-\tau_{k^*}^{\text{br}}\leq (\log n)^{-c}$. Let $H=\sum_{k=1}^{k^*}E'_k$. Then by and , we have $$\P\l[|h(p,\Pi)-H|\geq (\log n)^{9/8-c}+(\log n)^{-c}+(\log n)^{-1/4}\r]=\mc O\l(e^{-\delta '' (\log n)^{1/8}}\r).$$ The result follows since $c\geq 3$.
Our next step is to show that a caterpillar is unlikely to end with an overshooting event.
\[caterpillar\_overshoot\] As $n \to \infty$, $\P \l[\tau_{k^*}^{\text{type}}= \tau_{k^*}^{\text{over}}\r]=\mc O \l((\log n)^{\frac{21}{8}-c}\r).$
By Lemma \[Pover\], for $k\geq 1$ $$\label{prob_undec}
\P[\tau_k^{\text{type}}=\tau_k^{\text{over}}]=\mc O((\log n)^{\frac{3}{2}-c}).$$ Moreover, $$\{\tau_{k^*}^{\text{type}}=\tau_{k^*}^{\text{over}}\}
\subset \{k^*\geq (\log n)^{9/8}\}\cup
\bigcup _{k=1}^{(\log n)^{9/8}}\{\tau_k^{\text{type}}=\tau_k^{\text{over}}\}.$$ It follows, using , that $$\begin{aligned}
\P[\tau_{k^*}^{\text{type}}=\tau_{k^*}^{\text{over}}]
&=\mc O(e^{-\delta(\log n)^{1/8}})+\mc O((\log n)^{\frac{3}{2} +\frac{9}{8}-c})=\mc O((\log n)^{\frac{21}{8}-c}).\end{aligned}$$ This completes the proof.
We now show that a single caterpillar can be coupled to a Brownian motion in such a way that the caterpillar closely follows the Brownian motion, during time $[0,h(p,\Pi)]$.
Recall that the rate at which $\xi^1$ jumps from $y$ to $y+z$ is given by intensity measure $m^n(dz)$, defined in . Thus for $(c_t)_{t\geq 0}$ started at $p$, $\E[c^1_t]=p$ and the covariance matrix of $c^1_t$ is $ \sigma ^2 t \text{Id}$ since by , $$\sigma ^2 = \tfrac{1}{2}\int _{\R^2}|z|^2 m^n (dz).$$
Armed with this, the following lemma is no surprise.
\[single\_caterpillar\] Let $(W_t)_{t\geq 0}$ be a two-dimensional Brownian motion with $W_0=p$. We can couple $(c _t (p,\Pi))_{t\leq h(p,\Pi)}$ with $(W_t)_{t\geq 0}$, in such a way that $(W_t)_{t\geq 0}$ is independent of $(\tau^{br}_k)_{k\geq 1}$ and $k^*(p,\Pi)$, and for any $r>0$, with probability at least $1-\mc O((\log n)^{-r})$, for $t\leq h(p,\Pi)$, $$|c^1_t(p,\Pi)-W_{\sigma^2 t}|\leq (\log n)^{\frac{9}{8}-\frac{c}{3}}.$$
By the definition of the caterpillar in Definition \[caterpillar\_defn\], for all $t\leq h(p,\Pi)$, $|c^2_t-c^1_t|\leq (\log n)^{-c}$. Hence under the coupling of Lemma \[single\_caterpillar\], with probability at least $1-\mc O((\log n)^{-r})$, $|c^2_t(p,\Pi)-W_{\sigma^2 t}|\leq 2(\log n)^{\frac{9}{8}-\frac{c}{3}}$.
The proof is closely related to the second half of the proof of Lemma \[exclengths\]. Note that for $k\geq 0$, on the time interval $[\tau^{br}_k+(\log n)^{-c},\tau^{br}_{k+1})$, $c^1_t$ is a pure jump process with rate of jumps from $y$ to $y+z$ given by $(1-\v{s_n})m^n(dz)$. Let $(\tilde c _t)_{t \geq 0}$ be a pure jump process with $\tilde c_0=0$ and rate of jumps from $y$ to $y+z$ given by $(1-\v{s_n})m^n(dz)$. For $i\geq 1$, let $$X_i = \tilde c _{i/n} - \tilde c _{(i-1)/n}.$$ Then $(X_i)_{i\geq 1}$ are i.i.d., and as in and , we have $\E[|X_1|^2]=\frac{2\sigma^2 (1-\v{s_n}) }{n}$ and $\E[|X_1|^4]= \mc O(n^{-2})$.
By the same Skorohod embedding argument as for , there is a two-dimensional Brownian motion $W$ started at $0$ and a sequence $\upsilon_1, \upsilon_2, \ldots $ of stopping times for $W$ such that for $i\geq 1$, $W_{\upsilon_i}=\tilde c_{i/n}$ and $$\P[|\upsilon_{\lfloor tn \rfloor}-\sigma ^2 (1-\v{s_n}) t|\geq n^{-1/3}]\leq \mc O(t n^{-1/3}).$$ Fix $t>0$. Since $\v{s_n}=\frac{\log n}{n}$, for $n$ sufficiently large, $$\P[|\upsilon_{\lfloor tn \rfloor}-\sigma ^2 t|\geq 2n^{-1/3}]\leq \mc O(n^{-1/3}).$$ Then by a union bound over $j=1,\ldots,\lfloor n^{1/4}t\rfloor$, $$\begin{aligned}
\P\l[\exists j\leq \lfloor n^{1/4}t \rfloor : |\upsilon_{\lfloor j n^{3/4} \rfloor} - \sigma ^2 j n^{-1/4}| \geq 2 n^{-1/3}\r]
&\leq (n^{1/4}t) \mc O(n^{-1/3})\label{union bound} \\
&=\mc O(n^{-1/12}).\notag\end{aligned}$$ Again by a union bound over $j$, $$\begin{aligned}
&\P\bigg[\exists j \leq \lfloor n^{1/4}t \rfloor:\,\sup\bigg\{|W_{\sigma ^2 j n^{-1/4}} - W_u |\notag\\
&\hspace{4pc}\-u\in [\sigma^2 j n^{-1/4}-2n^{-1/3},\sigma^2 (j+1) n^{-1/4}+2n^{-1/3}]\bigg\}\geq n^{-1/10} \bigg]\notag \\
&\hspace{2pc}\leq (n^{1/4} t) 2 \P\l[\sup \{|W_s-W_0|:s\in [0, 4n^{-1/3}]\}\geq \tfrac{1}{2}n^{-1/10}\r]\notag\\
&\hspace{2pc}\leq 4n^{1/4}t \exp (-n^{2/15}/128)=o(n^{-1/12}).\label{eq:W_sup}\end{aligned}$$ Here, the last line follows by .
Under the complement of the event of (\[union bound\]), for all $j<\lfloor n^{1/4}t\rfloor $, $$|\upsilon_{\lfloor j n^{3/4} \rfloor} - \sigma ^2 j n^{-1/4}| \leq 2n^{-1/3}
\mbox{ and }
|\upsilon_{\lfloor (j+1) n^{3/4} \rfloor} - \sigma ^2 (j+1) n^{-1/4}| \leq 2n^{-1/3},$$ which implies that for $i$ such that $jn^{-1/4}\leq i n^{-1}\leq (j+1)n^{-1/4}$, $$\upsilon_i\in \l[\sigma^2 j n^{-1/4} - 2n^{-1/3},\sigma^2 (j+1) n^{-1/4} + 2n^{-1/3}\r].$$ Hence combining (\[union bound\]) and (\[eq:W\_sup\]), $$\P\l[\exists i \leq \lfloor tn \rfloor : |\tilde c_{i/n} - W_{\sigma ^2 i/n}|\geq 2n^{-1/10}\r]=
\mc{O}(n^{-1/12}).$$
Our next step is to control $|\tilde c_s-\tilde c_{i/n}|$ during the interval $s\in[i/n, (i+1)/n]$. The distribution of the number of jumps made by $\tilde c$ on an interval $[i/n,(i+1)/n]$ is Poisson with parameter $(1-\v{s_n})\lambda$, where $\lambda$ is given by , and the maximum jump size is $2\mc{R}_n$; using (\[poisson tail\]) with $\chi=(1-\v{s_n})\lambda$ and $k=\log n$ gives that $$\P\l[\exists i\leq \lfloor tn \rfloor :\sup_{s\in [i/n,(i+1)/n]} |\tilde c_s - \tilde c _{i/n}| \geq (\log n)2 \mathcal R_n \r] =
o(n^{-1}).$$ Hence for $n$ large enough that $(\log n)2 \mathcal R_n\leq n^{-1/10}$, using again to bound $|W_s-W_{\sigma^2 i/n}|$ during the interval $[\sigma^2 i/n,\sigma^2 (i+1)/n]$ we have $$\label{eq:W_coupling}
\P\l[\sup_{s\leq t} |\tilde c_s - W_{\sigma ^2 s}| \geq 4n^{-1/10}\r]=\mc{O}(n^{-1/12}).$$
We now apply this coupling to $(c^1_t)_{\tau^{br}_k+(\log n)^{-c}\leq t \leq \tau^{br}_{k+1}}$ for each $k\geq 0$, and let the caterpillar evolve independently of the Brownian motion on each interval $[\tau^{\text{br}}_k, \tau^{\text{br}}_k+(\log n)^{-c}]$.
More precisely, let $(\tilde c ^k)_{k \geq 0}$ be an i.i.d. sequence of pure jump processes with $\tilde c ^k_0=0$ and rate of jumps from $y$ to $y+z$ given by $(1-\v{s_n})m^n(dz)$. Let $(W^k)_{k\geq 0}$ be an i.i.d. sequence of 2-dimensional Brownian motions started at $0$ and for each $k\geq 0$, couple $W^k$ and $\tilde c^k$ in the same way as above, so that for fixed $t>0$, for each $k\geq 0$, $$\label{eq:Wk_coupling}
\P\l[\sup_{s\leq t} |\tilde c^k_s - W^k_{\sigma ^2 s}| \geq 4n^{-1/10}\r]=\mc{O}(n^{-1/12}).$$ Then by the strong Markov property for the process $c^1$, we can couple $(\tilde c ^k,W^k)_{k\geq 0}$ and $c^1$ in such a way that for $k\geq 0$ and $s\in [0, \tau_{k+1}^{br}-(\tau_k^{br}+(\log n)^{-c}))$, $$c^1_{s+\tau_k^{br}+(\log n)^{-c}}-c^1_{\tau_k^{br}+(\log n)^{-c}}=\tilde c^k_s,$$ and $(\tilde c ^k,W^k)_{k\geq 0}$ is independent of $\big(\tau^{\text{br}}_k, (c^1_t-c^2_t)|_{[\tau^{\text{br}}_k,\tau^{\text{br}}_k +(\log n)^{-c})}\big)_{k\geq 0}$.
Let $B$ be another independent 2-dimensional Brownian motion started at 0. We now define a single Brownian motion $W$ by piecing together increments of $B$ and $(W^k)_{k \geq 0}$. For $s<\sigma^2(\log n)^{-c}$, let $W_s=B_s+p$. Then for $k\geq 0$, define the increments of $W$ on the time interval $[\sigma^2(\tau^{br}_k+(\log n)^{-c}),\sigma^2(\tau^{br}_{k+1}+(\log n)^{-c}))$ as follows. For $s \in [0,\sigma^2(\tau^{br}_{k+1}-\tau^{br}_k))$, let $$W_{s+\sigma^2(\tau^{br}_k+(\log n)^{-c})}-W_{\sigma^2(\tau^{br}_k+(\log n)^{-c})}=W^k_s.$$ Then $W$ is a Brownian motion independent of $\big(\tau^{\text{br}}_k, (c^1_t-c^2_t)|_{[\tau^{\text{br}}_k,\tau^{\text{br}}_k +(\log n)^{-c})}\big)_{k\geq 0}$, which implies that $W$ is independent of both $k^*$ and $(\tau^{\text{br}}_k)_{k\geq 1}$.
We now check that $W_t$ is close to $c^1_t$ for $t<h$. By , $$\P\l[\tau^{\text{br}}_{k+1}-\tau^{\text{br}}_k \geq 1+(\log n)^{-c}\r]\leq n^{-\lambda}.$$ Hence applying with $t=1+(\log n)^{-c}$ for each $k\leq (\log n)^{9/8}$ and using , we have that with probability at least $1- \mc O(e^{-\delta (\log n)^{1/8}})$, for $0\leq k\leq k^*$ and $t\in [\tau^{\text{br}}_k +(\log n)^{-c},\tau^{\text{br}}_{k+1})$, $$\label{after_lognc}
\l|\l(c^1_t- c^1_{\tau^{\text{br}}_k +(\log n)^{-c}}\r)-\l(W_{\sigma^2 t}-W_{\sigma^2(\tau^{\text{br}}_k +(\log n)^{-c})}\r)\r|\leq 4n^{-1/10}.$$ For each $k$, by , $$\begin{aligned}
&\P\l[\sup \l\{|W_{\sigma^2 t}-W_{\sigma ^2 \tau^{\text{br}}_k}|:t\in [\tau^{\text{br}}_k,\tau^{\text{br}}_k+(\log n)^{-c}]\r\}\geq \tfrac{1}{3}(\log n)^{-c/3}\r] \notag\\
&\hspace{5pc}\leq 4 \exp(-(\log n)^{c/3}/72 \sigma^2)\notag \\
&\hspace{5pc}=o\l((\log n)^{-r-\frac{9}{8}}\r), \label{bm_jump}\end{aligned}$$ for any $r>0$. Hence, using again, $$\begin{aligned}
\label{mult_bm_jumps}
&\P\l[\sum_{k=1}^{k^*}\,\sup\l\{|W_{\sigma^2 t}-W_{\sigma ^2 \tau^{\text{br}}_k}|:t\in [\tau^{\text{br}}_k,\tau^{\text{br}}_k+(\log n)^{-c}]\r\}\geq \tfrac{1}{3}(\log n)^{\frac{9}{8}-\frac{c}{3}}\r]\notag\\
&\hspace{2pc}\leq \P\l[k^*\geq (\log n)^{9/8}\r]+(\log n)^{9/8} o((\log n)^{-r-\frac{9}{8}})\notag\\
&\hspace{2pc}=o((\log n)^{-r}).\end{aligned}$$ For $k\geq 0$, on the time interval $ [\tau^{\text{br}}_k, \tau^{\text{br}}_k+(\log n)^{-c}]$ the process $c^1_t$ is a pure jump process with rate of jumps from $y$ to $y+z$ given by $m^n(dz)$. Hence using the same Skorohod embedding argument as for , we can couple $(c^1_{s+\tau^{\text{br}}_k}-c^1_{\tau^{\text{br}}_k})_{s\leq (\log n)^{-c}}$ with a Brownian motion $W'$ started at $0$ in such a way that $$\P\l[\sup_{s\leq (\log n)^{-c}} |(c^1_{s+\tau^{\text{br}}_k}-c^1_{\tau^{\text{br}}_k}) - W'_{\sigma ^2 s}| \geq 4n^{-1/10}\r]=\mc{O}(n^{-1/12}).$$
Applying and , it follows that $$\begin{gathered}
\P\bigg[\sum_{k=1}^{k^*}\,\sup\l\{|c^1_{t}-c^1_{\tau^{\text{br}}_k}|:t\in [\tau^{\text{br}}_k,\tau^{\text{br}}_k+(\log n)^{-c}]\r\}\notag\\
\hspace{6pc}\geq \tfrac{1}{3}(\log n)^{\frac{9}{8}-\frac{c}{3}}+4n^{-1/10}(\log n)^{9/8}\bigg]\\
= \mc{O}((\log n)^{-r}).\end{gathered}$$
The stated result follows by combining the above equation with , and .
### The branching caterpillar {#sec:branching_cat_sec}
We now construct a branching process of caterpillars. We start from a single caterpillar and allow it to evolve until time $h$; we then start two independent caterpillars from the locations of $c^1_h$ and $c^2_h$, and iterate. The independent caterpillars defined in this way will be indexed by points of $\mathcal U=\{\emptyset\}\cup\bigcup _{k=1}^\infty \{1,2\}^k$. More formally:
\[branching\_defn\]
Let $(\Pi_j)_{j\in \mathcal U}$ be a sequence of independent Poisson point processes on $\mathscr{X}$ with intensity measure . For $p\in \R^2$, we define $(\mathcal C _t (p,(\Pi_j)_{j\in \mathcal U}))_{t\geq 0}$ as a process on $\cup_{k=1}^\infty (\R^2)^k$ as follows. For $s>0$, let $$\begin{aligned}
\label{eq:Pij}
\Pi_j^s = \{(t-s,x,r,z_1,z_2,q,v):(t,x,r,z_1,z_2,q,v)\in \Pi_j\}.\end{aligned}$$ Define $(p_j,t_j,h_j)$ inductively for $j\in \mc U$ by $p_\emptyset =p$, $t_\emptyset =0$ and $$\begin{aligned}
h_j&=t_j+h(p_j,\Pi_j^{t_j})\\
t_{(j,1)}&=t_{(j,2)}=h_j\\
p_{(j,1)}&=c^1_{h_j-t_j}(p_j,\Pi_j^{t_j})\\
p_{(j,2)}&=c^{2}_{h_j-t_j}(p_j,\Pi_j^{t_j}).\end{aligned}$$ Finally, define $\mathcal U(t)=\{j\in \mathcal U:t_j\leq t \leq h_j\}$ and $$\mathcal C _t (p,(\Pi_j)_{j\in \mathcal U})=(c_{t-t_j}(p_j,\Pi_j^{t_j}))_{j\in \mathcal U(t)}.$$
In words, $\mc{U}(t)$ is the set of indices of the caterpillars that are active at time $t$, and $\mc{C}_t$ is the set of (positions of) those caterpillars. Note that we translate the time coordinates in to match our definition of a caterpillar, which began at time $0$. The jumps in $\mc{C}_t$ occur at the time coordinates of events in $\cup_{j\in \mc U}\Pi_j$.
We now show that for any constant $a>0$, with high probability, the longest ‘chain’ of caterpillars has length at most $a \log \log n+1$. For $k\in \N$, let $\mathcal U _k=\{\emptyset\}\cup \bigcup_{j=1}^k \{1,2\}^j$.
\[branching\_in\_loglog\] Fix $T>0$; then for any $r>0$, $a>0$, $\P[\mathcal U (T)\not\subseteq \mathcal U_{\lfloor a\log \log n \rfloor}]=o((\log n)^{-r})$.
Fix $v\in\{1,2\}^{\lfloor a\log \log n \rfloor+1}$. Then by a union bound, $$\label{contained in k levels}
\P\l[\exists w\in \{1,2\}^{\lfloor a\log \log n \rfloor+1}\text{ s.t. }t_w\leq T\r]\leq 2^{\lfloor a\log \log n \rfloor+1}\P[t_v\leq T].$$
Note that by Lemma \[lifetime\], $t_v = \sum_{i=1}^{\lfloor a\log \log n \rfloor+1}H_i +R$ where $(H_i)_{i \geq 1}$ are i.i.d. with $H_1\sim \text{Exp}(\lambda \kappa_n)$ and $$\P\l[R\geq 3(a\log \log n+1 )(\log n)^{-1/4}\r]=\mc O((\log \log n) e^{-\delta (\log n)^{1/8}}).$$ Hence (if $n$ is sufficiently large that $3(a\log \log n+1)(\log n)^{-1/4}\leq T/2$), if $Z'$ is Poisson with parameter $\lambda \kappa_n T/2$, $$\P[t_v\leq T]\leq \P[Z'\geq a\log \log n+1]+\mc O\l((\log \log n) e^{-\delta (\log n)^{1/8}}\r).$$ We use (\[poisson tail\]) and combine with (\[contained in k levels\]) to deduce that, for any $r>0$, $$\P[\mathcal U (T)\not\subseteq \mathcal U_{\lfloor a\log \log n \rfloor}]= \P\l[\exists w\in \{1,2\}^{\lfloor a\log \log n \rfloor+1}\text{ s.t. }t_w \leq T\r]=o((\log n)^{-r}).$$ This completes the proof.
The next task is to couple the branching caterpillar to the rescaled S$\Lambda$FV dual. Since we have expressed the dual as a deterministic function of the driving point process of events in Definition \[slfvs\_dual\_determ\], it is enough to find an appropriate coupling of the driving events for the branching caterpillar and those of a dual.
The idea, roughly, is as follows. Each ‘branch’ of the branching caterpillar is constructed from an independent driving process. For each of these we should like to retain those events that affected the caterpillar, but we can discard the rest. If two or more caterpillars are close enough that the events affecting them could overlap, to avoid having too many events in these regions we have to arbitrarily choose one caterpillar and discard the events affecting the others. We then supplement these with additional events, appropriately distributed to fill in the gaps and arrive at the driving Poisson point process for a dual, with intensity as in . We will then check that the dual corresponding to this point process coincides with our branching caterpillar, with probability tending to one as $n\rightarrow\infty$.
To put this strategy into practice we require some notation. Let $\mathcal U_0=\mathcal U \cup \{0\}$. For $V\subset \mc U_0$ let $\max(V)$ refer to the maximum element of $V$ with respect to a fixed ordering in which $0$ is the minimum value (it does not matter precisely which ordering we use, but we must fix one). Given a sequence $(\Pi_j)_{j\in \mathcal U_0}$ of independent Poisson point processes on $\mathscr{X}$ with intensity measure , define a simple point process $\Pi$ as follows. Let $$\label{jdefn}
j(t,x)=\max \l(\l\{k\in \mathcal U(t): \exists i \in \{1,2\}\text{ with }|c^i_{t-t_k}(p_k,\Pi_k^{t_k})-x|\leq \mathcal R_n\r\}\cup \{0\}\r).$$ Note that $j(t,x)=0$ corresponds to regions of space-time that are not near a caterpillar, so that for $(t,x,r,z_1,z_2,q,v)\in \Pi_0$, $\mc B_r(x)$ does not contain a caterpillar. Then we define $$\label{Pidefn}
\Pi = \bigcup\limits_{k\in \mc{U}_0}\l\{(t,x,r,z_1,z_2,q,v)\in \Pi_k : j(t,x)=k\r\}.$$
\[lemma\_build\_pp\] $\Pi$ is a Poisson point process with intensity measure given by .
We defined the coupling for each $n\in\N$. As such, in the proof of Lemma \[lemma\_build\_pp\] we regard $n$ as a constant and we will not include it inside $\mc{O}(\cdot)$, etc.
Let $\nu (dt,dx,dr,dz_1, dz_2,dq,dv)$ be the intensity measure given in .
Let $\mathcal B_0$ be the set of bounded Borel subsets of $\R_+ \times \R^2 \times \R_{+} \times \mathcal B_1(0)^2\times [0,1]^2$; for $B\in \mathcal B_0$, let $N(B)=|\Pi\cap B|$ and for $j\in \mathcal U_0$, let $N_j(B)=|\Pi_j\cap B|$. Suppose $B=\cup_{i=1}^k B_i\in \mathcal B_0$ where for each $i$, $B_i=[a_i,b_i]\times D_i$ for some $a=a_1<b_1\leq a_2<\ldots <b_k=b$. Let $\mathcal B_R\subset \mathcal B_0$ denote the collection of such sets $B$. Note that $\Pi$ is a simple point process, and that therefore $\Pi$ is a Poisson point process with intensity $\nu$ if and only if $$\label{eqn:ppp_criterion}
\P\l[N(B)=0\r]=e^{-\nu(B)}$$ for all $B\in\mc B_R$. (See e.g. Section 3.4 of [@Kingman1992].)
For some $\delta>0$, assume that $b_i-a_i\leq \delta $, $\forall i$ (by partitioning the $B_i$ further if necessary). Since $B$ is bounded, $\exists$ $ d<\infty$ s.t. $|x|\leq d$ for all $(t,x,r,z_1,z_2,q,v)\in B$. We can write $$\begin{aligned}
\label{void_prob}
\P[N(B)=0]&=\P[\cap_{i=1}^k \{N(B_i)=0\}]\nonumber \\
&=\E \l[\prod _{i=1}^{k-1}\1_{\{N(B_i)=0\}}\P\bigg(N(B_k)=0\bigg|(\Pi _j(a_k))_{j\in \mathcal U_0}\bigg) \r]\end{aligned}$$ where $\Pi _j(t):=\Pi_j |_{[0,t] \times \R^2 \times \R_{+} \times \mathcal B_1(0)^2\times [0,1]^2}$.
For $j\in \mathcal U_0$, let $D^j_k=\{(x,r,z_1,z_2,q,v)\in D_k:j(a_k,x)=j\}$ and $B^j_k=[a_k,b_k]\times D^j_k$. Also let $$\tilde B _k =[a_k,b_k] \times \mc B _{d+3\mc R_n}(0) \times \R_{+} \times \mathcal B_1(0)^2 \times [0,1]^2,$$ and let $\mathcal V (t)=\cup_{s\leq t}\mathcal U(s)$.
For $t\in [a_k,b_k]$, if none of the caterpillars in $\mc B_{d+3\mc R_n}(0)$ move during the time interval $[a_k,t]$ then $j(a_k,x)=j(t,x)$ $\forall x\in \mc B_d(0)$; thus a point $(t,x,r,z_1,z_2,q,v)$ in $\Pi \cap B_k$ must be a point in $\Pi_j \cap B^j_k$ for some $j$, and vice versa. We can use this observation to relate $\{N(B_k)=0\}$ and $\cap_{j\in \mathcal U_0}\{N_j(B^j_k)=0\}$, as follows.
If $N(B_k)=0$ and $N_j(B_k^j)\neq 0$ for some $j\in \mathcal U_0$, then $D^j_k\neq \emptyset$ so $j\in \mc V(a_k)\cup \{0\}$ (either $j=0$ or the caterpillar indexed by $j$ is alive at time $a_k$). Also after $a_k$ and before the point in $\Pi_j \cap B^j_k$, one of the caterpillars in $\mc B_{d+3\mc R_n}(0)$ must have moved, so there must be a point in $\Pi_l \cap \tilde B_k$ for some $l \in \mathcal V(b_k)$. Conversely, if $N_j(B^j_k)=0$ $\forall j\in \mathcal U_0$ and $N(B_k)\neq 0$, then there must be a point in $\Pi_l \cap \tilde B_k$ followed by either a point in $\Pi_0 \cap B_k$ or a point in $\Pi_{l'} \cap B_k$ for some $l,l' \in \mathcal V(b_k)$. Hence $$\begin{aligned}
\label{symmetric_diff}
&\{N(B_k)=0\}\triangle (\cap_{j\in \mathcal U_0}\{N_j(B^j_k)=0\})\subset \l\{N_0(B_k)+
\sum_{l\in \mathcal V (b_k)}N_l (\tilde{B_k})\geq 2\r\}.\end{aligned}$$ Note that by the definition of a caterpillar in Definition \[caterpillar\_defn\], for each $j\in \mc U$, $h(p_j,\Pi^{t_j}_j)\geq (\log n)^{-c}$. It follows that $\mc V (b_k)\subseteq \bigcup _{m=0}^{\lceil b_k (\log n)^c \rceil }\{1,2\}^m$. Also if $J\subset \mathcal U_0$ with $|J|=K$ then $\sum_{j\in J}N_j (\tilde B_k)$ has a Poisson distribution with parameter $K\nu (\tilde{B_k})$, and since $b_k-a_k \leq \delta$, $\nu(\tilde B_k)\leq n^2 \pi(d+3\mathcal R_n)^2 \mu((0,\mathcal R])\delta$. Hence for $Z'$ a Poisson random variable with parameter $(2^{2+b_k (\log n)^c}+1)\nu(\tilde B_k)=\mc O (\delta)$, $$\P\l[N_0(B_k)+\sum_{j\in \mathcal V (b_k)}N_j (\tilde{B_k})\geq 2\bigg|(\Pi _j(a_k))_{j\in \mathcal U_0}\r]\leq \P\l[Z'\geq 2\r]=\mc{O}(\delta^2).$$ By , we now have that $$\begin{aligned}
\P[N(B_k)=0|(\Pi _j(a_k))_{j\in \mathcal U_0}]&=\P[\cap_{j\in \mathcal U_0}\{N_j(B^j_k)=0\}]+\mc{O}(\delta^2)\\
&=\prod _{j\in \mathcal U_0} \exp (-\nu (B^j_k))+\mc{O}(\delta^2)\\
&=\exp(-\nu (B_k))+\mc{O}(\delta^2).\end{aligned}$$ Substituting this into and then repeating the same argument for $k-1,k-2,\ldots,1$, $$\begin{aligned}
\P[N(B)=0]&=\prod _{i=1}^k \exp(-\nu (B_k))+ \sum_{i=1}^k \mc{O}(\delta^2)\\
&= \exp (-\nu (B))+ k \mc{O}(\delta^2).\end{aligned}$$ By partitioning $B$ further, we can let $\delta \rightarrow 0$ with $k=\Theta(1/\delta)$. It follows that $\P[N(B)=0]=\exp (-\nu(B))$. By , this completes the proof.
It follows immediately from Lemma \[lemma\_build\_pp\] that the collection of potential ancestral lineages in $(\mathcal P _t (p,\Pi))_{t\geq 0}$ has the same distribution as $\mc P^{(n)}(p)$, the rescaled dual. We now show that under this coupling the rescaled dual and branching caterpillar coincide with high probability.
We consider $(\mathcal C _t (p,(\Pi_j)_{j\in \mathcal U}))_{0\leq t\leq T}$ as a collection of paths as follows. The set of paths through a single caterpillar $(c _t (p,\Pi))_{t\leq h(p,\Pi)}$ with $k^*(p,\Pi)=k^*$ is given by $\{l^i\}_{i\in \{1,2\}^{k^*}}$, where $l^i(t)=c ^{1}_t (p,\Pi)$ for $t\in [0,(\log n)^{-c}]$ and for each $1\leq k \leq k^*$, $l^i(t)=c ^{i_k}_t (p,\Pi)$ for $t\in [\tau^\text{br}_{k-1}+(\log n)^{-c},(\tau^\text{br}_{k}+(\log n)^{-c})\wedge h(p,\Pi)]$. Then the collection of paths through $(\mathcal C _t (p,(\Pi_j)_{j\in \mathcal U}))_{0\leq t\leq T}$ is given by concatenating paths through the individual caterpillars, i.e. paths $l:[0,T]\to \R^2$ such that for some sequence $(u_m)_{m\geq 0}\subset \mc U$ with $u_{m+1}=(u_m,i_m)$ for some $i_m\in \{1,2\}$ for each $m$, for $t\in [t_{u_m},h_{u_m}]$, $l(t)$ follows a path through $(c_{t-t_{u_m}}(p_{u_m},\Pi^{t_{u_m}}_{u_m}))_t$ with $l(h_{u_m})=p_{u_{m+1}}$.
\[coupling with bc\] Fix $T>0$. Let $(\Pi_j)_{j\in \mathcal U_0}$ be independent Poisson point processes with intensity measure and let $\Pi$ be defined from $(\Pi_j)_{j\in \mathcal U_0}$ as in . Then $(\mathcal C _t (p,(\Pi_j)_{j\in \mathcal U}))_{0\leq t\leq T}$ and $(\mathcal P _t (p,\Pi))_{0\leq t\leq T}$, viewed as collections of paths, are equal with probability at least $1-\mc{O}((\log n)^{-1/4})$.
We shall use Lemma \[branching\_in\_loglog\] with $a=(16\log 2)^{-1}$. Writing, for $j\in \mathcal U$, $k^*(j)=k^*(p_j, \Pi_j^{t_j})$, the number of branching events in $c_{t-t_j}(p_j,\Pi_j^{t_j})$ before $h_j$, by a union bound over $\mc U_{\lfloor a\log \log n \rfloor }$ and , $$\begin{aligned}
\P[\exists j \in \mathcal U_{\lfloor a\log \log n \rfloor}: k^*(j)&\geq (\log n)^{9/8}]\leq 2^{2+a\log \log n}\mc O(e^{-\delta (\log n)^{1/8}})\notag\\
&=\mc{O}(e^{-\delta (\log n)^{1/8}/2}).\label{multi_kstar_bound}\end{aligned}$$ Let $(\tau^{\text{br}}_k(j))_{k\geq 1}$ denote the sequence of branching events in $c_{t-t_j}(p_j,\Pi_j^{t_j})$, and similarly define $(\tau^{\text{type}}_k(j))_{k\geq 1}$ and $(\tau^{\text{over}}_k(j))_{k\geq 1}$ as in . Note that $(\mathcal C_t)_{t\leq T}$ and $(\mathcal P_t)_{t\leq T}$ only differ as collections of paths if either a selective event affects a caterpillar during a time interval in which it ignores branching, or if two different caterpillars are simultaneously within $\mc R_n$ of some $x\in \R^2$ and so one of them is not driven by the pieced together Poisson point process $\Pi$. More formally, if $(\mathcal C_t)_{t\leq T}$ and $(\mathcal P_t)_{t\leq T}$ differ as collections of paths then one or more of the following events occurs.
1. $\mathcal U (T)\not\subseteq \mathcal U_{\lfloor a\log \log n \rfloor}$ or $k^*(j)\geq (\log n)^{9/8}$ for some $j\in \mathcal U_{\lfloor a\log \log n \rfloor}$.
2. For some $j\in \mathcal U_{\lfloor a\log \log n \rfloor}$ and $k\leq (\log n)^{9/8}$, the event $E_1(j,k)$ occurs: one of the lineages $c^{1}_{t-t_j}(p_j,\Pi_j^{t_j})$ and $c^{2}_{t-t_j}(p_j,\Pi_j^{t_j})$ is affected by a selective event in the time interval $[\tau^{\text{br}}_k(j),\tau^{\text{br}}_k(j)+(\log n)^{-c}]$.
3. For some $w\neq v\in \mathcal U_{\lfloor a\log \log n \rfloor}$, the event $E_2(v,w)$ occurs: there are $i_1,i_2\in \{1,2\}$ with $|c^{i_1}_{t-t_w}(p_w,\Pi_w^{t_w})-c^{i_2}_{t-t_v}(p_v,\Pi_v^{t_v})|\leq 2\mathcal R_n$ for some $t\leq T$.
Recall from and that selective events affect a single lineage with rate $\lambda \log n$. Hence for $k\in \N$ and $j\in \mathcal U$, $\P[E_1(j,k)]=\mc{O}((\log n)^{1-c})$.
We now consider the event $E_2(v,w)$. For $w\neq v \in \mc U$, let $i= \min \{j\geq 1:w_j \neq v_j \}$. Then let $$w \wedge v = \begin{cases} (w_1,\ldots , w_{i-1}) \quad \text{if }i\geq 2\\
\emptyset \quad \text{if }i=1.
\end{cases}$$ At time $h_{w\wedge v}$, either $\tau_{k^*(w\wedge v)}^{\text{type}}(w\wedge v)=\tau_{k^*(w\wedge v)}^{\text{over}}(w\wedge v)$ or $\tau_{k^*(w\wedge v)}^{\text{type}}(w\wedge v)=\tau_{k^*(w\wedge v)}^{\text{div}}(w\wedge v)$, in which case $|p_{(w\wedge v,1)}-p_{(w\wedge v,2)}|\geq (\log n)^{-c}$. Conditional on $|p_{(w\wedge v,1)}-p_{(w\wedge v,2)}|\geq (\log n)^{-c}$, for $i_1$, $i_2\in \{1,2\}$, $$\l(c^{i_1}_{t-t_w}(p_w,\Pi_w^{t_w}),c^{i_2}_{t-t_v}(p_v,\Pi_v^{t_v})\r)_{t\in[t_w,h_w]\cap [t_v,h_v]\cap [0,T]}$$ is part of the pair of potential ancestral lineages of an excursion started at time $h_{w\wedge v}$ with initial displacement at least $(\log n)^{-c}$. Hence by Lemmas \[Pinteract\] and \[caterpillar\_overshoot\], $$\P[E_2(w,v)]=\mc{O}\l(\frac{\log \log n}{\log n}\r)+\mc O \l((\log n)^{\frac{21}{8}-c}\r)=\mc O \l((\log n)^{-3/8}\r)$$ since $c\geq 3$. By a union bound, and using Lemma \[branching\_in\_loglog\] and it follows that $$\begin{aligned}
&\P\l[(\mathcal C_t)_{t\leq T} \neq (\mathcal P_t)_{t\leq T}\r]\\
&\hspace{2pc}\leq o((\log n)^{-1})+4(\log n)^{a\log 2 +{9/8}}\P[E_1(j,k)]
+16(\log n)^{2a\log 2}\P[E_2(w,v)]\\
&\hspace{2pc}=\mc{O}\l((\log n)^{a\log 2 +\frac{9}{8}+1-c}\r)+\mc{O}\l((\log n)^{2a\log 2-3/8}\r)\\
&\hspace{2pc}=\mc{O}\l((\log n)^{-1/4}\r),\end{aligned}$$ by our choice of $a=(16\log 2)^{-1}$ and since $c\geq 3$.
We are now ready to complete the proof of Theorem \[result d>1\].
(Of Theorem \[result d>1\]) Set $c=4$. By Lemmas \[lemma\_build\_pp\] and \[coupling with bc\], we have a coupling of the rescaled S$\Lambda$FV dual and the branching caterpillar under which the two processes are equal (as collections of paths) with probability at least $1-\mc{O}((\log n)^{-1/4})$.
We now couple $(\mathcal C _t (p,(\Pi_j)_{j\in \mathcal U}))_{0\leq t\leq T}$ to a branching Brownian motion with branching rate $\lambda \kappa_n$. Let $((W_t^j)_{t\geq 0},H_j)_{j\in \mathcal U}$ be an i.i.d. sequence, where $(W^j_t)_{t\geq 0}$ is a Brownian motion starting at $0$ and $H_j\sim \text{Exp}(\lambda \kappa_n )$ independent of $(W^j_t)_{t\geq 0}$. For each $j\in \mc U$, we couple $(c_{t-t_j}(p_j,\Pi_j^{t_j}))_{t \in [t_j,h_j]}$ to $((W_t^j)_{t\geq 0},H_j)$ as in Lemmas \[lifetime\] and \[single\_caterpillar\].
For $j\in \mc U$, let $A_1(j)$ be the event that both $|(h_j-t_j)-H_j|\leq 3(\log n)^{-1/4}$ and for $i=1,2$ and $t\in [t_j,h_j]$, $$\l|(c^i_{t-t_j}(p_j,\Pi_j^{t_j})-p_j)-W^j_{\sigma^2 (t-t_j)}\r|\leq 2(\log n)^{\frac{9}{8}-\frac{c}{3}}=2(\log n)^{-5/24}.$$ By Lemmas \[lifetime\] and \[single\_caterpillar\], for any $r>0$, for each $j\in \mc U$, $\P[A_1(j)] \geq 1-\mc{O}\l((\log n)^{-r}\r)$. Hence, taking a union bound over $j\in \mathcal U_{\lfloor \log \log n \rfloor}$, $$\P[\cap_{j\in \mathcal U_{\lfloor \log \log n \rfloor}} A_1(j)]\geq 1- \mc{O}((\log n)^{\log 2-r}).$$ Also, for $j\in \mc U$, define the event $$\begin{aligned}
&A_2(j)=\bigg\{\sup_{t\in [0,3(\log n)^{-1/4}]}|W_{\sigma^2 t}^j|\\
&\hspace{8pc}+\sup_{t\in [H_j-3(\log n)^{-1/4},H_j]}|W_{\sigma^2 t}^j-W_{\sigma^2 H_j}^j|\leq (\log n)^{-1/9}\bigg\}. \end{aligned}$$ Then by another union bound over $\mathcal U_{\lfloor \log \log n \rfloor}$, since for a Brownian motion $(W_t)_{t\geq 0}$ started at $0$, $\P \l[ \sup_{t\in [0,3(\log n)^{-1/4}]}|W_t| \geq \tfrac{1}{2}(\log n)^{-1/9}\r]=o((\log n)^{-r})$, we have that $$\P[\cap_{j\in \mathcal U_{\lfloor \log \log n \rfloor}} A_2(j)]\geq 1- \mc{O}((\log n)^{\log 2-r}).$$ By Lemma \[branching\_in\_loglog\], $\P[\mathcal U (T)\not\subseteq \mathcal U_{\lfloor \log \log n \rfloor}]=o((\log n)^{-r}) $.
Define a branching Brownian motion starting at $p$ with diffusion constant $\sigma^2$ from $((W_t^j)_{t\geq 0},H_j)_{j\in \mathcal U}$ by letting the increments of the initial particle be given by $(W^{\emptyset}_{\sigma^2 t})_{t\geq 0}$ until time $H_\emptyset$, when it is replaced by two particles which have lifetimes $H_1$ and $H_2$ and increments given by $(W^{1}_{\sigma^2 t})_{t\geq 0}$, $(W^{2}_{\sigma^2 t})_{t\geq 0}$ and so on.
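For intuition, the branching Brownian motion just described is straightforward to simulate. A one-dimensional Euler-scheme sketch (the rate, variance and step size below are illustrative assumptions, not values from the proof):

```python
import math
import random

def branching_bm(T, rate, sigma2=1.0, dt=1e-3, rng=None):
    """One-dimensional binary branching Brownian motion on [0, T]:
    each particle diffuses with variance sigma2 per unit time and splits
    into two independent copies at exponential rate `rate`."""
    rng = rng or random.Random()
    std = math.sqrt(sigma2 * dt)
    particles = [0.0]
    for _ in range(int(round(T / dt))):
        nxt = []
        for x in particles:
            x += rng.gauss(0.0, std)   # Brownian increment over one step
            nxt.append(x)
            if rng.random() < rate * dt:   # branch with probability ~ rate*dt
                nxt.append(x)
        particles = nxt
    return particles
```

With `rate = 0` this reduces to a single Brownian path; with positive rate the expected population at time $T$ grows like $e^{\text{rate}\, T}$.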
If $\mathcal U (T)\subseteq \mathcal U_{\lfloor \log \log n \rfloor}$ and $A_1(j)\cap A_2(j)$ occurs for each $j\in \mc U_{\lfloor \log \log n \rfloor}$, each path in the branching caterpillar stays within distance $2(\log \log n+1)(\log n)^{-1/9}+2(\log \log n+1)(\log n)^{-5/24}$ of some path through the branching Brownian motion and vice versa.
Setting $r=\log 2 +1/4$ gives us a coupling between the branching caterpillar and branching Brownian motion (with diffusion constant $\sigma^2$ and branching rate $\kappa_n \lambda$) such that with probability at least $1-\mc O((\log n)^{-1/4})$, up to time $T$ each path in the rescaled dual stays within distance $2(\log \log n)(\log n)^{-1/9}+2(\log \log n)(\log n)^{-5/24}$ of some path through the branching Brownian motion and vice versa. Finally, we need to couple this branching Brownian motion up to time $T$ with a branching Brownian motion with branching rate $\kappa \lambda$. By , $\kappa_n \to \kappa$ as $n\to \infty$, so this follows by straightforward bounds on the difference between the branching times and the increments of a Brownian motion during such a time.
---
abstract: 'This report summarises our method and validation results for the ISIC Challenge 2018 - Skin Lesion Analysis Towards Melanoma Detection - Task 1: Lesion Segmentation. We present a two-stage method for lesion segmentation with an optimised training procedure and ensemble post-processing. Our method achieves state-of-the-art performance on lesion segmentation and won first place in ISIC 2018 Task 1.'
author:
- Chengyao Qian
- Ting Liu
- Hao Jiang
- Zhe Wang
- Pengfei Wang
- Mingxin Guan
- Biao Sun
bibliography:
- 'cy\_isic\_manuscript.bib'
title: A Detection and Segmentation Architecture for Skin Lesion Segmentation on Dermoscopy Images
---
Introduction
============
Melanoma is the most dangerous type of skin cancer, causing almost 60,000 deaths annually. In order to improve the efficiency, effectiveness, and accuracy of melanoma diagnosis, the International Skin Imaging Collaboration (ISIC) provides over 2,000 dermoscopic images of various skin problems for lesion segmentation, disease classification and other related research.
Lesion segmentation aims to extract the lesion boundary from dermoscopic images to assist experts in diagnosis. In recent years, U-Net, FCN and other deep learning methods have been widely used for medical image segmentation. However, existing algorithms are limited in lesion segmentation because of the varied appearance of lesions, caused by the diversity of patients and collection environments. FCN, U-Net and other one-stage segmentation methods are sensitive to the size of the lesion: lesions that are too large or too small decrease the accuracy of these one-stage methods. A two-stage method can reduce the negative influence of the diverse lesion sizes. Mask R-CNN can be viewed as a two-stage method and has outstanding performance on COCO. However, Mask R-CNN still has some drawbacks for lesion segmentation. Unlike the clear boundaries between objects in the COCO data, the boundary between lesion and healthy skin is vague in this challenge. The vague boundary reduces the accuracy of the RPN part of Mask R-CNN, which may in turn harm the subsequent segmentation part. Furthermore, the low resolution of the input to the segmentation branch of Mask R-CNN also reduces the accuracy of segmentation.
In this report, we propose a method for lesion segmentation. Our method is a two-stage process comprising detection and segmentation. The detection part locates the lesion and crops it from the image. Following detection, the segmentation part segments the cropped image and predicts the region of the lesion. Furthermore, we also propose an optimised process for cropping images: instead of cropping the image exactly at the bounding box, in training we crop the image with a random expansion or contraction to increase the robustness of the neural networks. Finally, image augmentation and other ensemble methods are used in our method. Our method is based on deep convolutional networks and trained on the dataset provided by ISIC [@Tschandl2018_HAM10000] [@DBLP:journals/corr/abs-1710-05006].
Materials and Methods
=====================
{height="6cm" width="75.00000%"}
Database and Metrics
--------------------
For lesion segmentation, ISIC 2018 provides 2594 images with corresponding ground truth for training. There are 100 images for validation and 1000 images for testing without ground truth; the number of testing images this year is 400 more than in 2017. The size and aspect ratio of the images vary, and the lesions differ in appearance and occur on different parts of the body. The ground truth was produced under three criteria: a fully-automated algorithm, a semi-automated flood-fill algorithm, and manual polygon tracing. Ground truth labelled under different criteria has different boundary shapes, which is a challenge in this task. We split the whole training set into two sets with a ratio of 10:1.
The evaluation metric of this challenge is the Jaccard index, shown in Equation \[eq:jaccard\]. In 2018, the organiser adds a penalty when the Jaccard index is below 0.65. $$J(A,B) =
\begin{cases}
\frac{|A \bigcap B|}{|A \bigcup B|} & J(A,B) \geq 0.65 \\
0 & \text{otherwise}\\
\end{cases}
\label{eq:jaccard}$$
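A minimal NumPy implementation of this thresholded metric (a sketch for illustration, not the official evaluation code):

```python
import numpy as np

def thresholded_jaccard(pred, gt, threshold=0.65):
    """Jaccard index of two binary masks, zeroed out below the ISIC 2018
    penalty threshold of 0.65."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:               # both masks empty: define J = 1
        return 1.0
    j = np.logical_and(pred, gt).sum() / union
    return j if j >= threshold else 0.0
```

Note that a prediction with, say, $J=0.64$ scores exactly as much as an empty mask, which is why the 0.65 penalty dominates the ranking.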
Methods
-------
We design a two-stage process combining detection and segmentation; Figure \[fig:Process\] shows the pipeline. Firstly, the detection part detects the location of the lesion with the highest probability. According to the bounding box, the lesion is cropped from the original image and the crop is resized to 512\*512, the input size of the segmentation part. The segmentation part then provides a fine segmentation boundary for the cropped image.
### Detection Part
In the detection process, we use the detection part of Mask R-CNN [@DBLP:journals/corr/HeGDG17]. We also use the segmentation branch of Mask R-CNN to supervise the training of the network.
![Detection[]{data-label="fig:Detection"}](detection.png){width="40.00000%"}
### Segmentation Part
In the segmentation part, we design an encoder-decoder network architecture inspired by DeepLab [@DBLP:journals/corr/ChenPK0Y16], PSPNet [@DBLP:journals/corr/ZhaoSQWJ16], DenseASPP [@Yang_2018_CVPR] and Context Contrasted Local [@Ding_2018_CVPR]. Our architecture is shown in Figure \[fig:Segmentation\]. Features are extracted by an extended ResNet-101 with three cascading blocks. After the ResNet, a modified ASPP is used to fuse features at various scales. We also use a skip connection to transfer detailed information.
![Segmentation[]{data-label="fig:Segmentation"}](segmentation.png){width="50.00000%"}
Figure \[fig:ASPP\] shows the structure of the modified ASPP block. A 1x1 convolutional layer is used to fuse the features extracted by the ResNet and reduce the number of feature maps. After that, a modified ASPP block extracts information at various scales. The modified ASPP has three parts: dense ASPP, standard convolution layers and pooling layers. Dense ASPP was proposed by [@Yang_2018_CVPR] and reduces the influence of the margins in ASPP. Considering the vague boundary and low-contrast appearance of the lesion, we add standard convolution layers to enhance the ability of the network to distinguish the boundary. The aim of the pooling layers is to let the network consider the surrounding area of the low-contrast lesion. The modified ASPP includes three dilated convolutions with rates 3, 6 and 12 respectively, three standard convolution layers with sizes 3, 5 and 7 respectively, and four pooling layers with sizes 5, 9, 13 and 17 respectively.
![Modified ASPP[]{data-label="fig:ASPP"}](ASPP.png){height="13cm" width="40.00000%"}
Pre-processing
--------------
Instead of using only the RGB channels, we combine the S and V channels of the Hue-Saturation-Value (HSV) colour space and the L, a and b channels of the CIELAB space with the RGB channels. These 8 channels form the input of the segmentation part. Figure \[fig:hsvlab\] shows the individual channels of the HSV and CIELAB colour spaces.
![Single channel in HSV and CIELAB colour space[]{data-label="fig:hsvlab"}](hsvlab.png){width="40.00000%"}
Post-processing
---------------
Ensembling is used as post-processing to improve segmentation performance. The input image of the segmentation part is rotated by 90 and 180 degrees and flipped to generate three additional images. Each image has a result predicted by the segmentation part. The predictions for the rotated and flipped images are rotated and flipped back to the original orientation. The final mask is the average of the four results.
![Ensemble[]{data-label="fig:Ensamble"}](ensemable.png){width="50.00000%"}
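A sketch of this test-time ensemble in NumPy (the `predict` callable stands in for the trained segmentation network, which is an assumption of this illustration):

```python
import numpy as np

def tta_predict(image, predict):
    """Average the network's predictions over rotations by 90 and 180 degrees
    and a left-right flip, undoing each transform before averaging."""
    variants = [
        (lambda x: x,              lambda m: m),
        (lambda x: np.rot90(x, 1), lambda m: np.rot90(m, -1)),
        (lambda x: np.rot90(x, 2), lambda m: np.rot90(m, -2)),
        (lambda x: np.fliplr(x),   lambda m: np.fliplr(m)),
    ]
    masks = [undo(predict(fwd(image))) for fwd, undo in variants]
    return np.mean(masks, axis=0)
```

Because every transform is undone before averaging, a perfectly equivariant predictor is left unchanged by the ensemble; in practice the four predictions differ and the average smooths out their disagreements.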
Training
--------
Image augmentation is used in the training of both detection and segmentation. Rotation, colour jitter, flipping, cropping and shearing are applied to each image. Each channel of the images is scaled to the range 0 to 1. In the segmentation part, the input images are resized to 512x512. Some examples are shown in Figure \[fig:aug1\].
[.25]{} ![Examples for image augmentation[]{data-label="fig:aug1"}](fliplr.png "fig:"){width="0.4\linewidth"}
[.25]{} ![Examples for image augmentation[]{data-label="fig:aug1"}](rotate45.png "fig:"){width="0.4\linewidth"}
We use the Adam optimiser and set the learning rate to 0.001; the learning rate is decayed to 92% of its previous value after each epoch. The batch size is 8. We stop training early when the network starts to overfit. We use the dice loss shown in Equation \[eq:diceloss\], where $p_{i,j}$ is the prediction at pixel $(i,j)$ and $g_{i,j}$ is the ground truth at pixel $(i,j)$.
$$\label{eq:diceloss}
L = -\frac{\sum_{i,j}(p_{i,j} g_{i,j})}{\sum_{i,j}p_{i,j} + \sum_{i,j}g_{i,j}-\sum_{i,j}(p_{i,j} g_{i,j})}$$
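A NumPy sketch of this loss (note that, as written, the denominator is the soft union, so the formula is the soft Jaccard/IoU loss rather than the classic Dice formulation; we keep the paper's form, and the `eps` stabiliser is our addition):

```python
import numpy as np

def soft_jaccard_loss(pred, gt, eps=1e-7):
    """Loss of Equation (diceloss): negative soft intersection over soft union.
    `pred` holds probabilities in [0, 1]; `gt` is the binary ground truth."""
    inter = (pred * gt).sum()
    union = pred.sum() + gt.sum() - inter
    return -inter / (union + eps)
```
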
The input image of the segmentation part is cropped randomly in a range around the bounding box predicted by the detection part, in order to improve the diversity of the input images and provide more background context. In training, the crop ranges randomly from 81% to 121% of the bounding box, as shown in Figure \[fig:crop\].
![Crop image[]{data-label="fig:crop"}](crop.png){width="30.00000%"}
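One plausible way to realise this random expansion and contraction is to scale each side of the box independently by a factor in $[0.9, 1.1]$, so the crop area ranges over roughly 81% to 121% of the box; the report does not specify the exact scheme, so this is a sketch:

```python
import random

def jittered_box(x0, y0, x1, y1, lo=0.9, hi=1.1, rng=random):
    """Randomly expand or contract a detection box about its centre before
    cropping, as in the training procedure described above (hypothetical
    parameterisation: independent per-axis scale factors in [lo, hi])."""
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    w = (x1 - x0) * rng.uniform(lo, hi)
    h = (y1 - y0) * rng.uniform(lo, hi)
    return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
```
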
{width="90.00000%"}
Implementation
--------------
The detection part of our method is implemented using PyTorch 0.4 on Ubuntu 14.04; the framework is from <https://github.com/roytseng-tw/Detectron.pytorch>. The segmentation part is implemented using PyTorch 0.3.1 on Ubuntu 14.04. The neural networks are trained on two Nvidia 1080 Ti GPUs on a machine with 60 GB of RAM.
Results
=======
  Method                     Jaccard   Jaccard(>0.65)
  ------------------------   -------   ----------------
  Mask R-CNN                 0.825     0.787
  One-stage Segmentation     0.820     0.783
  Two-stage Segmentation     0.846     0.816
: Evaluation metrics of different segmentation methods
\[table:results\]
The evaluation metrics on 257 validation images for Mask R-CNN, our one-stage segmentation method and our two-stage segmentation method are shown in Table 1. The thresholded Jaccard of our two-stage method on the official testing set is 0.802. Figure \[fig:result\] shows the outputs of the different segmentation methods. Compared with the other methods, the results of our two-stage method have better localisation and smoother edges.
---
author:
- Marco Bochicchio
title:
-
- ' Glueball and meson propagators of any spin in large-$N$ $QCD$'
---
Introduction and Conclusions
============================
An asymptotic structure theorem for glueball and meson propagators of any spin in large-$N$ $QCD$
-------------------------------------------------------------------------------------------------
Firstly, we prove in sect.(3) an asymptotic structure theorem for glueball and meson propagators of any integer spin in the ’t Hooft large-$N$ limit of $QCD$ with massless quarks. In fact, the asymptotic theorem applies also to large-$N$ $\mathcal{N}=1$ $SUSY$ $QCD$ with massless quarks, or to any large-$N$ confining asymptotically-free gauge theory that is massless to every order of perturbation theory.
Because of confinement we assume that the spectrum of glueball and meson masses for fixed integer spin $s$ is a discrete diverging sequence $ \{ m^{(s)}_n \}$ at the leading large-$N$ order. At the same time we assume that the spectrum $ \{ m^{(s)}_n \}$ is characterized by a smooth renormalization group ($RG$) invariant asymptotic spectral density of the masses squared $\rho_s(m^{2})$ for large masses and fixed spin, with dimension of the inverse of a mass squared, defined by: $$\sum_{n=1}^{\infty} f(m^{(s)2}_n) \sim \int_{1}^{\infty} f(m^{(s)2}_n)\, dn = \int_{m^{(s)2}_1}^{\infty} f(m^2)\, \rho_s(m^2)\, dm^2$$ for any test function $f$. The symbol $\sim$ in this paper always means asymptotic equality in some specified sense up to perhaps a constant overall factor.
The asymptotic theorem reads as follows.
The connected two-point Euclidean correlator of a local single-trace gauge-invariant operator $\mathcal{O}^{(s)}$, of integer spin $s$ and naive mass dimension $D$ and with anomalous dimension $\gamma_{\mathcal{O}^{(s)}}(g)$, must factorize asymptotically for large momentum, and at the leading order in the large-$N$ limit, over the following poles and residues: $$\label{eq:1}
\int \langle \mathcal{O}^{(s)}(x)\, \mathcal{O}^{(s)}(0) \rangle_{conn}\, e^{-ipx}\, d^4x \sim \sum_{n=1}^{\infty} P^{(s)} \Big( \frac{p_{\alpha}}{m^{(s)}_n} \Big)\, \frac{ m^{(s) 2D-4}_n\, \rho_s^{-1}(m^{(s)2}_n)\, Z^{(s)2}_n }{ p^2 + m^{(s)2}_n }$$ where $ P^{(s)} \big( \frac{p_{\alpha}}{m^{(s)}_n} \big)$ is a dimensionless polynomial in the four momentum $p_{\alpha}$ that projects on the free propagator of spin $s$ and mass $m^{(s)}_n$ and: $$\gamma_{\mathcal{O}^{(s)}}(g) = -\frac{\partial \log Z^{(s)}}{\partial \log \mu} = -\gamma_{0}\, g^2 + O(g^4)$$ with $Z_n^{(s)}$ the associated renormalization factor computed on shell, i.e. for $p^2=m^{(s)2}_n$: $$Z_n^{(s)} \equiv Z^{(s)}(m^{(s)}_n)$$ The sum in the $RHS$ of Eq.(\[eq:1\]) is in fact badly divergent, but the divergence is a contact term, i.e. a polynomial of finite degree in momentum. Thus the infinite sum in the $RHS$ of Eq.(\[eq:1\]) makes sense only after subtracting the contact terms (see remark below Eq.(\[eq:2\])). Fourier transforming Eq.(\[eq:1\]) in the coordinate representation projects away for $x\neq 0$ the contact terms and avoids convergence problems: $$\label{eq:0}
\langle \mathcal{O}^{(s)}(x)\, \mathcal{O}^{(s)}(0) \rangle_{conn} \sim \sum_{n=1}^{\infty} \int P^{(s)} \Big( \frac{p_{\alpha}}{m^{(s)}_n} \Big)\, \frac{ m^{(s) 2D-4}_n\, \rho_s^{-1}(m^{(s)2}_n)\, Z^{(s)2}_n }{ p^2 + m^{(s)2}_n }\, e^{ipx}\, d^4p$$ In fact, the coordinate representation is the most convenient to get an actual proof of the asymptotic theorem, as we will see in sect.(3).
The physics content of the asymptotic theorem is that the residues of the poles (after analytic continuation to Minkowski space-time) are determined asymptotically by dimensional analysis, by the anomalous dimension and by the spectral density. More precisely the asymptotic behavior of the residues is fixed by the asymptotic theorem within the universal, i.e. the scheme-independent, leading and next-to-leading logarithmic accuracy. This implies that the renormalization factors are fixed asymptotically for large $n$ to be: $$\label{eqn:zk_as_behav}
Z_n^{(s)2}\sim
\Biggl[\frac{1}{\beta_0\log \frac{ m^{ (s) 2}_n }{ \Lambda^2_{QCD} }} \biggl(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log \frac{ m^{ (s) 2}_n }{ \Lambda^2_{QCD} }}{\log \frac{ m^{ (s) 2}_n }{ \Lambda^2_{QCD} }} + O(\frac{1}{\log \frac{ m^{ (s) 2}_n }{ \Lambda^2_{QCD} } } ) \biggr)\Biggr]^{\frac{\gamma_0}{\beta_0}}$$ where $\beta_0, \beta_1,\gamma_0$ are the first and second coefficients of the beta function and the first coefficient of the anomalous dimension respectively (see for definitions subsect.(2.4) or [@MB]) and $\Lambda_{QCD}$ is the $QCD$ $RG$-invariant scale in some scheme.
The asymptotic theorem does not require any assumption on the possible degeneracy of the spectrum for fixed spin. If there is any degeneracy it is implicit in the spectral density. We show in sect.(3) that Eq.(\[eq:1\]) for the propagator can be rewritten equivalently as: $$\label{eq:2}
\int \langle \mathcal{O}^{(s)}(x)\, \mathcal{O}^{(s)}(0) \rangle_{conn}\, e^{-ip\cdot x}\, d^4x \sim P^{(s)} \Bigl( \frac{p_{\alpha}}{p} \Bigr)\, p^{2D-4} \sum_{n=1}^{\infty} \frac{Z_n^{(s)2}\, \rho_s^{-1}(m^{(s)2}_n)}{p^2+m^{(s)2}_n} + \cdots$$ where now the sum in the $RHS$ is convergent for $\gamma'=\frac{\gamma_0}{\beta_0} > 1$. Otherwise it is divergent, but the divergence is again a contact term (see sect.(3)). The dots in Eq.(\[eq:2\]) represent a divergent contact term, i.e. a polynomial of finite degree in momentum[^1], i.e. a distribution supported at coinciding points in the coordinate representation, and $P^{(s)} \big(\frac{p_{\alpha}}{p} \big)$ is the projector obtained substituting $-p^2$ for $m_n^2$ in $P^{(s)} \big(\frac{p_{\alpha}}{m_n} \big)$ [^2]. From the proof of the asymptotic theorem in sect.(3) it follows that the divergent contact term contains at least one power of the mass squared, i.e. two powers of $\Lambda_{QCD}$. Thus these divergent contact terms do not arise in perturbation theory. Divergent contact terms of precisely the same kind occur in a recent computation by Zoller-Chetyrkin for the two-point glueball scalar correlator in $QCD$ by means of the standard operator product expansion ($OPE$). In their words [@chet:tensore] (p. 12): “The two-loop part is new and has a feature that did not occur in lower orders, namely, a divergent contact term.”
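As a purely numerical aside (not part of the proof in sect.(3)), both the contact-term subtraction that relates the pole sum to its convergent rewriting and the convergence criterion $\gamma' > 1$ can be illustrated with a toy spectrum $m_n^2 = n$ and logarithmic stand-ins for the residues; the snippet below is only such a sketch, with all normalizations arbitrary:

```python
import numpy as np

# Contact-term subtraction in the D = 4 case: m^4/(p^2 + m^2) differs
# from p^4/(p^2 + m^2) by the polynomial (m^2 - p^2), a pure contact term.
p2, m2 = 7.3, 2.1
lhs = m2**2 / (p2 + m2)
rhs = p2**2 / (p2 + m2) + (m2 - p2)
assert abs(lhs - rhs) < 1e-12

# Convergence criterion: for large n the p^2 in the denominator is
# negligible, so the series behaves like sum_n 1/(n log^{gamma'} n),
# convergent only for gamma' > 1.
n = np.arange(1000, 10**6, dtype=float)
tail_conv = np.sum(1.0 / (n * np.log(n) ** 2))  # gamma' = 2: small tail
tail_marg = np.sum(1.0 / (n * np.log(n)))       # gamma' = 1: tail stays O(1)
print(tail_conv, tail_marg)
```

The first tail is of order $1/\log 10^3 - 1/\log 10^6 \approx 0.07$, while in the marginal case the partial sums grow like $\log\log N$ and never settle.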
Then the proof of the asymptotic theorem reduces to showing that Eq.(\[eq:2\]) matches asymptotically for large momentum, within the universal leading and next-to-leading logarithmic accuracy, the $RG$-improved perturbative result [^3] implied by the Callan-Symanzik equation (see subsect.(2.4)): $$\begin{aligned}
\label{CS}
&\int \langle \mathcal{O}^{(s)}(x)\, \mathcal{O}^{(s)}(0) \rangle_{conn}\, e^{-ip\cdot x}\, d^4x \nonumber\\
&\sim P^{(s)}\Bigl(\frac{p_{\alpha}}{p}\Bigr)\, p^{2D-4}\, Z^{(s)2}(p)\, G_0(g(p)) \nonumber\\
&\sim P^{(s)}\Bigl(\frac{p_{\alpha}}{p}\Bigr)\, p^{2D-4}\, \Biggl[\frac{1}{\beta_0\log \frac{p^2}{\Lambda^2_{QCD}}} \biggl(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log \frac{p^2}{\Lambda^2_{QCD}}}{\log \frac{p^2}{\Lambda^2_{QCD}}} + O\Bigl(\frac{1}{\log \frac{p^2}{\Lambda^2_{QCD}}}\Bigr) \biggr)\Biggr]^{\frac{\gamma_0}{\beta_0}-1}
\end{aligned}$$ up to contact terms, and that this matching [^4] fixes uniquely the universal asymptotic behavior of the residues in Eq.(\[eq:2\]).
Hence the meaning of the asymptotic theorem is that at large-$N$ the sum of pure poles in Eq.(\[eq:2\]) saturates the logarithms of perturbation theory and that the residues of the poles have a field theoretical meaning. In particular they are asymptotically proportional, apart from the power of momentum and the projector, to the square of the renormalization factor determined by the anomalous dimension divided by the spectral density, both computed on shell.
The asymptotic theorem has two important implications.
The first implication is the rather obvious observation that, given the anomalous dimension, the asymptotic spectral density can be read off immediately from Eq.(\[eq:2\]) if the residues are known for the *discrete* set of poles asymptotically. The second implication is somewhat surprising. Since asymptotically we can substitute for the *discrete* sum the *continuous* integral weighted by the spectral density, the asymptotic propagator reads: $$\label{eq:3}
\int \langle \mathcal{O}^{(s)}(x)\, \mathcal{O}^{(s)}(0) \rangle_{conn}\, e^{-ip\cdot x}\, d^4x \sim P^{(s)} \Bigl( \frac{p_{\alpha}}{p} \Bigr)\, p^{2D-4} \int_{ m^{(s)2}_1}^{\infty} \frac{Z^{(s)2}(m)}{p^2+m^2}\, dm^2 + \cdots$$ with the integral representation in Eq.(\[eq:3\]) depending only on the anomalous dimension but not on the spectral density.
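The substitution of the discrete pole sum by the spectral integral can also be illustrated numerically; the following Python sketch (toy spectrum $m_n^2=n$, i.e. constant unit spectral density, and an arbitrary smooth stand-in for $Z^{(s)2}(m)$, not the actual glueball data) checks that the two agree at the sub-percent level:

```python
import numpy as np

# Toy spectrum m_n^2 = n, so rho(m^2) = 1 and sum_n rho^{-1} f(m_n^2)
# is compared with the integral of f over dm^2. The shift by e in the
# log avoids the spurious zero of the logarithm at m^2 = 1.
p2 = 100.0

def integrand(m2):
    # smooth stand-in for Z^2(m)/(p^2 + m^2)
    return 1.0 / ((p2 + m2) * np.log(m2 + np.e) ** 2)

N = 10**6
n = np.arange(1, N + 1, dtype=float)
discrete = integrand(n).sum()                 # sum over poles

grid = np.linspace(0.5, N + 0.5, 2 * N + 1)   # trapezoid rule, step 0.5
vals = integrand(grid)
continuous = ((vals[:-1] + vals[1:]) * np.diff(grid)).sum() / 2.0

print(discrete, continuous)
```

The relative difference is governed by Euler-Maclaurin corrections and is far below one percent for a smooth, slowly varying integrand of this kind.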
Finally, using the Kallen-Lehmann representation (see subsect.(2.2)) we write: $$\begin{aligned}
\label{eq:4}
&\int \langle \mathcal{O}^{(s)}(x)\, \mathcal{O}^{(s)}(0) \rangle_{conn}\, e^{-ip\cdot x}\, d^4x \nonumber\\
&= \sum_{n=1}^{\infty} P^{(s)} \Bigl( \frac{p_{\alpha}}{m^{(s)}_n} \Bigr)\, \frac{|\langle 0|\mathcal{O}^{(s)}(0)|p,n,s\rangle'|^2}{p^2+m^{(s)2}_n} \nonumber\\
&= \sum_{n=1}^{\infty} P^{(s)} \Bigl( \frac{p_{\alpha}}{m^{(s)}_n} \Bigr)\, \frac{m^{(s)2D-4}_n\, Z_n^{(s)2}\, \rho_s^{-1}(m^{(s)2}_n)}{p^2+m^{(s)2}_n}
\end{aligned}$$ The preceding relation between the reduced matrix elements $\langle 0|\mathcal{O}^{(s)}(0)|p,n,s\rangle'$ and the renormalization factors $Z_n^{(s)}$: $$|\langle 0|\mathcal{O}^{(s)}(0)|p,n,s\rangle'|^2=m^{(s)2D-4}_n\, Z_n^{(s)2}\, \rho_s^{-1}(m^{(s)2}_n)$$ can be regarded as a non-perturbative definition of the renormalization factors in a suitable non-perturbative scheme, in such a way that with this interpretation the asymptotic theorem holds exactly and not only asymptotically.
Should we know the matrix elements non-perturbatively, we would obtain also the non-perturbative contributions to the propagators due to the $OPE$.
The asymptotic theorem cannot imply anything about these contributions since they are suppressed by inverse powers of momentum for large momentum.
The asymptotic theorem has been inspired by a computation of the anti-selfdual ($ASD$) propagator in a Topological Field Theory ($TFT$) underlying large-$N$ $YM$, that satisfies the asymptotic theorem and implies exact linearity of the joint scalar and pseudoscalar glueball spectrum, i.e. an exactly constant spectral density equal to $\Lambda_{QCD}^{-2}$ in some scheme. But the glueball propagator of the $TFT$ furnishes also the first of the non-perturbative terms in the $OPE$, that are suppressed by inverse powers of momentum, as we will see momentarily.
Anti-selfdual glueball propagators in a Topological Field Theory underlying large-$N$ $YM$
------------------------------------------------------------------------------------------
Secondly, we analyze the physics implications of the anti-selfdual ($ASD$) glueball propagator computed in the aforementioned $TFT$ underlying large-$N$ pure $YM$.
Roughly speaking, the $TFT$ describes the propagators of the operators in the ground state of the large-$N$ one-loop integrable sector of Ferretti-Heise-Zarembo [@ferretti:new_struct] (see subsect.(2.3)), which are homogeneous polynomials of degree $L$ in the $ASD$ curvature.
The shortest of such operators is $ \operatorname{Tr}{F^-}^2(x) \equiv \sum_{\alpha \beta} \operatorname{Tr}{F_{\alpha \beta}^-}^2(x)$ with $F_{\alpha \beta}^-=F_{\alpha \beta} -{^*\!F}_{\alpha \beta}$ and ${^*\!}$ the Hodge dual. In the $TFT$ [@boch:quasi_pbs; @MB0] a non-perturbative scheme exists in which the $ASD$ glueball propagator [^5] is given by: $$\begin{aligned}
\label{eqn:top}
&\int \Bigl\langle \frac{g^2}{N}\operatorname{Tr}{F^-}^2(x)\ \frac{g^2}{N}\operatorname{Tr}{F^-}^2(0) \Bigr\rangle_{conn}\, e^{-ip\cdot x}\, d^4x \nonumber\\
&= 2 \int \Bigl( \Bigl\langle \frac{g^2}{N}\operatorname{Tr}F^2(x)\ \frac{g^2}{N}\operatorname{Tr}F^2(0) \Bigr\rangle_{conn} + \Bigl\langle \frac{g^2}{N}\operatorname{Tr}(F{^*\!F})(x)\ \frac{g^2}{N}\operatorname{Tr}(F{^*\!F})(0) \Bigr\rangle_{conn} \Bigr)\, e^{-ip\cdot x}\, d^4x \nonumber\\
&=\sum_{k=1}^{\infty} \frac{g_k^4\,\bigl(k^2+\delta^2\bigr)\,\Lambda^6_{\overline{W}}}{p^2+k\Lambda^2_{\overline{W}}} \nonumber\\
&=p^4 \sum_{k=1}^{\infty} \frac{g_k^4\,\Lambda^2_{\overline{W}}}{p^2+k\Lambda^2_{\overline{W}}} + \sum_{k=1}^{\infty} g_k^4\,\Lambda^2_{\overline{W}}\,\bigl(k\Lambda^2_{\overline{W}}-p^2\bigr) + \delta^2\sum_{k=1}^{\infty} \frac{g_k^4\,\Lambda^6_{\overline{W}}}{p^2+k\Lambda^2_{\overline{W}}}
\end{aligned}$$ where $\Lambda_{\overline{W}}$ is the $RG$-invariant scale in the scheme in which it coincides with the mass gap, and $g_k$ is the ’t Hooft coupling renormalized on shell, i.e. at $p^2=k \Lambda^2_{\overline{W}}$. The second term in the last line is a physically-irrelevant divergent sum of contact terms, i.e. a distribution supported at coinciding points in the coordinate representation.
It is not the aim of this paper to furnish a theoretical justification of Eq.(\[eqn:top\]), that can be found in [@boch:quasi_pbs; @MB0]. Additional computations can be found in [@boch:glueball_prop; @boch:crit_points; @Top]. For the purposes of this paper the reader can consider Eq.(\[eqn:top\]) just as an ansatz that implies interesting phenomenological and theoretical consequences. In this subsection we analyze in detail these consequences. In fact, the agreement of Eq.(\[eqn:top\]) with the $RG$, the $OPE$, and the $NSVZ$ theorem, that we discuss in this subsection, is remarkable by itself, even without the theoretical justification in [@boch:quasi_pbs; @MB0].
Eq.(\[eqn:top\]) contains a new term proportional to $\delta^2$ that in a previous computation [@boch:glueball_prop; @boch:crit_points; @MB0] was set to zero by a Wick-ordering prescription, necessary to cancel, as in ordinary $YM$ perturbation theory of composite operators, certain infinite contributions in the $TFT$. This computation will be reported elsewhere.
We show momentarily that the Novikov-Shifman-Vainshtein-Zakharov ($NSVZ$) low-energy theorem (see subsect.(2.5)) fixes instead the residual finite part, arising after the arbitrary subtraction due to Wick-ordering, so that $\delta$ does not actually vanish.
We have checked by direct computation in [@MB] in collaboration with S. Muscinelli that the $ASD$ propagator of the $TFT$ satisfies asymptotically [^6]: $$\begin{aligned}
\label{7}
&\sum_{k=1}^{\infty} \frac{g_k^4\, k^2\,\Lambda^6_{\overline{W}}}{p^2+k\Lambda^2_{\overline{W}}} \nonumber\\
&\sim \frac{p^4}{\beta_0}\, \frac{1}{\beta_0\log \frac{p^2}{\Lambda^2_{\overline{W}}}} \biggl(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log \frac{p^2}{\Lambda^2_{\overline{W}}}}{\log \frac{p^2}{\Lambda^2_{\overline{W}}}} + O\Bigl(\frac{1}{\log \frac{p^2}{\Lambda^2_{\overline{W}}}}\Bigr) \biggr)
\end{aligned}$$ up to contact terms, according to the asymptotic theorem of this paper and to the fact that the first coefficient of the anomalous dimension of $\operatorname{Tr}F^{-2}$ is $\gamma_0=2 \beta_0$ [@MB]. In fact, the inspiration for the proof of the asymptotic theorem came from the computation [@boch:glueball_prop; @boch:crit_points] in the $TFT$ and from the detailed $RG$ estimates in [@MB] (see subsect.(2.4)).
But Eq.(\[eqn:top\]) contains finer information than the asymptotic theorem.
Indeed, on the $UV$ side Eq.(\[eqn:top\]) reproduces the first two coefficient functions in the $RG$-improved $OPE$ of the $ASD$ propagator (see subsect.(2.4)): $$\begin{aligned}
\label{OP}
&\int \Bigl\langle \frac{g^2}{N}\operatorname{Tr}{F^-}^2(x)\ \frac{g^2}{N}\operatorname{Tr}{F^-}^2(0) \Bigr\rangle_{conn}\, e^{-ip\cdot x}\, d^4x \nonumber\\
&\sim C_0(p^2) +C_1(p^2)\, \Bigl\langle \frac{g^2}{N}\operatorname{Tr}{F^-}^2(0)\Bigr\rangle+\cdots
\end{aligned}$$ and not only the first coefficient, i.e. the perturbative contribution implied by the asymptotic theorem. $C_0(p^2)$ is the perturbative coefficient function displayed in Eq.(\[7\]): $$\label{C}
C_0(p^2) \sim \frac{p^4}{\beta_0}\, \frac{1}{\beta_0\log \frac{p^2}{\Lambda^2_{QCD}}} \biggl(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log \frac{p^2}{\Lambda^2_{QCD}}}{\log \frac{p^2}{\Lambda^2_{QCD}}} + O\Bigl(\frac{1}{\log \frac{p^2}{\Lambda^2_{QCD}}}\Bigr) \biggr)$$ and $C_1(p^2)$ is fixed by the general principles of the $RG$ and by the Callan-Symanzik equation to satisfy asymptotically (see subsect.(2.4)): $$C_1(p^2) \sim \frac{1}{\beta_0\log \frac{p^2}{\Lambda^2_{QCD}}}$$ Indeed, it corresponds in Eq.(\[eqn:naive\_rg1\]) to the case $D=4, D_1=4, \gamma_0(\mathcal{O}_D)=2 \beta_0$ and, since the glueball condensate is $RG$ invariant, $\gamma_0(\mathcal{O}_{D_1})=0$. The scalar contribution to $C_1(p^2)$ arising from the scalar propagator in the second line of Eq.(\[eqn:top\]) has been computed recently at two-loop order by Zoller-Chetyrkin [@chet:tensore] in the $\overline{MS}$ scheme. Disregarding momentarily the contact terms in Eq.(\[eqn:top\]), the same estimates that enter the proof of the asymptotic theorem in sect.(3) or in [@MB] imply: $$\begin{aligned}
\label{9}
\delta^2 \sum_{k=1}^{\infty} \frac{g_k^4\,\Lambda^6_{\overline{W}}}{p^2+k\Lambda^2_{\overline{W}}} &\sim \frac{\delta^2\,\Lambda^4_{\overline{W}}}{\beta_0^2 \log \frac{p^2}{\Lambda^2_{\overline{W}}}} \nonumber\\
&\sim \frac{\delta^2}{\beta_0}\, \Lambda^4_{\overline{W}}\, C_1(p^2)
\end{aligned}$$ Thus the $TFT$ is in perfect agreement with the constraint arising from the perturbative $OPE$ and the $RG$ also for the second coefficient function in the $OPE$.
Besides, the glueball condensate $\langle\frac{g^2}{N}\operatorname{Tr}{F^-}^2(0)\rangle$ is non-vanishing in the $TFT$ [@boch:crit_points; @MB0], as opposed to perturbation theory. Its value in the $TFT$ is proportional to a suitable power of an $RG$-invariant scale. Let us call this scale $\Lambda_{GC}$: $$\Bigl\langle \frac{g^2}{N}\operatorname{Tr}{F^-}^2(0)\Bigr\rangle = \Lambda_{GC}^4$$ Moreover, the zero-momentum divergent sum of contact terms in Eq.(\[eqn:top\]) mixes with $C_1(p^2) \langle\frac{g^2}{N}\operatorname{Tr}{F^-}^2(0)\rangle$ in the $OPE$ implicitly determined by the $ASD$ propagator of the $TFT$, in such a way that $C_1(p^2)$ in the $TFT$ has a zero-momentum quadratically-divergent part.
Remarkably, a similarly divergent contact term at zero momentum occurs in the recent perturbative computation by Zoller-Chetyrkin [@chet:tensore] of the part of the second coefficient $C_1(p^2)$ that arises from the scalar propagator contributing to the $ASD$ correlator, and it is an obstruction to implementing the $NSVZ$ theorem (see subsect.(2.5)): $$\label{8}
\int \Bigl\langle \frac{g^2}{N}\operatorname{Tr}{F^-}^2(x)\ \frac{g^2}{N}\operatorname{Tr}{F^-}^2(0) \Bigr\rangle_{conn}\, d^4x = \frac{4}{\beta_0}\, \Bigl\langle \frac{g^2}{N}\operatorname{Tr}{F^-}^2(0)\Bigr\rangle$$ in perturbation theory, since in perturbation theory the subtraction of the infinite zero-momentum contact term in the $LHS$ leaves a finite ambiguity in the zero-momentum correlator, that affects the $RHS$ of Eq.(\[8\]).
To quote Zoller-Chetyrkin again [@chet:tensore]: “The two-loop part is new and has a feature that did not occur in lower orders, namely, a divergent contact term. Its appearance clearly demonstrates that non-logarithmic perturbative contributions to $C_1$ are not well defined in $QCD$, a fact seemingly ignored by the $QCD$ sum rules practitioners.”
The aforementioned infinite ambiguity is resolved in the $TFT$ because of the unambiguous non-perturbative separation between the contact terms and the physical terms that carry the pole singularities (in Minkowski space-time) in Eq.(\[eqn:top\]), and the subsequent subtraction of the quadratically-divergent sum of contact terms displayed in Eq.(\[eqn:top\]).
Indeed, in the $TFT$ the $NSVZ$ theorem reads [^7](see subsect.(2.5)): $$\label{10}
\int \Bigl\langle \frac{g^2}{N}\operatorname{Tr}{F^-}^2(x)\ \frac{g^2}{N}\operatorname{Tr}{F^-}^2(0) \Bigr\rangle_{conn}\, d^4x = \frac{4}{\beta_0}\, \Lambda_{GC}^4$$ After subtracting the contact terms it combines with Eq.(\[eqn:top\]) to give: $$\delta^2 \sum_{k=1}^{\infty} \frac{g_k^4\,\Lambda^4_{\overline{W}}}{k} = \frac{4}{\beta_0}\, \Lambda_{GC}^4$$ where the convergent series in the $LHS$ arises as the restriction to zero momentum of the third term in the last line in Eq.(\[eqn:top\]). Thus the $NSVZ$ theorem fixes $\delta$ and, as a consequence, the normalization of the first non-trivial coefficient function in the $OPE$ of the $TFT$.
On both the infrared ($IR$) and the ultraviolet ($UV)$ side Eq.(\[eqn:top\]) is not only an asymptotic formula but implies exact linearity in the square of the masses of the joint scalar and pseudoscalar spectrum in the large-$N$ limit of $YM$ all the way down to the low-lying glueball states.
This is a strong statement that could be easily falsified.
Indeed, on the infrared side it implies that the ratio of the masses of the two lowest scalar (or pseudoscalar) glueball states is $\sqrt2=1.4142 \cdots$. As we discuss in subsect.(1.5), in the lattice computation that is presently closest to the continuum limit [^8] for $SU(8)$ $YM$, Meyer-Teper [@Me1; @Me2] found for the mass ratios of the lowest scalar and pseudoscalar states, $r_s=\frac{m_{0^{++*}}}{m_{0^{++}}}$ and $r_{ps}=\frac{m_{0^{-+}}}{m_{0^{++}}}$, $r_s=r_{ps}=1.42(11)$, in accurate agreement with the $TFT$. In subsect.(1.5) we compare the predictions of the $TFT$ also with the lattice computations of Lucini-Teper-Wenger [@L1] and of Lucini-Rago-Rinaldi [@L2].
In addition, on the infrared side a non-perturbative definition of the beta function is needed in order for Eq.(\[eqn:top\]) to make sense, since for the low-lying glueballs $g_k$ must be evaluated at scales of the order of $\Lambda_{\overline{W}}$, and this scale is close to, if not coinciding with, the one where the perturbative Landau infrared singularity of the running coupling occurs.
The $TFT$ provides such a non-perturbative scheme for the beta function for which no Landau infrared singularity of the coupling occurs [@boch:quasi_pbs].
The functions $g(\frac{p}{{\Lambda_{\overline{W}}}})$ and $Z(\frac{p}{{\Lambda_{\overline{W}}}})$ are the solutions of the differential equations [@boch:quasi_pbs]: $$\begin{aligned}
\label{eqn:eq_def_gk}
\frac{\partial g}{\partial \log p}
&=\frac{-\beta_0 g^3+\frac{1}{(4\pi)^2}g^3\frac{\partial \log Z}{\partial \log p}}{1-\frac{4}{(4\pi)^2}g^2} \nonumber \\
\frac{\partial\log Z}{\partial\log p}
&=2\gamma_0 g^2 +\cdots \nonumber \\
\gamma_{0}&=\frac{1}{(4\pi)^2}\frac{5}{3}\end{aligned}$$ with $p=\sqrt{p^2}$. The definitions of $g_k$ and $Z_k$ are: $$\begin{aligned}
&g_k=g(\sqrt k)\\
&Z_k=Z( \sqrt k)\label{01}\end{aligned}$$ In [@boch:quasi_pbs] it is shown that Eq.(\[eqn:eq\_def\_gk\]) reproduces the correct universal one-loop and two-loop coefficients of the perturbative $\beta$ function of pure $YM$. Indeed, we get: $$\begin{aligned}
\label{eqn:matching_beta_pert}
\frac{\partial g}{\partial \log p}
&=\frac{-\beta_0 g^3+\frac{2\gamma_0 }{(4\pi)^2}g^5}{1-\frac{4}{(4\pi)^2}g^2}+\cdots \nonumber \\
&=-\beta_0 g^3+\frac{2\gamma_0}{(4\pi)^2} g^5-\frac{4\beta_0}{(4\pi)^2}g^5+\cdots \nonumber\\
&=-\beta_0 g^3-\beta_1 g^5+\cdots\end{aligned}$$ with: $$\begin{aligned}
&\beta_0=\frac{1}{(4\pi)^2}\frac{11}{3}\\
&\beta_1=\frac{1}{(4\pi)^4}\frac{34}{3}\end{aligned}$$ Besides, in the $TFT$ the glueball propagators for the operators $\mathcal{O}_{2L}$ in the ground state of Ferretti-Heise-Zarembo [@ferretti:new_struct] can be computed [@boch:glueball_prop] asymptotically for large $L$ [^9]. These operators have mass dimension $D=2L$ and are homogeneous polynomials of degree $L$ in the $ASD$ curvature $F^-$ [@ferretti:new_struct] (see subsect.(2.3)): $$\begin{aligned}
\label{eqn:formula}
&\int \braket{\mathcal{O}_{2L}(x)\mathcal{O}_{2L}(0)}_{conn} \,e^{-ip\cdot x}d^4x
\sim \sum_{k=1}^{\infty}\frac{k^{2L-2} Z_k^{-L}\Lambda_{\overline{W}}^2 \Lambda_{\overline{W}}^{4L-4}}{p^2+k\Lambda_{\overline{W}}^2} \end{aligned}$$ Ferretti-Heise-Zarembo have computed the one-loop anomalous dimension of $\mathcal{O}_{2L}$ for large $L$ [@ferretti:new_struct]: $$\begin{aligned}
\gamma_{0 (\mathcal{O}_{2L})}= \frac{1}{(4\pi)^2}\frac{5}{3} L+O(\frac{1}{L})\end{aligned}$$ The one-loop anomalous dimension computed within the $TFT$ Eqs.(\[eqn:eq\_def\_gk\]-\[01\]-\[eqn:formula\]) agrees with Ferretti-Heise-Zarembo computation asymptotically for large $L$ and exactly for the $L=2$ ground state, that is the $ASD$ operator that occurs in Eq.(\[eqn:top\]), for which $\gamma_{0 (\mathcal{O}_{4})}=2 \beta_0$ exactly.
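The two-loop matching in Eq.(\[eqn:matching\_beta\_pert\]) amounts to the statement that the coefficient of $g^5$ in the expansion of the $TFT$ beta function equals $-\beta_1$; the following is a symbolic check (Python/sympy) of just this arithmetic, using the coefficients quoted in the text:

```python
import sympy as sp

g = sp.symbols('g')
pi = sp.pi
beta0 = sp.Rational(11, 3) / (4 * pi) ** 2
gamma0 = sp.Rational(5, 3) / (4 * pi) ** 2
beta1 = sp.Rational(34, 3) / (4 * pi) ** 4

# TFT beta function with dlogZ/dlogp replaced by its leading term 2*gamma0*g^2
beta_tft = (-beta0 * g**3 + 2 * gamma0 / (4 * pi) ** 2 * g**5) \
    / (1 - 4 * g**2 / (4 * pi) ** 2)

series = sp.series(beta_tft, g, 0, 7).removeO()
coeff_g5 = sp.simplify(series.coeff(g, 5))
# (2*gamma0 - 4*beta0)/(4 pi)^2 = (10/3 - 44/3)/(4 pi)^4 = -beta1
assert sp.simplify(coeff_g5 + beta1) == 0
print("two-loop coefficient matches -beta1")
```

This reproduces, term by term, the expansion displayed in Eq.(\[eqn:matching\_beta\_pert\]).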
As a consequence the asymptotic theorem of this paper is satisfied asymptotically for large-$L$ by the large-$L$ $ASD$ correlators of the $TFT$ as well, as it has been checked by direct computation in [@MB].
The $AdS$/Gauge Theory correspondence versus the Topological Field Theory
-------------------------------------------------------------------------
Thirdly, we compare the proposal for the glueball propagators of the $TFT$ with the widely known proposals for the large-$N$ glueball propagators of a vast class of confining $QCD$-like theories, including pure $YM$, $QCD$ and $SUSY$ gauge theories, based on the $AdS$/Large-$N$ Gauge Theory correspondence.
In the framework of the $AdS$/Large-$N$ Gauge Theory correspondence [@Mal] we examine Witten supergravity background [@Witten], that has been proposed to describe large-$N$ $QCD$, and Klebanov-Strassler supergravity background [@KS1; @KS2], that has been proposed to describe large-$N$ cascading $\mathcal{N}$ $=1$ $SUSY$ gauge theories. They belong to the so called top-down approach, that means that they are essentially deductions from first principles in the framework of the $AdS$/Large-$N$ Gauge Theory correspondence. Therefore, they are very rigid and lead to sharp predictions for the glueball spectrum and the glueball propagators.
Also the $TFT$ underlying large-$N$ $YM$ is meant to be a deduction from fundamental principles [@boch:quasi_pbs; @MB0] and therefore it is very rigid and leads to a sharp prediction for the joint scalar and pseudoscalar glueball spectrum and propagator as well.
We examine also Polchinski-Strassler model [@PS; @PS2], or Hard-Wall model, and the Soft-Wall model [@Softwall]. They belong to the bottom-up approach in the framework of the $AdS$/Large-$N$ Gauge Theory correspondence, which means that they are models that aim to incorporate some features of large-$N$ $QCD$ rather than deductions from fundamental principles. Therefore, they are less rigid and consequently their predictions are not as sharp as in the previous cases. For example, the spectrum of the Hard-Wall model depends on the choice of boundary conditions at the wall [@Brower2]. The spectrum of the Soft-Wall model [@Softwall] depends on the ad hoc choice of the dilaton potential, that purposely is chosen in such a way as to imply exact linearity of the square of glueball and meson masses, as opposed to the spectrum of the Hard-Wall model [@Brower2], of Witten model [@Brower1] and of Klebanov-Strassler background [@KS1; @KS2], that are asymptotically quadratic in the square of the glueball masses.
All these different proposals can be tested both in the infrared and in the ultraviolet.
The infrared test is by numerical results in lattice gauge theories.
The ultraviolet test is by first principles. Indeed, as we pointed out in the previous subsections, the structure of the glueball propagators is severely constrained by the perturbative $RG$, as the asymptotic theorem of this paper shows, and by the $OPE$. Another test by first principles is by the low-energy theorems of $NSVZ$, that we have discussed in the framework of the $TFT$. A short review of the theoretical background behind these ideas is reported in sect.(2).
We should add at this stage that all the proposals that are meant to describe large-$N$ $YM$ or large-$N$ $QCD$, i.e. Witten background, the Hard-Wall model, the Soft-Wall model and the $TFT$, sharply disagree [^10] among themselves both about the $IR$ low-energy spectrum and about the $UV$.
The ultraviolet test
--------------------
We have submitted the aforementioned proposals to a stringent test in the $UV$ for the asymptotics of the scalar and/or pseudoscalar glueball propagator, that coincides up to an overall constant with $C_0$ in Eq.(\[C\]) [@MB], after which only the $TFT$ has survived. Indeed, in the framework of the $AdS$ String/Large-$N$ Gauge Theory correspondence all the glueball propagators, for which we could presently find an explicit computation in the literature, behave as $p^4 \log^n(\frac{p^2}{\mu^2})$, with $n=1$ for the Hard- and Soft-Wall models [@forkel; @italiani; @forkel:holograms; @forkel:ads_qcd] and $n=3$ for Klebanov-Strassler background [@krasnitz:cascading2; @krasnitz:cascading], in contradiction with the universal $RG$ estimate [@MB] for $C_0$ in Eq.(\[C\]).
Klebanov-Strassler background deserves a further separate examination.
There is no infrared test for it, since no lattice computation is available for supersymmetric gauge theories.
Moreover, it has not passed the ultraviolet test for the scalar glueball propagator [@MB], even though it is able to reproduce, even in the supergravity approximation, the correct $NSVZ$ asymptotically-free $\beta$ function of the large-$N$ cascading $\mathcal{N}$ $=1$ $SUSY$ gauge theories. Since this is puzzling, we suggest here a possible explanation.
Indeed, in $\mathcal{N}$ $=1$ $SUSY$ $YM$, the final end of the cascade, there exists a phase strongly coupled in the $UV$, foreseen by Kogan-Shifman [@KSh]. This phase is described by the very same large-$N$ $NSVZ$ $\beta$ function: $$\frac{\partial g}{\partial \log \mu}=-\frac{3}{\left(4\pi\right)^2}\,\frac{g^3}{1-\frac{2g^2}{\left(4\pi\right)^2}}$$ since the $IR$ fixed point of the $RG$ flow $g^2 = \frac{\left(4\pi\right)^2}{2}$ is attractive both for $g^2 \leq \frac{\left(4\pi\right)^2}{2}$, the asymptotically-free phase weakly-coupled in the $UV$, and for $g^2 \geq \frac{\left(4\pi\right)^2}{2}$, the strongly-coupled phase in the $UV$. Therefore, Kogan-Shifman argue [@KSh] that there exists a strongly-coupled phase in the $UV$, admitting a continuum limit, described by the strong-coupling branch of the same $NSVZ$ beta function, whose weak-coupling branch describes the asymptotically-free phase.
In *both* cases the $RG$ flow stops at $g^2 = \frac{\left(4\pi\right)^2}{2}$, so that the running coupling *never* diverges in the $IR$. In particular the $RG$ flow is *not* connected to $g^2=\infty$ in the $IR$.
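The two-sided attractivity in the $IR$ can be read off from the sign of the beta function on either side of $g^2 = \frac{(4\pi)^2}{2}$; the following is a minimal numerical sketch (Python) of this sign analysis, assuming the large-$N$ $NSVZ$ form $\frac{\partial g}{\partial \log \mu} \propto -\frac{g^3}{1-\frac{2g^2}{(4\pi)^2}}$, where the overall positive normalization is immaterial:

```python
import math

four_pi_sq = (4 * math.pi) ** 2
g2_star = four_pi_sq / 2  # claimed IR-attractive point of the flow

def beta(g2):
    # large-N NSVZ form, up to an irrelevant positive normalization
    g = math.sqrt(g2)
    return -g**3 / (1 - 2 * g2 / four_pi_sq)

# below: beta < 0, so g grows as mu decreases, flowing up toward g*
assert beta(0.5 * g2_star) < 0
# above: beta > 0, so g decreases as mu decreases, flowing down toward g*
assert beta(1.5 * g2_star) > 0
print("g^2 = (4 pi)^2 / 2 is IR-attractive from both sides")
```

In particular the flow never reaches $g^2=\infty$ in the $IR$ from either branch, as stated above.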
However, the $RG$ flow is connected to $g^2=\infty$ in the $UV$ of the non-asymptotically-free phase.
In fact, it is natural to identify the aforementioned strongly-coupled phase in the $UV$ with Klebanov-Strassler background, since the effective coupling of the corresponding scalar glueball propagator grows in the $UV$ as $\log^3(\frac{p^2}{\mu^2})$ [@krasnitz:cascading2; @krasnitz:cascading] instead of decreasing as $\frac{1}{\log(\frac{p^2}{\mu^2})}$, as the universal estimate for $C_0$ in the asymptotically-free phase would require [@MB].
Thus we are led to conclude that even in the most favorable situation, when the exact $\beta$ function is reproduced on the string side of the correspondence, the $AdS$ String/large-$N$ Gauge Theory correspondence in its present strong coupling incarnation describes in a neighborhood of $g^2=\infty$ the aforementioned *strongly-coupled* phase in the $UV$, whose existence is implied by the supersymmetric $NSVZ$ $\beta$ function, *not* the asymptotically-free phase.
But lattice gauge theory computations in $YM$ (or in $QCD$ in ’t Hooft large-$N$ limit) show that the aforementioned strongly-coupled phase in the $UV$, admitting a continuum limit, does not exist in pure non-supersymmetric $YM$.
The infrared test
-----------------
We are interested in the large-$N$ limit, therefore we look for lattice results that have been computed for the largest gauge group possible.
We should mention that comparisons of this kind have been already presented in the past years by many groups, using the lattice results for $SU(3)$ as benchmark. But in recent years lattice results for larger gauge groups up to $SU(8)$ have become available, as opposed to the earlier important $SU(3)$ results (for an updated review of large-$N$ lattice $QCD$ see [@ML]).
Since for all the approaches proposed in the literature the computations are supposed to hold in the large-$N$ limit, there is not much point in looking at lattice results for $SU(3)$ once lattice results for higher-rank $SU(N)$ groups have become available. If $SU(3)$ is sufficiently close to $SU(N)$, as some evidence from the numerical lattice results seems to approximately indicate, the $SU(N)$ result will be a good description of both. If not, the theoretical predictions that we want to test are meant for large-$N$ $SU(N)$ and not for $SU(3)$. Therefore $SU(8)$ is presently the most suitable choice in this framework.
Thus we compare in some detail the predictions for the low-lying glueball masses, scalar, pseudoscalar and spin 2, with the three lattice numerical computations for $SU(8)$, discussing also the lattice numerical uncertainty.
There are presently three lattice computations, in chronological order, by Lucini-Teper-Wenger [@L1], by Meyer-Teper [@Me1; @Me2] and by Lucini-Rago-Rinaldi [@L2] for the mass ratios, $r_s=\frac{m_{0^{++*}}}{m_{0^{++}}}$, $r_{ps}=\frac{m_{0^{-+}}}{m_{0^{++}}}$ and $r_2= \frac{m_{2^{++}}}{m_{0^{++}}}$ in $SU(8)$ $YM$. They are remarkably in agreement when compared on the same lattice and for close values of the $YM$ coupling. Since Lucini-Teper-Wenger and Lucini-Rago-Rinaldi essentially agree at quantitative level, we discuss in detail for simplicity only the most recent computation, i.e. Lucini-Rago-Rinaldi, that we compare with Meyer-Teper.
However, Meyer-Teper perform the computation also for one smaller value of the $YM$ coupling and a larger lattice and perhaps a different variational basis, in order to be as close as possible to the continuum limit.
As a consequence there is about a $20\%$ difference in their final results: For Meyer-Teper $r_s=r_{ps}=1.42(11)$ and for Lucini-Rago-Rinaldi: $r_s=1.79(08)$, $r_{ps}=1.78(08)$. Yet both computations show degeneracy of the first excited scalar with the first pseudoscalar mass. In addition, the mass ratio of the lowest spin-$2$ glueball to the lowest scalar is for Meyer-Teper $r_2= \frac{m_{2^{++}}}{m_{0^{++}}} = 1.40 $ while for Lucini-Rago-Rinaldi $r_2= \frac{m_{2^{++}}}{m_{0^{++}}} = 1.70 $.
A possible interpretation is that new states arise for smaller coupling corresponding to the ratios $r_s=r_{ps}=1.42(11)$ of Meyer-Teper [^11].
Of course the previous observation implies that Meyer-Teper is closer to the continuum limit, but their result should be taken with a grain of salt because the Meyer-Teper computation is presently the only one at such a small coupling.
Indeed, the previous Lucini-Teper-Wenger computation $r_s \sim 1.83$ is in agreement with Lucini-Rago-Rinaldi. Yet it has been suggested [^12] that $r_s=1.79(08)$, $r_{ps}=1.78(08)$ is quite close to the prediction of the $TFT$ for the next-excited glueballs, $r_{s}=r_{ps}=\sqrt3=1.7320 \cdots$, if it is assumed that Lucini-Rago-Rinaldi see only the next-excited glueballs for some reason linked to the choice of the variational basis and/or the value of the $YM$ coupling. This should be clarified by future computations.
The theoretical predictions are as follows.
In the $TFT$, $r_{s}=r_{ps}=\sqrt2=1.4142 \cdots$ in accurate agreement with Meyer-Teper. For the second scalar or pseudoscalar excited state the $TFT$ predicts $r_{s}=r_{ps}=\sqrt3=1.7320 \cdots$, quite close to Lucini-Rago-Rinaldi values, if we assume that they do not see the lower state of Meyer-Teper.
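The comparison above is elementary arithmetic; the following Python sketch checks that $\sqrt2$ lies within the quoted Meyer-Teper error bar and $\sqrt3$ within the Lucini-Rago-Rinaldi ones:

```python
import math

sqrt2, sqrt3 = math.sqrt(2), math.sqrt(3)
print(f"{sqrt2:.4f} {sqrt3:.4f}")  # 1.4142 1.7321

# Meyer-Teper: r_s = r_ps = 1.42(11), compared with sqrt(2)
assert abs(sqrt2 - 1.42) < 0.11
# Lucini-Rago-Rinaldi: r_s = 1.79(08), r_ps = 1.78(08), compared with sqrt(3)
assert abs(sqrt3 - 1.79) < 0.08
assert abs(sqrt3 - 1.78) < 0.08
print("TFT ratios compatible within the quoted errors")
```

In both cases the $TFT$ values sit well inside one quoted standard deviation.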
In Witten model $r_s=1.5860$, $r_{ps}=1.2031$, $r_2=1$. These numbers are obtained from [@Brower1] according to the standard identification (see also [@Mal] for the numerical values of $r_s$ and $r_{ps}$) of the dilaton on the string side as the dual of $\operatorname{Tr}F^2$ on the gauge side [^13].
In the Hard-Wall model (Polchinski-Strassler) for $Dirichlet$ boundary conditions [@Braga1; @Braga2] $r_s=1.64$, $r_2=1.48$, for *Neumann* boundary conditions [@Braga2] $r_s=1.83$, $r_2=1.56$, while for other *different* boundary conditions for *different* states [@Brower2] $r_s=2.19$, $r_{ps}=1.25$, $r_2=1.25$.
In the Soft-Wall model [@forkel; @italiani; @forkel:holograms; @forkel:ads_qcd] $r_s=\sqrt\frac{3}{2}=1.2247 \cdots$
Thus the $TFT$ agrees sharply with Meyer-Teper.
Witten model is inconsistent with Lucini-Rago-Rinaldi and barely compatible with Meyer-Teper for $r_s$ or $r_{ps}$ taken separately, but it is in contrast with their apparent degeneracy implied by the lattice results of both groups. Moreover, it predicts $r_2=1$, i.e. that the lowest-mass spin-$2$ glueball is exactly degenerate with the lowest-mass scalar, which is sharply in contradiction with all the lattice computations, not only Meyer-Teper.
The Soft-Wall model is barely compatible with Meyer-Teper and inconsistent with Lucini-Rago-Rinaldi.
The Hard-Wall model is very sensitive to boundary conditions and thus the question is whether it can *fit* the lattice data, rather than predict anything. Yet none of the choices of boundary conditions gives an accurate prediction for $r_s$ but in one case: for *Neumann* boundary conditions, and assuming that Lucini-Rago-Rinaldi see the first excited state and the Meyer-Teper computation is not correct. In addition, in the Hard-Wall model as in Witten model, $r_2=1$ [@Brower2] unless rather arbitrarily the boundary conditions for the scalar and the spin-$2$ glueball are chosen to be different.
Our conclusion is that Meyer-Teper lattice computation clearly favors the $TFT$ in the infrared and disfavors all the other models considered.
Besides, it is desirable that Meyer-Teper computation be confirmed and extended by other groups [^14].
Conclusions
-----------
We have proved an asymptotic structure theorem for glueball and meson propagators of any integer spin in large-$N$ $QCD$ that fixes asymptotically the residues of the poles in terms of the anomalous dimension and of the spectral density [^15].
The asymptotic theorem was inspired by a $TFT$ underlying large-$N$ $YM$.
The $ASD$ glueball propagator of the $TFT$ satisfies the constraints that follow from the perturbative renormalization group, i.e. the asymptotic theorem, and from the first non-perturbative term in the $OPE$ as well. However, the $TFT$ does not contain a complete set of condensates of operators in the $OPE$. This is not surprising, since the $TFT$ is supposed to describe by construction only the ground state of Ferretti-Heise-Zarembo one-loop integrable sector of large-$N$ $YM$.
Moreover, none of the scalar or pseudoscalar propagators based on the $AdS$ String/large-$N$ Gauge Theory correspondence presently computed in the literature, as opposed to the $TFT$, satisfies any of the constraints that arise from the renormalization group and from the $OPE$ in the $UV$.
In particular, somewhat surprisingly, Klebanov-Strassler background does not reproduce the universal $UV$ asymptotics of $\mathcal{N}$ $=1$ $SUSY$ $YM$, even though it reproduces the correct beta function. We suggest as an explanation that it describes the phase, foreseen by Kogan-Shifman on the basis of the structure of the $NSVZ$ beta function, that is not asymptotically free but strongly coupled in the ultraviolet.
On the infrared side the $TFT$ agrees accurately with Meyer-Teper lattice computation, the mass spectra based on the presently proposed versions of the $AdS$ String/Gauge Theory correspondence do not.
We conclude that the glueball propagator of the $TFT$ is definitely favored by first principles in the $UV$, and presently by lattice data in the $IR$, with respect to the glueball propagators of the $AdS$ String/Gauge Theory correspondence in its present strong coupling incarnation.
A short review of the large-$N$ limit of $QCD$
==============================================
’t Hooft large-$N$ limit
------------------------
The $SU(N)$ pure $YM$ theory is defined by the partition function: $$\label{eqn:Z1}
Z=\int \delta A \, e^{-\frac{1}{2g_{YM}^2}\int \sum_{\alpha\beta} \operatorname{Tr}\bigl(F_{\alpha\beta}^2\bigr)d^4x}$$ Introducing ’t Hooft coupling constant $g$ [@'t; @hooft:large_n]: $$g^2=g^2_{YM}N$$ the partition function reads: $$Z=\int \delta A\, e^{-\frac{N}{2g^2}\int \sum_{\alpha\beta} \operatorname{Tr}\bigl(F_{\alpha\beta}^2\bigr) d^4x}$$ According to ’t Hooft [@'t; @hooft:large_n] the large-$N$ limit is defined with $g$ fixed when $N\rightarrow \infty$. The normalization of the action in Eq.(\[eqn:Z1\]) corresponds to choosing the gauge field $A_\alpha=A^a_\alpha t^a $ with the generators $t^a$ valued in the fundamental representation of the Lie algebra, normalized as: $$\operatorname{Tr}\, (t^a t^b)=\frac{1}{2}\delta^{ab}$$ In Eq.(\[eqn:Z1\]) $F_{\alpha \beta}$ is defined by: $$\label{eqn:F_wilsonian}
F_{\alpha\beta}(x)=\partial_\alpha A_\beta-\partial_\beta A_\alpha + i[A_\alpha,A_\beta]$$ We refer to the normalization of the action in Eq.(\[eqn:Z1\]) as the Wilsonian normalization. However, perturbation theory is formulated with the canonical normalization (employed in subsect.(1.2)), obtained by rescaling the field $A_\alpha $ in Eq.(\[eqn:Z1\]) by the coupling constant $g_{YM}=\frac{g}{\sqrt{N}}$: $$\begin{aligned}
A_\alpha (x)\rightarrow g_{YM}A^c_\alpha (x)\end{aligned}$$ in such a way that the kinetic term in the action becomes independent of $g$: $$\frac{1}{2}\int \sum_{\alpha \beta} \operatorname{Tr}(F_{\alpha \beta} ^2(A^c)) (x) d^4 x$$ where: $$\begin{aligned}
\label{eqn:F_canonical}
F_{\alpha \beta}(A^c)= \partial_\alpha A^c_\beta - \partial_\beta A^c_\alpha +ig_{YM}[A^c_\alpha , A^c_\beta]\end{aligned}$$ In the ’t Hooft large-$N$ limit [@'t; @hooft:large_n] $r$-point connected correlators of single-trace local operators with the Wilsonian normalization scale as $N^{2-r}$. It follows that at the leading $\frac{1}{N}$ order multi-point correlators of local gauge invariant operators factorize: $$\begin{aligned}
\label{eqn:ordine_1_N}
&\braket{\mathcal{O}_1(x_1)\mathcal{O}_2(x_2)\cdots \mathcal{O}_n(x_n)}
\nonumber\\
&=\braket{\mathcal{O}_1(x_1)}\braket{\mathcal{O}_2(x_2)}\cdots \braket{\mathcal{O}_n(x_n)}
+O(1)\end{aligned}$$ Indeed, according to Eq.(\[eqn:ordine\_1\_N\]), the one-point correlators are of order $N$, while the connected two-point correlators are of order $1$. The connected three-point correlators are of order $\frac{1}{N}$, and so on. Therefore, only the one-point condensates survive at the leading order, and the two-point connected correlators survive at the next-to-leading order. Hence the interaction vanishes in the large-$N$ limit at the leading order for connected correlators, since it is associated with the three- and higher multi-point connected correlators.
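As a direct instance of the $N^{2-r}$ scaling just quoted (a worked example of ours, not an additional result), the relative weight of the connected part of a two-point correlator is: $$\frac{\braket{\mathcal{O}_1(x_1)\mathcal{O}_2(x_2)}_{conn}}{\braket{\mathcal{O}_1(x_1)}\braket{\mathcal{O}_2(x_2)}} \sim \frac{N^{0}}{N \cdot N} = \frac{1}{N^2}$$ so that the factorization in Eq.(\[eqn:ordine\_1\_N\]) holds up to $\frac{1}{N^2}$ relative corrections.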
Kallen-Lehmann representation of two-point correlators
------------------------------------------------------
Because of confinement and the mass gap and the vanishing of the interaction at the leading large-$N$ order, it is believed [@migdal:multicolor] that the two-point connected Euclidean correlators of local gauge invariant single-trace scalar operators $\mathcal{O}^{(0)}(x)$ in the pure glue sector of large-$N$ $QCD$: $$G^{(2)}_{conn}(p)=\int \braket{\mathcal{O}^{(0)}(x)\mathcal{O}^{(0)}(0)}_{conn}\, e^{- ipx}\, d^4x = \int^{\infty}_0 \frac{\mathcal{R}(m)}{p^2+m^2}\, dm^2$$ are an infinite sum of propagators of massive free fields, i.e. the spectral distribution $\mathcal{R}(m)$ in the Kallen-Lehmann representation is saturated by massive free one-particle states only, the glueballs. In the scalar or pseudoscalar case: $$\begin{aligned}
G^{(2)}_{conn}(p)&& = \sum_{n=1}^{\infty}\frac{|\braket{0|\mathcal{O}^{(0)}(0)|p,n}|^2}{p^2+m_n^2}\nonumber\\
&& = \sum_n \frac{R_n}{p^2+m_n^2}\end{aligned}$$ The generalization to any integer spin [@migdal:multicolor], that includes also gauge-invariant fermion bilinears in the large-$N$ ’t Hooft limit of $QCD$, is: $$\label{uv}
\int \braket{\mathcal{O}^{(s)}(x)\mathcal{O}^{(s)}(0)}_{conn}\, e^{-ipx}\, d^4x = \sum_{n=1}^{\infty} \frac{|\braket{0|\mathcal{O}^{(s)}(0)|p,n,s}'|^2}{p^2+m^{(s)2}_n}\, P^{(s)} \Big(\frac{p_{\alpha}}{m^{(s)}_n}\Big)$$ In [@migdal:multicolor] Migdal pointed out that the sum in Eq.(\[uv\]) must be infinite, otherwise it cannot be asymptotic to the perturbative result.
The asymptotic theorem of subsect.(1.1) and sect.(3) is in fact a quantitative refinement of this statement.
The reduced matrix elements $< 0|\mathcal{O}^{(s)}(0)|p,n,s>'$ are expressed in terms of the polarization vectors $e^{(s)}_j(\frac{p_\alpha}{m})$ and of the matrix elements $< 0|\mathcal{O}^{(s)}(0)|p,n,s,j>$ of the operator $\mathcal{O}^{(s)}$ between the vacuum and the one-particle states $|p,n,s,j>$: $$< 0|\mathcal{O}^{(s)}(0)|p,n,s,j> = e^{(s)}_j\Big(\frac{p_\alpha}{m}\Big)\, < 0|\mathcal{O}^{(s)}(0)|p,n,s>'$$ The polarization vectors define the projectors that enter the spin-$s$ propagators: $$\sum_j e^{(s)}_j\Big(\frac{p_\alpha}{m}\Big)\, e^{(s)*}_j\Big(\frac{p_\alpha}{m}\Big) = P^{(s)} \Big(\frac{p_\alpha}{m}\Big)$$ The free propagators for $s=1,2$ were worked out in [@Velt] (see the end of sect.(3) for explicit formulae). The generalization to any integer or half-integer spin can be found in [@Francia1; @Francia2].
The large-$N$ integrable sector of Ferretti-Heise-Zarembo
---------------------------------------------------------
In the ’t Hooft large-$N$ limit of $QCD$ there is a special sector of the theory discovered by Ferretti-Heise-Zarembo [@ferretti:new_struct], that is integrable at one-loop for the anomalous dimensions.
The pure glue subsector of the integrable sector is composed by local single-trace gauge invariant operators built by the anti-selfdual ($ASD$) or the selfdual ($SD$) part of the curvature $F_{\alpha \beta}$ and their covariant derivatives [@ferretti:new_struct]. They are defined by: $$\begin{aligned}
F^-_{\alpha \beta}&= F_{\alpha \beta}- {^*\!F}_{\alpha \beta}\nonumber\\
F^+_{\alpha \beta}&= F_{\alpha \beta}+ {^*\!F}_{\alpha \beta}\end{aligned}$$ where: $${^*\!F}_{\alpha \beta}=\frac{1}{2}\epsilon_{\alpha\beta\gamma \delta}F^{\gamma\delta}$$ Therefore, the operators in the subsector described above have the form: $$\mathcal{O}(x)=\operatorname{Tr}(D_{\mu_1}\cdots D_{\mu_n}F^-_{\alpha_1\beta_1}D_{\nu_1}\cdots D_{\nu_m}F^-_{\alpha_2 \beta_2}\cdots\cdots
D_{\rho_1}\cdots D_{\rho_l}F^-_{\alpha_L\beta_L})(x)$$ with any possible contraction of the indices. Here $L$ is the number of $F^-$ in the operator $\mathcal{O}$. This sector is integrable at one loop in the large-$N$ limit [@ferretti:new_struct]. The anomalous dimensions of these operators can be computed at one loop as the eigenvalues of the Hamiltonian of a closed spin chain. The construction extends to chiral fermion bilinear operators of massless quarks and to an open spin chain [@ferretti:new_struct].
The ground state of the Hamiltonian spin chain by definition corresponds to the operators with the most negative anomalous dimensions. For any fixed $L$ the ground state of the closed chain turns out to be built by operators that contain only $F^-_{\alpha\beta}$ and that have indices contracted to obtain a scalar in a peculiar way determined by the anti-ferromagnetic ground state of the spin chain: $$\label{eqn:op_interessanti}
\mathcal{O}_{2L}(x)=\operatorname{Tr}(\,\underbrace{F^-_{\alpha_1\beta_1}\cdots F^-_{\alpha_L\beta_L}}_{{}\text{Certain scalar contractions}}\,)(x)$$ with dimension in energy $D=2L$. In the spin chain each $F^-_{\alpha_i\beta_i}$ corresponds to a site; therefore, $L$ corresponds to the length of the chain. Hence the large-$L$ limit corresponds to the thermodynamic limit, i.e. the infinite-length limit. In [@ferretti:new_struct] the large-$N$ one-loop anomalous dimension of the ground state of the spin chain of length $L$ was computed, using the Bethe ansatz in the thermodynamic limit: $$\begin{aligned}
\label{eqn:intro_dim_anomala_L}
\gamma_{\mathcal{O}_{2L}}(g)&=-\gamma_0\, L\, g^2+O(\frac{1}{L}) \nonumber\\
\gamma_0&=\frac{5}{3}\frac{1}{(4\pi)^2}\end{aligned}$$ For $L=2$ the operator in the ground state is $\operatorname{Tr}{F^-}^2$ and its one-loop anomalous dimension is exactly (see also [@MB]): $$\begin{aligned}
\gamma_{\mathcal{O}_4}(g)&=-2\beta_0 \, g^2 + \cdots \nonumber \\
\beta_0&=\frac{11}{3}\frac{1}{(4\pi)^2}\end{aligned}$$ The $\mathcal{O}_4$ correlator reduces in Euclidean space-time to the sum of the correlator of the scalar $\mathcal{O}_S=\operatorname{Tr}F^2$ and of the correlator of the pseudoscalar $\mathcal{O}_P=\operatorname{Tr}F {^*\!F}$: $$\braket{\mathcal{O}_4(x)\mathcal{O}_4(0)}_{conn}= 2 \braket{\mathcal{O}_S(x)\mathcal{O}_S(0)}_{conn}+ 2 \braket{\mathcal{O}_P(x)\mathcal{O}_P(0)}_{conn}$$
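As a numerical aside of ours (not a statement from [@ferretti:new_struct]): at $L=2$ the thermodynamic-limit slope $\gamma_0 L$ of Eq.(\[eqn:intro\_dim\_anomala\_L\]) can be compared with the exact one-loop coefficient $2\beta_0$, showing that the $O(\frac{1}{L})$ corrections are still sizable at small chain length.

```python
# Compare the large-L (thermodynamic-limit) slope gamma_0 * L of the
# ground-state anomalous dimension with the exact one-loop value 2*beta_0
# at L = 2; the coefficients are the ones quoted in the text above.
from math import pi

gamma0 = (5.0 / 3.0) / (4.0 * pi) ** 2   # slope per unit chain length
beta0 = (11.0 / 3.0) / (4.0 * pi) ** 2   # one-loop beta-function coefficient

L = 2
asymptotic = gamma0 * L   # large-L estimate of -gamma/g^2 at L = 2
exact = 2.0 * beta0       # exact one-loop value for the L = 2 operator

print(asymptotic, exact)  # ratio asymptotic/exact = 10/22, exactly
```

The ratio $\frac{10}{22}$ quantifies how far the thermodynamic-limit formula is from the exact $L=2$ anomalous dimension.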
Renormalization group and $OPE$
-------------------------------
The structure of the two-point correlators of (scalar) local gauge invariant operators in $QCD$ with massless quarks or in any asymptotically free gauge theory with no perturbative mass scale is severely constrained [@migdal:multicolor] by perturbation theory in conjunction with the renormalization group [@MB] and by the operator product expansion ($OPE$) [@migdal:multicolor]: $$\int \braket{\mathcal{O}_D(x)\mathcal{O}_D(0)}_{conn}\, e^{-ipx}\, d^4x= C_0(p^2)+C_1(p^2)\, \braket{\mathcal{O}_{D_1}(0)}+ \cdots$$ Assuming multiplicative renormalizability of the operator $\mathcal{O}_D$, the coefficient functions $C_0, C_1, \cdots$ in the $OPE$ satisfy the Callan-Symanzik equations (see for example [@ZI]): $$\label{eqn:RG_eq_p}
\left(p_{\alpha}\frac{\partial}{\partial p_{\alpha}}-\beta(g)\frac{\partial}{\partial g}-2(D-2+\gamma_{\mathcal{O}_D}(g))\right)C_0(p^2)=0$$ and: $$\label{eqn:RG_eq_p}
\left(p_{\alpha}\frac{\partial}{\partial p_{\alpha}} -\beta(g)\frac{\partial}{\partial g}-(2D-D_1-4+2\gamma_{\mathcal{O}_D}(g)- \gamma_{\mathcal{O}_{D_1}}(g) )\right)C_1(p^2)=0$$ The solution for $C_0$ is [@MB]: $$\label{eqn:pert_general_behavior}
C_0(p^2)= p^{2D-4}\,\mathcal{G}_{0}(g(p))\, Z^2_{\mathcal{O}_D}(\frac{p}{\mu},g(p))$$ and: $$C_1(p^2)= p^{2D-D_1-4}\,\mathcal{G}_{1}(g(p))\, Z^2_{\mathcal{O}_D}(\frac{p}{\mu},g(p)) Z^{-1}_{\mathcal{O}_{D_1}}(\frac{p}{\mu},g(p))$$ with: $$\gamma_{\mathcal{O}_D}(g)= -\frac{\partial \log Z_{\mathcal{O}_D}}{\partial \log \mu} =-\gamma_{0}(\mathcal{O}_D)\, g^2 + \cdots$$ and: $$\beta(g)= \frac{\partial g}{\partial \log \mu} =-\beta_{0}\, g^3 - \beta_1 g^5 + \cdots$$ The power of $p$ is implied by dimensional analysis, $\mathcal{G}$ is a dimensionless function that depends only on the running coupling $g(p)$, and $Z$ is the contribution from the anomalous dimension.
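For reference (a standard two-loop expression that we record for definiteness, consistent with the logarithmic structure employed in sect.(3)), the running coupling entering these solutions behaves asymptotically as: $$g^2(p) \sim \frac{1}{\beta_0 \log \frac{p^2}{\Lambda^2_{QCD}}}\bigg(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log \frac{p^2}{\Lambda^2_{QCD}}}{\log \frac{p^2}{\Lambda^2_{QCD}}}\bigg)$$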
Since the correlator of composite operators is conformal at the lowest non-trivial order in perturbation theory, the perturbative estimate for $\mathcal{G}_0,\mathcal{G}_1$ is [@MB]: $$\mathcal{G}(g(p)) \sim {\log \frac{p^2}{ \Lambda^2_{QCD}} }\sim \frac{1}{g^2(p)}$$ Indeed, $\int p^{2D-4} \log \frac{p^2}{\mu^2} e^{ ipx} d^4p \sim \frac{1}{x^{2D}}$ is conformal in the coordinate representation for $D$ integer, $D \ge 3$ (see Appendix A of [@MB]).
Collecting the previous results, we get the naive scheme-independent universal large-momentum asymptotic estimate for $C_0$ [@MB]: $$\begin{aligned}
\label{eqn:naive_rg}
&C_0(p^2)\sim p^{2D-4}
g(p)^{\frac{2\gamma_0(\mathcal{O}_D) }{\beta_0}-2}\end{aligned}$$ and analogously for $C_1$: $$\begin{aligned}
\label{eqn:naive_rg1}
&C_1(p^2)\sim p^{2D-D_1-4}
g(p)^{\frac{2\gamma_0(\mathcal{O}_D) - \gamma_0(\mathcal{O}_{D_1}) }{\beta_0}-2}\end{aligned}$$ In fact, these estimates are naive because the correlator of $\mathcal{O}_D$ in the momentum representation is not multiplicatively renormalizable because of the presence of contact terms in perturbation theory.
Thus the naive $RG$-estimates may hold only after subtracting the contact terms. The strategy to check them is as follows.
In the coordinate representation [@chetyrkin:TF] no contact term arises for $x\neq 0$. If: $$\braket{\mathcal{O}_D(x)\mathcal{O}_D(0)}_{conn}= C_0(x^2)+C_1(x^2)\, \braket{\mathcal{O}_{D_1}(0)}+ \cdots$$ the coefficient functions $C_0, C_1, \cdots$ in the $OPE$ satisfy the Callan-Symanzik equations (see for example [@ZI]): $$\label{eqn:RG_eq_p}
\left(x_{\alpha}\frac{\partial}{\partial x_{\alpha}}+\beta(g)\frac{\partial}{\partial g}+2(D+\gamma_{\mathcal{O}_D}(g))\right)C_0(x^2)=0$$ and: $$\label{eqn:RG_eq_p}
\left(x_{\alpha}\frac{\partial}{\partial x_{\alpha}} +\beta(g)\frac{\partial}{\partial g}+(2D-D_1+2\gamma_{\mathcal{O}_D}(g)- \gamma_{\mathcal{O}_{D_1}}(g) )\right)C_1(x^2)=0$$ The solutions are: $$\label{eqn:pert_general_behavior}
C_0(x^2)= \frac{1}{x^{2D}}\,\mathcal{G}_{0}(g(x))\, Z^2_{\mathcal{O}_D}(x \mu,g(x))$$ and: $$C_1(x^2)= \frac{1}{x^{2D-D_1}}\,\mathcal{G}_{1}(g(x))\, Z^2_{\mathcal{O}_D}(x \mu,g(x)) Z^{-1}_{\mathcal{O}_{D_1}}(x \mu,g(x))$$ with $x=\sqrt{x^2}$. Since the correlator is conformal at the lowest non-trivial order in perturbation theory, the perturbative estimate for $\mathcal{G}(g(x))$ is [@MB]: $$\mathcal{G}(g(x)) \sim 1 + O(g^2(x))$$ Collecting the previous results, we get the actual small-distance scheme-independent universal asymptotic behavior: $$\begin{aligned}
&C_0(x^2)\sim \frac{1}{x^{2D}} \,
g(x)^{\frac{2\gamma_0(\mathcal{O}_D) }{\beta_0}}\end{aligned}$$ and: $$\begin{aligned}
\label{eqn:naive_rg}
&C_1(x^2)\sim \frac{1}{x^{2D-D_1}} \,
g(x)^{\frac{2\gamma_0(\mathcal{O}_D) - \gamma_0(\mathcal{O}_{D_1}) }{\beta_0}}\end{aligned}$$ Thus, in order to get the correct $RG$ estimates in the momentum representation, we should first compute the Fourier transform of the $RG$-improved result in the coordinate representation. But in general the Fourier transform does not exist because of the local singularity at $x=0$. Nevertheless, as a byproduct of the proof of the asymptotic theorem, we show in sect.(3) how to obtain explicit results for the large-momentum asymptotics of the Fourier transform, *after* the subtraction of the contact terms. It turns out that the naive $RG$ estimate in the momentum representation for $C_0$ is in fact correct, except in the two cases $\gamma'=0,1$, with $\gamma'=\frac{\gamma_0}{\beta_0}$, which need only a slight refinement discussed in sect.(3). Entirely similar results hold for $C_1$. For the case $\gamma'=0$ the asymptotic estimate in the momentum representation is simply $C_0(p^2) \sim p^{2D-4} \log \frac{p^2}{\mu^2}$, that corresponds to a correlator asymptotically conformal in the $UV$ (see Appendix A of [@MB]).
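As a concrete instance of these estimates (a worked example of ours, using the one-loop coefficients quoted in subsect.(2.3)): for the $L=2$ ground-state operator $\mathcal{O}_4$, with $D=4$ and $\gamma_0(\mathcal{O}_4)=2\beta_0$, i.e. $\gamma'=2$, the leading coefficient function behaves as: $$C_0(x^2)\sim \frac{1}{x^{8}}\, g(x)^{4}$$ in the coordinate representation, and as $C_0(p^2)\sim p^{4}\, g(p)^{2}$ in the momentum representation after subtracting the contact terms.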
$NSVZ$ low-energy theorems in $QCD$
-----------------------------------
We adapt to the large-$N$ limit the derivation of the low-energy theorem in [@1; @2], for a scalar operator $\mathcal{O}_D$ with dimension in energy $D$ and anomalous dimension $\gamma_{\mathcal{O}_D}$.
Actually, in order to make contact with the $TFT$ of subsect.(1.2), we specialize to the operators $\mathcal{O}_{2L}$, that occur as the ground state of the Hamiltonian spin chain in the integrable sector of Ferretti-Heise-Zarembo. While in intermediate steps we consider the large-$L$ limit, the actual formulation of the $NSVZ$ theorem depends only on the dimension $D$ of the operator.
We present the derivation for an operator with generic anomalous dimension, while originally $NSVZ$ considered only the $RG$-invariant case, i.e. zero anomalous dimension.
We start by the definition: $$\label{eqn:LE_def}
\braket{\frac{1}{N} \operatorname{Tr}\mathcal{O}_D}=\frac{\int \frac{1}{N} \operatorname{Tr}\mathcal{O}_D(0) e^{-\frac{N}{2g^2}\int \operatorname{Tr}F^2(x)d^4x }}
{\int e^{-\frac{N}{2g^2}\int \operatorname{Tr}F^2(x)d^4x }}$$ and we assume that there exists a non-perturbative scheme in which: $$\braket{\frac{1}{N} \operatorname{Tr}\mathcal{O}_D}=\Lambda_{YM}^D Z_{\mathcal{O}_D}$$ In addition, for large $L$, in the ground state of Ferretti-Heise-Zarembo: $$Z_{\mathcal{O}_{2L}}=Z^{L+\mathit{O}(\frac{1}{L})}$$ for some $Z$. We differentiate both sides of Eq.(\[eqn:LE\_def\]) with respect to $-\frac{1}{g^2}$. Therefore, for large $L$: $$\frac{\partial\braket{\frac{1}{N} \operatorname{Tr}\mathcal{O}_{2L}}}{\partial(-\frac{1}{g^2})}\sim
2L\,\Lambda_{YM}^{2L-1}\,\frac{\partial \Lambda_{YM}}{\partial(-\frac{1}{g^2})}\,Z^{L}+
L Z^{L-1}\Lambda_{YM}^{2L}\frac{\partial Z}{\partial(-\frac{1}{g^2})}$$ To compute $\frac{\partial \Lambda_{YM}}{\partial(-\frac{1}{g^2})}$ we use the definition of $\Lambda_{YM}$: $$\left(\frac{\partial}{\partial \log \Lambda}+\beta(g)\frac{\partial}{\partial g}\right)\Lambda_{YM}=0$$ so that: $$\frac{\partial \Lambda_{YM}}{\partial(-\frac{1}{g^2})}=\frac{g^3}{2}\frac{\partial\Lambda_{YM}}{\partial g}=
-\frac{g^3}{2\beta(g)}\frac{\partial \Lambda_{YM}}{\partial \log\mu}=
-\frac{g^3}{2\beta(g)}\Lambda_{YM}$$ The last identity follows from the relation: $$\Lambda_{YM}=\Lambda f(g)= e^{\log \Lambda} f(g)$$ for some function $f(g)$. To compute $\frac{\partial Z}{\partial(-\frac{1}{g^2})}$ we use its definition: $$\begin{aligned}
Z=e^{\int_{g(\mu)}^{g(\Lambda)}\frac{\gamma(g')}{\beta(g')}dg'} \nonumber\\
\Rightarrow \frac{\partial Z}{\partial(-\frac{1}{g^2})}=
\frac{g^3}{2\beta(g)}Z \gamma(g)\end{aligned}$$ On the other hand, differentiating the RHS of Eq.(\[eqn:LE\_def\]) we get: $$\frac{\partial\braket{\frac{1}{N}\operatorname{Tr}\mathcal{O}_{2L}}}{\partial(-\frac{1}{g^2})}=
\frac{1}{2}\int \braket{\operatorname{Tr}\mathcal{O}_{2L}(0) \operatorname{Tr}F^2(x)}_{conn}d^4x$$ and: $$\begin{aligned}
- \frac{g^3}{\beta(g)} D(1-\frac{\gamma(g)}{2}) \braket{\frac{1}{N} \operatorname{Tr}\mathcal{O}_D}
= \int \braket{\operatorname{Tr}\mathcal{O}_D(0) \operatorname{Tr}F^2(x)}_{conn}d^4x\end{aligned}$$ with the Wilsonian normalization of the action. Finally, taking the limit $\Lambda \rightarrow \infty$ we get the $NSVZ$ low-energy theorem with the Wilsonian normalization of the action: $$\begin{aligned}
\frac{D}{\beta_0} \braket{\frac{1}{N} \operatorname{Tr}\mathcal{O}_D}
= \int \braket{\operatorname{Tr}\mathcal{O}_D(0) \operatorname{Tr}F^2(x)}_{conn}d^4x\end{aligned}$$
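The key identity $\frac{\partial \Lambda_{YM}}{\partial(-\frac{1}{g^2})}=-\frac{g^3}{2\beta(g)}\Lambda_{YM}$ used in the derivation can be checked numerically at one loop (a sketch of ours, assuming the standard one-loop form $\Lambda_{YM}=\mu\, e^{-\frac{1}{2\beta_0 g^2}}$, which solves the defining $RG$ equation with $\beta(g)=-\beta_0 g^3$):

```python
# One-loop numerical check of d(Lambda_YM)/d(-1/g^2) = -g^3/(2 beta) * Lambda_YM,
# with beta(g) = -beta0 g^3 and Lambda_YM = mu * exp(-1/(2 beta0 g^2)).
from math import exp, pi

beta0 = (11.0 / 3.0) / (4.0 * pi) ** 2
mu = 1.0

def lam(inv_g2):
    # Lambda_YM as a function of u = 1/g^2 at fixed mu (one loop)
    return mu * exp(-inv_g2 / (2.0 * beta0))

g2 = 0.5
u = 1.0 / g2
h = 1e-6
# the derivative with respect to -1/g^2 is minus the derivative with respect to u
numeric = -(lam(u + h) - lam(u - h)) / (2.0 * h)

g = g2 ** 0.5
beta = -beta0 * g ** 3
analytic = -(g ** 3) / (2.0 * beta) * lam(u)
print(numeric, analytic)  # the two values agree
```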
The asymptotic structure theorem for glueball and meson propagators of any spin in large-$N$ $QCD$
==============================================================================================
Firstly, we prove the asymptotic theorem for scalar or pseudoscalar propagators.
We define the asymptotic spectral density as follows. For any test function $f$ we assume that the spectral sum can be approximated asymptotically by an integral, keeping the leading term in the Euler-MacLaurin formula [@migdal:meromorphization]: $$\sum_{n=1}^{\infty} f(m^{(s)2}_n ) \sim \int_{1}^{\infty} f(m^{(s)2}_n )\, dn$$ Then by definition the asymptotic spectral density satisfies: $$\frac{dn}{dm^2}=\rho_s(m^2)$$ i.e.: $$\int_{1}^{\infty} f(m^{(s)2}_n )\, dn = \int_{ m^{(s)2}_1 }^{\infty} f(m^2)\, \rho_s (m^2)\, dm^2$$ We write an ansatz for the large-$N$ two-point Euclidean correlator of a local gauge-invariant scalar or pseudoscalar operator $\mathcal{O}$ of naive dimension in energy $D$ and with anomalous dimension $\gamma_{\mathcal{O}}(g)$: $$\label{p}
\int \braket{\mathcal{O}(x) \mathcal{O}(0)}_{conn}\, e^{-ipx}\, d^4x = \sum_{n=1}^{\infty} \frac{R_n\, m^{2D-4}_n\, \rho^{-1}(m_n^2)}{p^2+m_n^2}$$ This ansatz is not restrictive: it follows from dimensional analysis alone, to the extent that the dimensionless pure numbers $R_n$ are as yet unspecified. However, the specific form of the ansatz is the most convenient for our aims.
We now distinguish two cases, $D$ even and $D$ odd. For local gauge-invariant composite operators in $QCD$ the lowest non-trivial operator with $D$ even occurs for $D=4$ in the pure glue sector, while the lowest $D$ odd occurs for $D=3$ in the sector containing fermion bilinears. For $D$ even using the identity: $$\label{b}
\frac{m^{2D-4}_n}{p^2+m_n^2} =\big((m^{2}_n +p^2)(m^{2}_n-p^2)+p^4\big)^{\frac{D-2}{2}}\,\big(p^2+m_n^2\big)^{-1}$$ we get: $$\label{b0}
\int \braket{\mathcal{O}(x) \mathcal{O}(0)}_{conn}\, e^{-ipx}\, d^4x = p^{2D-4} \sum_{n=1}^{\infty} \frac{R_n\, \rho^{-1}(m_n^2)}{p^2+m_n^2} + \cdots$$ where the dots represent contact terms, i.e. distributions whose Fourier transform is supported at $x=0$, that are physically irrelevant and that therefore can be safely discarded. The contact terms arise because, for $D$ even and $\frac{D}{2}-1$ positive, in Eq.(\[b\]) in addition to the term $p^{2D-4}$ at least one term involving the factor of $m^{2}_n +p^2$, that cancels the denominator, always occurs.
For $D$ odd we use instead the identity: $$\label{b1}
\frac{m_n^{2D-4}}{p^2+m_n^2} = \frac{m_n^2\, m^{2(D-1)-4}_n}{p^2+m_n^2} = (p^2+m_n^2-p^2)\, \big((m^{2}_n +p^2)(m^{2}_n-p^2)+p^4\big)^{\frac{D-3}{2}}\,\big(p^2+m_n^2\big)^{-1}$$ from which we get a similar result but with the opposite sign: $$\label{b2}
\int \braket{\mathcal{O}(x) \mathcal{O}(0)}_{conn}\, e^{-ipx}\, d^4x = - p^{2D-4} \sum_{n=1}^{\infty} \frac{R_n\, \rho^{-1}(m_n^2)}{p^2+m_n^2} + \cdots$$ It is also clear from Eq.(\[b\]) and Eq.(\[b0\]) that the sum of the contact terms is divergent, but nevertheless the entire sum, and not just the individual terms, is a polynomial of *finite* degree in momentum. Later in this subsection we pass to the coordinate representation, where contact terms do not arise at all for $x\ne 0$, a fact that confirms their physical irrelevance.
Now we replace the sum by the integral using the Euler-MacLaurin formula: $$\sum_{k=k_1}^{\infty}G_k(p)=
\int_{k_1}^{\infty}G_k(p)dk - \sum_{j=1}^{\infty}\frac{B_j}{j!}\left[\partial_k^{j-1}G_k(p)\right]_{k=k_1}$$ We disregard the terms involving the Bernoulli numbers, since in our case they are suppressed by inverse powers of momentum. Thus the infinite sum reads asymptotically: $$\begin{aligned}
&& \sum_{n=1}^{\infty} \frac{R_n\, m^{2D-4}_n\, \rho^{-1}(m_n^2)}{p^2+m_n^2}\nonumber\\
&&\sim \int_{1}^{\infty} \frac{R_n\, m^{2D-4}_n\, \rho^{-1}(m_n^2)}{p^2+m_n^2}\, dn\nonumber\\
&&= \int_{m^2_1}^{\infty} \frac{R(m)\, m^{2D-4}\, \rho^{-1}(m^2)}{p^2+m^2}\, \rho(m^2)\, dm^2\nonumber\\
&&= \int_{m^2_1}^{\infty} \frac{R(m)\, m^{2D-4}}{p^2+m^2}\, dm^2\end{aligned}$$ Now we compare Eq.(\[p\]) with perturbation theory. Assuming asymptotic freedom, the non-perturbative propagator has to match at large momentum, up to contact terms, the large-momentum $RG$-improved perturbative result obtained by solving the Callan-Symanzik equation, that, naively assuming multiplicative renormalizability of the operator $\mathcal{O}$, reads (see subsect.(2.4)): $$\int \braket{\mathcal{O}(x) \mathcal{O}(0)}_{conn}\, e^{-ipx}\, d^4x \sim p^{2D-4}\, Z_{\mathcal{O}}^{2}(p)\, \mathcal{G}_{0}(g(p))$$ This assumption is too naive because of the occurrence of contact terms also in perturbation theory. However, we prove later, employing the coordinate representation of the propagator, that after subtracting the contact terms in the momentum representation the naive $RG$-estimate is in fact correct, except in the special cases $\gamma'=0,1$ with $\gamma'=\frac{\gamma_0}{\beta_0}$.
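The Euler-MacLaurin step above can be checked on a toy example (ours, purely illustrative): for $G_k=\frac{1}{k^2}$ with $k_1=1$ the sum is $\frac{\pi^2}{6}$, the integral is $1$, and the first Bernoulli corrections already close most of the gap.

```python
# Toy check of the Euler-MacLaurin formula quoted above, for G_k = 1/k^2, k1 = 1:
# sum_{k=1}^inf 1/k^2 = pi^2/6, integral_1^inf dk/k^2 = 1.
from math import pi

exact = pi ** 2 / 6.0
integral = 1.0
# Bernoulli numbers B1 = -1/2, B2 = 1/6; the corrections are
# -sum_j B_j/j! * [d^{j-1}G/dk^{j-1}] at k = 1, with G(k) = k^-2
G, dG = 1.0, -2.0                # G(1) and G'(1)
approx = integral - (-0.5) * G - (1.0 / 6.0) / 2.0 * dG
print(exact, approx)
```

The remaining discrepancy is of the size of the next (alternating) Bernoulli correction.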
The only unknown function is $\mathcal{G}_{0}(g(p))$ that is supposed to be a $RG$-invariant function of the running coupling only. $\mathcal{G}_{0}(g(p))$ is fixed for a composite operator at the lowest non-trivial order by the condition that the two-point correlator be exactly conformal in the $UV$ in the coordinate representation.
Hence, using Eq.(\[b0\]) and factoring out $p^{2D-4}$ up to contact terms, we must have asymptotically for large $p$: $$\int_{m^2_1}^{\infty} \frac{R(m)}{p^2+m^2}\, dm^2 = Z_{\mathcal{O}}^{2}(p)\, \mathcal{G}_{0}(g(p))$$ up perhaps to an overall sign. It is convenient first to compactify the $dm^2$ integration and then to remove the cutoff $\Lambda$. For large $\Lambda$ and for large $p \ll \Lambda$: $$\int_{m^2_1}^{\Lambda^2} \frac{R(m)}{p^2+m^2}\, dm^2 = Z_{\mathcal{O}}^{2}(p)\, \mathcal{G}_{0}(g(p))$$ This is an integral equation of Fredholm type for which, by the Fredholm alternative, a solution exists if and only if it is unique. We find first explicitly a solution for large $\Lambda$, then we show how it extends to $\Lambda=\infty$. It is convenient to introduce the dimensionless variables $\nu=\frac{p^2}{\Lambda^2_{QCD}}$, $k=\frac{m^2}{\Lambda^2_{QCD}}$ and $K=\frac{\Lambda^2}{\Lambda^2_{QCD}}$. We get: $$\int_{k_1}^{K} \frac{R(k)}{k+\nu}\, dk= Z_{\mathcal{O}}^{2}(\sqrt{\nu})\, \mathcal{G}_{0}(g(\sqrt{\nu}))$$ and explicitly (see subsect.(2.4)), keeping only the asymptotic universal part: $$\label{12}
\int_{k_1}^{K} \frac{R(k)}{k+\nu}\, dk= \Bigg(\frac{1}{\beta_0 \log \nu}\bigg(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log\nu}{\log\nu}\bigg)\Bigg)^{\gamma'-1}$$ We show now that the solution is: $$\label{11}
R(k) \sim Z^2(k) \sim \Bigg(\frac{1}{\beta_0 \log(\frac{k}{c})}\bigg(1- \frac{\beta_1}{\beta_0^2}\frac{\log\log(\frac{k}{c})}{\log(\frac{k}{c})} + O\Big(\frac{1}{\log k}\Big) \bigg) \Bigg)^{\gamma'}$$ with asymptotic accuracy for large $k$ in the sense determined by the term $O(\frac{1}{\log k})$, i.e. within the universal leading and next-to-leading logarithmic accuracy, as remarked in sect.(1). The constant $c$ is related to the scheme dependence, but the universal part is actually $c$ independent. The proof of existence of the solution is by direct computation. The necessary integrals have been already computed in [@MB]. We substitute the ansatz in Eq.(\[11\]) into Eq.(\[12\]). We distinguish two cases: either $\gamma'>1$ or otherwise. For $\gamma'>1$ the integral in Eq.(\[12\]) is convergent, in such a way that the integration domain can be extended to $\infty$. Otherwise the integral is divergent, but the divergence is a contact term. Therefore, after subtracting the contact term, the solution can be extended to $\infty$. Following [@MB] firstly we change variables in the $LHS$ of Eq.(\[12\]) from $k$ to $k+\nu$: $$\begin{aligned}
\label{14}
I_c^{2}(\nu)&=\int_1^{\infty}\beta_0^{-\gamma'}\left(\frac{1}{\log(\frac{k}{c})}\left(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log(\frac{k}{c})}{\log(\frac{k}{c})}\right)\right)^{\gamma'}\frac{dk}{k+\nu}\nonumber\\
&=\beta_0^{-\gamma'}\int_{1+\nu}^{\infty}\left(\frac{1}{\log(\frac{k-\nu}{c})}\left(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log(\frac{k-\nu}{c})}{\log(\frac{k-\nu}{c})}\right)\right)^{\gamma'}\frac{dk}{k}\nonumber\\
&\sim \beta_0^{-\gamma'}\int_{1+\nu}^{\infty}\left[\log(\frac{k-\nu}{c})\right]^{-\gamma'}
\left(1-\gamma'\frac{\beta_1}{\beta_0^2}\frac{\log\log(\frac{k-\nu}{c})}{\log(\frac{k-\nu}{c})}\right)\frac{dk}{k}\nonumber\\
&\sim \beta_0^{-\gamma'}\int_{1+\nu}^{\infty}\left[\log(\frac{k-\nu}{c})\right]^{-\gamma'}\frac{dk}{k}
-\gamma'\frac{\beta_1}{\beta_0^2}\beta_0^{-\gamma'}\int_{1+\nu}^{+\infty}\left[\log(\frac{k-\nu}{c})\right]^{-\gamma'-1}\log\log(\frac{k-\nu}{c})
\frac{dk}{k}\end{aligned}$$ For the first integral in the last line we get: $$\begin{aligned}
\label{eqn: int1}
&\int_{1+\nu}^\infty\frac{1}{k}[\beta_0\log(\frac{k}{c})]^{-\gamma'}
\biggl[1+\frac{\log(1-\frac{\nu}{k})}{\log(\frac{k}{c})}\biggr]^{-\gamma'}dk\nonumber\\
&\sim \int_{1+\nu}^\infty\frac{1}{k}[\beta_0\log(\frac{k}{c})]^{-\gamma'}
\biggl[1+\gamma'\frac{\nu}{k \log(\frac{k}{c})}\biggr]dk\nonumber\\
&=\int_{1+\nu}^\infty\frac{1}{k}[\beta_0\log(\frac{k}{c})]^{-\gamma'}dk+
\gamma' \nu\int_{1+\nu}^\infty\frac{1}{k^2}\beta_0^{-\gamma'}[\log(\frac{k}{c})]^{-\gamma'-1}dk \end{aligned}$$ From the first integral it follows the leading asymptotic behavior [@boch:glueball_prop] provided $\gamma' \neq 1$: $$\label{eqn:sol_int_leading}
\int_{1+\nu}^\infty\frac{1}{k}[\beta_0\log(\frac{k}{c})]^{-\gamma'}dk=
\frac{1}{\gamma'-1} \beta_0^{-\gamma'}\left[\log\left(\frac{1+\nu}{c}\right)\right]^{-\gamma'+1}$$ For $\gamma' = 0$ there is nothing to add. It corresponds to the asymptotically conformal case in the $UV$. If $\gamma' \neq 0$ we add the second contribution. We evaluate it at the leading order by changing variables and integrating by parts: $$\begin{aligned}
\label{eqn:int_next-to-leading}
&\gamma'\frac{\beta_1}{\beta_0^2}\beta_0^{-\gamma'}\int_{1+\nu}^{+\infty}\left[\log(\frac{k-\nu}{c})\right]^{-\gamma'-1}\log\log(\frac{k-\nu}{c})
\frac{dk}{k}\nonumber\\
&\sim \gamma'\frac{\beta_1}{\beta_0^2}\beta_0^{-\gamma'}\int_{1+\nu}^{+\infty}\left[\log(\frac{k}{c})\right]^{-\gamma'-1}\log\log(\frac{k}{c})
\frac{dk}{k}\nonumber\\
&=\gamma'\frac{\beta_1}{\beta_0^2}\beta_0^{-\gamma'}\int_{\log\frac{1+\nu}{c}}^{+\infty}t^{-\gamma'-1}\log(t)dt\nonumber\\
&=\gamma'\frac{\beta_1}{\beta_0^2}\beta_0^{-\gamma'}\left[\frac{1}{\gamma'}\left(\log(\frac{1+\nu}{c})\right)^{-\gamma'}\log\log(\frac{1+\nu}{c})+
\frac{1}{\gamma'^2}\left(\log(\frac{1+\nu}{c})\right)^{-\gamma'}\right]\end{aligned}$$ The second term in brackets in the last line is subleading with respect to the first one. Collecting Eq.(\[eqn:int\_next-to-leading\]) and Eq.(\[eqn:sol\_int\_leading\]) we get for $I_c^{2}(\nu)$: $$\begin{aligned}
\label{eqn:esp_ntl}
&\beta_0^{-\gamma'}\int_1^{\infty}\left(\frac{1}{\log(\frac{k}{c})}\left(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log(\frac{k}{c})}{\log(\frac{k}{c})}\right)\right)^{\gamma'}\frac{dk}{k+\nu}\nonumber\\
& \sim \frac{1}{\gamma'-1}\beta_0^{-\gamma'}\left(\log\frac{1+\nu}{c}\right)^{-\gamma'+1}-\frac{\beta_1}{\beta_0^2}\beta_0^{-\gamma'}\left(\log(\frac{1+\nu}{c})\right)^{-\gamma'}\log\log(\frac{1+\nu}{c})\nonumber\\
&=\frac{\beta_0^{-\gamma'}}{\gamma'-1}\biggl(\log\frac{1+\nu}{c}\biggr)^{-\gamma'+1}\left[1-\frac{\beta_1(\gamma'-1)}{\beta_0^2}\left(\log(\frac{1+\nu}{c})\right)^{-1}\log\log(\frac{1+\nu}{c})\right]\nonumber\\
&\sim \frac{1}{\beta_0(\gamma'-1)}\left(\beta_0\log\frac{1+\nu}{c}\right)^{-\gamma'+1}\left[1-\frac{\beta_1}{\beta_0^2}\left(\log(\frac{1+\nu}{c})\right)^{-1}\log\log(\frac{1+\nu}{c})\right]^{\gamma'-1}\nonumber\\
&\sim \Biggl(\frac{1}{\beta_0\log \nu}\biggl(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log\nu}{\log\nu}\biggr)\Biggr)^{\gamma'-1}\end{aligned}$$ Thus the proof of the existence of the asymptotic solution is complete. Uniqueness follows by the Fredholm alternative.
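As an independent sanity check of ours on the leading integral Eq.(\[eqn:sol\_int\_leading\]), the case $\gamma'=2$, $c=1$ can be verified numerically; the $\beta_0^{-\gamma'}$ prefactor cancels between the two sides, leaving $\int_{1+\nu}^{\infty}\frac{dk}{k\,[\log k]^2}=[\log(1+\nu)]^{-1}$.

```python
# Numerical check of int_{1+nu}^inf dk / (k [log k]^2) = 1/log(1+nu),
# i.e. Eq. (sol_int_leading) at gamma' = 2, c = 1.
from math import log

nu = 10.0
t0 = log(1.0 + nu)   # substitute t = log k; the integrand becomes t^-2

# midpoint rule on [t0, T] plus the exactly known tail int_T^inf t^-2 dt = 1/T
T, n = 1.0e4, 100_000
h = (T - t0) / n
numeric = sum((t0 + (i + 0.5) * h) ** -2 for i in range(n)) * h + 1.0 / T
analytic = 1.0 / t0
print(numeric, analytic)
```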
We prove now the asymptotic theorem in the coordinate representation. The coordinate representation is the most convenient to get actual proofs of the $RG$ estimates, since in this representation for $x \neq 0$ contact terms do not occur, in such a way that composite operators are multiplicatively renormalizable.
In fact, the estimates in the momentum representation based on the Callan-Symanzik equations of subsect.(2.4) are rather naive, since they assume multiplicative renormalizability in the momentum representation, that is technically false. However, the following proof of the asymptotic theorem in the coordinate representation implies also that the naive $RG$ estimate for $C_0$[^16] in the momentum representation, after subtracting the contact terms, is in fact correct, except for $\gamma'=0,1$.
To show this, we proceed writing the ansatz for the propagator in the coordinate representation, expressing the free propagator in terms of the modified Bessel function $K_1$: $$\begin{aligned}
&& \braket{\mathcal{O}(x) \mathcal{O}(0)}_{conn}\nonumber\\
&&= \sum_{n=1}^{\infty} \int \frac{R_n\, m^{2D-4}_n\, \rho^{-1}(m_n^2)}{p^2+m_n^2}\, e^{ ipx}\, \frac{d^4p}{(2\pi)^4}\nonumber\\
&&= \frac{1}{4 \pi^2} \sum_{n=1}^{\infty} R_n\, m^{2D-4}_n\, \rho^{-1}(m_n^2)\, \frac{m_n}{\sqrt{x^2}}\, K_1\big( \sqrt{m_n^2 x^2}\, \big)\end{aligned}$$ Approximating the sum by the integral using the Euler-MacLaurin formula [@migdal:meromorphization], we get asymptotically: $$\begin{aligned}
&& \braket{\mathcal{O}(x) \mathcal{O}(0)}_{conn}\nonumber\\
&& \sim \frac{1}{4 \pi^2} \int_{1}^{\infty} R_n\, m^{2D-4}_n\, \rho^{-1}(m_n^2)\, \frac{m_n}{\sqrt{x^2}}\, K_1\big( \sqrt{m_n^2 x^2}\, \big)\, dn\nonumber\\
&& = \frac{1}{4 \pi^2} \int_{m^2_1}^{\infty} R(m)\, m^{2D-4}\, \frac{m}{\sqrt{x^2}}\, K_1\big( \sqrt{m^2 x^2}\, \big)\, dm^2\end{aligned}$$ We introduce now the dimensionless variable $z^2=x^2 m^2$: $$\begin{aligned}
\label{R}
&& \braket{\mathcal{O}(x) \mathcal{O}(0)}_{conn}\nonumber\\
&& \sim \frac{1}{4 \pi^2} \int_{m^2_1}^{\infty} R(m)\, m^{2D-4}\, \frac{m}{\sqrt{x^2}}\, K_1\big( \sqrt{m^2 x^2}\, \big)\, dm^2\nonumber\\
&& = \frac{1}{4 \pi^2 x^{2D}} \int_{m^2_1 x^2}^{\infty} R\Big(\frac{z}{x}\Big)\, (z^2)^{D-2}\, z\, K_1(z)\, dz^2\nonumber\\
&& = \frac{1}{4 \pi^2 x^{2D}} \int_{m^2_1 x^2}^{\infty} R\Big(\frac{z}{x}\Big)\, z^{2D-3}\, K_1(z)\, dz^2\end{aligned}$$ In the coordinate representation the solution of the Callan-Symanzik equation (see subsect.(2.4)) is: $$\label{eqn:pert_general_behavior_x}
\braket{\mathcal{O}(x)\mathcal{O}(0)}_{\mathit{conn}}=
\frac{1}{(x^2)^D} \,\mathcal{G}_{0}(g(x))\, Z^2_{\mathcal{O}}(x \mu ,g(x))$$ with the truly $RG$-invariant function $\mathcal{G}_{0}(g(x))$ admitting the expansion: $$\mathcal{G}_{0}(g(x))= const \big(1+ O(g^2(x)) \big)$$ since the correlator in the coordinate representation must be exactly conformal at the lowest non-trivial order. Hence within the universal asymptotic accuracy: $$\braket{\mathcal{O}(x) \mathcal{O}(0)}_{conn} \sim \frac{1}{x^{2D}} \Bigg(\frac{1}{\beta_0 \log(\frac{1}{x^2 \Lambda^2_{QCD}})}\bigg(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log(\frac{1}{x^2 \Lambda^2_{QCD}})}{\log(\frac{1}{x^2 \Lambda^2_{QCD}})}\bigg)\Bigg)^{\gamma'}$$ It follows from Eq.(\[R\]) that it must hold: $$\int_{m_1^2 x^2}^{\infty} R\Big(\frac{z}{x}\Big)\, z^{2D-3}\, K_1(z)\, dz^2 \sim \Bigg(\frac{1}{\beta_0 \log(\frac{1}{x^2 \Lambda^2_{QCD}})}\bigg(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log(\frac{1}{x^2 \Lambda^2_{QCD}})}{\log(\frac{1}{x^2 \Lambda^2_{QCD}})}\bigg)\Bigg)^{\gamma'}$$ The asymptotic solution is: $$R\Big(\frac{z}{x}\Big) \sim \Bigg(\frac{1}{\beta_0 \log(\frac{z^2}{x^2 \Lambda^2_{QCD}})}\bigg(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log(\frac{z^2}{x^2 \Lambda^2_{QCD}})}{\log(\frac{z^2}{x^2 \Lambda^2_{QCD}})}\bigg)\Bigg)^{\gamma'}$$ Indeed, the universal part of $R(\frac{z}{x})$ is actually $z$ independent and therefore we can put it, for any fixed $z=z_0$, outside the integral over $z$ in the limit $x \rightarrow 0$: $$\begin{aligned}
&&\int_{m_1^2 x^2}^{\infty} R\Big(\frac{z}{x}\Big)\, z^{2D-3}\, K_1(z)\, dz^2 \sim R\Big(\frac{z_0}{x}\Big) \int_{0}^{\infty} z^{2D-3}\, K_1(z)\, dz^2\nonumber\\
&&\sim \Bigg(\frac{1}{\beta_0 \log(\frac{1}{x^2 \Lambda^2_{QCD}})}\bigg(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log(\frac{1}{x^2 \Lambda^2_{QCD}})}{\log(\frac{1}{x^2 \Lambda^2_{QCD}})}\bigg)\Bigg)^{\gamma'}\end{aligned}$$ since the integral: $$\int_{0}^{\infty} z^{2D-3}\, K_1(z)\, dz^2$$ is convergent for $D>1$, because $K_1$ has a simple pole in $z=0$ and decays exponentially for large $z$. Therefore, within the universal asymptotic accuracy: $$R\Big(\frac{z}{x}\Big) \sim Z^2_{\mathcal{O}}(x \mu ,g(x))$$ and the naive $RG$ estimate in momentum space is in fact correct, except for $\gamma'=0,1$.
Indeed, we have just proved that the universal part of the residues $R_n$ determined by the integral equations in the coordinate representation and in the momentum representation is the same. Since in the coordinate representation the $RG$ estimate is certainly correct because of the lack of contact terms, it follows that the asymptotic behavior in the momentum representation is computable using the sum of free propagators with the residues determined by the coordinate representation as input. But then, after *subtracting* the contact terms that arise in the sum of free propagators, the asymptotic behavior in the momentum representation is *computed* by the integral in Eq.(\[eqn:esp\_ntl\]), that coincides with the naive $RG$ estimate of subsect.(2.4) [@MB], except for $\gamma'=0,1$.
The extension to any integer spin $s$ is an easy corollary. It is only necessary to prove that: $$\begin{aligned}
&&\sum_{n=1}^{\infty} \frac{R_n\, m^{(s)2D-4}_n\, \rho_s^{-1}(m^{(s)2}_n)}{p^2+m^{(s)2}_n}\, P^{(s)} \Big(\frac{p_{\alpha}}{m^{(s)}_n}\Big)\nonumber\\
&&=P^{(s)} \Big( \frac{p_{\alpha}}{p} \Big)\, p^{2D-4} \sum_{n=1}^{\infty} \frac{R_n\, \rho_s^{-1}(m^{(s)2}_n)}{p^2+m^{(s)2}_n} + \cdots\end{aligned}$$ where the dots represent contact terms and $P^{(s)} \big(\frac{p_{\alpha}}{p} \big)$ is the projector obtained substituting $-p^2$ for $m_n^2$ in $P^{(s)} \big(\frac{p_{\alpha}}{m_n} \big)$. The proof is as follows. $m^{(s)2D-4}_n P^{(s)} \big(\frac{p_{\alpha}}{m^{(s)}_n}\big)$ is a polynomial in powers of $m^2_n$. To each monomial $m^{2d}_n$ occurring in this polynomial we can substitute either $p^{2d}$ or $- p^{2d}$, for $d$ even or $d$ odd respectively, up to contact terms, because of Eq.(\[b0\]) and Eq.(\[b2\]). This is the same as substituting $-p^2$ for $m_n^2$ in $P^{(s)} \big(\frac{p_{\alpha}}{m_n} \big)$, since for $d$ even we always get a positive sign. The asymptotic theorem for any spin follows.
For completeness we write explicitly the spin-$1$ and the spin-$2$ propagators as determined by the asymptotic theorem. We employ Veltman conventions for Euclidean and Minkowski propagators (see Appendix F in [@Velt2]).
For spin $1$: $$\begin{aligned}
&& \int \big\langle \mathcal{O}^{(1)}_{\alpha}(x)\, \mathcal{O}^{(1)}_{\beta}(0) \big\rangle_{conn}\, e^{-ipx}\, d^4x \\
&& \sim \sum_{n=1}^{\infty} \frac{m^{(1)\,2D-4}_n}{p^2+m^{(1)\,2}_n} \Big(\delta_{\alpha\beta} + \frac{p_{\alpha} p_{\beta}}{m^{(1)\,2}_n}\Big) \\
&& \sim p^{2D-4} \Big(\delta_{\alpha\beta} - \frac{p_{\alpha} p_{\beta}}{p^2}\Big) \sum_{n=1}^{\infty} \frac{1}{p^2+m^{(1)\,2}_n} + \cdots\end{aligned}$$ For spin 2: $$\begin{aligned}
&& \int \big\langle \mathcal{O}^{(2)}_{\alpha\beta}(x)\, \mathcal{O}^{(2)}_{\gamma\delta}(0) \big\rangle_{conn}\, e^{-ipx}\, d^4x \\
&& \sim \sum_{n=1}^{\infty} \frac{m^{(2)\,2D-4}_n}{p^2+m^{(2)\,2}_n} \Big[ \frac{1}{2}\big(\Pi_{\alpha\gamma}\Pi_{\beta\delta} + \Pi_{\alpha\delta}\Pi_{\beta\gamma}\big) - \frac{1}{3}\,\Pi_{\alpha\beta}\Pi_{\gamma\delta} \Big] \\
&& \sim p^{2D-4} \Big[ \frac{1}{2}\big(\Pi_{\alpha\gamma}\Pi_{\beta\delta} + \Pi_{\alpha\delta}\Pi_{\beta\gamma}\big) - \frac{1}{3}\,\Pi_{\alpha\beta}\Pi_{\gamma\delta} \Big] \sum_{n=1}^{\infty} \frac{1}{p^2+m^{(2)\,2}_n} + \cdots\end{aligned}$$ with $\Pi_{\alpha\beta}=\Pi_{\alpha\beta}(m^{(2)}_n)$ in the massive sum and $\Pi_{\alpha\beta}=\Pi_{\alpha\beta}(p)$ in the massless expression, where: $$\Pi_{\alpha\beta}(m)= \delta_{\alpha\beta} + \frac{p_{\alpha} p_{\beta}}{m^2}$$ and: $$\Pi_{\alpha\beta}(p)= \delta_{\alpha\beta} - \frac{p_{\alpha} p_{\beta}}{p^2}$$ Some observations are in order.
Each massive propagator is conserved only on the respective mass shell. However, after subtracting the sum of contact terms denoted by the dots (a polynomial of *finite* degree in the momentum representation with diverging coefficients), the resulting massless projector implies off-shell conservation, as if the large-$N$ $QCD$ propagators were saturated by massless particles only. This is necessary to match $QCD$ perturbation theory (with massless quarks). For a direct check see [@chetyrkin:TF; @chet:tensore].
In the spin-$2$ case the massless projector contains a factor of $\frac{1}{3}$ in the last term, which descends from the massive case, rather than the factor of $\frac{1}{2}$ that would occur for a truly physical massless spin-$2$ particle, according to the van Dam-Veltman-Zakharov discontinuity [@Velt; @Zak]. This factor of $\frac{1}{3}$ occurs also in perturbative computations of the correlator of the stress-energy tensor in $QCD$ [@chet:tensore]. Indeed, a spin-$2$ glueball in $QCD$ is not a graviton.
[99]{} M. Bochicchio, S. P. Muscinelli, *Ultraviolet asymptotics of glueball propagators*, [hep-th/1304.6409](http://arxiv.org/abs/arXiv:1304.6409). M. F. Zoller, K. G. Chetyrkin, *OPE of the energy-momentum tensor correlator in massless QCD*, [hep-ph/1209.1516](http://arxiv.org/abs/1209.1516). K.G. Chetyrkin, B.A. Kniehl, and M. Steinhauser, *Hadronic Higgs Boson Decay to Order $\alpha^4$*, *Phys. Rev. Lett.* [**79**]{} 353 (1997) [\[hep-ph/9705240\]](http://arxiv.org/abs/hep-ph/9705240). K.G. Chetyrkin, B.A. Kniehl, M. Steinhauser, W.A. Bardeen, *Effective QCD Interactions of CP-odd Higgs Bosons at Three Loops*, *Nucl. Phys. B* [**535**]{} 3 (1998) [\[hep-ph/9807241\]](http://arxiv.org/abs/hep-ph/9807241). A. L. Kataev, N. V. Krasnikov, A. A. Pivovarov, *Two Loop Calculations For The Propagators Of Gluonic Currents*, *Nucl. Phys. B* [**198**]{} (1982) 508, \[Erratum-ibid. [**490**]{} (1997) 505\] [\[hep-ph/9612326\]](http://arxiv.org/abs/hep-ph/9612326). G. Ferretti, R. Heise, K. Zarembo, *New integrable structures in large-N QCD*, *Phys. Rev. D* **70** (2004) 074024 [\[hep-th/0404187\]](http://arxiv.org/abs/hep-th/0404187). M. Bochicchio, *Quasi BPS Wilson loops, localization of loop equation by homology and exact beta function in the large-N limit of $SU(N)$ Yang-Mills theory*, JHEP **0905** 116 (2009) [\[hep-th/0809.4662\]](http://arxiv.org/abs/0809.4662). M. Bochicchio, *Exact beta function and glueball spectrum in large-N Yang Mills theory*, *PoS EPS-HEP2009: [**075**]{}(2009)* [\[hep-th/0910.0776\]](http://arxiv.org/abs/0910.0776). M. Bochicchio, *Glueballs in large-N $YM$ by localization on critical points*, [hep-th/1107.4320](http://arxiv.org/abs/1107.4320), extended version of the talk at the Galileo Galilei Institute Conference “Large-$N$ Gauge Theories”, Florence, Italy, May 2011. M. Bochicchio, *Glueball propagators in large-N YM*, [hep-th/1111.6073](http://arxiv.org/abs/1111.6073). M. 
Bochicchio, *Yang-Mills mass gap at large-N, topological quantum field theory and hyperfiniteness*, [hep-th/1202.4476](http://arxiv.org/abs/1202.4476), a byproduct of the Simons Center workshop “Mathematical Foundations of Quantum Field Theory”, Stony Brook, USA, Jan 16-20 (2012). H. B. Meyer, M. J. Teper, *Glueball Regge trajectories and the Pomeron – a lattice study –*, *Phys.Lett. B* [**605**]{} 344 (2005) [\[hep-ph/0409183\]](http://arxiv.org/abs/hep-ph/0409183). H. B. Meyer, *Glueball Regge Trajectories*, [\[hep-lat/0508002\]](http://arxiv.org/abs/arXiv:hep-lat/0508002). B. Lucini, M. Teper, U. Wenger, *Glueballs and k-strings in SU(N) gauge theories: calculations with improved operators*, JHEP **0406** 012 (2004) [\[hep-lat/0404008\]](http://arxiv.org/abs/hep-lat/0404008). B. Lucini, A. Rago, E. Rinaldi, *Glueball masses in the large N limit*, JHEP [**1008**]{} 119 (2010) [\[hep-lat/1007.3879\]](http://arxiv.org/abs/arXiv:1007.3879). O. Aharony, S. S. Gubser, J. Maldacena, H. Ooguri, Y. Oz, *Large $N$ Field Theories, String Theory and Gravity*, Phys. Rept. [**323**]{} 183 (2000) [\[hep-th/9905111\]](http://arxiv.org/abs/hep-th/9905111). E. Witten, *Anti-de Sitter Space, Thermal Phase Transition, And Confinement In Gauge Theories*, *Adv. Theor. Math. Phys.* [**2**]{} 505 (1998) [\[hep-th/9803131\]](http://arxiv.org/abs/hep-th/9803131). I. R. Klebanov, M. J. Strassler, *Supergravity and a Confining Gauge Theory: Duality Cascades and $\chi$SB-Resolution of Naked Singularities*, JHEP [**0008**]{} 052 (2000) [\[hep-th/0007191\]](http://arxiv.org/abs/hep-th/0007191). M. J. Strassler, *The Duality Cascade*, [hep-th/0505153](http://arxiv.org/abs/hep-th/0505153). J. Polchinski, M. J. Strassler, *Hard scattering and gauge/string duality*, *Phys. Rev. Lett.* [**88**]{} (2002) 031601 [\[hep-th/0109174\]](http://arxiv.org/abs/hep-th/0109174). R. C. Brower, J. Polchinski, M. J. Strassler, C.-I.
Tan, *The Pomeron and Gauge/String Duality*, JHEP [**0712**]{} (2007) 005 [\[hep-th/0603115\]](http://arxiv.org/abs/hep-th/0603115). A. Karch, E. Katz, D. T. Son, M. A. Stephanov, *Linear Confinement and AdS/QCD*, *Phys. Rev. D* [**74**]{} 015005 (2006) [\[hep-ph/0602229\]](http://arxiv.org/abs/hep-ph/0602229). R. C. Brower, M. Djuric, C.-I Tan, *Odderon in Gauge/String Duality* JHEP [**0907**]{} 063 (2009) [\[hep-th/0812.0354\]](http://arxiv.org/abs/arXiv:0812.0354). R. C. Brower, S. D. Mathur, *Glueball Spectrum for QCD from AdS Supergravity Duality*, *Nucl. Phys. B* [**587**]{} 249 (2000) [\[hep-th/0003115\]](http://arxiv.org/abs/hep-th/0003115). H. Forkel, *Holographic glueball structure*, *Phys. Rev. D* [**78**]{} 025001 (2008) [\[hep-ph/0711.1179\]](http://arxiv.org/abs/0711.1179). P. Colangelo, F. de Fazio, F. Jugeau, S. Nicotri, *Investigating AdS/QCD duality through scalar glueball correlators*, *Int. J. Mod. Phys. A* [**24**]{} 4177 (2009) [\[hep-ph/0711.4747\]](http://arxiv.org/abs/0711.4747). H. Forkel, *Glueball correlators as holograms*, [hep-ph/0808.0304](http://arxiv.org/abs/0808.0304). H. Forkel, *AdS/QCD at the correlator level*, *PoS(Confinement8)* [**184**]{} [\[hep-ph/0812.3881\]](http://arxiv.org/abs/0812.3881). M. Krasnitz, *A two point function in a cascading $\mathcal{N} =1$ gauge theory from supergravity*, [hep-th/0011179](http://arxiv.org/abs/hep-th/0011179). M. Krasnitz, *Correlation functions in a cascading $\mathcal{N} =1$ gauge theory*, JHEP [**0212**]{} (2002) 048 [\[hep-th/0209163\]](http://arxiv.org/abs/hep-th/0209163). I. I. Kogan, M. Shifman, *Two Phases of Supersymmetric Gluodynamics*, *Phys. Rev. Lett.*[ **75**]{} 2085 (1995) [\[hep-th/9504141\]](http://arxiv.org/abs/hep-th/9504141). B. Lucini, M. Panero, *SU(N) gauge theories at large N*, *Physics Reports* [**526**]{} (2013) 93 [\[hep-th/1210.4997\]](http://arxiv.org/abs/1210.4997). H. Boschi-Filho, N. R. F. 
Braga, *QCD/String holographic mapping and glueball mass spectrum*, *Eur. Phys. J. C* [**32**]{} (2004) 529 [\[hep-th/0209080\]](http://arxiv.org/abs/hep-th/0209080). H. Boschi-Filho, N. R. F. Braga, H. L. Carrion, *Glueball Regge trajectories from gauge/string duality and the Pomeron*, *Phys. Rev. D* [**73**]{}(2006) 0407901 [\[hep-th/0507063\]](http://arxiv.org/abs/hep-th/0507063). G. ’t Hooft, *Nucl. Phys. B* **72** 461 (1974). A. Migdal, *Multicolor QCD as a dual-resonance theory*, *Annals of Physics* **109** 365 (1977). C. Itzykson, J-B. Zuber, *Quantum Field Theory*, McGraw-Hill. A. M. Polyakov, *Gauge Fields and Strings*, Harwood Academic Publishers. H. Van Dam, M. Veltman, *Massive and Mass-Less Yang-Mills And Gravitational Fields*, *Nucl. Phys. B* [**22**]{} 397 (1970). V. I. Zakharov, JETP Lett. [**12**]{} (1970) 312 \[Pisma Zh. Eksp. Teor. Fiz. [**12**]{} (1970) 447\]. D. Francia, J. Mourad, A. Sagnotti, *Current Exchanges and Unconstrained Higher Spins*, *Nucl. Phys. B* [**773**]{} (2007) 203 [\[hep-th/0701163\]](http://arxiv.org/abs/hep-th/0701163). D. Francia, *Geometric massive higher spins and current exchanges*, *Fortsh. Phys.* [**56**]{} (2008) 800 [\[hep-th/0804.2857\]](http://arxiv.org/abs/arXiv:0804.2857). K. G. Chetyrkin, A. Maier, *Massless correlators of vector, scalar and tensor currents in position space at orders $\alpha_s^3$ and $\alpha_s^4$: explicit analytical results*, *Nucl. Phys. B* [**844**]{} 266 (2011) [\[hep-ph/1010.1145\]](http://arxiv.org/abs/1010.1145). V. A. Novikov, M. A. Shifman, A. I. Vainshtein, V. I. Zakharov, *Nucl. Phys. B* [**165**]{} (1980) 67. V. A. Novikov, M. A. Shifman, A. I. Vainshtein, V. I. Zakharov, *Nucl. Phys. B* [**191**]{} (1981) 301. A. Migdal, *Meromorphization of Large N QFT*, [hep-th/1109.1623](http://arxiv.org/abs/1109.1623). M. Veltmann, *Diagrammatica*, Cambridge University Press. J. Mondejar, A. 
Pineda, *Constraints on Regge models from perturbation theory*, JHEP [**0710**]{} (2007) 061 [\[hep-th/0704.1417\]](http://arxiv.org/abs/0704.1417). J. Mondejar, A. Pineda, *$1/N_c$ and 1/n preasymptotic corrections to Current-Current correlators*, JHEP [**0806**]{} (2008) 039 [\[hep-th/0803.3625\]](http://arxiv.org/abs/0803.3625).
[^1]: But with coefficients that are divergent in our case, because of the infinite sum.
[^2]: We use Veltman conventions for Euclidean and Minkowski propagators of spin $s$ (see sect.(3)).
[^3]: We have verified explicitly in [@MB] the $RG$ estimates for the operators $\operatorname{Tr}{F}^2$ and $\operatorname{Tr}{F{^*\!F}}$ on the basis of a remarkable three-loop computation by Chetyrkin et al. [@chetyrkin:scalar; @chetyrkin:pseudoscalar] (see subsect.(1.2) and subsect.(2.4)). The earlier two-loop computation was performed in [@Kataev:1981gr].
[^4]: While the asymptotic behavior of the residues in Eq.(\[eqn:zk\_as\_behav\]), fixed $\gamma_0$ for the operator $\mathcal{O}$, holds for every real $\gamma'=\frac{\gamma_0}{\beta_0}$, it corresponds to the actual behavior of the momentum representation in Eq.(\[CS\]) for every $\gamma'$ but for $\gamma'=0,1$ (see sect.(3)).
[^5]: We use here a manifestly covariant notation as opposed to the one in the $TFT$ [@boch:glueball_prop; @boch:crit_points].
[^6]: In [@MB] we have set $\delta=0$.
[^7]: This follows from the identity $\operatorname{Tr}F^2(x)=\frac{1}{2} \operatorname{Tr}F^{-2}(x)+\operatorname{Tr}(F{^*\!F})$ and by the fact that the term $\int d^4x \operatorname{Tr}(F{^*\!F})$ is irrelevant in the $TFT$ [@boch:crit_points].
[^8]: This means on the presently larger lattice with the smaller value of the $YM$ coupling.
[^9]: Again we have set $\delta=0$ in Eq.(\[eqn:formula\]).
[^10]: The only common feature is the gross picture of the existence of the mass gap and of an infinite tower of massive glueballs.
[^11]: We would like to thank Biagio Lucini for suggesting this interpretation.
[^12]: Biagio Lucini, private communication.
[^13]: By contrast, the standard identification is not employed in [@Brower1]. In fact, in [@Brower1] it is shown that on the string side there is another scalar state with mass lower than the dilaton. But this lowest-mass scalar couples to a field on the string side that has no correspondent on the gauge side. In particular, according to the non-standard identification, the mass gap would not arise from states that couple to $\operatorname{Tr}F^2$, a statement that we do not believe. For this non-standard choice $r_s=1.7388$, $r_{ps}=2.092$, $r_2=1.7388$. Indeed, the standard identification is subsequently employed in [@Brower2].
[^14]: Biagio Lucini communicated to us that there is an ongoing computation by Lucini-Rago-Rinaldi.
[^15]: After this paper was posted on the arXiv we have been informed of [@A1; @A2] where, for the meson propagators of the scalar and of the vector current in $QCD$, the scaling of the residues with the meson masses is analyzed assuming an asymptotically linear spectrum and employing a different technique based on dispersion relations and on the explicit perturbative computation. The leading and next-to-leading asymptotic results of [@A1; @A2] for the residues of the meson propagator of the vector and of the scalar current agree perfectly with the asymptotic theorem of this paper as special cases.
[^16]: And mutatis mutandis for $C_1$.
---
abstract: |
We consider a model of *energy complexity* in Radio Networks in which transmitting or listening on the channel costs one unit of energy and computation is free. This simplified model captures key aspects of battery-powered sensors: that battery-life is most influenced by transceiver usage, and that at low transmission powers, the actual cost of transmitting and listening are very similar.
The energy complexity of tasks in single-hop (clique) networks is well understood [@ChangKPWZ17; @NakanoO00; @BenderKPY18; @JurdzinskiKZ02c]. Recent work of Chang et al. [@ChangDHHLP18] considered energy complexity in *multi-hop* networks and showed that $\mathsf{Broadcast}$ admits an *energy-efficient* protocol, by which we mean each of the $n$ nodes in the network spends $O({\mathrm{polylog}}(n))$ energy. This work left open the strange possibility that *all* natural problems in multi-hop networks might admit such an energy-efficient solution.
In this paper we prove that the landscape of energy complexity is rich enough to support a multitude of problem complexities. Whereas $\mathsf{Broadcast}$ can be solved by an energy-efficient protocol, exact computation of $\mathsf{Diameter}$ cannot, requiring $\Omega(n)$ energy. Our main result is that $\mathsf{Breadth First Search}$ has sub-polynomial energy complexity at most $2^{O(\sqrt{\log n\log\log n})}=n^{o(1)}$; whether it admits an efficient $O({\mathrm{polylog}}(n))$-energy protocol is an open problem.
Our main algorithm involves recursively solving a generalized BFS problem on a “cluster graph” introduced by Miller, Peng, and Xu [@miller2013parallel]. In this application, we make crucial use of a close relationship between distances in this cluster graph, and distances in the original network. This relationship is new and may be of independent interest.
We also consider the problem of approximating the network $\mathsf{Diameter}$. From our main result, it is immediate that $\mathsf{Diameter}$ can be 2-approximated using $n^{o(1)}$ energy per node. We observe that, for all $\epsilon > 0$, approximating $\mathsf{Diameter}$ to within a $(2-\epsilon)$ factor requires $\Omega(n)$ energy per node. However, this lower bound is only due to graphs of very small diameter; for large-diameter graphs, we prove that the diameter can be nearly $3/2$-approximated using $O(n^{1/2+o(1)})$ energy per node.
author:
- |
Yi-Jun Chang\
[ETH Zürich]{}
- |
Varsha Dani\
[Univ. of New Mexico]{}
- |
Thomas P. Hayes[^1]\
[Univ. of New Mexico]{}
- |
Seth Pettie[^2]\
[Univ. of Michigan]{}
bibliography:
- 'reference.bib'
title: The Energy Complexity of BFS in Radio Networks
---
Introduction
============
Consider a network of $n$ tiny sensors scattered throughout a National Park. We’d like the sensors to organize themselves, so that in the event of a forest fire, say, information about it can be *efficiently* broadcast to the entire network.
In this extremely low power setting, sensors would need to spend most of their time with their transceiver units shut off to conserve power. In a steady state, we might expect that we have a good *labelling* of the nodes, and each node with label $i$ wakes up at times of the form $jP + i$, where $j$ runs through every positive integer, and $P$, the polling period, is also a positive integer. Each node wakes up just long enough to receive a message and forward it to any neighbors with label $i+1$. In this way, at the expense of adding $P$ to the latency, the nodes are able to reduce their power consumption by a factor of $P$, compared to the always-on scenario.
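As a sanity check on the latency claim, the polling scheme can be simulated on a path network (an illustrative sketch, not code from the paper; the function name and the path topology are our own choices, and we assume $P \ge 3$ so that a listening node never hears two simultaneous transmitters):

```python
def polling_broadcast_latency(n, P, t0):
    """Path network 0,...,n-1 where node i has label i. At timestep t,
    nodes whose label is congruent to t (mod P) transmit if they hold
    the message, and each label-(i+1) neighbor wakes to listen during
    label i's slot. Returns the number of timesteps until node n-1 is
    informed, counting from t0, when node 0 first holds the message."""
    informed = [False] * n
    informed[0] = True
    t = t0
    while not informed[n - 1]:
        for v in range(n - 1):
            if informed[v] and t % P == v % P:
                informed[v + 1] = True  # v transmits; v+1 is awake listening
        t += 1
    return t - t0
```

Once the message aligns with the schedule (after fewer than $P$ steps), it advances one hop per timestep, so the latency is at most $(n-1)+P$; meanwhile each node is awake in only two slots per period (one listen, one transmit), which is roughly the factor-$P$ energy reduction described above.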
Once $P$ has been optimized, which should be a function of the available power, the next issue is how to find a good labelling efficiently. In this paper we focus mainly on the problem of computing *BFS labelings*: a given source $s$ has label zero, and all other devices label themselves by the distance (in hops) to $s$. Such a labeling gives a 2-approximation to the diameter, and via up-casts and down-casts, allows for time- and energy-efficient dissemination of a message from any origin. Thus, the problem of finding a BFS labelling is a very natural question in this context.
The Model
---------
We work within the classic *Radio Network* model [@chlamtac1985broadcasting], but in contrast to most prior work in this model, we treat *energy* (defined below) as the primary measure of complexity and *time* to be important, but secondary.
There are $|V|$ devices associated with the nodes of an undirected graph $G=(V,E)$. *Time* is partitioned into discrete steps. All devices agree on time zero,[^3] and agree on some upper bound $n\ge |V|$. In each timestep, each device performs some computation and chooses to either ${\mathsf{idle}}$, ${\mathsf{listen}}$ to the channel, or ${\mathsf{transmit}}$ a message. If a device $v$ chooses to ${\mathsf{listen}}$, and *exactly* one device $u\in N(v)$ ${\mathsf{transmit}}$s a message $m_u$, then $v$ receives $m_u$. In all other cases, $v$ receives no feedback from the environment.[^4] Devices can locally generate unbiased random bits; there is no shared randomness. Let ${\mathsf{RN}}[b]$ denote this Radio Network model, where $b$ is the maximum number of bits per message. All of our algorithms work in ${\mathsf{RN}}[O(\log n)]$ and all our lower bounds apply even to ${\mathsf{RN}}[\infty]$.
#### Cost Measures.
An algorithm runs in *time $t$* if all devices halt and return their output by timestep $t$. Typically the algorithm is randomized, with some probability of failure, but $t$ is a function of $n$ or other given parameters, not a random variable. The *energy cost* of $v\in V$ is the number of timesteps for which $v$ is ${\mathsf{listen}}$ing or ${\mathsf{transmit}}$ting. (This is motivated by the fact that the *sleep mode* of tiny devices is so efficient that it is reasonable to approximate its energy-cost by *zero*, and that transceiver usage is often the most expensive part of a computation. Moreover, at low transmission powers, transmitting and listening are comparable; see, e.g., [@PolastreSC05 Fig. 2] and [@BarnesCMA10 Table 1].) The energy cost of the *algorithm* is the maximum energy cost of any device.
#### Energy Complexity.
Most prior work on energy complexity has focused on *single-hop* (clique) networks, typically under the assumption that $|V|=n$ is *unknown*, and that some type of collision-detection is available.[^5] Because of the high degree of symmetry, there are only so many interesting problems in single-hop networks. Nakano and Olariu [@NakanoO00] proved that the $\mathsf{Initialization}$ problem (assign devices distinct IDs in $\{1,\ldots,|V|=n\}$) can be solved with $O(\log\log n)$ energy. Bender et al. [@BenderKPY18] showed that with collision-detection, all $n$ devices holding messages can transmit all of them using $O(\log(\log^* n))$ energy. Chang et al. [@ChangKPWZ17] proved that $\Theta(\log(\log^* n))$ is optimal, and more generally, settled the complexity of $\mathsf{LeaderElection}$ and $\mathsf{ApproximateCounting}$ (estimating “$n$”) in all the collision-detection models, with and without randomization. It was proved that collision-detection gives two exponential advantages in energy complexity. With randomization, $\mathsf{LeaderElection}/\mathsf{ApproximateCounting}$ takes $\Theta(\log^* n)$ energy (without CD) or $\Theta(\log(\log^* n))$ energy (with CD), and deterministically, they take $\Theta(\log N)$ energy (without CD [@JurdzinskiKZ02c]) and $\Theta(\log\log N)$ energy (with CD), where devices initially have IDs in $[N]$. See also [@JurdzinskiKZ02b; @JurdzinskiKZ02; @JurdzinskiKZ02c; @JurdziskiKZ03; @JurdzinskiS02]. Three-way tradeoffs between time, energy, and error probability were studied by Chang et al. [@ChangKPWZ17] and Kardas et al. [@KardasKP13].
Very recently Chang et al. [@ChangDHHLP18] extended the single-hop notion of energy complexity to *multi-hop* networks ($G$ is *not* a clique), and proved nearly sharp upper and lower bounds on $\mathsf{Broadcast}$, both in ${\mathsf{RN}}[O(\log n)]$ and the same model when listeners have collision detection. Without CD the energy complexity of $\mathsf{Broadcast}$ is between $\Omega(\log^2 n)$ and $O(\log^3 n)$; with CD it is between $\Omega(\log n)$ and $O\left(\frac{\log n\log\log n}{\log\log\log n}\right)$.
#### Other Energy Models.
Other notions of energy complexity have been studied in radio networks. For example, when distances between devices are very large, transmitting is significantly more expensive than listening, and it makes sense to design algorithms that minimize the worst-case number of transmissions per device. Gasieniec et al. [@GasieniecKKPS07], Klonowski and Pajak [@KlonowskiP18], and Berenbrink et al. [@BerenbrinkCH09] studied broadcast and gossiping problems under this cost model. Klonowski and Sulkowska [@KlonowskiS16] defined a distributed model in which devices are scattered randomly at points in $[n^{1/d}]^d$ and can choose their transmission power dynamically. Several works have looked at energy complexity against an adversarial *jammer*, where the energy cost is sometimes a function of the adversary’s energy budget. See, e.g., [@KutylowskiR03; @KabarowskiKR06; @GilbertKPPSY14; @KingPSY18].
#### Time Complexity.
Most prior work in the ${\mathsf{RN}}$ model has studied the time complexity of basic primitives such as $\mathsf{LeaderElection}$, $\mathsf{Broadcast}$, $\mathsf{BFS}$, etc. We review a few results most relevant to our work. Bar-Yehuda et al.’s [@bar1991efficient] *decay* algorithm solves $\mathsf{BFS}$ in $O(D\log^2 n)$ time and $\mathsf{Broadcast}$ in $O(D\log n + \log^2 n)$ time. Here $D$ is the diameter of the network. Since $\Omega(D)$ is an obvious lower bound, the question is which $\log$-factors are necessary. Alon et al. [@alon1991lower] proved that the additive $\log^2 n$ term is necessary in a strong sense: even with full knowledge of the graph topology, $\mathsf{Broadcast}$ needs $\Omega(\log^2 n)$ time even when $D=O(1)$. Kushilevitz and Mansour [@KushilevitzM98] proved that if devices are forbidden from transmitting before hearing the message, then $\Omega(D\log(n/D))$ time is necessary. Czumaj and Davies [@CzumajD17] (improving [@haeupler2016faster]) gave a $\mathsf{Broadcast}$ algorithm running in $O(D\log_D n + {\mathrm{polylog}}(n))$ time, which is optimal when $D > n^\epsilon$. These $\mathsf{Broadcast}$ algorithms *do not* solve $\mathsf{BFS}$. Improving the classic $O(D\log^2 n)$ decay algorithm for $\mathsf{BFS}$, Ghaffari and Haeupler [@GhaffariH16] solve $\mathsf{BFS}$ in $O(D\log (n)\log\log (n) + {\mathrm{polylog}}(n))$ time.
#### New Results.
It is useful to coarsely classify energy-efficiency bounds as either *feasible* or *infeasible*. We consider ${\mathrm{polylog}}(n)$ energy to be feasible and polynomial energy $n^{\Omega(1)}$ to be infeasible.[^6] It is not immediately obvious that there are *any* natural, infeasible problems, especially if we are considering the full power of ${\mathsf{RN}}[\infty]$, where message congestion is not an issue. In this paper we demonstrate that the energy landscape is rich, and that even coarsely classifying the energy complexity of simple problems is technically challenging and demands the development of new algorithm design techniques. Our results are as follows:
- We develop a recursive $\mathsf{BreadthFirstSearch}$ algorithm in ${\mathsf{RN}}[O(\log n)]$ with “intermediate” energy-complexity $2^{O(\sqrt{\log n\log\log n})} = n^{o(1)}$. The algorithm involves simulating itself on a clustered version of the input graph. Due to the nature of the ${\mathsf{RN}}$ model, this simulation is not free, but incurs a polylogarithmic increase in energy, which restricts the profitable depth of recursion to be at most $\sqrt{\log n/\log\log n}$.
- We give examples of some “hard” problems in energy-complexity, even when the model is ${\mathsf{RN}}[\infty]$. The problem of deciding whether $\mathsf{diam}(G)$ is 1 or at least 2 takes $\Omega(n)$ energy; in this case the hard graph $G$ is dense. We adapt the construction of [@AbboudCK16] (designed for the $\mathsf{CONGEST}$ model) to show that even on *sparse* graphs, with arboricity $O(\log n)$, deciding whether $\mathsf{diam}(G)$ is 2 or at least 3 takes $\tilde{\Omega}(n)$ energy.
- To complement the hardness results, we show that $\mathsf{Diameter}$ can be nearly $3/2$-approximated[^7] in ${\mathsf{RN}}[O(\log n)]$ with $O(n^{1/2+o(1)})$ energy, by adapting [@holzer2014brief; @RodittyW13] and using our new $\mathsf{BreadthFirstSearch}$ routine.
The existence of a subpolynomial-energy $\mathsf{BreadthFirstSearch}$ algorithm is somewhat surprising for information-theoretic reasons. Observe that the number of edges in $E(G)$ that are collectively discovered by all devices is at most the number of messages successfully received, which itself is at most the aggregate energy cost. Thus, if the per-device energy cost is $n^{o(1)}$, we can never hope to know about more than $n^{1+o(1)}$ edges in $E(G)$ — a negligible fraction of the input on dense graphs! On the other hand, it is possible to efficiently verify the *non-existence* of many non-edges. Given a candidate $\mathsf{BFS}$-labeling, for example, it is straightforward to *verify* its correctness with ${\mathrm{polylog}}(n)$ energy.
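The local checks that make a candidate $\mathsf{BFS}$-labeling verifiable are indeed simple. The following predicate performs them centrally for a connected graph (our illustrative sketch; in the model, each device would run only the checks involving itself and its neighbors):

```python
def is_valid_bfs_labeling(adj, source, label):
    """Checks, for a connected graph: (1) the source has label 0 and all
    labels are nonnegative; (2) labels of adjacent vertices differ by at
    most 1; (3) every non-source vertex has a neighbor one level below.
    Together these force label[v] == dist(source, v) for every v: (2)
    bounds label[v] from above by the distance, and following the
    descending chain from (3) bounds it from below."""
    if label[source] != 0 or any(l < 0 for l in label.values()):
        return False
    for v, nbrs in adj.items():
        if any(abs(label[u] - label[v]) > 1 for u in nbrs):
            return False
        if v != source and all(label[u] != label[v] - 1 for u in nbrs):
            return False
    return True
```

Each device's share of this test touches only its own neighborhood, which is why verification (unlike construction) is cheap.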
#### Organization.
In Section \[sect:cluster\] we review the Miller-Peng-Xu [@miller2013parallel] clustering algorithm and prove that it preserves distances better than previously known. In Section \[sect:cluster-sim\] we define some communications primitives and prove that they can be executed on the cluster graph (as if it were an ${\mathsf{RN}}[O(\log n)]$ network) at the cost of a polylogarithmic factor increase in energy usage. In Section \[sect:BFS\] we design and analyze a recursive BFS algorithm, which uses $2^{O(\sqrt{\log n\log\log n})}$ energy. In Section \[appendix:diameter\] we consider the energy cost of approximately computing the network’s $\mathsf{Diameter}$.
Cluster Partitioning {#sect:cluster}
====================
Miller, Peng, and Xu [@miller2013parallel] introduced a remarkably simple algorithm for partitioning a given graph into vertex-disjoint clusters with certain desirable properties. In this section we prove that the MPX clustering approximately preserves relative *distances* from the original graph significantly better than previously known.
Given a graph $G = (V,E)$, and a parameter $\beta$, each vertex $v \in V$ independently samples a random variable $\delta_v \sim \operatorname{Exponential}(\beta)$ from the exponential distribution with mean $1/\beta$. Assign each $v$ to the “cluster” centered at $u \in V$ that minimizes ${\mathsf{dist}}_G(v,u) - \delta_u$. Equivalently, we may think of a cluster forming at each vertex $u$ at time $-\delta_u$, and spreading through the graph at a uniform rate of one edge per time unit. Each vertex $v$ is absorbed into the first cluster to reach it, if this happens prior to time $-\delta_v$, when it would start growing its own cluster. Refer to Figure \[fig:cluster-graph\]. Throughout the paper, *we only choose $\beta$ such that $1/\beta$ is an integer*.
[Figure \[fig:cluster-graph\]: an example graph whose vertices are labeled with their cluster starting times $-\delta_v$, alongside the resulting cluster graph $G^*$ on clusters $C1,\ldots,C6$.]
Miller et al. [@miller2013parallel] were primarily interested in this construction because the algorithm parallelizes well, the clusters have diameter $O(\log(n)/\beta)$ w.h.p., and a $O(\beta)$-fraction of the edges are “cut,” having their endpoints in distinct clusters. Haeupler and Wajc [@haeupler2016faster] observed that this algorithm can be efficiently implemented in the Radio Network model [@chlamtac1985broadcasting; @ChlamtacK87], with only minor modifications.
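A centralized sketch of the MPX process (our code, not from [@miller2013parallel]; the function name and the fixed seed are illustrative) runs a multi-source Dijkstra in which the cluster of $u$ starts growing at time $-\delta_u$:

```python
import heapq
import random

def mpx_clusters(adj, beta, seed=0):
    """Miller-Peng-Xu partition (sketch). Each vertex v draws
    delta_v ~ Exponential(beta); v joins the cluster of the center u
    minimizing dist_G(u, v) - delta_u, i.e. the first signal to reach
    it when u's signal starts at time -delta_u and spreads one hop per
    unit of time. Returns (cluster assignment, the delta values)."""
    rng = random.Random(seed)
    delta = {v: rng.expovariate(beta) for v in adj}
    pq = [(-delta[v], v, v) for v in adj]  # (arrival time, center, vertex)
    heapq.heapify(pq)
    cluster = {}
    while pq:
        t, center, v = heapq.heappop(pq)
        if v in cluster:
            continue  # v was already absorbed by an earlier signal
        cluster[v] = center
        for u in adj[v]:
            if u not in cluster:
                heapq.heappush(pq, (t + 1, center, u))
    return cluster, delta
```

A vertex whose own signal is pre-empted never becomes a center, so the set of centers is exactly the set of fixed points of the assignment.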
The Cluster Graph as a Distance Proxy
-------------------------------------
Define ${{\mathsf{Cl}}}(u)$ to be the cluster containing $u$. The cluster graph, ${{\mathsf{cluster}}(G,\beta)} = G^* = (V^*,E^*)$ is defined by $$\begin{aligned}
V^* &= \{{{\mathsf{Cl}}}(u) \ | \ u\in V(G)\}\\
\mbox{ and } E^* &= \{({{\mathsf{Cl}}}(u),{{\mathsf{Cl}}}(v)) \ | \ (u,v)\in E(G), {{\mathsf{Cl}}}(u)\neq{{\mathsf{Cl}}}(v)\}.\end{aligned}$$
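Contracting clusters into $G^* = (V^*, E^*)$ as defined above takes a single pass over the edges (an illustrative sketch; we identify each cluster by its center and assume vertex names are hashable and comparable):

```python
def cluster_graph(adj, cluster):
    """Build the cluster graph G* = (V*, E*): V* is the set of cluster
    centers, and E* contains a pair of distinct clusters whenever some
    original edge has one endpoint in each. Edges are stored as sorted
    tuples so each cluster pair appears once."""
    v_star = set(cluster.values())
    e_star = set()
    for v in adj:
        for u in adj[v]:
            cu, cv = cluster[u], cluster[v]
            if cu != cv:
                e_star.add((min(cu, cv), max(cu, cv)))
    return v_star, e_star
```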
To prove that distances in $G^*$ are a good proxy for distances in $G$, we make use of the following lemma, which is a slight variant of lemmas by Miller, Peng, Vladu, and Xu [@MillerPVX15 Lemma 2.2] and Haeupler and Wajc [@haeupler2016faster Corollary 3.8]. We include a proof for completeness.
Define ${\mathsf{Ball}_{G}(v, \ell)}
=\{ u \in V \ | \ {\mathsf{dist}}_G(u,v) \leq \ell\}$ to be the ball of radius $\ell$ around $v$.
\[lem:cluster-exponential-hits-in-ball\] Let $G^* = {{\mathsf{cluster}}(G,\beta)}$ be the cluster graph for $G$. For every positive integer $j$ and $\ell>0$, the probability that the number of $G^*$-clusters intersecting ${\mathsf{Ball}_{G}(v, \ell)}$ is more than $j$ is at most $$(1 - \exp(-2 \ell \beta))^j.$$
Condition on the time $t$ that the $(j + 1)$st signal would reach vertex $v$, as well as on the identities $v_1, \dots, v_j$ of the vertices whose signals reach $v$ before time $t$. Due to the memoryless property of the exponential distribution, each of these arrival times is independently distributed as $\min\{t, {\mathsf{dist}}(v_i, v)\} - X \leq t - X$, where $X \sim \operatorname{Exponential}(\beta)$.
Now, if $\max_{1 \le i \le j} X_i > 2\ell$, then ${\mathsf{Ball}_{G}(v, \ell)}$ cannot intersect any clusters except those centered at $v_1, \dots,
v_j$, because they do not reach ${\mathsf{Ball}_{G}(v, \ell)}$ until times $\ge t - \ell$, whereas the first signal reached $v$ before time $t - 2\ell$, and has therefore already flooded all of ${\mathsf{Ball}_{G}(v, \ell)}$ before time $t - \ell$. Thus, $${\mathbf{P}\left({\mathsf{Ball}_{G}(v, \ell)} \mbox{ intersects more than $j$ clusters}\right)} \le {\mathbf{P}\left(\forall i\in[1,j], X_i \le 2 \ell\right)} = (1 - \exp(-2 \ell \beta))^{j}. \qedhere$$
A natural way to show that $G^*$ approximately preserves distances in $G$ is to consider the fraction of edges in a shortest path that are “cut” by the partition, which corresponds to applying Lemma \[lem:cluster-exponential-hits-in-ball\] with $\ell=1/2$ and $j=1$.[^8] This was the approach taken in [@ChangDHHLP18], but it only guarantees that the fraction of edges cut concentrates around its expectation ($O(\beta)$) for paths of length $\tilde{\Omega}({\operatorname{poly}}(\beta^{-1}))$. In Lemmas \[lem:dist-ratio-union-bound\] and \[lem:dist-ratio-union-bound2\] we use Lemma \[lem:cluster-exponential-hits-in-ball\] in a different way to bound the ratio of distances in $G$ to those in $G^*$, which works even for relatively short distances. Lemma \[lem:dist-ratio-union-bound\] applies to all distances (and suffices for our BFS application in Section \[sect:BFS\]) whereas Lemma \[lem:dist-ratio-union-bound2\] applies to distances $\Omega(\beta^{-1}\log^2 n)$.
\[lem:dist-ratio-union-bound\] Let $G^* = {{\mathsf{cluster}}(G,\beta)}$ be a clustering of $G$. There exists a constant $C$ such that for every pair $u,v \in V(G)$,
$${\mathbf{P}\left({\mathsf{dist}}_{G^*}({{\mathsf{Cl}}}(u),{{\mathsf{Cl}}}(v)) \in \left[\frac{{\mathsf{dist}}_{G}(u,v)\cdot \beta}{8\log(n)}, \ {\mathsf{dist}}_G(u,v)\cdot C\beta\log(n)\right]\right)} \ge 1 - \frac{1}{n^3}.$$
More generally, let $P=(u,\dots,v)$ be any length-$d$ path connecting $u$ and $v$. With probability $1 - \frac{1}{n^3}$, there exists a path $P^*$ in $G^*$ connecting ${{\mathsf{Cl}}}(u)$ and ${{\mathsf{Cl}}}(v)$ with length at most $d\cdot C\beta\log(n)$, where each cluster in $P^*$ intersects $P$.
First observe that the probability of any $\delta_v$-value falling outside $[0,4\log(n)/\beta)$ is $\ll n^{-4}$; hence, by a union bound over all $n$ vertices, all clusters have radius less than $4\log(n)/\beta$ except with probability $\ll n^{-3}$. This gives the lower bound on ${\mathsf{dist}}_{G^*}(u,v)$.
For the upper bound, define $\ell$ to be the integer $1/\beta$. Fix any length-$d$ path $P$ from $u$ to $v$ (e.g., a shortest path, with $d={\mathsf{dist}}_G(u,v)$), and cover its vertices with $\left\lceil \frac{d}{2 \ell + 1} \right\rceil$ paths of length $2 \ell$. Applying Lemma \[lem:cluster-exponential-hits-in-ball\] to the center vertex $u'$ of one of these subpaths, we conclude that the number of clusters that intersect ${\mathsf{Ball}_{G}(u', \ell)}$ (which contains the entire subpath) is more than $j$ with probability $$\label{eqn:clusters-geom-distr}
\left(1 - \exp(-2 \beta \ell) \right)^j
=
(1 - \exp(-2))^j.$$ Choosing $j$ to be an appropriate multiple of $\log(n)$, we can make this probability $\ll n^{-4}$. Taking a union bound over the $\approx \beta d/2 < n$ subpaths, the probability that any subpath intersects more than $C \log(n)$ clusters is $\ll n^{-3}$. Since consecutive subpaths are adjacent in $G$, the clusters intersecting $P$ contain a path $P^*$ in $G^*$ from ${{\mathsf{Cl}}}(u)$ to ${{\mathsf{Cl}}}(v)$ whose length is at most $C\log(n)$ times the number of subpaths, which concludes the proof.
Lemma \[lem:dist-ratio-union-bound\] suffices to achieve our main result, BFS labeling in $2^{O(\sqrt{\log n\log\log n})}$ energy, but the exponent can be improved by a constant factor by using Lemma \[lem:dist-ratio-union-bound2\] whenever applicable. We include the proof of Lemma \[lem:dist-ratio-union-bound2\] since it may be of independent interest.
\[lem:dist-ratio-union-bound2\] Let $G^* = {{\mathsf{cluster}}(G,\beta)}$ be a clustering of $G$. There exists a constant $C$ such that for every pair $u,v \in V(G)$ $$\begin{aligned}
&{\mathbf{P}\left({\mathsf{dist}}_{G^*}({{\mathsf{Cl}}}(u),{{\mathsf{Cl}}}(v))
\in \left[\frac{{\mathsf{dist}}_{G}(u,v)\cdot \beta}{8\log(n)}, \ {\mathsf{dist}}_G(u,v)\cdot C\beta\right]\right)}
\ge 1 - \frac{1}{n^3}.\end{aligned}$$
We condition on the event that all cluster radii are at most $4\log(n)/\beta$, which fails to hold with probability $\ll n^{-3}$. As before, the lower bound on ${\mathsf{dist}}_{G^*}(u,v)$ follows from this event. Furthermore, this implies that sufficiently distant segments on the shortest $u$-$v$ path are essentially independent.
As before, cover the vertices of the shortest $u$-$v$ path with length-$2\ell$ subpaths, $\ell=1/\beta$, and color the subpaths with $4\log(n)+1$ colors such that any two subpaths of the same color are at distance at least $8\log(n)/\beta$. Each color-class contains $\Omega(\log n)$ subpaths. By Lemma \[lem:cluster-exponential-hits-in-ball\] and (\[eqn:clusters-geom-distr\]), the number of clusters intersecting subpaths of a particular color class is stochastically dominated by the sum of $\Omega(\log n)$ geometrically distributed random variables with constant expectation $\frac{1}{1-(1-\exp(-2))} = \exp(2)$. By a Chernoff bound, the probability that this sum deviates from its expectation by more than a constant factor is $1/{\operatorname{poly}}(n)$. Hence, for sufficiently large $C$ (controlling the number of summands and the tolerable deviation) the probability that any color-class hits too many distinct clusters is $\ll n^{-3}$.
Lemma \[lem:dist-ratio-union-bound2\] cannot be improved by more than constant factors. It is easy to construct families of graphs for which both the upper and lower bounds are tight, with high probability, depending on which vertex pairs are chosen.
Distributed Implementation
--------------------------
The definition of ${{\mathsf{cluster}}(G,\beta)}$ immediately lends itself to a distributed implementation in radio networks, as was noted in [@haeupler2016faster]. For completeness we show how it can be reduced to calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$.
${\mathsf{Local}{\text -}\mathsf{Broadcast}}$:
: We are given two disjoint vertex sets ${\mathcal{S}}$ and ${\mathcal{R}}$, where each vertex $u \in {\mathcal{S}}$ holds a message $m_u$. A ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ algorithm guarantees that for every $v \in {\mathcal{R}}$ with $N(v) \cap {\mathcal{S}}\neq \emptyset$, with probability $1-f$, $v$ receives some message $m_u$ from [*at least one*]{} vertex $u \in N(v) \cap {\mathcal{S}}$. We only apply this routine with $f = 1/{\operatorname{poly}}(n)$.
\[lemma:sr-decay\] ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ can be implemented in $O(\log\Delta\log f^{-1})$ time and energy, where $\Delta\leq n-1$ is an upper bound on the maximum degree. Senders use $O(\log f^{-1})$ energy; receivers that hear a message use $O(\log \Delta)$ energy in expectation; receivers that hear no message use $O(\log\Delta\log f^{-1})$ energy.
This lemma follows from a small modification to the *Decay* algorithm [@bar1992time], which is known to be optimal in terms of *time*; see Newport [@Newport14]. For the sake of completeness, we provide a proof here. Each sender $u\in {\mathcal{S}}$ repeats the following $O(\log f^{-1})$ times. Randomly pick an $X_u \in [1,\log\Delta]$ such that ${\mathbf{P}\left(X_u =t\right)} \ge 2^{-t}$ and transmit $m_u$ at time step $X_u$. The energy of any sender is clearly $O(\log f^{-1})$ with probability 1. For a receiver $v\in {\mathcal{R}}$, if the number of senders in $N(v)$ is in the range $[2^{t-1},2^t]$, then $v$ receives *some* message with constant probability in the $t$th time step of each iteration; over the $O(\log f^{-1})$ iterations, the probability that $v$ hears nothing is at most $f$, and once $v$ hears a message it can stop listening, for $O(\log\Delta)$ energy in expectation. Receivers with no adjacent sender can never detect that fact, and therefore listen in every time step, spending $\Theta(\log\Delta\log f^{-1})$ energy.
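The argument above can be sanity-checked with a short simulation. The sketch below (illustrative only; the truncated slot distribution, the iteration count, and the sender counts are assumed toy parameters, not those fixed in the paper) runs the modified Decay protocol at a single receiver and checks that it hears some message in almost every trial.

```python
import random

random.seed(1)

def decay_round(k, T):
    """One iteration: each of k senders picks slot t with P(t) = 2^{-t} for t < T
    (leftover mass on slot T); the receiver hears iff some slot has exactly
    one transmitter."""
    slots = [0] * (T + 1)
    for _ in range(k):
        t = 1
        while t < T and random.random() < 0.5:
            t += 1
        slots[t] += 1
    return any(c == 1 for c in slots[1:])

T, iterations, trials = 6, 10, 200       # T plays the role of log(Delta), Delta = 64
for k in (1, 7, 50):                      # number of senders adjacent to the receiver
    ok = sum(any(decay_round(k, T) for _ in range(iterations))
             for _ in range(trials))
    assert ok / trials >= 0.9             # the receiver almost always hears something
```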
We show that ${{\mathsf{cluster}}(G,\beta)}$ can be computed, w.h.p., using $4\log(n)/\beta$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}\text{s}}$ in the communication network $G=(V,E)$. Every vertex $v$ will learn its cluster-identifier ${\operatorname{ID}}({{\mathsf{Cl}}}(v))$ and a label ${\mathcal{L}}(v)$ such that ${\mathcal{L}}(v)=0$ iff $v$ is a cluster center and ${\mathcal{L}}(v)=i$ iff there is a $u\in N(v)$ with ${\mathcal{L}}(u)=i-1$ such that ${{\mathsf{Cl}}}(u)={{\mathsf{Cl}}}(v)$. If ${\mathcal{L}}(v) = i$, we say that $v$ is at *layer $i$*.
The graph ${{\mathsf{cluster}}(G,\beta)}$ is constructed as follows. Every vertex $v$ picks a value $\delta_v\sim \text{Exponential}(\beta)$ and sets its start time to be $\text{start}_v\gets \lceil \frac{4 \log(n)}{\beta} - \delta_v\rceil$. With probability at least $1-1/n^3$, all start times are positive. For $i = 1$ to $4\log(n)/\beta$, do the following. At the beginning of the $i$th iteration, if $v$ is not yet in any cluster and $\text{start}_v=i$, then $v$ becomes a cluster center and sets ${\mathcal{L}}(v)=0$. During the $i$th iteration, we execute ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ with ${\mathcal{S}}$ being the set of all clustered vertices and ${\mathcal{R}}$ the set of all as-yet unclustered vertices. The message of $u\in {\mathcal{S}}$ contains ${\operatorname{ID}}({{\mathsf{Cl}}}(u))$ and ${\mathcal{L}}(u)$. Any vertex $v \in {\mathcal{R}}$ receiving a message from $u \in {\mathcal{S}}$ joins $u$’s cluster and sets ${\mathcal{L}}(v)={\mathcal{L}}(u)+1$. Lemma \[lem:clustr-diam-ub\] follows immediately from the above construction.
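The construction is easy to prototype sequentially. The sketch below (a centralized stand-in for the distributed loop, on an assumed cycle graph with arbitrary parameters; the natural logarithm is an assumed choice of log base) replaces each ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ by letting every unclustered vertex adopt one clustered neighbor, and then checks the layer invariants stated above.

```python
import math
import random

random.seed(2)

n, beta = 200, 0.25
T = math.ceil(4 * math.log(n) / beta)     # number of iterations / Local-Broadcasts
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}   # cycle graph

delta = [random.expovariate(beta) for _ in range(n)]
start = [max(1, math.ceil(T - delta[v])) for v in range(n)]

Cl = [None] * n                            # cluster identifier
L = [None] * n                             # layer label
for i in range(1, T + 1):
    for v in range(n):                     # unclustered vertices whose start time arrived
        if Cl[v] is None and start[v] == i:
            Cl[v], L[v] = v, 0             # ...become cluster centers
    joins = {}
    for v in range(n):                     # one Local-Broadcast round
        if Cl[v] is None:
            clustered = [u for u in adj[v] if Cl[u] is not None]
            if clustered:
                joins[v] = min(clustered)  # hear an arbitrary clustered neighbor
    for v, u in joins.items():
        Cl[v], L[v] = Cl[u], L[u] + 1

assert all(c is not None for c in Cl)      # every vertex is clustered by round T
for v in range(n):
    if L[v] == 0:
        assert Cl[v] == v                  # layer 0 iff cluster center
    else:
        assert any(Cl[u] == Cl[v] and L[u] == L[v] - 1 for u in adj[v])
```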
\[lem:clustr-diam-ub\] The cluster graph ${{\mathsf{cluster}}(G,\beta)}$ can be constructed using $4\log(n)/\beta$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}\text{s}}$ with probability $1-1/n^3$. This takes $O(\log^3(n)/\beta)$ time and $O(\log^3(n)/\beta)$ energy per vertex.
Communication Primitives for the Cluster Graph {#sect:cluster-sim}
==============================================
Our BFS algorithm forms a cluster graph $G^*$ and computes BFS recursively on numerous subgraphs of $G^*$. In order for this type of recursion to work, we need to argue that algorithms on the (abstract) $G^*$ can be simulated, with some time and energy cost, on the underlying $G$. We focus on algorithms that are composed *exclusively* of calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ (as our BFS algorithm is), but the method can be used to simulate arbitrary radio network algorithms.
We use the primitives ${\mathsf{Down}{\text -}\mathsf{cast}}$ and ${\mathsf{Up}{\text -}\mathsf{cast}}$ to allow cluster centers to disseminate information to their constituents and gather information from some constituent.
${\mathsf{Down}{\text -}\mathsf{cast}}$:
: There is a set ${\mathcal{U}}$ of vertices such that each $u \in {\mathcal{U}}$ is a cluster center, and the goal is to let each $u \in {\mathcal{U}}$ broadcast a message $m_u$ to all members of ${{\mathsf{Cl}}}(u)$.
${\mathsf{Up}{\text -}\mathsf{cast}}$:
: There is a set ${\mathcal{U}}$ of vertices such that each $u \in {\mathcal{U}}$ wants to deliver a message $m_u$ to the center of ${{\mathsf{Cl}}}(u)$. Any cluster center $v$ with at least one $u\in {\mathcal{U}}\cap {{\mathsf{Cl}}}(v)$ must receive *any* message from one such vertex.
\[lemma:sr-cluster\] ${\mathsf{Up}{\text -}\mathsf{cast}}$ and ${\mathsf{Down}{\text -}\mathsf{cast}}$ can be implemented with $O{\left( \frac{\log^3 n}{\beta\log(1/\beta)} \right)}$ calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ on $G$, in which each vertex participates in $O(\log n)$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}\text{s}}$. I.e., the total time and energy per vertex are $O{\left( \frac{\log^5 n}{\beta\log(1/\beta)} \right)}$ and $O(\log^3 n)$, respectively.
Consider the following two quantities:
: ${\mathcal{C}}= O(\log_{(1/\beta)} n)$. By Lemma \[lem:cluster-exponential-hits-in-ball\], ${\mathcal{C}}$ is an upper bound on the number of clusters intersecting $N(v) \cup \{v\}$, with high probability. Intuitively, ${\mathcal{C}}$ represents the [*contention*]{} at $v$.
: ${\mathcal{D}}= 4\log(n)/\beta$ is the maximum radius of any cluster, i.e., the maximum ${\mathcal{L}}$-value is at most ${\mathcal{D}}$.
If there were only *one* cluster, then doing an ${\mathsf{Up}{\text -}\mathsf{cast}}$ or ${\mathsf{Down}{\text -}\mathsf{cast}}$ would be easily reducible to $O(\log(n)/\beta)$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}\text{s}}$. In order to minimize interference between neighboring clusters, we modify, slightly, the clustering algorithm so that all constituents of a cluster have shared randomness. When a new cluster center $v$ is formed, it generates a subset $S_{{{\mathsf{Cl}}}(v)} \subset [\ell]$, $\ell = \Theta({\mathcal{C}}\log n)$, by including each index independently with probability $1/{\mathcal{C}}$. It disseminates $S_{{{\mathsf{Cl}}}(v)}$ to all members of ${{\mathsf{Cl}}}(v)$ along with ${\operatorname{ID}}({{\mathsf{Cl}}}(v))$. It is straightforward to show that with probability $1-1/{\operatorname{poly}}(n)$, for every $v$, $$\label{eqn:isolation}
\text{There exists } j\in[\ell] : j\in S_{{{\mathsf{Cl}}}(v)}
\text{ and for all } u\in N(v), j\not\in S_{{{\mathsf{Cl}}}(u)}.$$ ${\mathsf{Down}{\text -}\mathsf{cast}}$ is implemented in ${\mathcal{D}}$ stages, each stage consisting of $\ell$ steps. In step $j$ of stage $i$, we execute ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ with ${\mathcal{S}}$ consisting of every $v$ with a message to send such that ${\mathcal{L}}(v)=i-1$ and $j\in S_{{{\mathsf{Cl}}}(v)}$, and with ${\mathcal{R}}$ consisting of every $u$ with ${\mathcal{L}}(u)=i$ and $j\in S_{{{\mathsf{Cl}}}(u)}$. By (\[eqn:isolation\]), during stage $i$, every layer-$i$ vertex in every participating cluster receives the cluster center’s message with high probability. An ${\mathsf{Up}{\text -}\mathsf{cast}}$ is performed in an analogous fashion.
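The probability calculation behind (\[eqn:isolation\]) is easy to check numerically. In the sketch below (illustrative; the contention bound ${\mathcal{C}}$, the universe size $\ell$, and the number of nearby clusters are assumed toy values), each cluster near a vertex samples its index set independently, and we test that the vertex's own cluster essentially always owns a private index.

```python
import random

random.seed(3)

C = 5                  # contention bound: clusters meeting a closed neighborhood
ell = 40 * C           # universe size, Theta(C log n) for a toy n
p = 1.0 / C            # each index joins a cluster's set with probability 1/C

def sample_set():
    return {j for j in range(ell) if random.random() < p}

trials = 200
good = 0
for _ in range(trials):
    own = sample_set()                                 # S_{Cl(v)}
    neighbors = [sample_set() for _ in range(C - 1)]   # the other nearby clusters
    taken = set().union(*neighbors)
    if own - taken:                                    # a private index j exists
        good += 1

# Per-trial failure probability is (1 - (1/C)(1-1/C)^{C-1})^ell, which is
# astronomically small for these parameters.
assert good == trials
```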
Each ${\mathsf{Up}{\text -}\mathsf{cast}}$/${\mathsf{Down}{\text -}\mathsf{cast}}$ performs $\ell{\mathcal{D}}=\Theta({\mathcal{C}}{\mathcal{D}}\log n)=O(\frac{\log^3 n}{\beta\log(1/\beta)})$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}\text{s}}$ on $G$, for a total of $O(\frac{\log^5 n}{\beta\log(1/\beta)})$ time. Each vertex $v$ participates in $O(|S_{{{\mathsf{Cl}}}(v)}|)$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}\text{s}}$, which is $O(\log n)$ w.h.p., for a total of $O(\log^3 n)$ energy.
\[lem:sr-sim\] A call to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ on the cluster graph $G^* = {{\mathsf{cluster}}(G,\beta)}$ can be simulated with $O\left(\frac{\log^3 n}{\beta\log(1/\beta)}\right)$ calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ on $G$; each vertex in $V(G)$ participates in $O(\log n)$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}\text{s}}$.
Let ${\mathcal{S}}$ and ${\mathcal{R}}$ be the sets of sending and receiving clusters in $G^*$. All members of $C$ know that $C$ is in ${\mathcal{S}}$ or ${\mathcal{R}}$. The ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ algorithm has three steps.
1. Begin by doing a ${\mathsf{Down}{\text -}\mathsf{cast}}$ in each $C\in {\mathcal{S}}$. Each member of $C$ learns the message $m_C$.
2. Perform one ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ on $G$, with sender set $\bigcup_{C\in {\mathcal{S}}} C$ and receiver set $\bigcup_{C'\in {\mathcal{R}}} C'$. At this point, w.h.p., every ${\mathcal{R}}$-cluster adjacent to an ${\mathcal{S}}$-cluster has at least one constituent that has received a message.
3. Finally, do one ${\mathsf{Up}{\text -}\mathsf{cast}}$ on every cluster $C\in {\mathcal{R}}$ to let the cluster center of $C$ learn one message from a constituent of $C$, if any.
The algorithm clearly satisfies the requirement of ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ on ${{\mathsf{cluster}}(G,\beta)}$. The number of calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ on $G$ is $O\left({\mathcal{C}}{\mathcal{D}}\log n\right) = O\left(\frac{\log^3 n}{\beta\log(1/\beta)}\right)$ and each vertex participates in $O(\log n)$ of them.
BFS with Sub-polynomial Energy {#sect:BFS}
==============================
Technical Overview
------------------
Suppose every vertex in the graph could cheaply compute its distance from the source up to an additive $\pm \rho$ error. Given this knowledge, we could trivially solve exact BFS in $\tilde{O}(D)$ time and $\tilde{O}(\rho)$ energy per vertex, simply by letting vertices sleep through steps in which they need not participate. In particular, we would advance the *BFS wavefront* one layer at a time using calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$, except that each vertex $u$ would sleep through the first $\widetilde{{\mathsf{dist}}_G}(s,u)-\rho$ calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$, where $\widetilde{{\mathsf{dist}}_G}$ is the approximate distance. Each vertex would then be guaranteed to fix ${\mathsf{dist}}_G(s,u)$ (and halt) within the next $2\rho$ calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$.
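This sleeping-based BFS is straightforward to simulate. The sketch below (an idealized round model on an assumed random graph; the $\pm\rho$ estimate is injected directly rather than computed from a cluster graph) keeps each vertex awake only in a window of $O(\rho)$ rounds around its estimate and checks that exact distances are still recovered.

```python
import random
from collections import deque

random.seed(4)

def bfs(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

n, rho = 150, 3
adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}   # cycle plus random chords
for _ in range(n // 3):
    a, b = random.randrange(n), random.randrange(n)
    if a != b:
        adj[a].add(b); adj[b].add(a)

true = bfs(adj, 0)
est = {v: d + random.randint(-rho, rho) for v, d in true.items()}  # |est - d| <= rho

dist = {0: 0}
awake_rounds = {v: 0 for v in range(n)}
D = max(true.values())
for r in range(1, D + 1):
    # a vertex is awake in round r only if r is within O(rho) of its estimate
    awake = {v for v in range(n) if est[v] - rho <= r <= est[v] + rho + 1}
    for v in awake:
        awake_rounds[v] += 1
    for v in awake:
        if v not in dist and any(u in dist and dist[u] == r - 1 and u in awake
                                 for u in adj[v]):
            dist[v] = r

assert dist == true                                # exact BFS distances recovered
assert max(awake_rounds.values()) <= 2 * rho + 2   # per-vertex energy is O(rho)
```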
Lemmas \[lem:dist-ratio-union-bound\] and \[lem:dist-ratio-union-bound2\] suggest a method of obtaining approximate distances. If we computed the cluster graph $G^* = {{\mathsf{cluster}}(G,\beta)}$ and then computed *exact* distances on $G^*$, Lemmas \[lem:dist-ratio-union-bound\] and \[lem:dist-ratio-union-bound2\] allow us to approximate all distances from the source, up to an additive error of $\tilde{O}(\beta^{-1})$ (for small distances) and multiplicative error of $w^2$ (for larger distances), where $w = \Theta(\log n)$ is a sufficiently large multiple of $\log n$. Note that, from the perspective of energy efficiency, the main advantage to computing distances in $G^*$ rather than $G$ is that $G^*$ has a smaller diameter $w\beta\cdot{\mathsf{diam}}(G)$.
Our algorithm computes distances up to $D$ by advancing the *BFS wavefront* in ${\left\lceil \beta D \right\rceil}$ stages, extending the radius $\beta^{-1}$ per stage. The *$i$th wavefront* $W_i$ is defined to be the vertex set $$W_i = \{u\in V(G) \mid {\mathsf{dist}}_G(S,u) = i\beta^{-1}\},$$ where $S$ is the set of sources. (Recall that $\beta^{-1}$ is an integer.) To implement the $i$th stage correctly it suffices to activate a vertex set $X_i$ that includes all the affected vertices, in particular: $$X_i \supset \{u \in V(G) \mid {\mathsf{dist}}_G(S,u) \in [i\beta^{-1},(i+1)\beta^{-1}]\} \ \ \mbox{ (w.h.p.)}$$ In order for each vertex $u$ to decide whether it should join $X_i$ or sleep through the $i$th stage, $u$ maintains lower and upper bounds on its distance to the $i$th wavefront, or more accurately, the distance from its cluster ${{\mathsf{Cl}}}(u)$ to $W_i$ in $G$.
\[inv:interval\] Before the $i$th stage begins, each vertex $u$ knows $L_i({{\mathsf{Cl}}}(u))$ and $U_i({{\mathsf{Cl}}}(u))$ such that $${\mathsf{dist}}_G(W_i,{{\mathsf{Cl}}}(u)) = {\mathsf{dist}}_G(S,{{\mathsf{Cl}}}(u)) - i\beta^{-1}
\in [L_i({{\mathsf{Cl}}}(u)), U_i({{\mathsf{Cl}}}(u))].$$
Clearly, if some cluster $C$ satisfies Invariant \[inv:interval\] at stage $i-1$ with the interval $[L_{i-1}(C),U_{i-1}(C)]$, it also satisfies Invariant \[inv:interval\] at stage $i$ with $L_i(C) = L_{i-1}(C) - \beta^{-1}$ and $U_i(C) = U_{i-1}(C) - \beta^{-1}$ since the $(i-1)$th stage advances the wavefront by exactly $\beta^{-1}$. In the algorithm these are called *Automatic Updates*; they can be done locally, without expending any energy. In order to keep the interval $[L_i(C),U_i(C)]$ relatively narrow (and hence useful for keeping vertices in $C$ asleep), we occasionally refresh it with a *Special Update*. Let $W_i^* \subseteq V(G^*)$ be the clusters in $G^*$ that intersect the wavefront $W_i$. We call BFS on a subgraph $G_i^*$ of $G^*$ from the source-set $W_i^*$, up to a radius of $Z[i]$. The only clusters that participate in this recursive call are those that are likely to be relevant, i.e., those $C$ for which $L_i(C) \le Z[i]\cdot \beta^{-1}$. (The $Z[i]$ sequence will be defined shortly.) After this recursive call completes we update $[L_i(C),U_i(C)]$ for all participating $C$ by applying Lemmas \[lem:dist-ratio-union-bound\] and \[lem:dist-ratio-union-bound2\] to the (exact) distance ${\mathsf{dist}}_{G_i^*}(W_i^*, C)$ obtained in the cluster graph.
#### Specification.
Our ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ procedure (see Figure \[fig:BFS\]) takes four parameters: $G$, the graph, $S \subset V(G)$, the set of sources, $A \subseteq V(G)$, the set of *active* vertices (which is a superset of $S$), and $D$, the depth of the search. When we make a call to ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$, every vertex can locally calculate $D$ and whether it is in $S$ or $A$.[^9] $G^*$ denotes the cluster graph returned by ${{\mathsf{cluster}}(G,\beta)}$, where $\beta$ is a parameter fixed throughout the computation. We compute $G^*$ once, just before the first recursive call to ${\mathsf{Recursive}\text{-}\mathsf{BFS}}(G,\cdot,\cdot,\cdot)$; subsequent calls to ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ on $G$ with different $(S,A,D)$ parameters can use the same $G^*$. It is important to remember that $G$ can be either the actual radio network (RN) or a *virtual* RN on which we can simulate RN algorithms, with a certain overhead in terms of time and energy. At the termination of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}(G,S,A,D)$, every vertex $u \in A$ returns ${\mathsf{dist}}_{A}(S,u)$ if it is at most $D$, and $\infty$ otherwise. Vertices in $V(G)\backslash A$ expend no energy.
#### Correctness.
If one believes that the algorithm (Figure \[fig:BFS\]) faithfully implements the high level description given so far, its correctness is immediate. Every time we set $[L_i(C),U_i(C)]$ the interval is correct with probability $1-1/{\operatorname{poly}}(n)$, either because $[L_{i-1}(C),U_{i-1}(C)]$ is correct (an Automatic Update), or because they are set according to Lemmas \[lem:dist-ratio-union-bound\] and \[lem:dist-ratio-union-bound2\], which hold with probability $1-1/{\operatorname{poly}}(n)$ (Special Update). If $L_i(C)$ is correct for all $C$, then $X_i$ will include all vertices necessary to compute the $(i+1)$th wavefront, and the $i$th stage will succeed, up to the $1/{\operatorname{poly}}(n)$ error probability inherent in calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$. The main question is whether the procedure is *efficient*.
#### Efficiency.
We will argue that for a very specific $Z[\cdot]$ sequence, which guides the Special Update steps, the following claims hold:
\[claim:X\] Each vertex is included in the set $X_i$ for $\tilde{O}(1)$ values of $i$.
\[claim:U\] For each vertex $u$, ${{\mathsf{Cl}}}(u)$ is included in $G_i^*$ for $\tilde{O}(1)$ values of $i$.
Our algorithms ($\mathsf{cluster}$ and ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$) are based solely on calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$. Define ${\mathsf{En}}(D)$ to be the number of calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ that one vertex participates in when computing BFS to distance $D$. If Claims \[claim:X\] and \[claim:U\] hold, then $$\label{eqn:EnD1}
{\mathsf{En}}(D) \leq \tilde{O}(1)\cdot {\mathsf{En}}(\tilde{O}(\beta D)) + \tilde{O}(\beta^{-1})$$ The $\tilde{O}(\beta^{-1})$ term accounts for the cost of computing $G^*$ (Lemma \[lem:clustr-diam-ub\]) and the $\tilde{O}(1)$ times a vertex is included in $X_i$ (Claim \[claim:X\]), each of which involves $\beta^{-1}$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$s on $G$. Every recursive call to ${\mathsf{Recursive}\text{-}\mathsf{BFS}}(G^*,\cdot,\cdot,D')$ has $D' = \tilde{O}(\beta D)$ and by Claim \[claim:U\] each vertex participates in $\tilde{O}(1)$ such recursive calls. Moreover, according to Lemma \[lem:sr-sim\], the *energy* overhead for simulating one call to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ on $G^*$ is $\tilde{O}(1)$ calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ on $G$. This justifies the first term of (\[eqn:EnD1\]). The time and energy of our algorithm are analyzed in Theorem \[thm:main\]. As a foreshadowing of the analysis, if $D_0$ is the distance threshold of the top-level call to ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$, we will set $\beta = 2^{-\sqrt{\log D_0 \log\log n}}$ and apply (\[eqn:EnD1\]) to recursion depth $\sqrt{\log D_0/\log\log n}$.
#### The $Z$-Sequence.
The least obvious part of the ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ algorithm is the $Z$-sequence, which guides how Special Updates are performed. Recall that $w = \Theta(\log n)$ is a sufficiently large multiple of $\log n$; if we are computing BFS to distance $D$ in $G$, then we need never compute BFS beyond distance $D^* \geq w\beta D$ in $G^*$. The $Z$-sequence is defined as follows. $$\begin{aligned}
Y[i] &= \max_{j\ge 0} \{2^j \mbox{ such that } 2^j|i\} & (i \ge 1)\\
\mbox{I.e., } Y &= (1,2,1,4,1,2,1,8,1,2,1,4,1,2,1,16, 1,2,1,4,1,2,1,8,1, 2,1,4,1,2,1,32, \ldots)\\
Z[0] &= D^\ast\\
Z[i] &= \min\{ D^\ast, \ \alpha \cdot Y[i]\}, \quad \mbox{ where $\alpha = 4$} & (i \ge 1)\\
D^\ast &= \min_{j\ge 0} \{\alpha 2^j \mbox{ such that } \alpha 2^j \ge w\beta D\}\end{aligned}$$ In other words, $Z$ is derived by multiplying $Y$ by $\alpha = 4$, truncating large elements at $D^*$, and beginning the sequence at 0, with $Z[0]=D^*$. (Here $Z[0]$ corresponds to the distance threshold $D^*$ used in Step 1 of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ to estimate distances to the $0$th wavefront $W_0 = S$.)
*(Figure \[fig:time-evolution\], TikZ source omitted: vertical bars depict the $Z$-sequence, and the two curves trace a single cluster's distance estimates evolving under Automatic and Special Updates.)*
Figure \[fig:time-evolution\] gives an example, from the perspective of a single cluster, of how the distance estimates evolve over time.
#### Organization of Section \[sect:BFS\].
In Section \[sect:BFSlemmas\] we prove a number of lemmas that relate to the correctness and efficiency of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$, including proofs of Claims \[claim:X\] and \[claim:U\]. In Section \[sect:BFSmainthm\] we analyze the overall time and energy-efficiency of the BFS algorithm.
Auxiliary Lemmas {#sect:BFSlemmas}
----------------
Lemma \[lem:distance-estimates\] justifies how distance estimates are updated in Steps 1, 7, and 8 of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ in order to preserve Invariant \[inv:interval\], with high probability.
\[lem:distance-estimates\] Let $W_i$ be the $i$th wavefront; let $\Upsilon$ include all clusters $C$ such that ${\mathsf{dist}}_G(W_i,C) \in [i\beta^{-1}, (i+Z')\beta^{-1}]$; and let $G_i^*$ be the subgraph of $G^*$ induced by $\Upsilon$. If ${{\mathsf{Cl}}}(u) \in \Upsilon$ and ${\mathsf{dist}}_G(S,u) \ge i\beta^{-1}$, then w.h.p.,
$${\mathsf{dist}}_G(W_i,u) \in \left[\min\left\{\frac{{\mathsf{dist}}_{G_i^*}(W_i^*,{{\mathsf{Cl}}}(u))}{w\beta},\ Z'\beta^{-1}+1\right\},\ \max\{{\mathsf{dist}}_{G_i^*}(W_i^*,{{\mathsf{Cl}}}(u)),1\}\cdot w\beta^{-1}\right].$$
If $d = {\mathsf{dist}}_G(W_i,u) \ge Z'\beta^{-1}+1$ then the lower bound is already correct, so suppose that $d \le Z'\beta^{-1}$. Let $P$ be any length-$d$ path from $u$ to $W_i$ in $G$. Lemma \[lem:dist-ratio-union-bound\] implies that w.h.p., there is a path $P^*$ in $G_i^*$ from $W_i^*$ to ${{\mathsf{Cl}}}(u)$ with length at most $O(\beta d \log n) < w \beta d$, and so ${\mathsf{dist}}_{G_i^*}(W_i^*,{{\mathsf{Cl}}}(u)) \leq w \beta d$, as required.
The upper bound follows from the cluster diameter upper bound $K = 8\log(n)/\beta \leq w/(2\beta)-1$. Thus, if ${\mathsf{dist}}_{G_i^*}(W_i^*, {{\mathsf{Cl}}}(u)) = d'$ then ${\mathsf{dist}}_G(W_i,u) \le (d'+1)\cdot(K+1) \leq \max\{d',1\}\cdot w\beta^{-1}$.
Lemma \[lem:distance-estimates\] shows that Step 1 of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ initializes $L_0(\cdot), U_0(\cdot)$ to satisfy Invariant \[inv:interval\], w.h.p. Here $\Upsilon = A^*$ is the set of all active clusters; if ${\mathsf{dist}}_A(S,u) \in [0,D]$ (the relevant range), then Lemma \[lem:distance-estimates\] guarantees that ${\mathsf{dist}}_A(S,u) \in [L_0({{\mathsf{Cl}}}(u)),U_0({{\mathsf{Cl}}}(u))]$ after Step 1. The estimates set in Step 8 of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ are trivially correct; Lemma \[lem:distance-estimates\] also guarantees that the lower and upper bounds fixed in Step 7 are correct.
We use several properties of the $Z$ sequence, listed in Lemma \[lem:Z\].
\[lem:Z\] Fix an index $i$.
1. For any number $b \geq \alpha$, define $j>i$ to be the smallest index such that $Z[j] \geq b$. Then $$j-i \leq b/\alpha.$$ Suppose the number $b$ additionally satisfies $b \leq Z[i]$ and $b \in \{\alpha, 2\alpha, 4\alpha, 8\alpha, \ldots, D^\ast\}$. Then we have $Z[j] = b$ and $j-i = Z[j]/\alpha$. \[Seq-property-1\]
2. Define $j > i$ to be the smallest index such that $Z[j] > Z[i]$ or $Z[j] = D^\ast$. Then we have $j-i = Z[i]/\alpha$; moreover, all indices $k \in \{i+1, \ldots, j-1\}$ satisfy that $Z[k] \leq Z[i]/2$. \[Seq-property-2\]
Parts 1 and 2 follow from the fact that in the $Y$-sequence, the values at least $2^\ell$ appear periodically with period $2^\ell$. Thus, the values at least $\alpha 2^\ell$ in the $Z$-sequence also appear periodically with period $2^\ell$.
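Part 2 and the periodicity fact in the proof can be verified exhaustively for small parameters. The sketch below (assumed toy values of $\alpha$ and $D^\ast$; it restates the $Z$-sequence definition to stay self-contained) performs both checks by brute force.

```python
alpha, D_star, N = 4, 64, 2000

def Z(i):
    if i == 0:
        return D_star
    return min(D_star, alpha * (i & -i))   # i & -i = largest power of 2 dividing i

# Periodicity: for alpha * 2^l <= D_star, the indices i >= 1 with
# Z[i] >= alpha * 2^l are exactly the multiples of 2^l.
l = 0
while alpha * (1 << l) <= D_star:
    for i in range(1, N):
        assert (Z(i) >= alpha * (1 << l)) == (i % (1 << l) == 0)
    l += 1

# Part 2: j = smallest index > i with Z[j] > Z[i] or Z[j] = D_star satisfies
# j - i = Z[i] / alpha, with Z[k] <= Z[i] / 2 strictly in between.
for i in range(N // 2):
    j = i + 1
    while not (Z(j) > Z(i) or Z(j) == D_star):
        j += 1
    assert j - i == Z(i) // alpha
    assert all(Z(k) <= Z(i) // 2 for k in range(i + 1, j))
```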
We are now prepared to prove Claim \[claim:X\].
It follows from Invariant \[inv:interval\] that $X_i$, as defined in Step 4 of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$, includes all active vertices within distance $\beta^{-1}$ of the $i$th wavefront $W_i$. It remains to show that no vertex $u$ is included in $X_i$ for more than ${\operatorname{poly}}(\log n)$ indices $i$.
Suppose that $u \in X_i$ for $i>0$. It follows that $L_i({{\mathsf{Cl}}}(u)) \le \beta^{-1}$ and that in the previous stage, $L_{i-1}({{\mathsf{Cl}}}(u)) \leq 2\beta^{-1}$. Since $Z[i] \ge \alpha = 4$, it must have been that ${{\mathsf{Cl}}}(u)$ was included in $\Upsilon$ and participated in the Special Update (Step 7 of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$) before stage $i$. If ${\mathsf{dist}}_{G_i^*}(W_i^*,{{\mathsf{Cl}}}(u))=x$ and after the Special Update, $L_i({{\mathsf{Cl}}}(u)) \le \beta^{-1}$, it must be that $x\leq w$, and hence $U_i({{\mathsf{Cl}}}(u)) \le w^2 \beta^{-1}$. Thus, $u$ may participate in at most $w^2$ more stages (joining $X_i,X_{i+1},\ldots,X_{i+w^2}$) before its distance is settled and it is deactivated, in Step 6 of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$.
Before proving Claim \[claim:U\] we begin with three auxiliary lemmas, Lemmas \[lem-aux-1-ssss\], \[lem-aux-11-ssss\], and \[lem-aux-2-ssss\].
\[lem-aux-1-ssss\] Recall $\alpha = 4$. Suppose cluster $C$ is included in $G_i^*$ and $G_j^*$, but not in $G_{i'}^*$ for any $i'\in \{i+1, \ldots, j-1\}$. Then we have $$\frac{L_i(C)}{8\alpha} \leq \frac{j-i}{\beta} \leq \max\left\{\frac{1}{\beta},\; \frac{L_i(C)}{\alpha}\right\}.$$
We prove the upper and lower bounds on $(j-i)/\beta$ separately.
#### Upper Bound.
Select $j^* > i$ to be the first stage index for which $Z[j^\ast] \geq \min\{ D^\ast, \beta L_i(C)\}$. Clearly $j^* \geq j$ since if $L_{i+1}(C),\ldots,L_{j^*}(C)$ were set according to Automatic Updates we would have $L_{j^*-1}(C) \leq L_i(C) \leq Z[j^*] \beta^{-1}$, which would trigger a Special Update to $L_{j^*}(C)$. There are two cases to consider, either of which establishes the upper bound on $(j-i)/\beta$.
- Suppose $\beta\cdot L_{i}(C) < \alpha$. Then $j = j^\ast = i+1$, and so $(j-i)/\beta = 1/\beta$.
- Suppose $\beta\cdot L_{i}(C) \geq \alpha$. According to Lemma \[lem:Z\](\[Seq-property-1\]), $j^\ast - i \leq \min\{ D^\ast, \beta L_i(C)\}/\alpha$, and so $(j-i)/\beta
\leq (j^\ast -i)/\beta \leq \min\{ D^\ast, \beta L_i(C)\}/(\alpha \beta) \leq L_i(C)/\alpha$.
#### Lower Bound.
In order to prove that $(j-i)\cdot \beta^{-1} \geq L_i(C)/(8\alpha)$ it suffices to find any particular index $j^*$ such that:
1. $(j^\ast-i)\cdot \beta^{-1} \geq L_i(C)/ (8 \alpha)$.
2. For all $j' \in [i+1, j^\ast]$, $C$ is not included in $G_{j'}^*$.
Condition 2 implies that $j^\ast < j$ and then Condition 1 implies that $(j-i)/\beta > (j^\ast-i)\cdot \beta^{-1} \geq L_i(C)/ (8 \alpha)$, as desired. We will explain how to select $j^*$ shortly. In the meantime, consider the following two conditions; we will argue that (a) and (b) imply Condition 2 above.
(a) For all $j' \in [i+1, j^\ast - 1]$, we have $Z[j'] < Z[j^\ast]$.

(b) $L_{i}(C) - (j^\ast - i)\cdot \beta^{-1} > Z[j^\ast]\cdot \beta^{-1}$.
Recall that $C$ is *not* included in $G_{j'}^*$ iff $L_{j'-1}(C) > (Z[j']+1)\cdot \beta^{-1}$, so it suffices to prove the latter inequality for every $j' \in [i+1,j^*]$. By induction, we can assume that the claim is true for all $j'' \in [i+1,j'-1]$, i.e., $L_{j''}(C)$ was set according to an Automatic Update (Step 8) and $L_{j''}(C) = L_i(C) - (j''-i)\cdot \beta^{-1}$. Thus, $$\begin{aligned}
L_{j'-1}(C)
&= L_{i}(C) - ((j'-1) - i)\cdot \beta^{-1}
&&&& \text{Follows from induction hypothesis}\\
&\geq L_{i}(C) - ((j^\ast-1) - i)\cdot \beta^{-1}\\
& > (Z[j^\ast]+1)\cdot \beta^{-1} &&&& \text{by (b)}\\
& > (Z[j']+1)\cdot \beta^{-1} &&&& \text{by (a)}\end{aligned}$$
#### Choice of $j^\ast$.
Select $x$ to be the integer in $\{ \alpha, 2\alpha, 4\alpha, 8\alpha, \ldots, D^\ast\}$ such that $$x \in \left[\frac{\beta\cdot L_i(C)}{8},\; \frac{\beta\cdot L_i(C)}{4}\right).$$ It is guaranteed that $x$ exists so long as $\beta\cdot L_i(C) > 4\alpha$. When $\beta\cdot L_i(C) \leq 4\alpha$, we already have the desired lower bound on $(j-i)\cdot\beta^{-1}$ since $L_i(C)/(8\alpha) \leq \beta^{-1}/2 < \beta^{-1} \leq (j-i)\cdot\beta^{-1}$.
Observe that $Z[i]$, like $x$, is also an integer in $\{ \alpha, 2\alpha, 4\alpha, 8\alpha, \ldots, D^\ast\}$. In a Special Update, the largest value that $L_i(C)$ can attain is $Z[i]\cdot\beta^{-1}+1$, hence $$Z[i] \geq \beta\cdot (L_i(C)-1) > \beta\cdot L_i(C)/2 > 2x.$$ Define $j^\ast > i$ to be the smallest index such that $Z[j^\ast] \geq x$. In particular, since $Z[i] \geq 2x > x$, Lemma \[lem:Z\](\[Seq-property-1\]) guarantees that $Z[j^\ast] = x$ and hence $$j^\ast-i = Z[j^\ast]/\alpha = x/\alpha \geq \beta\cdot L_i(C)/ (8 \alpha).$$ Thus Condition 1 is met for this choice of $j^*$.
Condition (a) is also met, since by definition of $j^*$, $Z[j'] < x = Z[j^*]$ for all $j' \in [i+1,j^*-1]$. Now we turn to Condition (b). Observe that $$\label{eqn:ub}
j^\ast-i = Z[j^\ast]/\alpha = x/\alpha < \beta\cdot L_i(C)/ (4 \alpha).$$ We prove that $L_{i}(C) - (j^\ast - i)\cdot \beta^{-1} > Z[j^\ast] \cdot \beta^{-1}$. $$\begin{aligned}
Z[j^\ast]\cdot \beta^{-1}
&< 2x\cdot \beta^{-1} &&&& \text{ since } Z[j^\ast] < 2x\\
&< L_i(C)/2 &&&& \text{ since } x \in [\beta\cdot L_i(C)/8, \beta\cdot L_i(C)/4)\\
&= L_i(C)(1 - 2/\alpha) &&&& \text{ since } \alpha = 4\\
&< L_i(C) - 8(j^\ast - i)\cdot \beta^{-1} &&&& \text{ by (\ref{eqn:ub}), } (j^\ast-i)\cdot\beta^{-1} < L_i(C)/(4 \alpha)\\
&< L_i(C) - (j^\ast - i)\cdot \beta^{-1}\end{aligned}$$ Conditions (a) and (b) imply Condition 2, which, together with Condition 1, implies $L_i(C)/(8\alpha) \leq (j-i)\cdot \beta^{-1}$.
\[lem-aux-11-ssss\] Suppose $C$ appears in $G_i^*$ and $G_j^*$ but not in $G_{i'}^*$ for any $i' \in \{i+1, \ldots, j-1\}$. Suppose that when $L_i(C)$ is set during a Special Update (Step 7 of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$), we have $L_{i}(C) = Z[i]\cdot\beta^{-1}+1$. It must be that $Z[j] > Z[i]$ or $Z[j] = D^\ast$.
Define $j^\ast > i$ to be the *smallest* index such that $Z[j^\ast] > Z[i]$ or $Z[j^\ast] = D^\ast$. To prove the lemma it suffices to show that $j^* = j$, i.e., $L_{j'}(C)$ is set according to an Automatic Update for $j' \in \{i+1,\ldots,j^*-1\}$ but $C$ appears in $G_{j^*}^*$ and participates in a Special Update.
To prove that $L_{j'}(C)$ is set according to an Automatic Update (assuming, inductively, that the claim holds for $L_{i+1}(C),\ldots,L_{j'-1}(C)$) it suffices to show $$L_{j'-1}(C) - \beta^{-1}
= L_i(C) - (j'-i)\cdot\beta^{-1}
> Z[j']\cdot \beta^{-1}.$$ By Lemma \[lem:Z\](\[Seq-property-2\]), $j^*-i = Z[i]/\alpha$. Since $j'<j^*$ we have $$(j'-i)\cdot \beta^{-1}
< (j^\ast - i)\cdot \beta^{-1}
= Z[i]\cdot \beta^{-1}/\alpha
< L_{i}(C) / \alpha.$$ It follows that $$L_i(C) - (j'-i)\cdot\beta^{-1}
> (1-1/\alpha)L_i(C) > L_i(C)/2.$$ On the other hand, Lemma \[lem:Z\](\[Seq-property-2\]) implies that $$Z[j']\cdot\beta^{-1}
\leq
(Z[i]/2)\cdot\beta^{-1}
< L_i(C)/2.$$ Therefore $L_i(C) - (j'-i)\beta^{-1} > Z[j']\cdot\beta^{-1}$, implying $L_{j'}(C)$ is set according to an Automatic Update. Finally, from the definition of $i$ and $j^*$ we have $$L_{j^*-1}(C) < L_i(C)
= Z[i]\cdot\beta^{-1}+1
\leq Z[j^*]\cdot \beta^{-1}+1
< (Z[j^*]+1)\cdot \beta^{-1},$$ meaning $C$ appears in $G_{j^*}^*$ and $L_{j^*}(C)$ is set according to a Special Update.
In the ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ algorithm, the upper bound estimates $U_i(C)$ are all monotonically decreasing with $i$, due to the way Special and Automatic Updates are performed in Steps 7 and 8. On the other hand, the lower bound estimates $L_i(C)$ are only monotonically decreasing during Automatic Updates and may oscillate many times over the execution of the algorithm. (See Figure \[fig:time-evolution\] for a depiction of how this happens.) Since $U_{\cdot}(\cdot)$-values offer a more stable way to measure progress, we need to connect them with the $L_{\cdot}(\cdot)$-values, which directly influence the composition of $X_i$ and $G_i^*$.
\[lem-aux-2-ssss\] If $[L_i(C),U_i(C)]$ is set during a Special Update step, then $$U_i(C) \leq \max\{2w^2\cdot L_i(C),\; 2w^2\cdot \beta^{-1}\}.$$
The proof is by induction on $i$. We regard Step 1 of ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ as the Special Update for $i=0$. The claim clearly holds for $i=0$ since $U_0(C)$ is set such that $U_0(C) \in \{w\beta^{-1}, w^2\cdot L_0(C)\}$. Assume, inductively, that the lemma holds for all indices less than $i$.
In general, whenever $L_i(C)$ is set to be $x\beta^{-1}/w$ in Step 7, where $x= {\mathsf{dist}}_{G_i^*}(W_i^*,C)$, the claim holds since $U_i(C) \in \{w\beta^{-1}, w^2\cdot L_i(C)\}$. Thus, we may proceed under the assumption that $L_i(C)$ is set to be $Z[i]\cdot\beta^{-1}+1$ during a Special Update.
Define $i^\ast < i$ to be the last stage in which $L_{i^*}(C)$ was set by a Special Update. We consider two cases, depending on how $L_{i^*}(C)$ was set.
- Suppose $L_{i^\ast}(C)$ is set to be $x/(\beta w) \leq Z[i^\ast]/\beta$ in the Special Update, and as a consequence, $U_{i^*}(C) \leq \max\{w\beta^{-1}, w^2 \cdot L_{i^*}(C)\}$. (Here $x > 0$ is the BFS-label of $C$ found in Step 7.) If $U_{i^\ast}(C) \leq (2w^2+1)\cdot \beta^{-1}$, then we are already done, since $U_i(C) \leq U_{i^\ast}(C) - 1/\beta \leq 2w^2\cdot \beta^{-1}$. Thus, we may assume $U_{i^\ast}(C) > (2w^2+1)\cdot\beta^{-1}$, and consequently, that $L_{i^\ast}(C) > 2\beta^{-1}$.
By Lemma \[lem-aux-1-ssss\], we have $(i-{i^\ast})/\beta \leq \max\{1/\beta, L_{i^\ast}(C)/\alpha\} < L_{i^\ast}(C)/2$. In order for $L_i(C)$ to be set by a Special Update, it is necessary that $L_{i-1}(C) \leq (Z[i]+1)\cdot \beta^{-1}$. Thus, we must have $$\begin{aligned}
Z[i]\cdot \beta^{-1} &\geq L_{i-1}(C)-\beta^{-1}
& & & & \text{since $L_{i-1}(C) \leq (Z[i]+1)\cdot \beta^{-1}$}\\
&= L_{i^\ast}(C) - (i-{i^\ast})\cdot\beta^{-1}
& & & & \text{since $C$ does not appear in $G_{i^\ast+1}^*,\ldots,G_{i-1}^*$}\\
&\geq L_{i^\ast}(C)/2 & & & & \text{since $(i-{i^\ast})\cdot \beta^{-1} \leq L_{i^\ast}(C)/2$}\end{aligned}$$ Remember that $L_i(C) = Z[i]\cdot \beta^{-1}+1$, and based on this we show that $U_i(C) \leq 2w^2 \cdot L_i(C)$. $$\begin{aligned}
L_i(C) &= Z[i]\cdot \beta^{-1}+1 \\
&> L_{i^\ast}(C)/2 & & & &\text{ since } Z[i]\cdot\beta^{-1} \geq L_{i^\ast}(C)/2\\
& \geq U_{i^\ast}(C) / (2w^2) & & & &\text{ since $U_{i^\ast}(C) \leq w^2 L_{i^\ast}(C)$}\\
&> U_i(C) / (2w^2) & & & &\text{ since $U_i(C) < U_{i^\ast}(C)$, as $i^\ast < i$.}\end{aligned}$$
- Now consider the case when $L_{{i^\ast}}(C)$ is set to be $Z[{i^\ast}]\cdot\beta^{-1}+1$. By Lemma \[lem-aux-11-ssss\], we have $Z[i] \geq Z[i^\ast]$. Therefore, $L_i(C) = Z[i]\cdot\beta^{-1}+1 \geq Z[i^\ast]\cdot\beta^{-1}+1 = L_{i^\ast}(C)$. By the inductive hypothesis, it is guaranteed that $U_{i^\ast}(C) \leq \max\{2w^2\cdot\beta^{-1},\, 2w^2\cdot L_{i^\ast}(C)\}$. If $U_{i^\ast}(C) \leq 2w^2\cdot \beta^{-1}$, then we are done. If $U_{i^\ast}(C) \leq 2w^2\cdot L_{i^\ast}(C)$, then we have $$L_i(C) \geq L_{i^\ast}(C) \geq U_{i^\ast}(C) / (2w^2) > U_i(C) / (2w^2).$$
This concludes the induction and the proof.
We are now in a position to prove Claim \[claim:U\], that each vertex participates in $G_i^*$ for at most $\tilde{O}(1)$ indices $i$.
Suppose that $C$ participates in a Special Update that sets $[L_i(C),U_i(C)]$ with $U_i(C) \ge 2w^2\cdot \beta^{-1}$ and that the next interval to be set by a Special Update is $[L_{j}(C),U_{j}(C)]$. Then $$\begin{aligned}
(j-i)
&\geq \frac{\beta\cdot L_i(C)}{8\alpha}
\geq \frac{\beta\cdot U_i(C)}{16\alpha w^2}.\label{eqn:blah}\end{aligned}$$ The first inequality of (\[eqn:blah\]) follows from Lemma \[lem-aux-1-ssss\] and the second inequality from Lemma \[lem-aux-2-ssss\]. Since $U_*(C)$ is decremented by at least $\beta^{-1}$ in each stage, (\[eqn:blah\]) implies that $$U_{j}(C) \leq U_i(C) - (j-i)\cdot\beta^{-1} \leq U_i(C)\left(1-\frac{1}{16\alpha w^2}\right).$$ In other words, $C$ participates in at most $\log_{1+\Theta(1/w^2)} D = \Theta(w^2 \log D) = O(\log^3 n)$ Special Updates until some stage $i$ in which $U_i(C) < 2w^2\cdot \beta^{-1}$, after which $C$ participates in at most $O(w^2)$ Special Updates before all constituents of $C$ settle their distances from the source and are deactivated.
Time and Energy Complexity of BFS {#sect:BFSmainthm}
---------------------------------
The remainder of this section constitutes a proof of Theorem \[thm:main\].
\[thm:main\] Let $G=(V,E)$ be a radio network, $s\in V$ be a distinguished source vertex, and $D = \max_u {\mathsf{dist}}_G(s,u)$. A Breadth First Search labeling can be computed in $\tilde{O}(D)\cdot 2^{O(\sqrt{\log D\log\log n})}$ time and $\tilde{O}(1)\cdot 2^{O(\sqrt{\log D\log\log n})}$ energy, with high probability.
The main problem is to compute BFS up to some threshold distance $D_0$. Once we have a solution to this problem, we can obtain bounds in terms of the (unknown) $D$ parameter by testing successive powers of two $D_0 = 2^k$, stopping at the first value that labels all of $V(G)$. We use a call to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ as a unit of measurement of both time and energy, i.e., calling ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ takes one unit of time, and every *participating* vertex expends one unit of energy. (By Lemma \[lemma:sr-decay\], actual time and energy are at most an $O(\log^2 n)$ factor larger.)
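The guess-and-double strategy can be sketched as follows, where `bfs_to_depth` is a hypothetical oracle standing in for one run of the depth-capped BFS computation:

```python
def bfs_with_unknown_diameter(bfs_to_depth, n):
    """Guess-and-double over D0 = 2, 4, 8, ...: run a depth-capped BFS and
    stop once every vertex is labeled. `bfs_to_depth(D0)` is a hypothetical
    oracle returning the set of vertices labeled within distance D0.

    The total cost is dominated by the last (successful) guess, since the
    per-call cost grows geometrically in D0."""
    D0 = 1
    while True:
        D0 *= 2
        labeled = bfs_to_depth(D0)
        if len(labeled) == n:
            return D0, labeled
```

For example, with a mock oracle over known distances `[0, 1, 1, 2, 3, 5]`, the loop stops at the first power of two covering the eccentricity 5.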
The algorithm we apply is a slightly modified ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$, where all cluster graphs in all recursive invocations are constructed with $\beta = 2^{-\sqrt{\log D_0 \log\log n}}$. We only apply ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ to recursion depth $L = \sqrt{\log D_0/\log\log n}$, at which point we revert to the trivial BFS algorithm that settles all distances up to $D'$ using $D'$ time and energy, by calling ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ $D'$ times.
Define ${\mathsf{En}}_r(D')$ to be the number of calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ that a vertex participates in when computing BFS to distance $D'$, and when the recursion depth is $r \in [0,L]$. Thus, we have $${\mathsf{En}}_r(D') = \left\{
\begin{array}{lr}
\tilde{O}(1)\cdot {\mathsf{En}}_{r+1}(\tilde{O}(\beta D')) + \tilde{O}(\beta^{-1}) & \mbox{if $r < L$}\\
D' & \mbox{if $r=L$}
\end{array}\right.$$ By Lemma \[lem:clustr-diam-ub\] the cost to create the cluster graph $G^*$ is $\tilde{O}(\beta^{-1})$. By Claim \[claim:X\] each vertex appears in $X_i$ for $\tilde{O}(1)$ stages $i$, and for each, participates in $\beta^{-1}$ calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$. These costs are covered by the $\tilde{O}(\beta^{-1})$ term. All calls to ${\mathsf{Recursive}\text{-}\mathsf{BFS}}$ on $G^*$ involve computing BFS to some distance at most $D^* = w\beta D' = \tilde{O}(\beta D')$. By Claim \[claim:U\], every vertex participates in $\tilde{O}(1)$ such recursive calls. Moreover, by Lemma \[lem:sr-sim\], every cluster $C$ (vertex in $G^*$) that participates in a call to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ on $G^*$ can be simulated such that constituent vertices of $C$ participate in $\tilde{O}(1)$ calls to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ on $G$. The costs of recursive calls are represented by the $\tilde{O}(1)\cdot {\mathsf{En}}_{r+1}(\tilde{O}(\beta D'))$ term.
When the recursion depth $r$ reaches $L$, the *maximum* value of $D'$ is therefore at most $$D_L = D_0 \cdot (\tilde{O}(\beta))^L = (\tilde{O}(1))^L = 2^{O(\sqrt{\log D_0\log\log n})},$$ since $\beta^L = D_0^{-1}$. Thus, the energy cost of the top-level recursive call is at most $${\mathsf{En}}_0(D_0) = (\tilde{O}(1))^L \cdot (D_L + \tilde{O}(\beta^{-1})) = \tilde{O}(1)\cdot 2^{O(\sqrt{\log D_0 \log\log n})}.$$
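The identity $\beta^L = D_0^{-1}$ is just the two exponents multiplying out exactly; a quick numeric sanity check with illustrative (hypothetical) parameter values:

```python
import math

# Illustrative parameter values: log2(D0) = 64, loglog n = 4 (hypothetical).
log_D0, loglog_n = 64, 4

a = math.sqrt(log_D0 * loglog_n)   # beta = 2^{-a} = 2^{-sqrt(log D0 loglog n)}
L = math.sqrt(log_D0 / loglog_n)   # recursion depth L = sqrt(log D0 / loglog n)

# The exponents multiply out exactly: a * L = log2(D0), hence beta^L = 1/D0.
assert a * L == log_D0
beta = 2.0 ** (-a)
assert math.isclose(beta ** L, 2.0 ** (-log_D0))
```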
We can set up a similar recursive expression for the time of this algorithm. $${\mathsf{Time}}_r(D') \le \left\{
\begin{array}{lr}
\displaystyle O(D') + \tilde{O}(\beta^{-1})\cdot
\sum_{i=0}^{{\left\lceil \beta D' \right\rceil}-1} {\mathsf{Time}}_{r+1}(Z[i]) & \mbox{if $r<L$}\\
D' & \mbox{ if $r=L$}
\end{array}
\right.$$ The $r=L$ case is the time of the trivial algorithm, so we focus on justifying the expression for $r<L$. The time to advance the BFS wavefront over all ${\left\lceil \beta D' \right\rceil}$ stages of Step 5 is $O(D')$. We treat Step 1 as the Special Update for $i=0$ with $Z[0]=D^*$. In general, the Special Update for stage $i$ takes ${\mathsf{Time}}_{r+1}(Z[i])$ time, and each unit of time (i.e., a call to ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$) is simulated in $G$ in time linear in the maximum cluster diameter, namely $\tilde{O}(\beta^{-1})$. By Lemma \[lem:Z\], each value $b\in B = \{\alpha,2\alpha,4\alpha,\ldots,D^*\}$ appears fewer than $\beta D'/b$ times in $Z[0],\ldots,Z[{\left\lceil \beta D' \right\rceil}-1]$, hence we can bound the sum by $\sum_{b\in B} (\beta D'/b)\cdot {\mathsf{Time}}_{r+1}(b)$. Assuming inductively that ${\mathsf{Time}}_{r+1}(b)$ is $b\cdot(\tilde{O}(1))^{L - (r+1)}$, which holds when $r+1=L$, we have $$\begin{aligned}
{\mathsf{Time}}_r(D') &\leq O(D') + \tilde{O}(\beta^{-1})\cdot
\sum_{b\in B} (\beta D'/b)\cdot {\mathsf{Time}}_{r+1}(b)\\
&= O(D') + \tilde{O}(1)\cdot \sum_{b\in B} (D'/b)\cdot b\cdot (\tilde{O}(1))^{L-(r+1)}\\
&= D' \cdot (\tilde{O}(1))^{L-r}\end{aligned}$$ Hence ${\mathsf{Time}}_0(D_0) = D_0 \cdot (\tilde{O}(1))^L = \tilde{O}(D_0)\cdot 2^{O(\sqrt{\log D_0\log\log n})}$.
Hardness of Diameter Approximation {#appendix:diameter}
==================================
In this section, we show that certain approximations of diameter cannot be computed in $o(n)$ energy, even allowing messages of unlimited size. Our lower bounds also hold in the setting where the network supports [*collision detection*]{}, i.e., in each time slot $t$, each listener $v$ is able to distinguish between the following two cases: (i) at least two vertices in $N(v)$ transmit at time $t$ (noise), or (ii) no vertex in $N(v)$ transmits at time $t$ (silence).
First, we show that computing a $(2-\epsilon)$-approximation of diameter is hard by proving that it takes $\Omega(n)$ energy to distinguish between (i) an $n$-vertex complete graph $K_n$ (which has diameter $1$) and (ii) an $n$-vertex complete graph minus one edge $K_n - e$ (which has diameter 2).
\[thm:diameter-lb1\] The energy complexity of computing a $(2-\epsilon)$-approximation of diameter is $\Omega(n)$, even on the class of unit-disc graphs.
Throughout the proof, we consider the scenario where the underlying graph is $K_n$ with probability $1/2$, and is $K_n - e$ with probability $1/2$. The edge $e$ is chosen uniformly at random. Observe that $K_n$ and $K_n - e$ are both unit disc graphs. Let $\mathcal{A}$ be any randomized algorithm that is able to distinguish between $K_n$ and $K_n - e$. We make the following simplifying assumptions, which only increase the capabilities of the vertices.
- Each vertex has a distinct ID from $[n]$.
- All vertices have access to a shared random string.
- By the end of each time slot $t$, each vertex knows the following information: (i) the IDs of the vertices transmitting at time $t$, (ii) the IDs of the vertices listening at time $t$, and (iii) the channel feedback (i.e., noise, silence, or a message $m$) for each listening vertex.
With the above extra capabilities, all vertices share the same history. Since the actions of the vertices at time $t+1$ depend only on the shared history of all vertices and their shared random bits, by the end of time $t$ all vertices are able to predict the actions (i.e., transmit a message $m$, listen, or idle) of all vertices at time $t+1$.
We say that time $t$ is [*good*]{} for a pair $\{u,v\}$ if the following conditions are met. Intuitively, if $t$ is not good for $\{u,v\}$, then what happens at time $t$ does not reveal any information as to whether $\{u,v\}$ is an edge.
- The number of transmitting vertices at time $t$ is either 1 or 2.
- One of the two vertices $\{u,v\}$ listens at time $t$, and the other one transmits at time $t$.
Once the shared random string is fixed, define $X_{\text{bad}}$ to be the set of pairs $\{u,v\}$ such that there is no time $t$ that is good for $\{u,v\}$ in an execution of $\mathcal{A}$ on $K_n$. Define $X_{\text{good}}$ to be the remaining pairs.
We claim that if the energy per vertex is at most $E = (n-1)/8$, then for a uniformly random pair $\{u,v\}$, ${\mathbf{P}\left(\{u,v\} \in X_{\text{bad}}\right)} \geq 1/2$. Recall that if a time $t$ is good for some pair, then the number of transmitting vertices is at most 2. Thus, if $t$ is good for $x$ pairs, then at least $x/2$ vertices listen at time $t$, and so the total energy spent over all vertices and all time slots is at least $|X_{\text{good}}| / 2$. On the other hand, the total energy is at most $nE = n(n-1)/8$, so $|X_{\text{good}}| \leq n(n-1)/4$. Since there are $n(n-1)/2$ pairs in total, $|X_{\text{bad}}| \geq n(n-1)/4$, and hence ${\mathbf{P}\left(\{u,v\} \in X_{\text{bad}}\right)} \geq 1/2$.
Recall that we pick $e$ at random and then choose the input graph to be either $K_n$ or $K_n-e$. Once $e$ is selected, let $\mathcal{E}$ be the event that $e\in X_{\text{bad}}$, which now depends only on the shared random string. When $\mathcal{E}$ occurs, the execution of $\mathcal{A}$ is identical on both $K_n$ and $K_n - e$, and so the success probability of $\mathcal{A}$ is at most $1/2$. Thus, $\mathcal{A}$ fails with probability at least $(1/2) {\mathbf{P}\left(\mathcal{E}\right)} \geq 1/4$. This contradicts the assumption that $\mathcal{A}$ is able to distinguish between $K_n$ and $K_n - e$.
For sparse graphs (i.e., those with $O(\log n)$-arboricity), we show that $(3/2-\epsilon)$-approximation of diameter is hard. The proof follows the framework of [@AbboudCK16], which shows that computing diameter takes $\Omega(n / \log^2 n)$ time in the ${\mathsf{CONGEST}}$ model, or more generally $\Omega\left(\frac{n}{B \log n}\right)$ time in the message-passing model with $B$-bit message size constraint. Note that a time lower bound in ${\mathsf{CONGEST}}$ does not, in general, imply any lower bound in ${\mathsf{RN}}[\infty]$, which *has no message size constraint*. The main challenge for proving Theorem \[thm:diameter-lb2\] is that we allow messages of unbounded length.
\[thm:diameter-lb2\] The energy complexity of computing a $(3/2-\epsilon)$-approximation of diameter is $\Omega(n / \log^2 n)$, even on graphs of $O(\log n)$-arboricity or $O(\log n)$ treewidth.
The proof is based on a reduction from the [*set-disjointness*]{} problem of communication complexity, which is defined as follows. Consider two players $A$ and $B$, each of them holding a subset of $\{0, \ldots, n-1\}$. Their task is to decide whether their subsets are disjoint. If the maximum allowed failure probability is $f < 1/2$, then they need to communicate $\Omega(n)$ bits [@BravermanM13; @KalyanasundaramS92]. This is true even if the two players have access to a public random string.
#### Lower Bound Graph Construction.
Let $S_A=\{a_1, \ldots, a_{\alpha}\}$ and $S_B=\{b_1, \ldots, b_{\beta}\}$ be two subsets of $\{0, \ldots, k-1\}$ corresponding to an instance of the set-disjointness problem. We assume that $k = 2^{\ell}$, for some positive integer $\ell$, and so each element $s \in S_A \cup S_B$ is represented as a binary string of length $\ell = \log k$. We write ${\mathsf{Ones}}(s) \subseteq [\ell] = \{1, \ldots, \ell\}$ to denote the set of indices $i$ in $[\ell]$ such that $s[i]=1$ (i.e., the $i$th bit of $s$ is $1$); similarly, ${\mathsf{Zeros}}(s) = [\ell] \setminus {\mathsf{Ones}}(s)$ is the set of indices $i$ in $[\ell]$ such that $s[i]=0$. For example, if the binary representation of $s$ is $10110010$ ($\ell = 8$), then ${\mathsf{Ones}}(s) = \{1,3,4,7\}$ and ${\mathsf{Zeros}}(s) = \{2,5,6,8\}$.
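The index sets ${\mathsf{Ones}}(s)$ and ${\mathsf{Zeros}}(s)$ are straightforward to compute; the following sketch reproduces the example above:

```python
def ones(s):
    """Indices i in [1, len(s)] where the i-th bit of s is 1."""
    return {i for i, bit in enumerate(s, start=1) if bit == '1'}

def zeros(s):
    """Complement of ones(s) within [1, len(s)]."""
    return set(range(1, len(s) + 1)) - ones(s)

# The example from the text: s = 10110010 with l = 8.
assert ones('10110010') == {1, 3, 4, 7}
assert zeros('10110010') == {2, 5, 6, 8}
```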
Define the graph $G=(V,E)$ as follows.
Vertex Set.
: Define $V=V_A \cup V_B \cup V_C \cup V_D \cup \{u^\star, v^\star\}$, where $V_A = \{u_1, \ldots, u_\alpha\}$, $V_B = \{v_1, \ldots, v_\beta\}$, $V_C = \{w_1, \ldots, w_{\ell}\}$, and $V_D = \{x_1, \ldots, x_{\ell}\}$. Note that we have natural 1-1 correspondences $V_A \leftrightarrow S_A$, $V_B \leftrightarrow S_B$, $V_C \leftrightarrow [\ell]$, and $V_D \leftrightarrow [\ell]$.
Edge Set.
: The edge set $E$ is constructed as follows. Initially $E = \emptyset$.
For each vertex $u_i \in V_A$ and each $w_j \in V_C$, add $\{u_i, w_j\}$ to $E$ if $j \in {\mathsf{Ones}}(a_i)$.
For each vertex $u_i \in V_A$ and each $x_j \in V_D$, add $\{u_i, x_j\}$ to $E$ if $j \in {\mathsf{Zeros}}(a_i)$.
For each vertex $v_i \in V_B$ and each $w_j \in V_C$, add $\{v_i, w_j\}$ to $E$ if $j \in {\mathsf{Zeros}}(b_i)$.
For each vertex $v_i \in V_B$ and each $x_j \in V_D$, add $\{v_i, x_j\}$ to $E$ if $j \in {\mathsf{Ones}}(b_i)$.
Add edges between $u^\star$ and all vertices in $V_A \cup V_C \cup V_D$.
Add edges between $v^\star$ and all vertices in $V_B \cup V_C \cup V_D$.
The graph $G$ has $n=\alpha+\beta+2\ell+2 \le 2(k+\log k+1)$ vertices. It is straightforward to show that $G$ has arboricity and treewidth $O(\log k) = O(\log n)$.
A crucial observation is that if $S_A \cap S_B =\emptyset$ (a [*yes*]{}-instance for the set-disjointness problem), then the diameter of $G$ is 2; otherwise (a [*[no]{}*]{}-instance for the set-disjointness problem) the diameter of $G$ is 3. This can be seen as follows. First of all, observe that we must have ${\mathsf{dist}}(s,t) \leq 2$ unless $s \in V_A$ and $t \in V_B$. Now suppose $s = u_i \in V_A$ and $t = v_j \in V_B$.
- Consider the case $a_i \neq b_j$. We show that ${\mathsf{dist}}(s,t) = 2$. Note that there is an index $l \in [\ell]$ such that $a_i$ and $b_j$ differ at the $l$th bit. If the $l$th bit of $a_i$ is 0 and the $l$th bit of $b_j$ is 1, then $(u_i, x_l, v_j)$ is a length-2 path between $s$ and $t$. If the $l$th bit of $a_i$ is 1 and the $l$th bit of $b_j$ is 0, then $(u_i, w_l, v_j)$ is a length-2 path between $s$ and $t$.
- Consider the case $a_i = b_j$. We show that ${\mathsf{dist}}(s,t) = 3$. Note that there is no index $l \in [\ell]$ such that $a_i$ and $b_j$ differ at the $l$th bit. Thus, each $w_l \in V_C$ and $x_l \in V_D$ is adjacent to exactly one of $\{u_i, v_j\}$. Hence there is no length-2 path between $s$ and $t$.
Therefore, if $S_A \cap S_B =\emptyset$, then ${\mathsf{dist}}(s,t) = 2$ for all pairs $\{s,t\}$, and so the diameter is 2; otherwise, there exist $s = u_i \in V_A$ and $t = v_j \in V_B$ such that ${\mathsf{dist}}(s,t) = 3$, and so the diameter is 3.
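The construction and the diameter dichotomy can be checked on small instances; below is a sketch using our own adjacency-list representation and BFS, with sets given as $\ell$-bit strings:

```python
from collections import deque

def build_graph(S_A, S_B, ell):
    """Build the lower bound graph for sets given as ell-bit strings.
    Edges follow the Ones/Zeros rules of the construction."""
    V_A = [('a', i) for i in range(len(S_A))]
    V_B = [('b', i) for i in range(len(S_B))]
    V_C = [('w', j) for j in range(1, ell + 1)]
    V_D = [('x', j) for j in range(1, ell + 1)]
    us, vs = ('u*',), ('v*',)
    adj = {v: set() for v in V_A + V_B + V_C + V_D + [us, vs]}
    def add(p, q):
        adj[p].add(q); adj[q].add(p)
    for i, a in enumerate(S_A):      # u_i -- w_j if j in Ones(a_i), else u_i -- x_j
        for j in range(1, ell + 1):
            add(('a', i), ('w', j) if a[j - 1] == '1' else ('x', j))
    for i, b in enumerate(S_B):      # v_i -- x_j if j in Ones(b_i), else v_i -- w_j
        for j in range(1, ell + 1):
            add(('b', i), ('x', j) if b[j - 1] == '1' else ('w', j))
    for v in V_A + V_C + V_D:
        add(us, v)                   # u* adjacent to V_A, V_C, V_D
    for v in V_B + V_C + V_D:
        add(vs, v)                   # v* adjacent to V_B, V_C, V_D
    return adj

def diameter(adj):
    """Exact diameter via one BFS per vertex."""
    def ecc(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(ecc(v) for v in adj)

# Disjoint sets give diameter 2; a common element forces diameter 3.
assert diameter(build_graph(['101', '110'], ['011'], 3)) == 2
assert diameter(build_graph(['101', '110'], ['101'], 3)) == 3
```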
#### Reduction.
Suppose that there is a randomized distributed algorithm $\mathcal{A}$ that is able to compute the diameter with $o(n / \log^2 n)$ energy per vertex, with failure probability $f = 1/{\operatorname{poly}}(n)$. We show that the algorithm $\mathcal{A}$ can be transformed into a randomized communication protocol that solves the set-disjointness problem with $o(n)$ bits of communication, and with the same failure probability $f = 1/{\operatorname{poly}}(n)$.
The main challenge in the reduction is that we do not impose any message size constraint. To deal with this issue, our strategy is to consider a modified computation model $\mathcal{M}'$. We will endow the vertices in the modified computation model $\mathcal{M}'$ with strictly more capabilities than the original radio network. Then, we argue that in the setting of $\mathcal{M}'$, we can assume that each message has size $O(\log k)$.
#### Modified Computation Model $\mathcal{M}'$.
We add the following extra powers to the vertices:
(P1)
: All vertices have access to an infinite shared random string. They know the vertex set and the IDs of all vertices. Specifically, ${\operatorname{ID}}(w_i) = i$ for each $w_i \in V_C$; ${\operatorname{ID}}(x_i) = \ell + i$ for each $x_i \in V_D$; ${\operatorname{ID}}(u^\star) = 2\ell +1$; ${\operatorname{ID}}(v^\star) = 2\ell +2$. Thus, for each $v \in V_C \cup V_D \cup \{u^\star, v^\star\}$, its role can be inferred from ${\operatorname{ID}}(v)$.
(P2)
: Messages received by vertices in $V_C \cup V_D \cup \{u^\star, v^\star\}$ (according to the usual radio network rules) are immediately communicated to *all* vertices. For example, if $v \in V_C \cup V_D \cup \{u^\star, v^\star\}$ receives $m$ from $u \in V$ at time $t$, then by the end of round $t$ all vertices in $V$ know that “$v$ receives $m$ from $u$ at time $t$.”
(P3)
: Each vertex $v \in V_A \cup V_B$ knows the list of the IDs of its neighbors initially.
Next, we discuss the consequences of these extra powers. In particular, we show that we can make the following assumptions about algorithms in this modified model $\mathcal{M}'$.
#### Vertices in $V_C \cup V_D \cup \{u^\star, v^\star\}$ Never Transmit.
Powers (P1) and (P2) together imply that each vertex in the graph is able to locally simulate the actions of all vertices in $V_C \cup V_D \cup \{u^\star, v^\star\}$. Intuitively, this means that all vertices in $V_C \cup V_D \cup \{u^\star, v^\star\}$ do not need to transmit at all throughout the algorithm.
Note that each vertex $v \in V$ already knows the list of $N(v) \cap (V_C \cup V_D \cup \{u^\star, v^\star\})$. If $v \in V_A \cup V_B$, then $v$ knows this information via (P3). If $v \in V_C \cup V_D \cup \{u^\star, v^\star\}$, then $v$ knows this information via (P1); the role of each vertex in $V_C \cup V_D \cup \{u^\star, v^\star\}$ can be inferred from its ID, which is public knowledge.
Thus, right before the beginning of each time $t$, each vertex $v \in V$ already knows exactly which vertices in $N(v) \cap (V_C \cup V_D \cup \{u^\star, v^\star\})$ will transmit at time $t$ and their messages. Thus, in the modified model $\mathcal{M}'$, we can simulate the execution of an algorithm which allows the vertices in $V_C \cup V_D \cup \{u^\star, v^\star\}$ to transmit by another algorithm that forbids them from doing so.
#### Messages Sent by Vertices in $V_A \cup V_B$ Have Length $O(\log k)$.
Next, we argue that we can assume that each message $m$ sent by a vertex $v' \in V_A \cup V_B$ can be replaced by another message $m'$ which contains only the list of all neighbors of $v'$, and this can be encoded as an $O(\log k)$-bit message, as follows. Recall that $N(v')$ is a subset of $V_C \cup V_D \cup \{u^\star, v^\star\}$, and so we can encode $N(v')$ as a binary string of length $|V_C \cup V_D \cup \{u^\star, v^\star\}| = 2\ell + 2 = O(\log k)$.
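The encoding of $N(v')$ as a $(2\ell+2)$-bit string can be sketched as follows, with the ID ordering following (P1):

```python
def encode_neighbors(neighbor_ids, ell):
    """Encode N(v') as a (2l+2)-bit string, one bit per potential neighbor.
    IDs follow (P1): w_i -> i, x_i -> l+i, u* -> 2l+1, v* -> 2l+2."""
    return ''.join('1' if i in neighbor_ids else '0'
                   for i in range(1, 2 * ell + 3))

def decode_neighbors(bits):
    """Recover the set of neighbor IDs from the bit string."""
    return {i for i, b in enumerate(bits, start=1) if b == '1'}

ids = {1, 3, 2 * 4 + 1}           # w_1, w_3, and u* for l = 4
msg = encode_neighbors(ids, 4)
assert len(msg) == 2 * 4 + 2      # O(log k) bits suffice
assert decode_neighbors(msg) == ids
```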
The message $m$ is a function of all information that $v'$ has. Since no vertex in $V_C \cup V_D \cup \{u^\star, v^\star\}$ transmits any message, $v'$ never receives a message, and so the information that $v'$ has consists of only the following components.
- The shared randomness and the ID list of all vertices (due to (P1)).
- The history of vertices in $V_C \cup V_D \cup \{u^\star, v^\star\}$ (due to (P2)).
- The list of neighbors of $v'$ (due to (P3)).
The only private information that $v'$ has is its list of neighbors. If a vertex $u' \in V$ knows the list of neighbors of $v'$, then $u'$ is able to calculate $m$ locally, and so $v'$ can just send its list of neighbors in lieu of $m$.
#### Algorithm $\mathcal{A}'$.
To sum up, given the algorithm $\mathcal{A}$, we can transform it into another algorithm $\mathcal{A}'$ in the modified computation model $\mathcal{M}'$ that uses only $O(\log k)$-bit messages, and $\mathcal{A}'$ achieves what $\mathcal{A}$ does. Note that the energy cost of $\mathcal{A}'$ is at most the energy cost of $\mathcal{A}$.
#### Solving Set-Disjointness.
Now we show how to transform $\mathcal{A}'$ into a protocol for the set-disjointness problem using only $o(k)$ bits of communication. The protocol is simply a simulation of $\mathcal{A}'$. The shared random string used by $\mathcal{A}'$ is the same random string shared by the two players $A$ and $B$.
Each player $X \in \{A,B\}$ is responsible for simulating vertices in $V_X$. Vertices in $V_A$ and $V_B$ never receive messages, and so all we need to do is let both players $A$ and $B$ know the messages *sent* to $V_C \cup V_D \cup \{u^\star, v^\star\}$ (in view of (P2)).
We show how to simulate one round $\tau$ of $\mathcal{A}'$. Let $Z(\tau)$ be the subset of vertices in $V_C \cup V_D \cup \{u^\star, v^\star\}$ that listen at time $\tau$, and consider a vertex $u' \in Z(\tau)$. (Recall that everyone can predict the action of every vertex in $V_C \cup V_D \cup \{u^\star, v^\star\}$.)
Let $Q_A$ be the number of vertices in $N(u') \cap V_A$ transmitting at time $\tau$. We define $m_{u',\tau,A}$ as follows. $$m_{u',\tau,A} =
\begin{cases}
\text{``0''} &\text{if $Q_A = 0$.}\\
\text{``$\geq 2$''} &\text{if $Q_A \geq 2$.}\\
(v', m') &\text{if $Q_A = 1$, and $v'$ is the vertex in $N(u') \cap V_A$ sending $m'$ at time $\tau$.}
\end{cases}$$ We define $m_{u',\tau,B}$ analogously. Note that the length of $m'$ must be $O(\log k)$ bits.
The protocol for simulating round $\tau$ is simply that $A$ sends $m_{u',\tau,A}$ (for each $u' \in Z(\tau)$) to $B$, and $B$ sends $m_{u',\tau,B}$ (for each $u' \in Z(\tau)$) to $A$. This offers enough information for both players to know the channel feedback (noise, silence, or a message $m$) received by each vertex in $Z(\tau)$. Note that the number of bits exchanged by $A$ and $B$ due to the simulation of round $\tau$ is $O(|Z(\tau)|\log k)$.
Recall that the energy cost of each vertex in an execution of $\mathcal{A}'$ is $o(k / \log^2 k)$, and we have $|V_C \cup V_D \cup \{u^\star, v^\star\}| = O(\log k)$. Thus, the total number of bits exchanged by the two players $A$ and $B$ is $$\sum_{\tau} O(|Z(\tau)|\log k) = |V_C \cup V_D \cup \{u^\star, v^\star\}| \cdot o(k / \log^2 k) \cdot O(\log k) = o(k).\qedhere$$
We remark that the proof of Theorem \[thm:diameter-lb2\] can be extended to graphs with higher diameter by using a slightly more complicated lower bound graph construction and analysis; see e.g., [@bringmann2018note]. Intuitively, this is due to the fact that the lower bound graph is [*sparse*]{}, so we are able to subdivide the edges.
Upper Bounds
------------
The approximation ratios in Theorems \[thm:diameter-lb1\] and \[thm:diameter-lb2\] cannot be improved. Observe that $\mathsf{BFS}$ already gives a 2-approximation of diameter, as $D' = \max_{u \in V(G)}\{{\mathsf{dist}}_G(s,u)\} \in [{\mathsf{diam}}(G)/2, {\mathsf{diam}}(G)]$, and we know that a $\mathsf{BFS}$ can be computed in $n^{o(1)}$ energy.
If we allow an energy budget of $n^{\frac{1}{2} + o(1)}$ then it is possible to achieve a [*nearly*]{} $3/2$-approximation by applying the algorithm of [@holzer2014brief; @RodittyW13], which computes a $D'$ such that $\lfloor 2{\mathsf{diam}}(G)/3 \rfloor \leq D' \leq {\mathsf{diam}}(G)$. More precisely, if we write ${\mathsf{diam}}(G) = 3h + z$, where $h$ is a non-negative integer, and $z \in \{0,1,2\}$, then $D' \in [2h+z, {\mathsf{diam}}(G)]$ for the case $z = 0, 1$, and $D' \in [2h+1, {\mathsf{diam}}(G)]$ for the case $z = 2$. Note that this does not contradict the $\Omega(n)$ energy lower bound for distinguishing between ${\mathsf{diam}}(G)=1$ and ${\mathsf{diam}}(G)=2$ in Theorem \[thm:diameter-lb1\], nor does it contradict Theorem \[thm:diameter-lb2\].
The algorithm of [@holzer2014brief; @RodittyW13] is as follows. Let each vertex join $S$ with probability $(\log n) /\sqrt{n}$, and compute a $\mathsf{BFS}$ from each vertex in $S$. Let $v^\star$ be any vertex that maximizes the distance to $S$. Identify any set of $\sqrt{n}$ vertices $R$ that are the closest to $v^\star$, and compute a $\mathsf{BFS}$ from each vertex in $R$. The diameter approximation $D'$ is the maximum $\mathsf{BFS}$-label computed throughout the algorithm. Note that there are multiple valid choices of $v^\star$ and $R$; ties can be broken arbitrarily.[^10] Since $\mathsf{BFS}$ can be computed in $n^{o(1)}$ energy, with a suitable implementation, this algorithm can be executed using $n^{\frac{1}{2} + o(1)}$ energy. For the sake of completeness, in what follows we provide the details of an implementation, which is based on the following subroutines.
[Leader Election:]{}
: Elect a leader $v_0\in V$ such that all vertices know ${\operatorname{ID}}(v_0)$. It is known that this task can be solved in $\tilde{O}(n)$ time and $\tilde{O}(1)$ energy [@ChangDHHLP18].
[Find Minimum:]{}
Suppose there is already a leader $v_0 \in V$, and each vertex $u \in V$ knows ${\mathsf{dist}}(u,v_0)$. Each vertex $u$ holds an integer $k_u \in [1, K]$ and a message $m_u$. The goal is to elect one vertex $u^\star$ such that $k_{u^\star} = \min \{ k_u \ | \ u \in V \}$ and have all vertices know $m_{u^\star}$. Ties are broken arbitrarily. The task [Find Maximum]{} is defined analogously.
We argue that the task ${\mathsf{Find \ Minimum}}$ and ${\mathsf{Find \ Maximum}}$ can be solved in $\tilde{O}({\mathsf{diam}}(G))$ time and $\tilde{O}(1)$ energy, given that $K = O({\operatorname{poly}}(n))$. To solve this task, we will do a binary search. Let $I \subseteq [1, K]$ be an interval currently under consideration. We let $v_0$ test whether there exists a vertex $u'$ with $k_{u'} \in I$ by doing $O({\mathsf{diam}}(G))$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$s on the $\mathsf{BFS}$ tree, layer by layer. The root $v_0$ is able to announce the result to everyone, also using $O({\mathsf{diam}}(G))$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$s on the $\mathsf{BFS}$ tree, layer by layer. After $O(\log K) = \tilde{O}(1)$ iterations, we are done.
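The binary search described above can be sketched in a centralized way. The following Python sketch is purely illustrative (the function name and the `exists_in` predicate are ours, not from the paper): each predicate evaluation stands in for one leader-driven query on the $\mathsf{BFS}$ tree, which costs $O({\mathsf{diam}}(G))$ ${\mathsf{Local}{\text -}\mathsf{Broadcast}}$s, and $O(\log K)$ such queries suffice.

```python
def find_minimum(values, K):
    """Illustrative sketch of the binary-search Find-Minimum.  The only
    primitive used is the existence test "is some k_u in [lo, hi]?",
    which the leader v_0 can answer with O(diam(G)) Local-Broadcasts
    on the BFS tree; after O(log K) tests the minimum key is isolated."""
    def exists_in(lo, hi):  # stands in for one leader-driven tree query
        return any(lo <= k <= hi for k in values)

    lo, hi = 1, K
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if exists_in(lo, mid):  # the minimum lies in the lower half
            hi = mid
        else:                   # no key below mid+1; discard lower half
            lo = mid + 1
    return lo, queries  # lo = min over all vertices; queries = O(log K)
```

The invariant maintained is that the minimum key always lies in $[lo, hi]$, so the loop terminates with the exact minimum after at most $\lceil \log_2 K \rceil$ queries.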
\[thm-diameter-ub1\] There is an algorithm that computes a 2-approximation of diameter in $n^{1 + o(1)}$ time and $n^{o(1)}$ energy.
Apply [Leader Election]{} to elect a leader $v_0$, do a $\mathsf{BFS}$ from $v_0$, and then do a ${\mathsf{Find \ Maximum}}$ to let each vertex learn $\max\{ {\mathsf{dist}}(u, v_0) \ | \ u \in V\}$. This gives a 2-approximation of the diameter, since the eccentricity of $v_0$ satisfies $\max_u {\mathsf{dist}}(u, v_0) \leq {\mathsf{diam}}(G) \leq 2 \max_u {\mathsf{dist}}(u, v_0)$ by the triangle inequality.
\[thm-diameter-ub2\] There is an algorithm that computes an approximation $D'$ such that $\lfloor 2{\mathsf{diam}}(G)/3 \rfloor \leq D' \leq {\mathsf{diam}}(G)$ in $n^{3/2 + o(1)}$ time and $n^{1/2 + o(1)}$ energy.
We show how to implement the algorithm of [@holzer2014brief; @RodittyW13]. We first apply [Leader Election]{} to elect a leader $v_0$ and do a $\mathsf{BFS}$ from $v_0$; we will use this tree to do ${\mathsf{Find \ Minimum}}$ and ${\mathsf{Find \ Maximum}}$ in subsequent steps of the algorithm.
In the algorithm of [@holzer2014brief; @RodittyW13], we let each vertex join $S$ with probability $(\log n) /\sqrt{n}$. Using $|S| = \tilde{O}(\sqrt{n})$ iterations of ${\mathsf{Find \ Minimum}}$ we can let everyone know the ${\operatorname{ID}}$s of vertices in $S$. Then, we sequentially compute a $\mathsf{BFS}$ from each vertex in $S$. Let $v^\star$ be a vertex that maximizes the distance to $S$. Such a vertex $v^\star$ can be elected using one iteration of ${\mathsf{Find \ Maximum}}$. To compute the set $R$, we first do a $\mathsf{BFS}$ from $v^\star$ so that everyone knows its distance to $v^\star$. Then, after $\sqrt{n}$ iterations of ${\mathsf{Find \ Minimum}}$, we can let everyone learn the set $R$, and then we can do the $\mathsf{BFS}$ computation from each vertex in $R$ sequentially. The diameter approximation $D'$ is the maximum $\mathsf{BFS}$-label computed throughout the algorithm, and this can be computed using one iteration of ${\mathsf{Find \ Maximum}}$. It is clear that the algorithm takes $n^{3/2 + o(1)}$ time and $n^{1/2 + o(1)}$ energy, as it only uses $\tilde{O}(\sqrt{n})$ ${\mathsf{Find \ Minimum}}$, ${\mathsf{Find \ Maximum}}$, and $\mathsf{BFS}$ computations.
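For intuition, the graph-level logic of this algorithm (ignoring the distributed implementation and the energy accounting) can be simulated centrally. The following Python sketch is ours and purely illustrative: a fixed sample of size $\sqrt{n}$ stands in for the probability-$(\log n)/\sqrt{n}$ sampling, and a connected graph is assumed.

```python
from collections import deque
import random

def bfs_dist(adj, src):
    """BFS from src; returns distances to all reachable vertices."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def approx_diameter(adj):
    """Centralized sketch of the 3/2-style approximation: sample S,
    BFS from S, pick v* farthest from S, take the ~sqrt(n) vertices
    closest to v*, BFS from those, return the max BFS label."""
    V = list(adj)
    n = len(V)
    s_size = max(1, round(n ** 0.5))
    S = random.sample(V, s_size)  # stands in for prob. (log n)/sqrt(n)
    best = 0
    dist_to_S = {v: float('inf') for v in V}
    for s in S:
        d = bfs_dist(adj, s)
        best = max(best, max(d.values()))
        for v, dv in d.items():
            dist_to_S[v] = min(dist_to_S[v], dv)
    v_star = max(V, key=lambda v: dist_to_S[v])  # farthest from S
    d_star = bfs_dist(adj, v_star)
    best = max(best, max(d_star.values()))
    R = sorted(V, key=lambda v: d_star[v])[:s_size]  # closest to v*
    for r in R:
        best = max(best, max(bfs_dist(adj, r).values()))
    return best  # D' <= diam(G); >= floor(2 diam/3) w.h.p.
```

Since every returned value is a true $\mathsf{BFS}$ label, the output never exceeds ${\mathsf{diam}}(G)$, and any single eccentricity is at least ${\mathsf{diam}}(G)/2$ by the triangle inequality.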
[^1]: Supported by NSF CAREER award CCF-1150281.
[^2]: Supported by NSF grants CCF-1514383, CCF-1637546, and CCF-1815316.
[^3]: Synchronizing devices in an energy-efficient manner is an interesting open problem. In some situations it makes sense to assume the devices begin in a synchronized state, e.g., if the sensors are simultaneously turned on and dropped from an airplane on the aforementioned National Park.
[^4]: Here $N(v)=\{u \ | \ \{u,v\}\in E(G)\}$ is the neighborhood of $v$. A more powerful model allows for *collision detection*, i.e., differentiation between zero and two or more transmitters in $N(v)$. Since collision detection only gives a ${\mathrm{polylog}}(n)$ advantage in any complexity measure (${\mathsf{Local}{\text -}\mathsf{Broadcast}}$ in Section \[sect:cluster\] allows each vertex to differentiate between zero and two or more transmitters in ${\mathrm{polylog}}(n)$ rounds w.h.p.) and we are insensitive to such factors, we assume the weakest model, without collision detection.
[^5]: Sender-side CD enables devices to detect if another device is transmitting; receiver-side CD lets receivers detect if at least two devices are transmitting.
[^6]: These definitions seem to be robust to certain modeling assumptions, e.g., whether collision detection is available.
[^7]: I.e., it returns a value in the range $\left[{\left\lfloor \frac{2}{3}\mathsf{diam}(G) \right\rfloor}, \mathsf{diam}(G)\right]$.
[^8]: One imagines a vertex $v_e$ in the middle of an edge $e$; $e$ is cut iff ${\mathsf{Ball}_{G}(v_e, 1/2)}$ intersects two clusters, which must cover distinct endpoints of $e$.
[^9]: The purpose of the $A$ parameter is to refrain from computing useless information. E.g., when we compute the distance from the clusters $W_i^*$ intersecting the $i$th wavefront, we are only interested in distances to clusters intersecting as-yet unvisited vertices (those intersecting $A$), not settled vertices “behind” the wavefront.
[^10]: Precisely, it is required that $|R| = \sqrt{n}$, and for each $u \in R$, there are less than $\sqrt{n}$ vertices $v$ such that ${\mathsf{dist}}(v, v^\star) < {\mathsf{dist}}(u, v^\star)$. In general, there could be multiple choices of $R$ satisfying this requirement.
---
abstract: 'We consider a stationary axisymmetric force-free degenerate magnetosphere of a rotating Kerr black hole surrounded by a thin Keplerian infinitely-conducting accretion disk. We focus on the closed-field geometry characterized by a direct magnetic coupling between the disk and the hole’s event horizon. We first argue that the hole’s rotation necessarily limits the radial extent of the force-free link on the disk surface: the faster the hole rotates, the smaller the magnetically-connected inner part of the disk has to be. We then show that this is indeed the case by solving numerically the Grad–Shafranov equation—the main differential equation describing the structure of the magnetosphere. An important element in our approach is the use of the regularity condition at the inner light cylinder to fix the poloidal current as a function of the poloidal magnetic flux. As an outcome of our computations, we are able to chart out the maximum allowable size of the portion of the disk that is magnetically connected to the hole as a function of the black hole spin. We also calculate the angular momentum and energy transfer between the hole and the disk that takes place via the direct magnetic link. We find that both of these quantities grow rapidly and that their deposition becomes highly concentrated near the inner edge of the disk as the black hole spin is increased.'
author:
- 'Dmitri A. Uzdensky [^1]'
date: 'October 27, 2004'
title: 'Force-Free Magnetosphere of an Accretion Disk — Black Hole System. II. Kerr Geometry'
---
Introduction {#sec-intro}
============
This paper is devoted to the subject of magnetic interaction between a rotating black hole and an accretion disk around it—a topic that has enjoyed a lot of attention among researchers in recent years. Magnetic fields are believed to play an important role in the dynamics of accreting black hole systems (e.g., Begelman, Blandford, & Rees 1984; Krolik 1999b; Punsly 2001). In particular, they can be very effective in transporting angular momentum and the associated rotational energy of either the hole or the disk.
Where and how this transport takes place and to what observational consequences it can lead, is partly determined by the global geometry of the magnetic field lines. Conceptually, one can think of two basic types of geometry. The first type is the open-field configuration shown schematically in Figure \[fig-geometry-open\]. The main topological feature here is that there is no direct magnetic link between the hole and the disk. Instead, all the field lines are open and extend to infinity. Historically, this configuration was the first to have been considered, and it has been studied very extensively during the past three decades (see, e.g., Lovelace 1976; Blandford 1976; Blandford & Znajek 1977, hereafter BZ77; MacDonald & Thorne 1982, hereafter MT82; Phinney 1983; Macdonald 1984; Thorne, Price, & Macdonald 1986; Punsly 1989, 2001, 2003, 2004; Punsly & Coroniti 1990; Beskin & Par’ev 1993; Beskin 1997; Ghosh & Abramowicz 1997; Beskin & Kuznetsova 2000; Komissarov 2001, 2002b, 2004a). The reason for this popularity is that this configuration is related to the famous Blandford–Znajek mechanism (BZ77) now widely regarded as the primary process powering jets in active galactic nuclei (AGN) and micro-quasars. As Blandford and Znajek showed, a large-scale, ordered open magnetic field can extract the rotational energy from a spinning black hole and transport it to large distances via Poynting flux (a similar process works along the field lines connected to the disk).
The second type of magnetic field geometry is the closed-field configuration, shown in Figure \[fig-geometry-closed\]. Although it has been occasionally discussed in the literature before the last decade (e.g., Zeldovich & Schwartzman, quoted in Thorne 1974; MT82; Thorne et al. 1986; Nitta, Takahashi, & Tomimatsu 1991; Hirotani 1999), it is only in the last five years that it has attracted serious scientific attention (e.g., Blandford 1999, 2000, 2002; Gruzinov 1999; van Putten 1999; van Putten & Levinson 2003; Li 2000, 2001, 2002a, 2002b, 2004; Wang, Xiao, & Lei 2002; Wang, Lei, & Ma 2003a; Wang et al. 2003b, 2004). The basic topological structure of magnetic field in this configuration is very different from that of the open-field configuration. The field lines are closed and directly connect the black hole to the disk. In this so-called Magnetically-Coupled configuration (Wang et al. 2002), the energy and angular momentum are not taken away to infinity, but instead are exchanged between the hole and the disk by the magnetic field. Therefore, magnetic coupling, together with the accretion process, controls the spin evolution and the spin equilibrium of the black hole (Wang et al. 2002, 2003a). In addition, the rotational energy of the hole can be magnetically extracted (just like in the Blandford–Znajek process) and deposited onto the disk leading to a change in the disk energy-dissipation profile and hence its observable spectral characteristics (Gammie 1999; Li 2000, 2001, 2002a, 2002b, 2004; Wang et al. 2003a,b). Finally, if the rotating field lines are strongly twisted and become unstable to a non-axisymmetric kink-like instability, a strong variability of the energy release may result, which would be a possible explanation for quasi-periodic oscillations (QPOs) in micro-quasar systems (e.g., Gruzinov 1999; Wang et al. 2004). All these phenomena make the closed-field configuration astrophysically very interesting.
Most of the work that has been done on studying the magnetic field structure around an accreting Kerr black hole, including the seminal paper by Blandford & Znajek (BZ77), has been performed under the assumption that the magnetosphere above the thin disk is ideally conducting and force-free. Then, if one also assumes that the system is stationary and axisymmetric, the magnetic field is governed by the general-relativistic version of the force-free Grad–Shafranov equation (e.g., MT82; for the full-MHD generalization of this equation see Nitta et al. 1991; Beskin & Par’ev 1993; Beskin 1997). Since this is a rather nontrivial nonlinear partial differential equation (PDE) with singular surfaces and free functions, it is generally not tractable analytically, except in some special simple cases, such as the slow-rotation limit (BZ77). However, over the past 20 years, a number of force-free solutions for the magnetosphere have been obtained numerically, either by solving the Grad–Shafranov equation itself (MacDonald 1984; Fendt 1997) or as an asymptotic steady state of force-free degenerate electrodynamics (FFDE) evolution (Komissarov 2001, 2002b, 2004a). Until now, most of these studies have been done in the context of the open-field configuration, primarily because of its relevance to the jet problem.
In contrast, most of the work on closed-field configurations has been limited to analytic and semi-analytic studies of the effects that magnetic link has on the disk radiation profile and on the spin evolution of the black hole. The structure of the magnetosphere has not in fact been computed self-consistently. These studies have just assumed the existence of the link and made some simplified assumptions about the field distribution on the horizon.
The only exception is the recent work by Uzdensky (2004), in which a force-free magnetosphere linking a Keplerian disk to a Schwarzschild black hole was numerically computed for the first time.
In the present paper, we make the next logical step by extending this previous work to the more general case of a rapidly rotating Kerr black hole. This is indeed the most important case, not only because real astrophysical black holes are believed to be rotating, but also because the nonlinear terms in the Grad–Shafranov equation, especially the toroidal field pressure, become large in this case. As a result, even the existence of closed-field solutions is not guaranteed. And indeed, one of the main goals of our present study is to determine the conditions for existence of such solutions in Kerr geometry. In other words, we aim at determining the limitations that the rotation of the black hole imposes on the direct magnetic link between the hole and the disk. In addition, by computing the global magnetic field structure, we will be able to study the effect of the black hole rotation on the magnetic field distribution on the horizon, the poloidal electric current as a function of poloidal magnetic flux, and the location of the inner light cylinder, as well as such astrophysically-important processes as angular momentum and energy transfer between the hole and the disk.
In order to achieve the goal of obtaining numerical solutions of the force-free Grad–Shafranov equation in Kerr geometry, we first analyze the mathematical structure of this equation. In particular, we pay special attention to its singular surfaces (the event horizon and the light cylinder) and the corresponding regularity conditions. Thus, we use the light-cylinder regularity condition to determine the poloidal current as a function of poloidal magnetic flux, similar to the way it was done by Contopoulos, Kazanas, & Fendt (1999) for the case of the pulsar magnetosphere (see also Beskin & Kuznetsova 2000; Uzdensky 2003; Uzdensky 2004). The event-horizon regularity condition, also known as Znajek’s (1977) horizon boundary condition, is then used to determine the poloidal flux distribution on the horizon. Thus, one does not have the freedom to arbitrarily specify any extra boundary conditions at the horizon, and hence there is no problem with causality, in line with the reasoning presented by Beskin & Kuznetsova (2000) and by Komissarov (2002b, 2004a) (see also Levinson 2004).
Finally, although in this paper we deal exclusively with large-scale, ordered magnetic fields, we acknowledge the difficulty in justifying the existence of such fields around accreting black holes (e.g., Livio, Ogilvie, & Pringle 1999), especially in the closed-field configuration. Also, as recent numerical simulations suggest (e.g., Hawley & Krolik 2001; Hirose et al. 2004), there may be a significant deposition of energy and angular momentum at the inner edge of the disk due to small-scale, intermittent magnetic fields connecting the disk to the plunging region (see also Krolik 1999a; Agol & Krolik 2000).
The paper is organized as follows. § \[sec-equations\] describes the mathematical formalism of force-free axisymmetric stationary magnetospheres in Kerr geometry. In particular, in § \[subsec-kerr\] we introduce the Kerr metric tensor in Boyer–Lindquist coordinates and list several general geometric relationships for future use. In § \[subsec-GS-eqn\] we consider steady-state, axisymmetric, degenerate electro-magnetic fields and then discuss the force-free condition and the Grad–Shafranov equation. In § \[subsec-EH\] we consider the black hole’s event horizon as a singular surface of this equation and discuss the associated regularity condition, which is also known as Znajek’s horizon boundary condition. In § \[sec-idea\] we present a simple but robust physical argument that demonstrates that a force-free magnetic link between a rotating black hole and the disk cannot extend to arbitrarily large distances on the disk; we also argue that the maximal radial extent of the magnetic link should scale inversely with the black hole’s rotation rate in the slow-rotation limit. We confirm these propositions in § \[sec-numerical\], where we present our numerical solutions of the Grad–Shafranov equation. Then, in § \[sec-implications\] we discuss the magnetically-mediated angular-momentum and energy exchange between the hole and the disk. We then close by summarizing our findings in § \[sec-conclusions\].
Axisymmetric force-free magnetosphere in Kerr geometry — basic equations {#sec-equations}
========================================================================
Kerr geometry — mathematical preliminaries {#subsec-kerr}
------------------------------------------
In this paper we employ Boyer–Lindquist coordinates ($t,r,\theta,\phi$) in Kerr geometry. The metric of the four-dimensional space-time can be written in these coordinates as $$ds^2 = (g_{\phi\phi}\,\omega^2 - \alpha^2)\, dt^2 - 2\, g_{\phi\phi}\,\omega\, d\phi\, dt + g_{rr}\, dr^2 + g_{\theta\theta}\, d\theta^2 + g_{\phi\phi}\, d\phi^2 \, , \label{eq-metric}$$ with the components of the metric tensor given by $$\begin{aligned}
\alpha &=& {\rho\over\Sigma} \sqrt{\Delta} \, ,
\label{eq-alpha} \\
\omega &=& {2aMr\over{\Sigma^2}} \, ,
\label{eq-beta} \\
g_{rr} &=& {\rho^2\over\Delta}, \qquad
g_{\theta\theta}=\rho^2, \qquad
g_{\phi\phi} = \varpi^2 \, ,
\label{eq-metric-tensor} \end{aligned}$$ where $$\begin{aligned}
\rho^2 &\equiv& r^2+a^2 \cos^2\theta \, ,
\label{eq-rho} \\
\Delta &\equiv& r^2+a^2-2Mr \, ,
\label{eq-Delta} \\
\Sigma^2 &\equiv& (r^2+a^2)^2 - a^2\Delta\sin^2\theta \, ,
\label{eq-Sigma} \\
\varpi &\equiv& {\Sigma\over\rho} \sin\theta \, .
\label{eq-varpi}\end{aligned}$$
Here, $M$ and $a\in [0;M]$ are the mass and the spin parameter (specific angular momentum) of the central black hole, respectively. (Throughout this paper we use geometric units, i.e., we set both the gravitational constant $G$ and the speed of light $c$ to 1).
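For concreteness, the auxiliary metric functions defined in equations (\[eq-alpha\])–(\[eq-varpi\]) are straightforward to evaluate numerically. The following Python sketch is purely illustrative (the function name and default parameter values are ours, not from the paper); it works in geometric units, $G = c = 1$.

```python
import math

def kerr_metric_functions(r, theta, M=1.0, a=0.9):
    """Evaluate the auxiliary Kerr-metric quantities rho^2, Delta,
    Sigma^2, varpi, and the lapse alpha and frame-dragging frequency
    omega in Boyer-Lindquist coordinates (geometric units, G = c = 1)."""
    rho2   = r**2 + a**2 * math.cos(theta)**2
    Delta  = r**2 + a**2 - 2.0 * M * r
    Sigma2 = (r**2 + a**2)**2 - a**2 * Delta * math.sin(theta)**2
    varpi  = math.sqrt(Sigma2 / rho2) * math.sin(theta)  # = Sigma sin(theta)/rho
    alpha  = math.sqrt(rho2 * Delta / Sigma2)            # = rho sqrt(Delta)/Sigma
    omega  = 2.0 * a * M * r / Sigma2
    return dict(rho2=rho2, Delta=Delta, Sigma2=Sigma2,
                varpi=varpi, alpha=alpha, omega=omega)
```

As a sanity check, in the Schwarzschild limit $a = 0$ the lapse reduces to $\alpha = \sqrt{1 - 2M/r}$, the frame-dragging frequency vanishes, and the cylindrical radius in the equatorial plane is $\varpi = r$.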
In order to describe the electromagnetic processes around a black hole, we use the 3+1 split of the laws of electrodynamics introduced by MT82 (see also Thorne et al. 1986). In this formalism, one splits the four-dimensional spacetime into the global time $t$ and the absolute three-dimensional curved space, the geometry of which is described by a three-dimensional (3D) metric tensor with components given by equation (\[eq-metric-tensor\]). The electromagnetic field is represented by the electric and magnetic field 3-vectors ${\bf E}$ and ${\bf B}$ measured by local zero-angular-momentum observers (ZAMOs; see Thorne et al. 1986). In order to describe these vectors, we will use both the coordinate basis $\{ \partial_i \} = \{{\bf e}_i\}$ and the orthonormal basis $\{{\bf e}_{\hat{i}} \}$ \[where the Roman index $i$ runs through the three spatial coordinates $(r,\theta,\phi)$\]. Because the spatial 3D metric tensor $g_{ij}$ is diagonal, these two bases are related via $$\partial_i = \sqrt{g_{ii}}\; {\bf e}_{\hat{i}} \, , \qquad i=r,\theta,\phi \, , \label{eq-basis-1}$$ (note: there is no summation over $i$ in this expression!). In particular, in the Boyer–Lindquist coordinates in Kerr geometry, we have $$\partial_r = {\rho\over\sqrt{\Delta}}\, {\bf e}_{\hat{r}} \, , \qquad \partial_\theta = \rho\, {\bf e}_{\hat{\theta}} \, , \qquad \partial_\phi = \varpi\, {\bf e}_{\hat{\phi}} \, . \label{eq-basis-2}$$
We shall also need the following mathematical expressions: the 3-gradient of a scalar function $f({\bf x})=f(r,\theta,\phi)$ in the Boyer–Lindquist coordinates is $$\begin{aligned}
\nabla f &=& \sum\limits_i \,
g_{ii}^{-1/2} (\partial_i f) {\bf e}_{\hat{i}} \nonumber \\
&=& {\sqrt{\Delta}\over\rho}\, (\partial_r f)\, {\bf e}_{\hat r} +
{1\over\rho}\, (\partial_\theta f)\, {\bf e}_{\hat \theta} +
{1\over\varpi}\, (\partial_\phi f)\, {\bf e}_{\hat \phi} \, ,
\label{eq-gradient} \end{aligned}$$ and its square is $$\begin{aligned}
|\nabla f|^2 &=& \sum\limits_i g_{ii}^{-1} (\partial_i f)^2 \nonumber \\
&=& {\Delta\over{\rho^2}} (\partial_r f)^2 +
{1\over{\rho^2}} (\partial_\theta f)^2 +
{1\over{\varpi^2}} (\partial_\phi f)^2 \, .
\label{eq-gradient-square} \end{aligned}$$
Finally, the 3-divergence of a 3-vector ${\bf A}$ can be written as $$\nabla \cdot {\bf A} = A^i_{;i} = A^i_{,i} + A^i \left( \ln\sqrt{g} \right)_{,i} = {1\over\sqrt{g}}\, \left( \sqrt{g}\, A^i \right)_{,i} \, , \label{eq-divergence}$$ where $g$ is the determinant of the 3-D metric tensor: $$\sqrt{g} = {{\rho^2\varpi}\over\sqrt{\Delta}} = {{\rho\Sigma\sin\theta}\over\sqrt{\Delta}} \, . \label{eq-g}$$
Stationary axisymmetric ideal force-free magnetosphere in Kerr geometry {#subsec-GS-eqn}
-----------------------------------------------------------------------
As mentioned above, in the 3+1 split formalism of MT82 a magnetosphere of a rotating Kerr black hole is described in terms of two spatial vector fields, ${\bf E}$ and ${\bf B}$. Under the assumptions that the magnetosphere is: (1) stationary ($\partial_t =0$), (2) axisymmetric ($\partial_\phi=0$), and (3) ideally-conducting, or degenerate (${\bf E\cdot B} =0$), these two vector fields can be expressed in terms of three scalar functions, $\Psi(r,\theta)$, $\Omega_F(r,\theta)$, and $I(r,\theta)$: $${\bf B}(r,\theta) = {\bf B}_{\rm pol} + {\bf B}_{\rm tor} \, ,$$ where $$\begin{aligned}
{\bf B}_{\rm pol} &=& \nabla\Psi\times\nabla\phi =
{1\over{\varpi\rho}}\, \Psi_\theta\, {\bf e}_{\hat{r}} -
{\sqrt{\Delta}\over{\varpi\rho}}\, \Psi_r\, {\bf e}_{\hat{\theta}}\, ,
\label{eq-Bpol} \\
{\bf B}_{\rm tor} &=& B_{\hat{\phi}} {\bf e}_{\hat{\phi}} =
{I\over{\alpha\varpi}}\, {\bf e}_{\hat{\phi}} \, ,
\label{eq-Btor}\end{aligned}$$ and $${\bf E} (r,\theta) = {\bf E}_{\rm pol} =
-{{\delta\Omega}\over\alpha}\, \nabla\Psi\, , \qquad
E_{\phi} = 0 \, ,
\label{eq-E}
\eeq
where
\beq
\delta\Omega \equiv \Omega_F - \omega \, .
\label{eq-DeltaOmega}
\eeq
Here, $\Psi(r,\theta)$ is the poloidal magnetic flux function,
$\Omega_F=\Omega_F(\Psi)$ is the angular velocity of the magnetic
field lines, and $I(r,\theta)$ is $(2/c)$ times the poloidal electric
current flowing through the circular loop $r={\rm const}$, $\theta=
{\rm const}$.
[Note that our definitions of $\Psi$ and $I$ differ from the ones
adopted by MT82: $\Psi=\psi_{\rm MT82}/2\pi$, $I=-(2/c)I_{\rm MT82}$.]
Next, in this work we are interested in the case of a {\it force-free}
magnetosphere, i.e., a magnetosphere that is so tenuous that the
electromagnetic forces completely dominate over all others, including
gravitational, pressure, and inertial forces. Even though this framework
has been widely accepted as a primary tool in describing magnetospheres
of black holes and radio-pulsars, its usefulness and validity near
the event horizon has been seriously challenged by Punsly (2001, 2003).
However, according to the recent MHD simulations by Komissarov (2004b),
these worries seem to be unfounded. Therefore, we shall still employ
the force-free approach in this paper. Correspondingly, we shall write
the force-balance equation (in the ZAMO reference frame) as
\beq
\rho_e {\bf E} + {\bf j \times B} = 0 \, ,
\label{eq-force-free}
\eeq
where the ZAMO-measured electric charge density $\rho_e$ and
electric current density ${\bf j}$ are related to ${\bf E}$
and ${\bf B}$ via Maxwell's equations (see MT82).
The toroidal component of the force-free equation immediately leads to
\beq
I(r,\theta) = I(\Psi) \, ,
\label{eq-I=IofPsi}
\eeq
i.e., the poloidal electric current does not cross poloidal
flux surfaces.
The poloidal component of equation~(\ref{eq-force-free}), upon using
expressions (15)--(18), yields the so-called generally-relativistic
force-free Grad--Shafranov equation --- the main equation that governs
the system. In this paper we shall use as a starting point the form of
this equation given in MT82 (i.e., eq. [6.4] of MT82 slightly modified
to account for the change in the definition of~$\Psi$):
\begin{eqnarray}
\nabla \cdot \biggl( {\alpha\over{\varpi^2}}\,
\bigl[ 1-{{\delta\Omega^2 \varpi^2}\over{\alpha^2}} \bigr]
\nabla\Psi \biggr) &+& \nonumber \\
{{\delta\Omega}\over\alpha}\, {d\Omega_F\over{d\Psi}}\, (\nabla\Psi)^2 +
{1\over{\alpha\varpi^2}}\, II'(\Psi) &=& 0 \, .
\label{eq-GS-MT82}
\end{eqnarray}
This is a nonlinear 2nd-order elliptic partial differential equation (PDE);
it determines $\Psi(r,\theta)$ provided that $\Omega_F(\Psi)$ and $I(\Psi)$
are known. We can rewrite this equation as follows:
\beq
LHS \equiv \alpha\varpi^2 \nabla \cdot \biggl({1\over{\alpha\varpi^2}}\,
(\alpha^2 - \delta\Omega^2 \varpi^2) \nabla\Psi \biggr) =
RHS \equiv -II'(\Psi) - \delta\Omega \Omega_F'(\Psi) \,
\varpi^2 (\nabla\Psi)^2 \, ,
\label{eq-GS}
\eeq
where a prime denotes the derivative with respect to~$\Psi$,
e.g., $I'(\Psi)=dI/d\Psi$.
Upon introducing the quantities
\beq
D \equiv \alpha^2 - \delta\Omega^2 \varpi^2,
\label{eq-D-def}
\eeq
and
\beq
Q(r,\theta) \equiv {\sqrt{|g|}\over{\alpha\varpi^2}} =
{{\rho\Sigma}\over{\varpi\Delta}} = {{\rho^2}\over{\Delta\sin\theta}}\, ,
\label{eq-Q-def}
\eeq
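For the reader's convenience, the equalities in equation~(\ref{eq-Q-def}) follow directly from the definitions (\ref{eq-alpha}), (\ref{eq-varpi}), and (\ref{eq-g}):

```latex
\sqrt{|g|} = \frac{\rho^2 \varpi}{\sqrt{\Delta}} \, , \qquad
\alpha\varpi^2 = \frac{\rho\sqrt{\Delta}}{\Sigma}\, \varpi^2
\quad\Longrightarrow\quad
Q = \frac{\rho^2 \varpi / \sqrt{\Delta}}
         {\rho\sqrt{\Delta}\, \varpi^2 / \Sigma}
  = \frac{\rho\Sigma}{\varpi\Delta}
  = \frac{\rho^2}{\Delta\sin\theta} \, ,
```

where the last step uses $\varpi = \Sigma\sin\theta/\rho$.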
and upon using identity~(\ref{eq-divergence}), the left-hand side
(LHS) of this equation can be written in a compact and convenient form
\beq
LHS = Q^{-1} [QD(\nabla\Psi)^i]_{,i} =
[D(\nabla\Psi)^i]_{,i} + D(\nabla\Psi)^i \partial_i \ln Q \, .
\label{eq-LHS}
\eeq
Using expression~(\ref{eq-gradient}), we get the Grad--Shafranov
equation in the following final form:
\begin{eqnarray}
LHS &=& \partial_r \biggl( {{D\Delta}\over{\rho^2}}\, \Psi_r \biggr) +
\partial_\theta \biggl( {D\over{\rho^2}}\, \Psi_\theta \biggr) +
{D\over{\rho^2 Q}}\, \biggl( \Psi_r\Delta\partial_r Q +
\Psi_\theta \partial_\theta Q \biggr) \nonumber \\
&=& RHS \equiv -II'(\Psi) - \delta\Omega \Omega_F'(\Psi) \,
\varpi^2 (\nabla\Psi)^2 \, .
\label{eq-GS-2}
\end{eqnarray}
%-------------------------------------------------------------------
\subsection{Regularity condition at the event horizon}
\label{subsec-EH}
From the Grad--Shafranov equation in the form~(\ref{eq-GS-2})
it is easy to see that, in general, this equation has two types of
singular surfaces. One of them is the so-called {\it light cylinder}
(often called in the literature the velocity-of-light surface or simply
the light surface) defined as a surface where $D=0$. We shall discuss
it in more detail later (see \S~\ref{subsec-LC}).
There is also another singular surface of the Grad--Shafranov equation:
{\it the event horizon} defined as the surface where
\beq
\Delta = 0 = \alpha \, .
\label{eq-EH}
\eeq
This surface will be the main focus of this section.
As can be seen from equation~(\ref{eq-Delta}), the event
horizon is a constant-$r$ surface,
\beq
r(\theta) = r_H = M + \sqrt{M^2-a^2} = {\rm const} \, .
\label{eq-rH}
\eeq
In addition, the frame-dragging frequency $\omega$
defined by equation~(\ref{eq-beta}) is also constant
on the horizon,
\beq
\omega(r=r_H,\theta) = \Omega_H = {a\over{2Mr_H}} = {\rm const} \, .
\label{eq-OmegaH}
\eeq
This constant is what is conventionally called the rotation rate
of the Kerr black hole.
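As a numerical illustration of equations~(\ref{eq-rH}) and (\ref{eq-OmegaH}), the short Python sketch below (ours, in geometric units) recovers the Schwarzschild limit $r_H = 2M$, $\Omega_H = 0$ for $a=0$, and the extremal values $r_H = M$, $\Omega_H = 1/2M$ for $a=M$.

```python
import math

def horizon(M, a):
    """Event-horizon radius r_H = M + sqrt(M^2 - a^2) and black-hole
    rotation rate Omega_H = a / (2 M r_H), geometric units (G = c = 1).
    Requires 0 <= a <= M (sub-extremal Kerr)."""
    rH = M + math.sqrt(M**2 - a**2)
    OmegaH = a / (2.0 * M * rH)
    return rH, OmegaH
```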
Because the horizon is a surface of constant~$r$, one can immediately
see that it is a singular surface of equation~(\ref{eq-GS-2}). This
is because the coefficient in front of the 2nd-order derivative in
the direction normal to this surface (in this case, radial) vanishes,
even though the coefficient in front of the 2nd derivative in the
$\theta$-direction does not.
The fact that the event horizon is just a singular surface of the
Grad--Shafranov equation is extremely important. It means that
one cannot impose an independent boundary condition for the function
$\Psi(r,\theta)$ at the horizon. One can only impose a {\it regularity
condition} there (Beskin~1997; Komissarov~2002b, 2004a; Uzdensky~2004).
Mathematically, this condition means that there should be no
logarithmic terms in the asymptotic expansion of $\Psi(r,\theta)$
near $r=r_H$ (see MT82). Physically, the regularity condition originates
from the requirement that freely-falling observers measure finite electric
and magnetic fields near the horizon (see Thorne~et~al. 1986). Alternatively,
the event horizon regularity condition can be obtained from the
fast-magnetosonic critical condition in the limit in which plasma
density goes to zero and hence the inner fast magnetosonic surface
approaches the horizon (Beskin~1997; Beskin \& Kuznetsova 2000;
Komissarov~2004a).
In the present paper, we will not repeat the rigorous derivation
of this condition (we refer the reader to MT82 or Thorne~et~al. 1986).
Instead, we just note that as a result of the regularity requirement,
one expects both the 1st and 2nd radial derivatives of $\Psi$ to remain
finite at the horizon. Therefore, when applying the Grad--Shafranov
equation~(\ref{eq-GS-2}) at $r=r_H$, one can just simply set $\Delta=0$.
Then, after some algebra, one gets:
\beq
I^2[\Psi_0(\theta)] =
\biggl(\delta\Omega {\varpi\over\rho}\,
{{d\Psi_0}\over{d\theta}} \biggr)^2 +
{\rm const} \, , \qquad r=r_H \, ,
\eeq
where
\beq
\Psi_0(\theta) \equiv \Psi(r=r_H,\theta) \, .
\label{eq-Psi0}
\eeq
In the absence of a finite line-current along the axis $\theta=0$,
i.e., when $I(\theta=0)=0$, the integration constant is zero and
hence
\beq
I = \pm \delta\Omega\, {\varpi\over\rho}\, {d\Psi_0\over{d\theta}}\, ,
\qquad r=r_H \, .
\label{eq-I-EH}
\eeq
As for the choice of sign in this expression, it can be
shown that the correct sign must be plus [remember that
MT82 have minus sign because we define $I(\Psi)$ with an
opposite sign]; this comes from the requirement that
Poynting flux measured by a ZAMO in the vicinity of the
horizon is directed {\it towards} the black hole (e.g.,
Znajek~1977, 1978; BZ77; MT82). Thus, we have
\beq
I[\Psi_0(\theta)]=\delta\Omega\, {\varpi\over\rho}\, {d\Psi_0\over{d\theta}}=
{{2Mr_H\sin\theta}\over{\rho^2}}\, \delta\Omega\, {d\Psi_0\over{d\theta}}\, ,
\qquad r=r_H \, .
\label{eq-EH-bc}
\eeq
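The second equality in equation~(\ref{eq-EH-bc}) uses the fact that on the horizon $\Delta = 0$, so that $\Sigma^2 = (r_H^2 + a^2)^2 = (2Mr_H)^2$, and therefore

```latex
\left. \frac{\varpi}{\rho} \right|_{r=r_H}
 = \frac{\Sigma\sin\theta}{\rho^2}
 = \frac{2Mr_H\, \sin\theta}{\rho^2} \, .
```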
Equation~(\ref{eq-EH-bc}) was first derived by Znajek (1977) and is
frequently referred to as ``Znajek's horizon boundary condition''.
We stress, however, that, because the event horizon is a singular
surface of the Grad--Shafranov equation, one cannot really impose
a boundary condition there; expression~(\ref{eq-EH-bc}) actually
follows from the Grad--Shafranov equation itself under the condition
that the solution be regular at~$r=r_H$.
It is interesting to note that, because not only the 2nd-, but
also the 1st-order radial derivatives of $\Psi$ drop out of the
Grad--Shafranov equation when $\Delta$ is set to zero, this equation
becomes an ordinary (as opposed to a partial) differential equation
at the horizon! This implies that the horizon poloidal magnetic flux
distribution, $\Psi_0(\theta)$, is connected to the magnetosphere
outside the horizon only through the functions $I(\Psi)$ and
$\Omega_F(\Psi)$ and not through any radial derivatives. From
the practical point of view, this fact means that equation~(\ref{eq-EH-bc})
can be viewed as a Dirichlet-type boundary condition that determines
the function~$\Psi_0(\theta)$ once both $I(\Psi)$ and $\Omega_F(\Psi)$
are given. It is important to emphasize that we really have only one
relationship on the horizon--- equation~(\ref{eq-EH-bc}) --- between
three functions [$\Psi_0(\theta)$, $I(\Psi)$, and $\Omega_F(\Psi)$]
and hence one needs to find some other conditions, set somewhere else,
to fix $I(\Psi)$ and $\Omega_F(\Psi)$ if one wants to use~(\ref{eq-EH-bc})
to calculate $\Psi_0(\theta)$. We shall return to this important point
in \S~\ref{subsec-setup}.
%***********************************************************
\section{Disruption of the hole--disk magnetic link by the black hole
rotation}
\label{sec-idea}
The main topic of this paper is a force-free magnetic link
between a Kerr black hole and a thin, infinitely conducting
Keplerian accretion disk around it. Thus, we are primarily
interested in the closed-field configuration depicted
schematically in Fig.~\ref{fig-geometry-closed}. In contrast
to the open-field configuration, in which all the field
lines piercing the event horizon extend to infinity, in the
closed-field configuration, magnetic field lines connect the
black hole to the disk, forming a nested structure of toroidal
flux surfaces. In this section we will examine the conditions
under which this configuration can exist and, in particular,
will discuss the limitations that the rotation of the black hole
imposes on the radial extent of the force-free magnetic link
between the disk and the hole.
First, we would like to point out that a magnetically-linked black
hole--disk system is dramatically different from a magnetically-linked
star--disk system in at least one important aspect. Indeed, let us
examine the system's evolution on the shortest relevant, i.e., rotation,
timescale. In the case where the central object is a highly-conducting
star, such as a neutron star or a young star, it turns out that no
steady state configuration with the topology similar to that presented
in Figure~\ref{fig-geometry-closed} is possible. This is because both
the disk and the star can be regarded (on this short timescale) as
perfect conductors, so that the footpoints of the field lines that
link the two are frozen into their surfaces. Hence, the disk footpoint
of a given field line rotates with its corresponding Keplerian rotation
rate, $\Omega_K(r)$, whereas the footpoint of the same field line on the
star's surface rotates with the stellar angular velocity~$\Omega_*$.
Therefore, each field line connecting the star to the disk [with the
exception of a single line connecting to the disk at the corotation
radius $r_{\rm co}$ where $\Omega(r_{\rm co})=\Omega_*$] is subject
to a continuous twisting. This twisting results in the generation of
toroidal magnetic flux out of the poloidal flux, which tends to inflate
and even open the magnetospheric flux surfaces after only a fraction of
one differential star--disk rotation period (e.g., van~Ballegooijen~1994;
Uzdensky~et~al. 2002; Uzdensky 2002a,b).
On the other hand, in the case of a black hole being the central object
the situation is different. The key difference is that, unlike stars,
black holes do not have a conducting surface. On the contrary, they
are actually effectively quite resistive, in the language of the
Membrane Paradigm (see Znajek~1978; Damour~1978; MacDonald
\& Suen 1985; Thorne~et~al. 1986).
The rather large effective resistivity makes it possible, in
principle, for the field lines frozen into a rotating conducting
disk to slip through the event horizon. This fact makes the quest
for a stationary closed-field configuration in the black-hole
case a reasonable scientific task: it is at least conceivable
that such configurations exist.
In our previous paper (Uzdensky~2004) we studied exactly this question
for the case of a Schwarzschild black hole. We found that a
stationary force-free configuration of the type depicted in
Figure~\ref{fig-geometry-closed} indeed exists in this case.
At the same time, however, there is of course no guarantee
that a similar configuration will exist in the Kerr case.
This is because the nonlinear terms in the Grad--Shafranov
equation that correspond to field-line rotation and toroidal
field pressure are no longer small in the Kerr case, whereas
in the Schwarzschild case these terms, although formally finite,
entered only at the few-per-cent level.
In fact, we can make an even stronger statement: even for a
slowly-rotating Kerr black hole, a force-free configuration
in which magnetic field connects the polar region of the horizon
to arbitrarily large distances on the disk (which is precisely
the geometry depicted in Fig.~\ref{fig-geometry-closed}) does
not exist! We shall now present the basic physical argument
for why this must be the case.
Let us suppose that a force-free configuration of Figure~\ref
{fig-geometry-closed}, where all the field lines attached to
the disk at all radii thread the event horizon, does indeed exist.
First, let us consider the polar region of the black hole, $r=r_H$,
$\theta\rightarrow 0$. Suppose that near the rotation axis the flux
$\Psi_0(\theta)$ behaves as a power law: $\Psi_0\sim\theta^\gamma$
(the most natural behavior corresponding to a constant poloidal field being
$\Psi_0\sim\theta^2$). Then note that in a configuration under consideration,
the field lines threading this polar region connect to the disk at some very
large radius $r_0(\Psi)\gg r_H$. Since the field lines rotate with the
Keplerian angular velocity of their footpoints on the disk, $\Omega_F(\Psi)
\sim r_0^{-3/2}(\Psi) \rightarrow 0$ as $\Psi\rightarrow 0$, one finds that,
for sufficiently small $\Psi$ [and hence sufficiently large $r_0(\Psi)$],
$\Omega_F(\Psi)$ becomes much smaller than the black hole rotation rate
$\Omega_H = a/2r_H$. Now let us look at the event horizon regularity
condition~(\ref{eq-EH-bc}). For the field lines under consideration,
we find that $\sin\theta d\Psi_0/d\theta \sim \theta \theta^{\gamma-1}
\sim \Psi$ and $\delta\Omega = \Omega_F(\Psi) - \Omega_H
\simeq -\Omega_H = {\rm const} \neq 0$ as $\Psi\rightarrow 0$.
Thus,
\beq
I(\Psi) \sim -\Omega_H \Psi \sim -a\Psi,
\qquad {\rm as} \quad \Psi\rightarrow 0 \, ,
\label{eq-I-axis}
\eeq
and, correspondingly,
\beq
II'(\Psi\rightarrow 0) \sim a^2 \Psi \, .
\label{eq-II'-axis}
\eeq
Now, let us look at the force-free balance on the same field lines
but far away from the black hole, at radial distances of the order
of $r\sim r_0 \gg r_H$. At these large distances $\alpha\approx 1$
and $\delta\Omega \varpi \ll c$, so that the electric terms in the
Grad--Shafranov equation are small and the coefficient $D$ is close
to~1. Then the LHS of the Grad--Shafranov equation~(\ref{eq-GS-2})
is essentially a linear diffusion-like operator and can be estimated
as being of the order of $\Psi/r^2$. We see that both the LHS and
the RHS given by equation~(\ref{eq-GS-2}) scale linearly with $\Psi$
but the LHS has an additional factor $\sim r^{-2}$. Thus we conclude
that at sufficiently large distances this term becomes negligible when
compared with the $II'(\Psi)$-term~(\ref{eq-II'-axis}). In other words,
the toroidal field, produced in the polar region of the horizon by
the black hole dragging the field lines along, turns out to be too
strong to be confined by the poloidal field tension at large distances.
In fact, this argument suggests that the maximal radial extent $r_{\rm max}$
of the region on the disk connected to the polar region of a Kerr black
hole should scale as $r_{\rm max}\sim r_H/a$ in the limit $a\rightarrow 0$.
One should note that, in the Schwarzschild limit $a\rightarrow 0$, this
maximal distance goes to infinity and hence a fully-closed force-free
configuration can exist at arbitrarily large distances, in agreement
with the conclusions of our paper~I. [Also note that if one tries to
perform a similar analysis for the Schwarzschild case, then from the
horizon regularity condition one finds that $I(\Psi)=\Omega_K(\Psi)\sin\theta
(d\Psi/d\theta)|_{r=r_H} \sim \Psi \cdot r_0^{-3/2}(\Psi)$. Then, assuming
that $r_0(\Psi)$ is a power law at large distances, $r\sim r_0(\Psi)$,
the toroidal-field pressure term can be estimated as $II'(\Psi)\sim
\Psi\cdot r_0^{-3}(\Psi)$. We can thus see that at large distances
this term becomes negligible compared with the LHS ($\sim \Psi r^{-2}$),
so no limitation on the radial extent of the magnetic link can be derived.]
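For completeness, the order-of-magnitude balance behind this scaling
can be written out in one line (in units $c=1$, dropping numerical
factors of order unity): equating the poloidal tension term
$\sim\Psi/r_0^2$ on the LHS of the Grad--Shafranov equation to the
toroidal-pressure term gives
\beq
{\Psi\over r_0^2} \;\sim\; II'(\Psi) \;\sim\; \Omega_H^2\,\Psi
\quad\Longrightarrow\quad
r_{\rm max}\;\sim\;\Omega_H^{-1}\;\sim\;{r_H\over a}\, ,
\eeq
i.e., the maximal radial extent of the link is essentially the
light-cylinder radius associated with the black-hole rotation
rate $\Omega_H=a/2r_H$.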
We also would like to remark that this finding is not really surprising
in view of some important properties of axisymmetric force-free magnetospheres,
known from the general theory of the (non-relativistic) Grad--Shafranov
equation (see, e.g., van~Ballegooijen~1994; Uzdensky~2002b). This analogy
is so important that we would like to make a digression to describe it here.
Let us consider a closed simply-connected (i.e., without magnetic islands)
axisymmetric configuration like the one shown in Figure~\ref
{fig-geometry-closed}. Then start to increase gradually the overall magnitude
(which we shall call $\lambda$) of the nonlinear source term $II'(\Psi)$
--- the so-called generating function --- starting from zero. As we are
doing this, let's keep the functional shape of $I(\Psi)$, as well as the
boundary conditions for $\Psi$, fixed. Then one finds the following
interesting behavior: there is a certain maximal value $\lambda_{\rm max}$
(whose exact value depends on the details of the functional shape of
$I(\Psi)$ and the boundary conditions), such that one finds no solutions
of the Grad--Shafranov equation for $\lambda > \lambda_{\rm max}$.
For $\lambda<\lambda_{\rm max}$, one actually finds {\it two} solutions
and these two solutions correspond to two different values of the field-line
twist angles $\Delta\Phi(\Psi)$. In the limit $\lambda\ll \lambda_{\rm max}$
the two solutions are remarkably different. One of them corresponds to
$\Delta\Phi\sim\lambda/\lambda_{\rm max} \ll 1$; it is very close to the
purely potential closed-field configuration and can be obtained as a
perturbation from the potential solution. The other solution corresponds
to some finite distribution $\Delta\Phi_c(\Psi)$, in general of order 1
radian, and is characterized by very strongly inflated poloidal field
lines. This configuration in fact approaches the open-field geometry
in the limit $\lambda \rightarrow 0$.
Now, as one increases $\lambda$, the difference between the two
solutions decreases and they in fact merge into one single solution
at $\lambda=\lambda_{\rm max}$. The corresponding configuration shows
some modest inflation of the poloidal field and corresponds to the
field line twist angles that are finite (i.e., of order 1 radian) but
less than $\Delta\Phi_c(\Psi)$. Most importantly, as we mentioned above,
no solutions with the required simple topology (i.e., without magnetic
islands) exist for $\lambda>\lambda_{\rm max}$.
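This fold-type behavior (two solutions that merge and disappear at a
critical source strength) is familiar from the simplest nonlinear
boundary-value problems. A convenient toy analog, not tied to our
magnetospheric problem, is the classical Bratu problem
$u''+\lambda e^u=0$, $u(0)=u(1)=0$, which is known to have two
solutions for $\lambda<\lambda_{\rm max}\approx 3.51$ and none above.
The sketch below (plain Python, shooting method) counts the solutions
by locating sign changes of $u(1)$ as the initial slope is swept:

```python
import math

def shoot(lam, s, n=800):
    # Integrate the Bratu equation u'' = -lam*exp(u) on [0, 1]
    # with u(0) = 0, u'(0) = s, using classical RK4; return u(1).
    h = 1.0 / n

    def f(u, v):
        return v, -lam * math.exp(u)

    u, v = 0.0, s
    for _ in range(n):
        k1u, k1v = f(u, v)
        k2u, k2v = f(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = f(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = f(u + h * k3u, v + h * k3v)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return u

def count_solutions(lam, s_max=20.0, ns=200):
    # Count solutions of the boundary-value problem u(0) = u(1) = 0
    # by counting sign changes of u(1) as a function of the slope s.
    vals = [shoot(lam, s_max * i / ns) for i in range(ns + 1)]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
```

For $\lambda=1$ this yields two solutions (the analogs of the weakly
and strongly inflated branches above); for $\lambda=4>\lambda_{\rm max}$
it yields none.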
Clearly, this is exactly what happens in the Kerr black hole case.
Indeed, in this case the regularity condition~(\ref{eq-EH-bc})
requires that the generating function $II'(\Psi)$ be of the order
of $a^2\Psi$ for small $\Psi$. In a certain sense, the spin parameter
$a^2$ effectively plays the role of the parameter $\lambda$ from our
example above. If one considers a configuration in which the magnetic
link extends to a radius $r_{\rm max}$ on the disk and fixes the disk
boundary conditions, it turns out that there is a critical maximum value
$a_{\rm max}^2$ beyond which no solution can be found. From the argument
presented in the beginning of this section we expect that $a_{\rm max}$
scale inversely with $r_{\rm max}$; in particular, for an infinitely
extended link ($r_{\max}\rightarrow \infty$), one finds $a_{\rm max}
\rightarrow 0$ and no solution is found for any~$a>0$!
To sum up, even though the field lines can, to a certain degree,
slip through the horizon because the latter is essentially resistive,
in some situations the horizon is not resistive enough to ensure
the existence of a steady force-free configuration! Indeed, the
field lines are ``dragged'' by the rotating black hole to such a
degree that, in order for them to slip through the horizon steadily,
they must have a certain rather large toroidal component. When, for
fixed disk boundary conditions, the black-hole spin parameter $a$
is increased beyond a certain limit $a_{\rm max}(r_{\rm max})$, this
toroidal field becomes so large that the poloidal field tension is
no longer able to contain its pressure at large distances.
Finally, the argument put forward in this section shows that
it is not only interesting but in fact necessary to consider
hybrid configurations in which at least a portion of the field
lines are open and the magnetic disk--hole coupling plays a
more limited role.
%***********************************************************
\section{Numerical Simulations}
\label{sec-numerical}
In order to verify the proposition put forward in the preceding
section and to study the magnetically-coupled disk--hole magnetosphere,
we have performed a series of numerical calculations. We obtained
the solutions of the force-free Grad--Shafranov equation corresponding
to various values of two parameters: the black-hole spin parameter~$a$
and the radial extent $R_s$ of the magnetic link. In this section
we shall describe the actual computational set-up of the problem,
including the boundary conditions and the numerical procedure; we
shall also present the main results of our calculations.
%-------------------------------------------------------------------
\subsection{Problem formulation and boundary conditions}
\label{subsec-setup}
We start by describing the basic problem set-up and the boundary
conditions.
The simplest axisymmetric closed-field configuration one could
consider is that shown in Figure~\ref{fig-geometry-closed}.
In this configuration, all magnetic field lines connect the disk
and the hole. Furthermore, the entire event horizon and the entire
disk surface participate in this magnetic linkage; in particular,
the field lines threading the horizon very close to the axis
$\theta=0$ are anchored at some very large radial distances
in the disk:
\beq
\Psi_0(\theta\rightarrow 0) \equiv
\Psi(r=r_H,\theta\rightarrow 0) =
\Psi_{\rm disk}(r\rightarrow \infty) \equiv
\Psi(r\rightarrow \infty, \theta=\pi/2)
\eeq
However, as follows from the arguments presented in \S~\ref{sec-idea},
a steady-state force-free configuration of this type can only exist
in the case of a Schwarzschild black hole; in the case of a Kerr black
hole, even a slowly-rotating one ($a\ll M$), such a configuration is
not possible. And indeed, in complete agreement with this point of
view, in our simulations, we were not able to obtain a convergent
solution even for a Kerr black hole with the spin parameter as
small as $a=0.05$.
Also, in \S~\ref{sec-idea} we proposed a conjecture that, for a given value of $a$,
the magnetic link between the polar region of the black hole and the disk
cannot, generically, extend to distances on the disk larger than a
certain $r_{\rm max}(a)$. The exact value of $r_{\rm max}$ depends
on the details of the problem, such as the exact flux distribution
$\Psi_d(r)$ on the surface of the disk, etc. However, we proposed
that $r_{\rm max}$ is a monotonically decreasing function of~$a$,
and, more specifically, in the limit $a\rightarrow 0$, $r_{\rm max}$
is inversely proportional to~$a$. For a finite ratio $a/M=O(1)$,
we expect that the magnetic link can only be sustained over a
finite range of radii not much larger than the radius of the
Innermost Stable Circular Orbit $r_{\rm ISCO}$.
In order to test these propositions, we set up a series of
numerical calculations aimed at solving the Grad--Shafranov
equation for various values of two parameters: the black-hole
spin parameter $a$ and the radial extent of the magnetic coupling
on the disk surface~$R_s$.
Correspondingly, in order to investigate the dependence
on the radial extent of magnetic coupling, we modified
the basic geometry of the configuration by allowing for
two topologically-distinct regions: a region of closed field
lines connecting the black hole to the inner part of the disk
($r<R_s$), and a region of open field lines extending from
the outer part of the disk all the way to infinity.%
\footnote
{In general, open field lines originating from the disk
may carry a magnetocentrifugal wind (Blandford \& Payne 1982)
and the resulting mass-loading may make a full-MHD treatment
necessary for these field lines. Here, however, we shall ignore
this complication and will assume the force-free approach to be
valid in this part of the magnetosphere as well.}
This configuration is shown in Figure~\ref{fig-geometry-kerr}.
We count the poloidal flux on the disk from the radial
infinity inward, so that $\Psi_d(r=\infty)=0$, and
$\Psi_d(r=r_H)=\Psi_{\rm tot}$. The disk flux distribution
may still be the same as in the configuration of
Figure~\ref{fig-geometry-closed}; however, now there is a critical
field line $\Psi_s\equiv \Psi_d(R_s)<\Psi_{\rm tot}$ that
acts as a separatrix between open field lines ($\Psi<\Psi_s$)
and closed field lines ($\Psi_s<\Psi<\Psi_{\rm tot}$)
connecting to the black hole. Correspondingly, the poloidal
flux on the black hole surface varies from $\Psi=\Psi_s$
at the pole $\theta=0$ to $\Psi=\Psi_{\rm tot}$ at the
equator $\theta=\pi/2$.
It is worth noting that a more general configuration
would also have some open field lines connecting the
polar region of the black hole to infinity. In fact,
such a configuration would be more physically interesting
because these open field lines would enable an additional
extraction of the black hole's rotational energy via the
Blandford--Znajek mechanism (BZ77). We shall call this
a hybrid configuration because the disk--hole magnetic
coupling and the Blandford--Znajek mechanism operate
simultaneously. In the present paper, however, we shall
assume that there are no such hole--infinity open field lines.
We make this choice not because of any physical reasons
but simply because of technical convenience: we want to
isolate the effect of disk--hole coupling. In addition,
as we discuss in more detail in \S~\ref{sec-conclusions},
a proper treatment of these open field lines would require
a more complicated numerical procedure than that needed
for the field lines that connect to the conducting disk.
In addition to boundary conditions, one has to specify the
angular velocity $\Omega_F(\Psi)$ of the magnetic field lines.
Since we assume that the disk is a perfect conductor, and since
in our field configuration all the field lines go through the
disk, this angular velocity is equal to that of the matter in
the disk.
Now let us consider the open field lines $\Psi< \Psi_s$.
In principle, since they are attached to a rotating Keplerian
disk, these lines rotate differentially with the angular velocity
$\Omega_F(\Psi)=\Omega_K[r_0(\Psi)]$. Correspondingly, just as
the closed field lines going into the black hole or the open
field lines in a pulsar magnetosphere, they have to cross a
light cylinder
and therefore have to carry poloidal current $I(\Psi)$, whose
value must be consistent with, and indeed determined by, the
regularity condition at the light cylinder. Because this outer
light cylinder is very distinct from the inner light cylinder
that is crossed by the closed field lines entering the event horizon,
we in general would expect the function $I(\Psi<\Psi_s)$ be very
different from the function $I(\Psi>\Psi_s)$. In particular, we
would expect a discontinuous behavior, $I_s^{\rm open}\equiv
\lim\limits_{\Psi\rightarrow\Psi_s} I(\Psi<\Psi_s) \neq
I_s^{\rm closed}\equiv \lim\limits_{\Psi\rightarrow\Psi_s} I(\Psi>\Psi_s)$,
even though the field-line angular velocity $\Omega_F=\Omega_K[r_0(\Psi)]$
remains perfectly continuous and smooth at $\Psi=\Psi_s$.
Dealing with such a discontinuity in $I(\Psi)$ across the separatrix
$\Psi=\Psi_s$ presents certain numerical difficulties, especially
taking into account that the location of the separatrix $r_s(\theta)=
r(\Psi=\Psi_s,\theta)$ is not known a priori. Therefore, in the present
study we decided to simplify the problem by introducing the following
modifications: we require that the outer part of the disk, $r>R_s$,
be nonrotating: $\Omega_F(\Psi<\Psi_s)\equiv 0$.
Correspondingly, the open field lines do not cross an outer light
cylinder, and so $I(\Psi<\Psi_s)\equiv 0$. In other words,
we just take the open-field outer part of the disk magnetosphere
to be potential. Next, in order to avoid the numerically-challenging
discontinuities in $\Omega_F(\Psi)$ and $I(\Psi)$ at $\Psi=\Psi_s$,
we slightly modify the disk rotation law just inside of $R_s$
by taking $\Omega_F$ smoothly to zero over a small (compared
with the total amount of closed flux) poloidal flux range. In
particular, we used the following prescription:
\begin{eqnarray}
\Omega_F(\Psi) &=& 0\, , \qquad \Psi<\Psi_s \, , \nonumber \\
\Omega_F(\Psi) &=& \Omega_K[r_0(\Psi)] \cdot
\tanh^2 \bigl({{\Psi-\Psi_s}\over{\Delta\Psi}}\bigr)\, , \qquad
\Psi>\Psi_s \, ,
\label{eq-OmegaofPsi}
\end{eqnarray}
where $\Delta\Psi=0.2(\Psi_{\rm tot}-\Psi_s)$ and
(see equation [5.72] of Krolik~1999, p.~117)
\begin{equation}
\Omega_K(r) = {\sqrt{M}\over{r^{3/2}+a\sqrt{M}}} \, .
\label{eq-Keplerian}
\end{equation}
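For concreteness, this rotation-law prescription is easy to tabulate
numerically. The sketch below is purely illustrative (plain Python):
the smoothing factor is taken to be $\tanh^2$, which rises smoothly
from 0 at $\Psi=\Psi_s$ to 1 deep inside the closed flux, and the
footpoint map $r_0(\Psi)$ is the one implied by the simple power-law
disk flux distribution, $r_0=r_{\rm in}\Psi_{\rm tot}/\Psi$:

```python
import math

def omega_K(r, M=1.0, a=0.0):
    # Keplerian angular velocity in Boyer-Lindquist coordinates,
    # Omega_K = sqrt(M) / (r^{3/2} + a*sqrt(M))   (G = c = 1).
    return math.sqrt(M) / (r**1.5 + a * math.sqrt(M))

def omega_F(psi, psi_s, psi_tot, r_in, M=1.0, a=0.0):
    # Field-line angular velocity: zero on the open flux (psi <= psi_s),
    # smoothly joined to the Keplerian rate of the disk footpoint on
    # the closed flux.  The footpoint radius r_0(psi) follows from the
    # illustrative disk flux law psi_d(r) = psi_tot * (r_in / r).
    if psi <= psi_s:
        return 0.0
    dpsi = 0.2 * (psi_tot - psi_s)      # smoothing width Delta Psi
    r0 = r_in * psi_tot / psi           # invert psi_d(r)
    return omega_K(r0, M, a) * math.tanh((psi - psi_s) / dpsi) ** 2
```

For example, with `psi_s = 0.3`, `psi_tot = 1`, `r_in = 6` (all values
hypothetical), the open flux is non-rotating, while deep inside the
closed region ($\Psi\rightarrow\Psi_{\rm tot}$) the field-line angular
velocity approaches the Keplerian rate at the inner disk edge.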
These modifications enabled us to focus on examining how black hole rotation (i.e., the spin parameter $a$) limits the radial extent $R_s$ of the force-free magnetic coupling, while at the same time avoiding certain numerical difficulties resulting from the discontinuous behavior of the poloidal current $I(\Psi)$. We believe that these modifications do not lead to any significant qualitative change in our conclusions, especially in the case of small $a$ and large $R_s$. Nevertheless, we intend in the future to enhance our numerical procedure so that it becomes fully capable of treating this discontinuity.
Let us now describe the computational domain and the boundary conditions.
First, because of the assumed axial symmetry and the symmetry with respect to the equatorial plane, we performed our computations only in one quadrant, described by $\theta\in[0,\pi/2]$ and $r\in[r_H,\infty]$. Thus, we have four natural boundaries of the domain: the axis $\theta=0$, the infinity $r=\infty$, the equator $\theta=\pi/2$, and the horizon $r=r_H$. Of these, the axis and the equator require boundary conditions for $\Psi$, whereas the horizon and the infinity are actually regular singular surfaces and so we only impose regularity conditions on them.
The boundary condition on the rotation axis is particularly simple:
\beq
\Psi(r,\theta=0) = \Psi_s = {\rm const} \, .
\label{eq-bc-axis}
\eeq
The equatorial boundary, $\theta=\pi/2$, actually consists of two parts: the disk (considered to be infinitesimally thin) and the plunging region between the disk and the black hole. The border between them, i.e., the inner edge of the disk, is assumed to be very sharp and to lie at the ISCO: $r_{\rm in}=r_{\rm ISCO}(a)$; $r_{\rm ISCO}$ varies between $r_{\rm ISCO}=6M$ for a Schwarzschild black hole ($a=0$) and $r_{\rm ISCO}=M$ for a maximally rotating Kerr black hole ($a\rightarrow 1$).
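The ISCO radius quoted here is given by the standard Bardeen, Press,
\& Teukolsky (1972) expression; a direct transcription (prograde
orbits, units $G=c=M=1$; the function name is ours) reads:

```python
def r_isco(a):
    # Innermost stable circular orbit radius for a prograde orbit
    # around a Kerr black hole with spin parameter a (G = c = M = 1),
    # after Bardeen, Press, & Teukolsky (1972).
    z1 = 1.0 + (1.0 - a * a) ** (1.0 / 3.0) * (
        (1.0 + a) ** (1.0 / 3.0) + (1.0 - a) ** (1.0 / 3.0))
    z2 = (3.0 * a * a + z1 * z1) ** 0.5
    return 3.0 + z2 - ((3.0 - z1) * (3.0 + z1 + 2.0 * z2)) ** 0.5
```

This reproduces the two limits quoted in the text: $r_{\rm ISCO}=6M$
for $a=0$ and $r_{\rm ISCO}=M$ for $a\rightarrow 1$.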
Let us first discuss the boundary conditions at the disk surface, $r>r_{\rm in}$. Depending on the resistive properties of the disk, and on the timescale under consideration, one can choose between two possibilities, both of which appear to be physically sensible:
1) If one is interested in time-scales much longer than the
characteristic rotation timescale but much shorter than both the
accretion and the magnetic diffusion timescales, then it is reasonable
to regard the poloidal flux distribution across the disk as a fixed
prescribed function, which must be specified explicitly. Thus, in this
case one adopts a Dirichlet-type disk boundary condition:
\beq
\Psi(r>r_{\rm in},\theta=\pi/2) = \Psi_d(r) \, .
\label{eq-bc-disk-Dirichlet}
\eeq
The function $\Psi_d(r)$ is arbitrary; the only requirement that must
be imposed in accordance with the discussion above is the convention
that $\Psi_d(r=\infty)=0$ and $\Psi_d(r_{\rm in})=\Psi_{\rm tot}$.
Since we do not have any good physical reasons to favor one choice of
$\Psi_d(r)$ over any other, in this paper we just choose it arbitrarily
to be a power law with the exponent equal to $-1$:
\beq
\Psi_d(r) = \Psi_{\rm tot}\, \biggl({{r_{\rm in}}\over{r}}\biggr) \, .
\label{eq-Psi_d}
\eeq
2) If one looks for a configuration that is stationary on timescales
much longer than the effective magnetic diffusion time (while perhaps
still much shorter than the accretion time scale), then one should
regard the disk as effectively very resistive for the purposes of
specifying the disk boundary condition. This situation may arise in
the case of a turbulent disk; for such a disk, the effective magnetic
diffusivity $\eta$ can probably be estimated as
$\eta_{\rm turb}=\alpha_{SS} c_s h$, in the spirit of the
$\alpha$-prescription for the effective viscosity in the SS73 model.
Then, the characteristic radial velocity of the magnetic footpoints
across the disk is roughly $v_{\rm fp} \sim \alpha_{SS} c_s (B_r/B_z)_d$.
For a ratio $(B_r/B_z)_d$ of order 1, this velocity is much greater
(by a factor of $r/h$) than the characteristic accretion velocity.
Therefore, the only way one can have a steady-state configuration on
the diffusion time-scale (which, according to the above estimate, is
of the order of the disk sound-crossing time $r/c_s$) is for the
poloidal field to be nearly perpendicular to the disk, $B_r \ll B_z$.
This requirement translates into a simple Neumann boundary condition
for $\Psi(r,\theta)$ at the disk surface:
\beq
{{\partial\Psi}\over{\partial\theta}}
\biggl(r,\theta={\pi\over 2}\biggr) = 0 \, .
\label{eq-bc-disk-Neumann}
\eeq
In our present paper, however, we chose the Dirichlet-type boundary
condition represented by equations
(\ref{eq-bc-disk-Dirichlet})--(\ref{eq-Psi_d}) and set
$\Psi_{\rm tot}=1$ throughout the paper.
In the plunging region ($r_H\leq r\leq r_{\rm in}$, $\theta=\pi/2$) we
have chosen
\beq
\Psi(r_H\leq r\leq r_{\rm in},\theta=\pi/2)=\Psi_{\rm tot}
\equiv \Psi_d(r_{\rm in}) = {\rm const} \, .
\label{eq-bc-plunging}
\eeq
This choice appears to be physically appropriate for an accreting (and
not just rotating) disk. The reason for this is that the matter in
this region falls rapidly onto the black hole and thereby stretches
the magnetic loops in the radial as well as the azimuthal directions,
greatly reducing the strength of the vertical field component. The
horizontal magnetic field then reverses across the plunging region,
which is thus described as an infinitesimally thin non-force-free
current sheet lying along the equator. In essence, this situation is
directly analogous to the case of a force-free pulsar magnetosphere,
where all the field lines crossing the outer light cylinder have to be
open and extend out to infinity, thus forming an equatorial current
sheet (Beskin 2003; van Putten \& Levinson 2003). In the black-hole
case, one could still consider an alternative picture of the plunging
region with some field lines crossing the equator inside $r_{\rm in}$.
However, in this case one would still have to have a non-FFDE
equatorial current sheet inside the inner light cylinder, as was shown
by Komissarov (2002b, 2004a).
Finally, as we have discussed in \S~\ref{subsec-EH}, the event horizon
is a regular singular surface of the Grad--Shafranov equation.
Correspondingly, one cannot and need not impose an additional
arbitrary boundary condition here (e.g., Beskin \& Kuznetsova 2000;
Komissarov 2002b, 2004a). Instead, one imposes the regularity
condition~(\ref{eq-EH-bc}); this condition has the form of an ordinary
differential equation (ODE) that determines the function
$\Psi_0(\theta)$ provided that both $\Omega_F(\Psi)$ and $I(\Psi)$ are
given. Thus, from the procedural point of view, this condition can be
used as a Dirichlet boundary condition on the horizon. It is important
to acknowledge, however, that one does not have the freedom of
specifying an arbitrary function $\Psi_0(\theta)$ and then studying
how the information contained in this function propagates outward and
affects the solution away from the horizon. The function
$\Psi_0(\theta)$ is uniquely determined once $\Omega_F(\Psi)$ and
$I(\Psi)$ are given, and thus there is no causality violation here.
Similarly, the spatial infinity $r=\infty$ is also a regular singular
surface of the Grad--Shafranov equation and thus can also be described
by a regularity condition. In this sense, the horizon and the infinity
are equivalent (e.g., Punsly \& Coroniti 1990). Note that, in our
particular problem set-up, the situation at infinity is greatly
simplified because we have set $\Omega_F(\Psi)=0$ on the open field
lines extending from the disk. Because of this, there is no outer
light cylinder for these lines to cross, and thus one can also set
$I(\Psi<\Psi_s)=0$. Then, at very large distances ($r\gg r_H$), the
Grad--Shafranov equation~(\ref{eq-GS-2}) becomes a very simple linear
equation: $\Psi_{rr}+r^{-2}\sin\theta\,
\partial_\theta(\Psi_\theta/\sin\theta) = 0$, and the asymptotic
solution that corresponds to the open-field geometry with finite
magnetic flux is just
\beq
\Psi(r=\infty,\theta)=\Psi_s \, .
\label{eq-bc-infty}
\eeq
%-------------------------------------------------------------------
\subsection{Light-cylinder regularity condition}
\label{subsec-LC}
At this point the problem is almost completely determined. The only
thing we still have to specify is the poloidal current $I(\Psi)$.
Unlike $\Omega_F(\Psi)$, which was determined from the frozen-in
condition on the disk surface, the function $I(\Psi)$ cannot be
explicitly prescribed as an arbitrary function on any given surface.
Instead, it must be somehow determined self-consistently together with
the solution $\Psi(r,\theta)$ itself. This means that there must be
one more condition that we have not yet used. And indeed, this
additional condition is readily found --- it is the (inner)
light-cylinder regularity condition. Let us look at it more closely.
As can be easily seen from the Grad--Shafranov
equation~(\ref{eq-GS-2}), the light cylinder, defined as the surface
where
\beq
D=0 \quad \Longleftrightarrow \quad
\varpi = \varpi_{\rm LC} = {{\alpha c}\over{|\delta\Omega|}} \, ,
\label{eq-LC}
\eeq
is a singular surface, because the coefficients in front of both the
$r$- and $\theta$- second-order derivatives of $\Psi$ vanish there.
Physically speaking, the light cylinder is the surface where the
locally-measured rotational velocity of the magnetic field lines with
respect to the ZAMOs is equal to the speed of light,
$v_{\rm B,\phi}=c$, and where $E=B_{\rm pol}$ in the ZAMO frame. In
general relativity there are two light cylinders, the inner one and
the outer one. The outer light cylinder is just a direct analog of the
pulsar light cylinder; it is crossed by rotating field lines that are
open and extend to infinity. In our problem, we are interested in the
closed field lines, i.e., those reaching the event horizon. These
field lines cross the so-called inner light cylinder, whose existence
is a purely general-relativistic effect, first noticed by Znajek
(1977) and by BZ77.
Because the inner light cylinder is a singular surface of
equation~(\ref{eq-GS-2}), in general this equation admits solutions
that are not continuous or continuously differentiable at the light
cylinder. Such solutions, while admissible mathematically, are not
physically possible. Thus, we supplement our mathematical problem by
an additional physical requirement that the solution be continuous and
smooth across the light cylinder surface. In particular, this means
that the 1st and 2nd derivatives of $\Psi$ must be finite there.
Correspondingly, one can just drop all the terms proportional to $D$
when applying equation~(\ref{eq-GS-2}) at the light cylinder and keep
only the terms involving the derivatives of $D$. The result can be
formulated as an expression that determines the function $I(\Psi)$,
namely:
\beq
- II'(\Psi)= \Psi_r\, (\partial_r D)\big|_{\rm LC}
+ {1\over\Delta}\, \Psi_\theta\, (\partial_\theta D)\big|_{\rm LC}
+ \Omega_F'\, \delta\Omega\, {{\varpi^2}\over{c^2}}\,
(\nabla\Psi)^2 \big|_{\rm LC} \, ,
\label{eq-LC-regularity}
\eeq
where $\Psi$, $r$, and $\theta$ are taken at the light cylinder:
\beq
\Psi=\Psi_{\rm LC}(\theta) = \Psi[r_{\rm LC}(\theta),\theta] \, ,
\label{eq-Psi_LC}
\eeq
and the function $r_{\rm LC}(\theta)$ --- the shape of the light
cylinder surface --- is determined implicitly by
equation~(\ref{eq-LC}). This approach was first used successfully at
the outer light cylinder by Contopoulos et al. (1999) in the context
of pulsar magnetospheres. In the black hole problem, it was first used
by Uzdensky (2004) for the Schwarzschild case.
Let us now discuss how one can use the light-cylinder regularity
condition~(\ref{eq-LC-regularity}) to determine $I(\Psi)$ in practice.
Conceptually, one can think of this condition as follows. Suppose one
starts by fixing all the other boundary and regularity conditions in
the problem [including the choice of $\Omega_F(\Psi)$]. Then, for an
arbitrarily chosen function $I(\Psi)$, one can regard the
condition~(\ref{eq-LC-regularity}) as a mixed-type, Dirichlet--Neumann
boundary condition because it can be viewed as a quadratic algebraic
equation for, say, the first radial derivative. Thus, if $I(\Psi)$ is
given, one can express $\Psi_r|_{\rm LC}$ in terms of $\Psi_{\rm LC}$
and $\Psi_\theta|_{\rm LC}$. Next, one applies this condition
separately on each side of the light cylinder and gets a complete,
well-defined problem in each of the two regions separated by the light
cylinder. Then, one can obtain a solution in each of these regions.
Because of the use of the regularity
condition~(\ref{eq-LC-regularity}), each of the two solutions is going
to be regular near the light cylinder. In general, however, these
solutions are not going to match each other at $r=r_{\rm LC}(\theta)$,
and the mismatch $\Delta\Psi_{\rm LC}(\theta)$ will depend on the
original choice of the function $I(\Psi)$. This observation suggests a
method for selecting a unique function $I(\Psi)$: one can devise a
procedure in which one iterates with respect to $I(\Psi)$ until
$\Delta\Psi_{\rm LC}$ becomes zero. The corresponding function
$I(\Psi)$ is then declared the correct one: only with this choice of
$I(\Psi)$ does the solution $\Psi(r,\theta)$ pass smoothly through the
light cylinder.
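Reduced to its bare bones, this iterate-until-the-mismatch-vanishes
logic is just root finding on the mismatch. A deliberately trivial
scalar toy (two analytic ``half-solutions'' meeting at a matching
point; in the real problem the unknown is the whole function
$I(\Psi)$ and the mismatch is a function $\Delta\Psi_{\rm LC}(\theta)$)
might look like:

```python
import math

def mismatch(c):
    # Toy mismatch: "left" solution y' = c*y, y(0) = 1, evaluated at
    # the matching point x = 1/2, minus the "right" solution y' = -y,
    # y(1) = 2, integrated back to x = 1/2.
    y_left = math.exp(0.5 * c)        # = e^{c/2}
    y_right = 2.0 * math.exp(0.5)     # = 2 e^{1/2}
    return y_left - y_right

def solve_by_secant(f, c0, c1, tol=1e-10, itmax=100):
    # Secant iteration on the scalar mismatch f(c): iterate until the
    # two half-solutions match, i.e., until f(c) = 0.
    f0, f1 = f(c0), f(c1)
    for _ in range(itmax):
        c2 = c1 - f1 * (c1 - c0) / (f1 - f0)
        if abs(c2 - c1) < tol:
            return c2
        c0, f0 = c1, f1
        c1, f1 = c2, f(c2)
    return c1
```

Here the exact matching value is $c = 1 + 2\ln 2 \approx 2.386$; in
the actual computation the scalar $c$ is replaced by the tabulated
function $II'(\Psi)$ and the scalar mismatch by
$\Delta\Psi_{\rm LC}(\theta)$.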
The above method for determining $I(\Psi)$ is conceptually illuminating and can be easily implemented in simple cases. For example, in the case of a [*uniformly-rotating*]{} pulsar magnetosphere, two important simplifications take place. First, the location of the light cylinder is known a priori, $r_{\rm LC}(\theta)=
c/\Omega=\rm const$, and hence one can choose a computational grid that is most suitable for dealing with the light cylinder (e.g., cylindrical polar coordinates with some gridpoints lying on the cylinder). Second, because $\Omega_F=\rm const$, the terms quadratic in the derivatives of $\Psi$ disappear, and the task of resolving equation (\[eq-LC-regularity\]) with respect to the derivative normal to the light cylinder becomes trivial. These simplifications make the procedure described above very practical and it was in fact used successfully by Contopoulos et al. (1999) (and repeated later by Ogura & Kojima 2003) to obtain a unique solution for an axisymmetric pulsar magnetosphere that was smooth across the outer light cylinder.
In the problem considered in this paper, however, the situation is much more complicated. In particular, the light cylinder’s position and shape are not known a priori; instead, they need to be determined self-consistently as part of the solution. Also, equation (\[eq-LC-regularity\]) is, in general, quadratic with respect to $\partial_r\Psi$, and hence one has to deal with the problem of the existence of its solutions and with the task of selecting only one of them. Because of this overall complexity, we decided against using this procedure in our calculations. Instead, we chose a much simpler and more straightforward method: we used equation (\[eq-LC-regularity\]) to determine $I(\Psi)$ \[or, rather, the combination $II'(\Psi)$ that is actually needed for further computations\] directly, by explicitly interpolating all the terms on the right-hand side of equation (\[eq-LC-regularity\]). We will describe this in more detail in the next section.
Numerical procedure {#subsec-procedure}
-------------------
We performed our calculations in the domain $\{r\in[r_H,\infty],
\theta\in[0,\pi/2]\}$ on a grid that was uniform in $\theta$ and in the variable $x\equiv \sqrt{r_H/r}$ (which enabled us to extend the computational domain to infinity). The highest resolution used was 60 gridzones in the $\theta$-direction and 200 gridzones in the radial ($x$) direction. To solve the elliptic Grad–Shafranov equation (\[eq-GS-2\]), we employed a relaxation procedure similar to the one employed by Uzdensky et al. (2002). In this procedure, we introduced an artificial time variable $t$ and evolved the flux function according to the parabolic equation $$\frac{\partial\Psi}{\partial t} = \pm\, f(r,\theta)\,({\rm LHS}-{\rm RHS})\,, \label{eq-relaxation}$$ where $\rm LHS$ and $\rm RHS$ are the left- and right-hand sides of the Grad–Shafranov equation (\[eq-GS-2\]), respectively, and the factor $f(r,\theta)$ was an artificial multiplier introduced in order to accelerate convergence in regions where the diffusion coefficients in the $x$ and $\theta$ directions are small (e.g., very far away or very close to the horizon). The sign in front of $f(r,\theta)$ was chosen according to the sign of the diffusion coefficient in equation (\[eq-GS-2\]): it was plus outside the light cylinder (where $D>0$) and minus inside (where $D<0$). It is clear that any steady-state configuration achieved as a result of this evolution is a solution of the Grad–Shafranov equation (\[eq-GS-2\]).
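As a schematic illustration of this pseudo-time relaxation idea — a toy one-dimensional analogue, not the solver used in the paper — one may relax a simple boundary-value problem $u''(x)+\sin x = 0$, $u(0)=u(\pi)=0$ (exact solution $u=\sin x$); the roles of ${\rm LHS}-{\rm RHS}$ and of the acceleration factor $f$ are as in equation (\[eq-relaxation\]):

```python
import numpy as np

# Toy 1-D analogue of the relaxation scheme (illustrative only): evolve
#   du/dt = f(x) * (LHS - RHS),   LHS - RHS = u'' + sin(x),
# to steady state, which solves u'' + sin(x) = 0 with u(0) = u(pi) = 0.
N = 101
x = np.linspace(0.0, np.pi, N)
h = x[1] - x[0]
u = np.zeros(N)                  # initial guess
f = np.ones(N)                   # convergence-acceleration factor (trivial here)
dt = 0.45 * h**2                 # explicit-diffusion stability limit ~ h^2 / 2

for _ in range(20000):
    residual = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 + np.sin(x[1:-1])
    u[1:-1] += dt * f[1:-1] * residual   # boundary points stay pinned

err = np.max(np.abs(u - np.sin(x)))
print(err)
```

The steady state is a solution of the discretized equation regardless of the (artificial) path taken to reach it, which is the same logic the paper relies on.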
Here we would like to draw attention to the following non-trivial problem. During the relaxational evolution described by equation (\[eq-relaxation\]), the light cylinder generally moves across the grid and, from time to time, inevitably gets close to some of the gridpoints. This leads to the danger, first noted by Macdonald (1984), that some gridpoints will oscillate between the two sides of the light cylinder. Indeed, suppose that a given gridpoint $P$ is initially on the outer side of the light cylinder ($D_P>0$), and so $\partial\Psi_P/\partial t$ is determined by equation (\[eq-relaxation\]) with the plus sign. Let us suppose that the resulting evolution of $\Psi_P$ is such that $D_P$ decreases. Then, after some time one may find that $D_P$ has become negative; correspondingly, at the next timestep one uses equation (\[eq-relaxation\]) with the minus sign and so $\Psi_P$ starts to evolve in the opposite direction. Because the value of $D$ at a fixed spatial point $P$ is, locally, a smooth monotonic function of $\Psi_P$, it now starts to increase and may become positive again in one or two timesteps. This leads to rapid small-amplitude oscillations of the light cylinder around some gridpoints, instead of a smooth large-scale motion associated with the iteration process. As a result, the light cylinder gets “stuck” on these gridpoints and the function $r_{\rm LC}(\theta)$ becomes a series of steps and plateaus instead of a smooth curve. A simple and efficient way to avoid this problem turned out to be to update the function $D(r,\theta)$ not at every timestep but rather very infrequently. Although it caused some delay in the convergence of the relaxation process, this modification has worked very well in practice, enabling the light cylinder to move freely across the grid and to achieve its ultimate smooth shape.
To implement our relaxation procedure numerically, we used an explicit finite-difference scheme with 1st-order accurate time derivative and centered 2nd-order accurate spatial derivatives. It is also worth mentioning that writing out the full-derivative terms such as $\partial_r (\Psi_r D\Delta/\rho^2)$ as $\Psi_r
\partial_r (D\Delta/\rho^2)+ (D\Delta/\rho^2)\Psi_{rr}$, etc., and then evaluating them on the grid actually worked better than evaluating these full derivatives directly as they are. The initial condition—the starting point of our relaxation process—was prescribed explicitly as $$\Psi(t=0,r,\theta) = \Psi_s + \left[\Psi_d(r)-\Psi_s\right](1-\cos\theta)\,. \label{eq-initial-condition}$$ Also, we found it useful to use cubic-spline interpolation of the functions $I(\Psi)$ and $\Omega_F(\Psi)$ to avoid some small-scale rapid oscillations of the solution.
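The expanded (product-rule) evaluation of such terms can be checked numerically; the sketch below uses hypothetical smooth profiles $C(r)$ and $\Psi(r)$ standing in for $D\Delta/\rho^2$ and the flux function, and verifies that the expanded form reproduces the analytic derivative to truncation error:

```python
import numpy as np

# Illustration (not the paper's code): the expanded discretization of the
# full-derivative term d/dr [ C(r) dPsi/dr ] as Psi_r * C_r + C * Psi_rr.
r = np.linspace(1.0, 2.0, 401)
h = r[1] - r[0]
C = np.exp(-r)            # hypothetical smooth coefficient
Psi = np.sin(3.0 * r)     # hypothetical smooth flux profile

def d(g):                 # centered first derivative (interior points)
    return (g[2:] - g[:-2]) / (2.0 * h)

def d2(g):                # centered second derivative (interior points)
    return (g[:-2] - 2.0 * g[1:-1] + g[2:]) / h**2

expanded = d(Psi) * d(C) + C[1:-1] * d2(Psi)

# analytic d/dr [ C Psi_r ] = -3 e^{-r} cos(3r) - 9 e^{-r} sin(3r)
exact = -3.0 * np.exp(-r) * np.cos(3.0 * r) - 9.0 * np.exp(-r) * np.sin(3.0 * r)

err = np.max(np.abs(expanded - exact[1:-1]))
print(err)
```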
Finally, let us describe the particular numerical implementation of the procedure that was used to determine the poloidal current $I(\Psi)$ in our code. As we mentioned at the end of the previous section, we used equation (\[eq-LC-regularity\]) explicitly to determine $II'(\Psi)$ by interpolating all the terms on the right-hand side of that equation at the light cylinder. Because the light cylinder surface is roughly spherical, it was convenient to represent $I(\Psi)$ by a tabular function specified on a one-dimensional array $\{\Psi_{\rm LC}^j\}$ of the values of $\Psi_{\rm LC}$ at the radial rays $\theta=\theta^j= jh_\theta$, where $h_\theta$ is the grid-spacing in $\theta$. Along each of these rays, we first had to locate the pair of radially-adjacent gridpoints between which the light cylinder lay. Then we used an interpolation of $D(r,\theta)$ to determine the position $r=r_{\rm LC}(\theta^j)$ of the light cylinder more precisely and to obtain $\Psi_{\rm LC}^j=
\Psi_{\rm LC}(\theta^j)$, as well as the values of the derivatives $\Psi_r$, $\Psi_\theta$, $D_r$, and $D_\theta$ at the light cylinder for each of the rays. Finally, we used (\[eq-LC-regularity\]) to compute the value of $II'(\Psi)$ at each $\Psi_{\rm LC}^j$. This is actually not as trivial as it may seem, because the condition (\[eq-LC-regularity\]) in such an approach was enforced at all times during the relaxation procedure that determined $\Psi(r,\theta)$, whereas the Grad–Shafranov equation itself was satisfied only after convergence had been reached. Therefore, one had to exercise extra care, for example, in deciding how often $I(\Psi)$ needs to be updated. We found that it was necessary to update $I(\Psi)$ only fairly infrequently during our relaxation procedure.
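A minimal sketch of this per-ray bookkeeping is given below, with hypothetical one-dimensional profiles standing in for $D$ and $\Psi$ along a single radial ray (in the real scheme these come from the Kerr metric functions and the current iterate of the solution):

```python
import numpy as np

# Schematic light-cylinder location and interpolation along one radial ray.
r = np.linspace(1.0, 10.0, 200)
D = 1.0 - (2.5 / r)**2        # toy D(r): negative inside r_LC = 2.5, positive outside
Psi = 1.0 / r                 # toy flux profile along the ray

# 1) bracket the sign change of D between radially-adjacent gridpoints
i = np.where((D[:-1] < 0.0) & (D[1:] >= 0.0))[0][0]

# 2) linearly interpolate D to refine the light-cylinder radius ...
w = -D[i] / (D[i + 1] - D[i])
r_LC = r[i] + w * (r[i + 1] - r[i])

# 3) ... and interpolate Psi there (the full scheme also interpolates
#    Psi_r, Psi_theta, D_r, and D_theta before applying the regularity condition)
Psi_LC = Psi[i] + w * (Psi[i + 1] - Psi[i])

print(r_LC, Psi_LC)           # close to 2.5 and 0.4 for these toy profiles
```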
Results {#subsec-results}
-------
The single most important result of the present study is presented in Figure \[fig-a\_of\_Psi\_s\]. This figure shows where in the two-dimensional $(a,\Psi_s)$ parameter space force-free solutions exist and where they do not. Filled circles on this plot represent the runs in which convergence was achieved (allowed region), whereas open circles correspond to the runs that failed to converge to a suitable solution (forbidden region). The boundary $a_{\rm max}(\Psi_s)$ between the allowed and forbidden regions is located somewhere inside the narrow hatched band that runs from the lower left to the upper right of the Figure \[the finite width of the band represents the uncertainty in $a_{\rm max}(\Psi_s)$ due to a limited number of runs\]. As we can see, $a_{\rm max}(\Psi_s)$ is a monotonically increasing function. In particular, in the limit $\Psi_s \rightarrow 0$, $a_{\rm max}$ indeed scales linearly with $\Psi_s$ and hence is inversely proportional to $R_s=r_{\rm in} \Psi_{\rm tot}/\Psi_s$, in full agreement with our expectations presented in § \[sec-idea\]. However, this linear dependence no longer holds for finite values of $\Psi_s$ (and hence of $a_{\rm max}$).
In order to study the effect that black hole spin has on the solutions, we concentrated on several values of $a$ for a fixed value of $\Psi_s$. In particular, we chose $\Psi_s=0.5$ \[which corresponds to $R_s=2r_{\rm in}(a)$\] and considered four values of $a$: $a=0$, $a=0.25$, $a=0.5$, and $a=0.7$. Figure \[fig-contour\] shows the contour plots of the poloidal magnetic flux for these four cases. We see that the flux surfaces inflate somewhat with increased $a$, but this expansion is not very dramatic, even in the case $a=0.7$, which is very close to the critical value $a_{\rm max}(\Psi_s=0.5)$ that corresponds to a sudden loss of equilibrium. We note that this finding is completely in line with our discussion in § \[sec-idea\].
The next three Figures present the plots of three important functions that characterize the solutions. In each Figure there are four curves corresponding to our selected values $a=0$, 0.25, 0.5, and 0.7 for $\Psi_s=0.5$. Figure \[fig-Psi\_0\] shows the event-horizon flux distribution $\Psi_0(\theta)$; Figure \[fig-I\] shows the poloidal-current function $I(\Psi)$; and Figure \[fig-alpha\_LC\] shows the position of the inner light cylinder described in terms of the lapse function $\alpha_{\rm LC}(\theta)$.
In Figure \[fig-Psi\_0\] we also plot $\Psi_0^{(0)}(\theta)$ corresponding to the simple split-monopole solution with uniform radial field at the horizon. Note that on the horizon we have $\Delta=0$ and hence $\Sigma=r_H^2+a^2 = 2Mr_H$; therefore $$B_{\hat{r}} = \frac{1}{\varpi\rho}\,\Psi_\theta = \frac{1}{\Sigma\sin\theta}\,\Psi_\theta = \frac{1}{2Mr_H}\,\frac{1}{\sin\theta}\,\Psi_\theta\,, \qquad r=r_H\,.$$ Thus, $B_{\hat{r}}(\theta)={\rm const}$ corresponds to $\Psi_0^{(0)}(\theta)=\Psi_s+(\Psi_{\rm tot}-\Psi_s) (1-\cos\theta)$, independent of $a$. This function is plotted in Figure \[fig-Psi\_0\] (the dashed line) for comparison with the actual solutions. We see that the deviation from $\Psi_0^{(0)}(\theta)$ becomes noticeable only when $a$ approaches 1.
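A quick numerical check, independent of the main computation, confirms that this flux distribution indeed corresponds to a uniform radial field, since $B_{\hat r}\propto \Psi_\theta/\sin\theta$ on the horizon:

```python
import numpy as np

# Verify that Psi_0(theta) = Psi_s + (Psi_tot - Psi_s)(1 - cos(theta)) yields
# a uniform B_rhat ~ Psi_theta / sin(theta) on the horizon
# (up to the constant factor 1 / (2 M r_H)).
Psi_s, Psi_tot = 0.5, 1.0
theta = np.linspace(0.01, np.pi / 2, 500)
Psi0 = Psi_s + (Psi_tot - Psi_s) * (1.0 - np.cos(theta))

th_mid = 0.5 * (theta[1:] + theta[:-1])
Psi_theta = np.diff(Psi0) / np.diff(theta)   # centered derivative at midpoints
B_rhat = Psi_theta / np.sin(th_mid)

print(B_rhat.min(), B_rhat.max())   # nearly constant, = Psi_tot - Psi_s = 0.5
```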
Figure \[fig-alpha\_LC\] shows $\alpha_{\rm LC}(\theta)$. An interesting feature here is that the light cylinder reaches the event horizon at some intermediate angle $0<\theta_{\rm co}<\pi/2$ for small values of $a$. This is because, when $a<0.359..\, M$, the inner edge of a Keplerian disk rotates faster than the black hole; correspondingly, somewhere in the disk there exists a corotation point $r_{\rm co}> r_{\rm in}$ such that $\Omega_K(r_{\rm co})=\Omega_H$. The field line $\Psi_{\rm co}$ threading the disk at this point corotates with the black hole. Therefore, at the point $\theta_{\rm co}$ where this line intersects the horizon, we have $\delta\Omega=0$, and so this point ($r=r_H,\theta=\theta_{\rm co}$) has to lie on the light cylinder. The location $\theta_{\rm co}$ of this point moves towards the equator when $a$ is increased and reaches it at $a=0.359..\, M$. For larger values of $a$, the entire disk outside of the ISCO rotates slower than the black hole and the light cylinder touches the horizon only at the pole $\theta=0$.
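The quoted critical spin can be recovered with a few lines of root finding (an independent check, not part of the paper's solver; geometrized units $G=c=M=1$, with the standard prograde ISCO expression of Bardeen, Press & Teukolsky):

```python
import numpy as np

# Find the spin a at which the Keplerian rate at the ISCO equals the hole's
# rotation rate, Omega_K(r_isco(a)) = Omega_H(a).
def r_isco(a):
    # prograde ISCO radius (standard Bardeen-Press-Teukolsky expression)
    z1 = 1.0 + (1.0 - a * a)**(1.0 / 3.0) * (
        (1.0 + a)**(1.0 / 3.0) + (1.0 - a)**(1.0 / 3.0))
    z2 = np.sqrt(3.0 * a * a + z1 * z1)
    return 3.0 + z2 - np.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

def omega_K(r, a):     # prograde Keplerian angular velocity
    return 1.0 / (r**1.5 + a)

def omega_H(a):        # horizon angular velocity, a / (2 M r_H)
    return a / (2.0 * (1.0 + np.sqrt(1.0 - a * a)))

def mismatch(a):
    return omega_K(r_isco(a), a) - omega_H(a)

lo, hi = 0.01, 0.9     # mismatch > 0 at lo (disk spins faster), < 0 at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mismatch(mid) > 0.0:
        lo = mid
    else:
        hi = mid

a_star = 0.5 * (lo + hi)
print(a_star)          # ~ 0.359, as quoted in the text
```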
Finally, we also computed all the electric and magnetic field components and checked that $E^2<B^2$ everywhere outside the horizon.
Astrophysical Implications/Consequences {#sec-implications}
=======================================
In this section we discuss the exchange of energy and angular momentum between the black hole and the disk. Apart from the question of existence of solutions, this issue is one of the most important for actual astrophysical applications. Fortunately, once a particular solution describing the magnetosphere is obtained, computing the energy and angular momentum transported by the magnetic field becomes very simple.
Indeed, according to MT82, angular momentum and red-shifted energy (i.e., “energy at infinity”) are transported along the poloidal field lines through the force-free magnetosphere without losses. Thus, the amount of angular momentum $\Delta L$ transported out in a unit of global time $t$ through a region between two neighboring poloidal flux surfaces, $\Psi$ and $\Psi+\Delta\Psi$, as given by equation (7.6) of MT82 (modified to suit our choice of notation), is $${{d\Delta L}\over{dt}} = - {1\over 2}\, I \Delta\Psi \, ,
\label{eq-torque-MT82}$$ and the red-shifted power—flux of redshifted energy per unit global time $t$— is expressed as $$\Delta P = - {1\over 2}\, \Omega_F I \, \Delta\Psi \, ,
\label{eq-power-MT82}$$ (see eq. \[7.8\] of MT82).
Then, taking into account the contributions from both hemispheres and both sides of the disk, we can compute the total magnetic torque exerted by the hole onto the disk per unit $t$ as $${{dL}\over{dt}} = -\int\limits_{\Psi_s}^{\Psi_{\rm tot}} I(\Psi) d\Psi \, ,
\label{eq-torque-total}$$ and, correspondingly, the total red-shifted power transferred from the hole onto the disk via Poynting flux is $$P = -\int\limits_{\Psi_s}^{\Psi_{\rm tot}} \Omega_F(\Psi) I(\Psi) d\Psi \, .
\label{eq-power-total}$$
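Given tabulated $I(\Psi)$ and $\Omega_F(\Psi)$ from a converged solution, these totals reduce to one-dimensional quadratures; a sketch with made-up profiles (chosen so that the answers are known analytically) is:

```python
import numpy as np

# Quadrature of dL/dt = -int_{Psi_s}^{Psi_tot} I dPsi and
# P = -int_{Psi_s}^{Psi_tot} Omega_F I dPsi, with illustrative profiles
# (in the actual computation I and Omega_F come from the solution).
def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

Psi_s, Psi_tot = 0.5, 1.0
Psi = np.linspace(Psi_s, Psi_tot, 2001)
I = Psi                                  # hypothetical poloidal current
Omega_F = np.full_like(Psi, 0.2)         # hypothetical field-line rotation rate

dL_dt = -trapezoid(I, Psi)               # analytically -(1 - 0.25)/2 = -0.375
P = -trapezoid(Omega_F * I, Psi)         # analytically 0.2 * (-0.375) = -0.075
print(dL_dt, P)
```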
Next, since in our problem we have an explicit mapping (\[eq-Psi\_d\]) between $\Psi$ and the radial coordinate $r$ on the disk, we can immediately write down expressions for the radial distributions of angular momentum and red-shifted energy deposited on the disk per unit global time: $${{d\Delta L(r)}\over{dt}} =
- I[\Psi_d(r)]\, {{d\Psi_d}\over dr} \, dr \, ,
\label{eq-torque-of-r}$$ and $$\Delta P(r) = -\Omega_K(r)\, I[\Psi_d(r)]\, {{d\Psi_d}\over dr} \, dr \, .
\label{eq-power-of-r}$$
Figures \[fig-L-of-r\] and \[fig-P-of-r\] show these distributions for our selected cases $a=0.25$, 0.5, and 0.7 for fixed $\Psi_s=0.5$. We see that in the case $a=0.25$ there is a corotation point $r_{\rm co}$ on the disk such that $\Omega_{\rm disk}>\Omega_H$ inside $r_{\rm co}$ and $\Omega_{\rm disk}<\Omega_H$ outside $r_{\rm co}$. Correspondingly, both angular momentum and red-shifted energy flow from the inner ($r<r_{\rm co}$) part of the disk to the black hole and from the hole to the outer ($r>r_{\rm co}$) part of the disk. At larger values of $a$, however, the Keplerian angular velocity at $r=r_{\rm in}$ is smaller than the black hole’s rotation rate and there is no corotation point; correspondingly, both angular momentum and redshifted energy flow from the hole to the disk. Also, as can be seen in Figures \[fig-L-of-r\] and \[fig-P-of-r\], the deposition of these quantities becomes strongly concentrated near the disk’s edge, especially at higher values of $a$.
Next, Figures \[fig-L-of-a\] and \[fig-P-of-a\] demonstrate the dependence of the total integrated angular momentum and red-shifted energy fluxes (\[eq-torque-total\])–(\[eq-power-total\]) on the black hole spin $a$ for a fixed value of $\Psi_s=0.5$. We see that both quantities are negative at small values of $a$ (meaning a transfer from the disk to the hole), but then increase and become positive at larger $a$. The angular momentum transfer rate depends roughly linearly on $a$, whereas the red-shifted power $P(a)$ grows even faster, especially at large values of $a$. It is also interesting to note that the two quantities go through zero at slightly different values of the spin parameter: $dL/dt$ becomes zero at $a\approx 0.23$, while $P=0$ at $a\approx 0.26$. This means that it is possible to have the total angular momentum flow from the hole to the disk and the total power flow from the disk to the hole at the same time.
Conclusions {#sec-conclusions}
===========
In this paper we investigated the structure and the conditions for the existence of a force-free magnetosphere linking a rotating Kerr black hole to its accretion disk. We assumed that the magnetosphere is stationary, axisymmetric, and degenerate, and that the disk is thin, ideally conducting, and Keplerian, truncated at the innermost stable circular orbit (ISCO). Our main goal was to determine under which conditions a force-free magnetic field can connect the hole directly to the disk and how the black hole rotation limits the radial extent of such a link on the disk surface.
We first introduced (in § \[sec-idea\]) a very simple but robust physical argument that shows that, generally, magnetic field lines connecting the polar region of a spinning black hole to arbitrarily remote regions of the disk cannot be in a force-free equilibrium. The basic reason for this can be described as follows. Since the field lines threading the horizon have to first cross the inner light cylinder, and since they generally rotate at a rate that is different from the rotation rate of the black hole, these field lines have to be bent somewhat. In other words, they develop a toroidal magnetic field component, just like the open field lines crossing the outer light cylinder in a pulsar magnetosphere. In the language of the Membrane Paradigm (see Thorne et al. 1986), this toroidal field is needed so that the field lines could slip resistively across the stretched event horizon. The next step in our argument is to look at those field lines that connect the polar region of the horizon to the disk somewhere far away from the black hole. In a force-free magnetosphere, toroidal flux spreads along field lines to keep the poloidal current $I\sim B_{\hat{\phi}}\alpha\varpi$ constant along the field. Then one can show that the outward pressure of the toroidal field generated due to the black hole rotation turns out to be so large that it cannot be confined by the poloidal field tension at large enough distances. In other words, the field lines under consideration cannot be in a force-free equilibrium. Furthermore, one can generalize this argument to the case of closed magnetospheres of finite size and derive a conjecture that the maximal radial extent $R_{\rm max}$ of the magnetically-coupled region on the disk surface should scale inversely with the black hole spin parameter $a$ in the limit $a\rightarrow 0$.
In order to verify this hypothesis and to study the detailed structure of magnetically-coupled disk–hole configurations, we have obtained numerical solutions of the general-relativistic force-free Grad–Shafranov equation corresponding to partially-closed field configurations (shown in Fig. \[fig-geometry-kerr\]). This is a nonlinear 2nd-order partial differential equation for the poloidal flux function $\Psi(r,\theta)$ and it is the main equation governing the system’s behavior.
An additional complication in this problem arises from the need to specify two free functions that enter the force-free Grad–Shafranov equation; these are the field-line angular velocity $\Omega_F(\Psi)$ and the poloidal current $I(\Psi)$. Because all the field lines are assumed to be frozen into the disk, the first of these functions is determined in a fairly straightforward way. Namely, for any given field line $\Psi$, $\Omega_F(\Psi)$ is just the Keplerian angular velocity at this line’s footpoint on the disk. Specifying the poloidal current, on the other hand, is a much more difficult and nontrivial task. The reason for this is that it cannot be just prescribed explicitly on any given surface and one should look more thoroughly into the mathematical nature of the Grad–Shafranov equation itself to determine $I(\Psi)$. In particular, the most important feature of the Grad–Shafranov equation in this regard is that it becomes singular on two surfaces, the event horizon and the inner light cylinder. This observation is very useful because one can impose a physically-motivated regularity condition at each of these surfaces. One of the most important ideas in our analysis is that one can use the light-cylinder regularity condition to determine, using an iterative procedure, the poloidal current $I(\Psi)$, similar to the way it was done by CKF99 in the context of pulsar magnetospheres.
As for the singularity at the event horizon, it is also very important. Basically, it tells us that it is not possible to prescribe an arbitrary boundary condition at the horizon; instead, one can only impose a certain physical condition of regularity there. When combined with the Grad–Shafranov equation itself, this regularity requirement results in a single relationship (historically known as the horizon boundary condition, first derived by Znajek 1977) between three functions: the horizon flux distribution $\Psi_0(\theta)$, and the two free functions, $\Omega_F(\Psi)$ and $I(\Psi)$ (e.g., Beskin 1997; Beskin & Kuznetsova 2000). What’s important is that there are no other independent relationships that can be specified on this surface. In practical terms, this means that this condition should be used to determine the function $\Psi_0(\theta)$ in terms of $\Omega_F(\Psi)$ and $I(\Psi)$, which therefore must be determined outside the horizon. This fact helps to alleviate some of the causality issues raised by Punsly (1989, 2001, 2003) and by Punsly & Coroniti (1990).
Since one of the goals of this work was to study the dependence $R_{\rm max}(a)$, we performed a series of computations corresponding to various values of two parameters: the black hole spin parameter $a$ and the magnetic link’s radial extent $R_s$ on the disk surface (the field lines anchored to the disk beyond $R_s$ were taken to be open and non-rotating). At the same time, the disk boundary conditions were kept the same in all these runs, namely, $\Psi_d(r)=\Psi_{\rm tot} r_{\rm in}(a)/r$. Therefore, varying the value of $R_s$ for fixed $a$ was equivalent to varying the amount $\Psi_s$ of open magnetic flux threading the disk.
Whereas for some pairs of values of $a$ and $\Psi_s$ we were able to achieve a convergent force-free solution, for others we were not. Thus, as one of the main results of our computations, we were able to chart out the allowed and the forbidden domains in the two-parameter space $(a,\Psi_s)$. The boundary between these two domains is a curve $a_{\rm max}(\Psi_s)$, which can be easily remapped into the curve $R_{\rm max}(a)$. As can be seen in Figure \[fig-a\_of\_Psi\_s\], this is a monotonically rising curve with the asymptotic behavior $a_{\rm max}\propto \Psi_s$ as $\Psi_s\rightarrow 0$, which is in line with our predictions.
We also computed the total angular momentum and red-shifted energy exchanged in a unit of global time $t$ between the hole and the disk through magnetic coupling. We studied the dependence of these quantities on the black hole spin parameter $a$ and found that the angular momentum transfer rate rises roughly linearly with $a$; it is negative for small $a$ (meaning the angular momentum transfer to the hole) and reverses sign around $a\approx 0.23$ (for $\Psi_s=0.5\Psi_{\rm tot}$). The total energy transfer increases with $a$ at an accelerated (i.e., faster than linear) rate, especially at larger values of $a$; it is also negative at small $a$, but becomes positive around $a=0.26$. This means that there is a narrow range $0.23..<a<0.26..$ where the integrated angular momentum flows from the hole to the disk, whereas the integrated red-shifted energy flows in the opposite direction.
Finally, we note that, in the case of open or partially-open field configurations responsible for the Blandford–Znajek process, one has to consider magnetic field lines that extend from the event horizon out to infinity. Since these field lines are not attached to a heavy infinitely conducting disk, their angular velocity $\Omega_F(\Psi)$ cannot be explicitly prescribed; it becomes just as undetermined as the poloidal current $I(\Psi)$ they carry. Fortunately, however, these field lines now have to cross two light cylinders (the inner one and the outer one). Since each of these is a singular surface of the Grad–Shafranov equation, one can impose corresponding regularity conditions on these two surfaces. Thus, we propose that one should be able to devise an iterative scheme that uses the two light-cylinder regularity conditions in a coordinated manner to determine the two free functions $\Omega_F(\Psi)$ and $I(\Psi)$ simultaneously, as a part of the overall solution process. At the same time, the regularity conditions at the event horizon and at infinity could be used to obtain the asymptotic poloidal flux distributions at $r=r_H$ and at $r\rightarrow\infty$, respectively. We realise of course that iterating with respect to two functions simultaneously may be a very difficult task. This purely technical obstacle (in addition to having to deal with the separatrix between the open- and closed-field regions) is the primary reason why, in this paper, we have restricted ourselves to a configuration which has no open field lines extending from the black hole to infinity. We leave this problem as a topic for future research.
It is possible that, instead of solving the Grad–Shafranov equation itself, the easiest and most practical way to achieve a stationary solution will be to use a [*time-dependent*]{} relativistic force-free code, such as one of those being developed now (Komissarov 2001, 2002a, 2004a; MacFadyen & Blandford 2003; Spitkovsky 2004; Krasnopolsky 2004, private communication).
I am very grateful to V. Beskin, O. Blaes, R. Blandford, S. Boldyrev, A. K[ö]{}nigl, B. C. Low, M. Lyutikov, A. MacFadyen, V. Pariev, B. Punsly, and A. Spitkovsky for many fruitful and stimulating discussions. I also would like to express my gratitude to the referee of this paper (Serguei Komissarov) for his very useful comments and suggestions that helped improve the paper. This research was supported by the National Science Foundation under Grant No. PHY99-07949.
References
==========
Agol, E., & Krolik, J. H. 2000, ApJ, 528, 161
Begelman, M. C., Blandford, R. D., & Rees, M. J. 1984, Rev. Mod. Phys., 56, 255
Beskin, V. S., & Par’ev, V. I. 1993, Phys. Uspekhi, 36, 529
Beskin, V. S. 1997, Phys. Uspekhi, 40, 659
Beskin, V. S. 2003, Phys. Uspekhi, 46, 1209
Beskin, V. S., & Kuznetsova, I. V. 2000, Nuovo Cimento, 115, 795; preprint (astro-ph/0004021)
Blandford, R. D. 1976, MNRAS, 176, 465
Blandford, R. D. 1999, in Astrophysical Disks: An EC Summer School, ed. J. A. Sellwood & J. Goodman (San Francisco: ASP), ASP Conf. Ser. 160, 265; preprint (astro-ph/9902001)
Blandford, R. D. 2000, Phil. Trans. R. Soc. Lond. A, 358, 811; preprint (astro-ph/0001499)
Blandford, R. D. 2002, in “Lighthouses of the Universe”, eds. Gilfanov, M. et al. (New York: Springer), 381
Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433 (BZ77)
Blandford, R. D., & Payne, D. G. 1982, MNRAS, 199, 883
Contopoulos, I., Kazanas, D., & Fendt, C. 1999, ApJ, 511, 351
Damour, T. 1978, Phys. Rev. D, 18, 3589
Fendt, C. 1997, A&A, 319, 1025
Gammie, C. F. 1999, ApJ, 522, L57
Ghosh, P., & Abramowicz, M. A. 1997, MNRAS, 292, 887
Gruzinov, A. 1999, preprint (astro-ph/9908101)
Hawley, J. F., & Krolik, J. H. 2001, ApJ, 548, 348
Hirose, S., Krolik, J. H., de Villiers, J.-P., & Hawley, J. F. 2004, ApJ, 606, 1083
Hirotani, K., Takahashi, M., Nitta, S.-Y., & Tomimatsu, A. 1992, ApJ, 386, 455
Komissarov, S. S. 2001, MNRAS, 326, L41
Komissarov, S. S. 2002a, MNRAS, 336, 759
Komissarov, S. S. 2002b, preprint (astro-ph/0211141)
Komissarov, S. S. 2004a, MNRAS, 350, 427
Komissarov, S. S. 2004b, MNRAS, 350, 1431
Krolik, J. H. 1999a, ApJ, 515, L73
Krolik, J. H. 1999b, Active Galactic Nuclei: From The Central Black Hole To The Galactic Environment (Princeton: Princeton Univ. Press)
Levinson, A. 2004, ApJ, 608, 411
Li, L.-X. 2000, ApJ, 533, L115
Li, L.-X. 2001, in X-ray Emission from Accretion onto Black Holes, ed. T. Yaqoob & J. H. Krolik, JHU/LHEA Workshop, June 20-23, 2001
Li, L.-X. 2002a, ApJ, 567, 463
Li, L.-X. 2002b, A&A, 392, 469
Li, L.-X. 2004, preprint (astro-ph/0406353)
Livio, M., Ogilvie, G. I., & Pringle, J. E. 1999, ApJ, 512, 100
Lovelace, R. V. E. 1976, Nat., 262, 649
Macdonald, D., & Thorne, K. S. 1982, MNRAS, 198, 345 (MT82)
Macdonald, D. A. 1984, MNRAS, 211, 313
Macdonald, D. A., & Suen, W.-M. 1985, Phys. Rev. D, 32, 848
MacFadyen, A. I., & Blandford, R. D. 2003, AAS HEAD Meeting 35, 20.16
Nitta, S.-Y., Takahashi, M., & Tomimatsu, A. 1991, Phys. Rev. D, 44, 2295
Ogura, J., & Kojima, Y. 2003, Prog. Theor. Phys., 109, 619
Phinney, E. S. 1983, in Astrophysical Jets, ed. A. Ferrari & A. G. Pacholczyk (Dordrecht: Reidel), 201
Punsly, B. 1989, Phys. Rev. D, 40, 3834
Punsly, B. 2001, Black Hole Gravitohydromagnetics (Berlin: Springer)
Punsly, B. 2003, ApJ, 583, 842
Punsly, B. 2004, preprint (astro-ph/0407357)
Punsly, B., & Coroniti, F. V. 1990, ApJ, 350, 518
Spitkovsky, A. 2004, in IAU Symp. 218, Young Neutron Stars and Their Environment, ed. F. M. Camilo & B. M. Gaensler (San Francisco: ASP), 357; preprint (astro-ph/0310731)
Takahashi, M. 2002, ApJ, 570, 264
Thorne, K. S. 1974, ApJ, 191, 507
Thorne, K. S., Price, R. H., & Macdonald, D. A. 1986, Black Holes: The Membrane Paradigm (New Haven: Yale Univ. Press)
Uzdensky, D. A., K[ö]{}nigl, A., & Litwin, C. 2002, ApJ, 565, 1191
Uzdensky, D. A., 2002a, ApJ, 572, 432
Uzdensky, D. A., 2002b, ApJ, 574, 1011
Uzdensky, D. A., 2003, ApJ, 598, 446
Uzdensky, D. A., 2004, ApJ, 603, 652
van Ballegooijen, A. A. 1994, Space Sci. Rev., 68, 299
van Putten, M. H. P. M. 1999, Science, 284, 115
van Putten, M. H. P. M., & Levinson, A. 2003, ApJ, 584, 937
Wang, D. X., Xiao, K., & Lei, W. H. 2002, MNRAS, 335, 655
Wang, D.-X., Lei, W. H., & Ma, R.-Y. 2003a, MNRAS, 342, 851
Wang, D.-X., Ma, R.-Y., Lei, W.-H., & Yao, G.-Z. 2003b, ApJ, 595, 109
Wang, D.-X., Ma, R.-Y., Lei, W.-H., & Yao, G.-Z. 2004, ApJ, 601, 1031
Znajek, R. L. 1977, MNRAS, 179, 457
Znajek, R. L. 1978, MNRAS, 185, 833
[^1]: Currently at Princeton University.
---
abstract: 'Using the convex integration technique for the three-dimensional Navier-Stokes equations introduced by T. Buckmaster and V. Vicol, the existence of non-unique weak solutions is shown for the 3D Navier-Stokes equations with fractional hyperviscosity $(-\Delta)^{\theta}$, whenever the exponent $\theta$ is less than J.-L. Lions’ exponent $5/4$, i.e., when $\theta < 5/4$.'
author:
- 'Tianwen Luo$^*$'
- 'Edriss S. Titi$^\dagger$'
date: 'January 13, 2020'
title: 'Non-uniqueness of Weak Solutions to Hyperviscous Navier-Stokes Equations - On Sharpness of J.-L. Lions Exponent'
---
Introduction
============
In this paper we consider the question of non-uniqueness of weak solutions to the 3D Navier-Stokes equations with fractional viscosity (FVNSE) on $\mathbb{T}^3$ $$\begin{aligned}
\label{eq:FVNSE}
\begin{cases}
{\partial}_t v + \nabla \cdot (v \otimes v) + \nabla p + \nu(- \Delta)^{\theta} v = 0,\\
\nabla \cdot v = 0,
\end{cases}\end{aligned}$$ where $\theta \in \mathbb{R}$ is a fixed constant, and for $u \in C^{\infty}(\mathbb{T}^3)$ with $\int_{\mathbb{T}^3} u(x) dx =0$, the fractional Laplacian is defined via the Fourier transform as $$\begin{aligned}
\mathcal{F}((- \Delta)^{\theta} u)(\xi) = |\xi|^{2\theta}\mathcal{F}(u)(\xi), \quad \xi \in \mathbb{Z}^3.\end{aligned}$$
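For concreteness, this Fourier-multiplier definition can be implemented spectrally; the following one-dimensional periodic sketch (the case of $\mathbb{T}^3$ is analogous, with $|\xi|$ the Euclidean norm on $\mathbb{Z}^3$) checks that $\theta=1$ reproduces $-\Delta$ on single Fourier modes:

```python
import numpy as np

# Spectral fractional Laplacian on the 1-D torus [0, 2*pi), acting on
# mean-zero functions via the multiplier |xi|^(2*theta).
def frac_laplacian(u, theta):
    n = u.size
    xi = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers 0, 1, ..., -1
    return np.real(np.fft.ifft(np.abs(xi)**(2.0 * theta) * np.fft.fft(u)))

x = 2.0 * np.pi * np.arange(256) / 256
u = np.sin(x)

# sanity check: for theta = 1 this is -Delta, and -u'' = sin(x) = u here
v = frac_laplacian(u, 1.0)
print(np.max(np.abs(v - u)))    # machine-precision agreement
```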
A vector field $v \in C^0_{weak}(\mathbb{R};L^2(\mathbb{T}^3))$ is called a weak solution to the FVNSE if it solves the system in the sense of distributions.
When $\theta = 1$, the FVNSE system reduces to the standard Navier-Stokes equations. J.-L. Lions first considered FVNSE in [@Lions59], and showed the existence and uniqueness of weak solutions to the initial value problem, which also satisfied the energy equality, for $\theta \in [5/4,\infty)$ in [@Lions69]. Moreover, an analogue of the Caffarelli-Kohn-Nirenberg [@CKN] result was established in [@KatzPavlovic] for the FVNSE system, showing that the Hausdorff dimension of the singular set, in space and time, is bounded by $5 - 4\theta$ for $\theta \in (1,5/4)$. The existence, uniqueness, regularity and stability of solutions to the FVNSE have been studied in [@OlsonTiti05; @JiuWang14; @Wu03; @Tao09] and references therein. Very recently, using the method of convex integration introduced in [@dLSz4], Colombo, De Lellis and De Rosa in [@CdLdR18] showed the non-uniqueness of Leray weak solutions to the FVNSE for $\theta \in (0,1/5)$, a range subsequently extended to $\theta \in (0,1/3)$ in [@DeRosa19].
In the recent breakthrough work [@BV17], Buckmaster and Vicol obtained non-uniqueness of weak solutions to the three-dimensional Navier-Stokes equations. They developed a new convex integration scheme in Sobolev spaces using intermittent Beltrami flows which combined concentrations and oscillations. Later, the idea of using intermittent flows was used to study non-uniqueness for transport equations in [@MS17; @MS19; @Modena-Sattig-19] employing scaled Mikado waves, and for stationary Navier-Stokes equations in [@Luo19; @Cheskidov-Luo-19] employing viscous eddies.
The schemes in [@BV17; @MS17] are based on the convex integration framework in Hölder spaces for the Euler equations, introduced by De Lellis and Sz[é]{}kelyhidi in [@dLSz4], subsequently refined in [@Isett12; @Buckmaster2013transporting; @Buckmaster2014; @DaneriSzekelyhidi16], and culminated in the proof of the second half of the Onsager conjecture by Isett in [@Isett16]; also see [@BDSV17] for a shorter proof. For the first half of the Onsager conjecture, see, e.g., [@CET94; @BTi], and the references therein.
The main contribution of this note is to show that the results in Buckmaster-Vicol’s paper hold for FVNSE for $\theta < 5/4$:
\[Thm:Main\] Assume that $\theta \in [1, 5/4)$. Suppose $u$ is a smooth divergence-free vector field, defined on $\mathbb{R}_+ \times \mathbb{T}^3$, which is compactly supported in time and satisfies the condition $$\begin{aligned}
\int_{\mathbb{T}^3} u(t,x)dx \equiv 0.
\end{aligned}$$ Then for any given $\varepsilon_0 > 0$, there exists a weak solution $v$ to the FVNSE , with compact support in time, satisfying $$\begin{aligned}
\|v - u\|_{L^{\infty}_t W^{2\theta - 1,1}_x} < \varepsilon_0.
\end{aligned}$$ As a consequence there are infinitely many weak solutions of the FVNSE which are compactly supported in time; in particular, there are infinitely many weak solutions with initial values zero.
In the above theorem we assume that $\theta \in [1, 5/4)$. However, using the constructions in [@BV17] with a slightly different choice of parameters, one can actually show that Theorem 1.2 and Theorem 1.3 in [@BV17] hold for the 3D FVNSE, i.e., there exist non-unique weak solutions $v \in C_t^0 W_x^{\beta,2}$ for some $\beta > 0$ depending on $\theta$. However, in this paper we choose to prove a weaker result, Theorem \[Thm:Main\], in order to simplify the presentation while retaining the main idea.
For the case $\theta \in (-\infty,1)$, the same construction also yields weak solutions $v \in C^0_t L^2_x \cap C^0_t W^{1,1}_x$ with a suitable choice of parameters.
We now make some comments on the analysis in this paper. Using the technique in [@BV17], we adapt a convex integration scheme with intermittent Beltrami flows as the building blocks. The main difficulty in a convex integration scheme for the FVNSE is the error induced by the fractional viscosity $\nu(- \Delta)^{\theta} v$, which is larger for a larger exponent $\theta$. This error is controlled by making full use of the concentration effect of the intermittent flows introduced in [@BV17]. As shown in the crucial estimate for $\widetilde{R}_{linear}$ below, the error is controllable only for $\theta < 5/4$. Compared with [@BV17], since our goal is to construct weak solutions $v \in C^0_t L^2_{x,weak} \cap L^{\infty}_t W^{2\theta - 1,1}_x$, we adopt a slightly simpler cut-off function and prove only the estimates that are sufficient for this purpose.
Outline
=======
Iteration lemma
---------------
Following [@BV17], we consider the approximate system $$\begin{aligned}
\label{eq:NS-alpha-Reynold-stress}
\begin{cases}
{\partial}_t v + \nabla \cdot (v \otimes v) + \nabla p + \nu(- \Delta)^{\theta} v = \nabla \cdot R,\\
\nabla \cdot v = 0,
\end{cases}\end{aligned}$$ where $R$ is a symmetric $3 \times 3$ matrix.
\[Lemma:Iteration\] Let $\theta \in (-\infty, 5/4)$. Assume $(v_q, R_q)$ is a smooth solution of the approximate system with $$\begin{aligned}
\|R_q\|_{L^{\infty}_t L^1_x} &\leq \delta_{q+1}, \label{est-R_q-L^1}
\end{aligned}$$ for some $\delta_{q+1} > 0$. Then for any given $\delta_{q+2} > 0$, there exists a smooth solution $(v_{q+1}, R_{q+1})$ of the approximate system with $$\begin{aligned}
\|R_{q+1}\|_{L^{\infty}_t L^1_x} &\leq \delta_{q+2}, \label{est-R-q+1}\\
\text{and}\quad \operatorname{supp}_t v_{q+1} \cup \operatorname{supp}_t R_{q+1} &\subset N_{\delta_{q+1}}(\operatorname{supp}_t v_{q} \cup \operatorname{supp}_t R_{q}). \label{est-supp-v_q-R_q}
\end{aligned}$$ Here for a given set $A \subset \mathbb{R}$, the $\delta$-neighborhood of $A$ is denoted by $$\begin{aligned}
N_{\delta}(A) = \{ y \in \mathbb{R}: \exists y' \in A, |y-y'| < \delta \}.
\end{aligned}$$ Furthermore, the increment $w_{q+1} = v_{q+1} - v_q$ satisfies the estimates $$\begin{aligned}
\|w_{q+1}\|_{L^{\infty}_t L^2_x} &\leq C \delta_{q+1}^{1/2}, \label{est-w_q-L^2_x}\\
\|w_{q+1}\|_{L^{\infty}_t W^{2 \theta - 1,1}_x} &\leq \delta_{q+2}, \label{est-w_q-W^1_x}
\end{aligned}$$ where the positive constant $C$ depends only on $\theta$.
Assume Lemma \[Lemma:Iteration\] is valid. Let $v_0 = u$. Then $$\begin{aligned}
\int_{\mathbb{T}^3} {\partial}_t v_0(t,x)dx = \frac{d}{dt} \int_{\mathbb{T}^3} v_0(t,x)dx \equiv 0.
\end{aligned}$$ Let $$\begin{aligned}
R_0 = \mathcal{R}({\partial}_t v_0 + \nu(- \Delta)^{\theta} v_0) + v_0 \otimes v_0 + p_0 I , \quad p_0 = -\frac{1}{3} |v_0|^2,
\end{aligned}$$ where $\mathcal{R}$ is the symmetric anti-divergence operator established in Lemma \[Lemma:symm-anti-div\] below. Clearly $(v_0,R_0)$ solves the approximate system. Set $$\begin{aligned}
\delta_1 &= \|R_0\|_{L^{\infty}_t L^1_x}, \\
\delta_{q+1} &= 2^{-q} \varepsilon_0, \quad \text{ for } q \geq 1.
\end{aligned}$$ Apply Lemma \[Lemma:Iteration\] iteratively to obtain smooth solutions $(v_q, R_q)$ of the approximate system. It follows from \[est-w_q-L^2_x\] that $$\begin{aligned}
\sum \|v_{q+1} - v_{q}\|_{L^{\infty}_t L^2_x} = \sum \|w_{q+1}\|_{L^{\infty}_t L^2_x} \leq C \sum \delta_{q+1}^{1/2} < \infty.
\end{aligned}$$ Thus $v_q$ converges strongly to some $v \in C^0_t L^2_x$. Since $\|R_{q+1}\|_{L^{\infty}_t L^1_x} \to 0$ as $q \to \infty$, $v$ is a weak solution to the FVNSE. Estimate \[est-w_q-W^1_x\] leads to $$\begin{aligned}
\|v - v_0\|_{_{L^{\infty}_t W^{2\theta - 1,1}_x}} \leq \sum_{q=1}^{\infty} \|w_q\|_{_{L^{\infty}_t W^{2\theta - 1,1}_x}} \leq \sum_{q=1}^{\infty}\delta_{q+1} \leq \varepsilon_0.
\end{aligned}$$ Furthermore, it follows from \[est-supp-v_q-R_q\] that $$\begin{aligned}
\operatorname{supp}_t v &\subset \cup_{q \geq 0} \operatorname{supp}_t v_q \subset N_{\sum_{q \geq 0} \delta_{q+1}}(\operatorname{supp}_t u) \subset N_{\delta_1 + \varepsilon_0}(\operatorname{supp}_t u).
\end{aligned}$$
Now we show the existence of infinitely many weak solutions with initial values zero. Let $u(t,x) = \varphi(t) \sum_{|k| \leq N} a_k e^{ik \cdot x}$ with $a_k \neq 0, a_k \cdot k = 0, a_{-k} = a_k^*$ for all $|k| \leq N$, and $\varphi \in C_c^{\infty}(\mathbb{R}_+)$. Then $u$ is divergence-free and satisfies the conditions of the theorem. Hence there exists a weak solution $v$ to the FVNSE close enough to $u$ so that $v \centernot{\equiv} 0$.
Iteration scheme
================
Notations and Parameters
------------------------
For a complex number $\zeta \in \mathbb{C}$, we denote by $\zeta^*$ its complex conjugate. Let us normalize the volume $$\begin{aligned}
|\mathbb{T}^3| = 1.\end{aligned}$$ For smooth functions $u \in C^{\infty}(\mathbb{T}^3)$ with $\int_{\mathbb{T}^3} u(x) dx =0$ and $s \in \mathbb{R}$, we define $$\begin{aligned}
\mathcal{F}(|\nabla|^s u)(\xi) = |\xi|^{s}\mathcal{F}(u)(\xi), \quad \xi \in \mathbb{Z}^3.\end{aligned}$$ For $M, N \in [0,+\infty]$, denote the Fourier projection of $u$ by $$\begin{aligned}
\mathcal{F} (\mathbb{P}_{[M,N)} u) = \begin{cases}
u(\xi), & M \leq |\xi| < N, \xi \in \mathbb{Z}^3,\\
0, &\text{otherwise}.
\end{cases}\end{aligned}$$ We also denote $\mathbb{P}_{\leq k} = \mathbb{P}_{[0,k)}$ and $\mathbb{P}_{\geq k} = \mathbb{P}_{[k,+\infty)}$ for $k > 0$.
Following the notation in [@BV17], we introduce here several parameters $\sigma, r, \lambda$, with $$\begin{aligned}
0 < \sigma < 1 < r < \lambda < \mu < \lambda^2, \quad \sigma r < 1, \label{ineq:parameters}\end{aligned}$$ where $\lambda = \lambda_{q+1} \in 5\mathbb{N}$ is the ‘frequency’ parameter; $\sigma$ with $1/\sigma \in \mathbb{N}$ is a small parameter such that $\lambda \sigma \in \mathbb{N}$ parameterizes the spacing between frequencies; $r \in \mathbb{N}$ denotes the number of frequencies along edges of a cube; $\mu$ measures the amount of temporal oscillation.
Later $\sigma, r, \mu$ will be chosen to be suitable powers of $\lambda_{q+1}$. We also fix a constant $p > 1$ which will be chosen later to be close to $1$. The constants implicitly in the notation ‘$\lesssim$’ may depend on $p$ but are independent of the parameters $\sigma, r, \lambda$.
Intermittent Beltrami flows
---------------------------
We use intermittent Beltrami flows introduced in [@BV17] as the building blocks. Recall some basic facts of Beltrami waves.
\[Prop:Beltriami-Waves\]([@BV17 Proposition 3.1]) Given $\overline{\xi} \in \mathbb{S}^2 \cap \mathbb{Q}^3$, let $A_{\overline{\xi}} \in \mathbb{S}^2 \cap \mathbb{Q}^3$ be such that $$\begin{aligned}
A_{\overline{\xi}} \cdot \overline{\xi} = 0, \quad |A_{\overline{\xi}}| = 1, \quad A_{-\overline{\xi}} = A_{\overline{\xi}}.
\end{aligned}$$ Let $\Lambda$ be a given finite subset of $\mathbb{S}^2$ such that $- \Lambda = \Lambda$, and $\lambda \in \mathbb{Z}$ be such that $\lambda \Lambda \subset \mathbb{Z}^3$. Then for any choice of coefficients $a_{\overline{\xi}} \in \mathbb{C}$ with $a_{\overline{\xi}}^* = a_{-\overline{\xi}}$ the vector field $$\begin{aligned}
W(x) = \sum_{\overline{\xi} \in \Lambda} a_{\overline{\xi}} B_{\overline{\xi}} e^{i \lambda \overline{\xi} \cdot x}, \quad \text{ with } B_{\overline{\xi}} = \frac{1}{\sqrt{2}}(A_{\overline{\xi}} + i \overline{\xi} \times A_{\overline{\xi}}),
\end{aligned}$$ is real-valued, divergence-free and satisfies $$\begin{aligned}
\nabla \times W = \lambda W, \quad
\nabla \cdot (W \otimes W) = \nabla \frac{|W|^2}{2}.
\end{aligned}$$ Furthermore, $$\begin{aligned}
\langle W \otimes W \rangle := \fint_{ \mathbb{T}^3} W \otimes W dx= \sum_{\overline{\xi} \in \Lambda} \frac{1}{2}|a_{(\overline{\xi})}|^2(\mathrm{Id} - \overline{\xi} \otimes \overline{\xi}).
\end{aligned}$$
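Both defining identities reduce to pointwise algebra for the complex amplitudes $B_{\overline{\xi}}$: the Beltrami property becomes $i\,\overline{\xi} \times B_{\overline{\xi}} = B_{\overline{\xi}}$, and the average $\langle W \otimes W \rangle$ collects only the resonant pairs $\overline{\xi}' = -\overline{\xi}$. A small NumPy check for one sample rational direction (illustrative only; the choice $\overline{\xi} = \frac{1}{5}(3e_1 + 4e_2)$, $A_{\overline{\xi}} = e_3$ is one admissible choice, not fixed by the proposition):

```python
import numpy as np

xi = np.array([3.0, 4.0, 0.0]) / 5.0           # a rational unit direction
A = np.array([0.0, 0.0, 1.0])                  # admissible A_xi: |A| = 1, A . xi = 0
B = (A + 1j * np.cross(xi, A)) / np.sqrt(2)    # B_xi as in the proposition

# Beltrami property: curl of B_xi e^{i lam xi.x} is i lam (xi x B_xi) e^{...},
# so curl W = lam W reduces to the algebraic identity i (xi x B_xi) = B_xi.
assert np.allclose(1j * np.cross(xi, B), B)

# <W tensor W> for the conjugate pair {xi, -xi} with a_xi = a_{-xi} = 1:
# only resonant products survive the spatial average, giving
# B_xi tensor B_{-xi} + B_{-xi} tensor B_xi = Id - xi tensor xi.
Bm = np.conj(B)                                # B_{-xi} = B_xi^* since A_{-xi} = A_xi
avg = np.outer(B, Bm) + np.outer(Bm, B)
assert np.allclose(avg, np.eye(3) - np.outer(xi, xi))
```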
Let $\Lambda, \Lambda^+, \Lambda^- \subset \mathbb{S}^2 \cap \mathbb{Q}^3$ be defined by $$\begin{aligned}
\Lambda^+ = \{ \frac{1}{5}(3e_1 \pm 4e_2), \frac{1}{5}(3e_2 \pm 4e_3), \frac{1}{5}(3e_3 \pm 4e_1) \},\\
\quad \Lambda^- = -\Lambda^+, \quad
\Lambda = \Lambda^+ \cup \Lambda^-.\end{aligned}$$ Clearly we have $$\begin{aligned}
5\Lambda \subset \mathbb{Z}^3, \quad \text{ and } \quad \min_{\overline{\xi}', \overline{\xi} \in \Lambda, \overline{\xi}'+ \overline{\xi}\neq 0} |\overline{\xi}'+ \overline{\xi}| \geq \frac{1}{5}. \label{est-xi+xi'}\end{aligned}$$ Also it is direct to check that $$\begin{aligned}
\frac{1}{8}\sum_{\overline{\xi} \in \Lambda}(\mathrm{Id} - \overline{\xi} \otimes \overline{\xi}) = \mathrm{Id}.\end{aligned}$$ In fact, representations of this form exist for symmetric matrices close to the identity. We have the following simple variant of [@BV17 Proposition 3.2].
\[Prop:convex-representation\] Let $B_{\varepsilon}(\mathrm{Id})$ denote the ball of symmetric matrices, centered at the identity, of radius $\varepsilon$. Then there exist a constant $\varepsilon_{\gamma} > 0$ and smooth positive functions $\gamma_{(\overline{\xi})} \in C^{\infty}(B_{\varepsilon_{\gamma}}(\mathrm{Id}))$, such that
1. $\gamma_{(\overline{\xi})} = \gamma_{(-\overline{\xi})}$;
2. for each $R \in B_{\varepsilon_{\gamma}}(\mathrm{Id})$ we have the identity $$\begin{aligned}
R = \frac{1}{2}\sum_{\overline{\xi} \in \Lambda} \left(\gamma_{(\overline{\xi})}(R)\right)^2(\mathrm{Id} - \overline{\xi} \otimes \overline{\xi}).
\end{aligned}$$
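The case $R = \mathrm{Id}$ of this representation is exactly the identity $\frac{1}{8}\sum_{\overline{\xi} \in \Lambda}(\mathrm{Id} - \overline{\xi} \otimes \overline{\xi}) = \mathrm{Id}$ noted above; a short NumPy check of that identity and of the rationality properties of $\Lambda$ (illustrative only):

```python
import numpy as np

e = np.eye(3)
# Lambda^+ = { (3 e_i +/- 4 e_{i+1}) / 5 : i = 1, 2, 3 (indices cyclic) }
plus = [(3 * e[i] + s * 4 * e[(i + 1) % 3]) / 5.0
        for i in range(3) for s in (+1, -1)]
Lam = plus + [-xi for xi in plus]                 # Lambda = Lambda^+ U Lambda^-

for xi in Lam:
    assert np.isclose(xi @ xi, 1.0)               # unit vectors in S^2 cap Q^3
    assert np.allclose(5 * xi, np.round(5 * xi))  # 5*Lambda is contained in Z^3

# The case R = Id of the representation: (1/8) sum (Id - xi tensor xi) = Id.
S = sum(np.eye(3) - np.outer(xi, xi) for xi in Lam) / 8.0
assert np.allclose(S, np.eye(3))
```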
Define the Dirichlet kernel $$\begin{aligned}
D_r(x) &= \frac{1}{(2r+1)^{3/2}} \sum_{\xi \in \Omega_r} e^{i \xi \cdot x}, \quad
\Omega_r = \{(j,k,l): j,k,l \in \{-r,\cdots,r\} \}.\end{aligned}$$ It has the property that, for $1 < p \leq \infty$, $$\begin{aligned}
\|D_r\|_{L^p} \lesssim r^{3/2 - 3/p}, \quad \|D_r\|_{L^2} = 1,\end{aligned}$$ where the second identity uses the normalization $|\mathbb{T}^3| = 1$ and the orthonormality of the characters $e^{i \xi \cdot x}$.
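Since $D_r$ factorizes into one-dimensional Dirichlet kernels, its $L^p$ norms are easy to evaluate numerically. The sketch below (illustrative only) confirms that, with the normalized measure from $|\mathbb{T}^3| = 1$, orthonormality of the characters gives $\|D_r\|_{L^2} = 1$, and checks the growth rate $r^{3/2 - 3/p}$ at $p = 4$:

```python
import numpy as np

def dirichlet_Lp(r, p, n=4096):
    """L^p norm of D_r on T^3 with normalized measure, using the factorization
    D_r = (2r+1)^{-3/2} F_r(x1) F_r(x2) F_r(x3), where F_r(t) = sum_{|j|<=r} e^{ijt}
    is the 1D Dirichlet kernel."""
    t = 2 * np.pi * np.arange(n) / n
    F = np.real(sum(np.exp(1j * j * t) for j in range(-r, r + 1)))
    one_d = np.mean(np.abs(F) ** p) ** (1.0 / p)   # norm w.r.t. dt/(2*pi)
    return (2 * r + 1) ** (-1.5) * one_d**3

# Orthonormality of the characters gives ||D_r||_{L^2} = 1 exactly.
for r in (2, 5, 10):
    assert abs(dirichlet_Lp(r, 2) - 1.0) < 1e-8

# For p = 4 the norm grows like r^{3/2 - 3/4}: the ratio is roughly constant.
ratios = [dirichlet_Lp(r, 4) / r ** 0.75 for r in (8, 16, 32)]
assert max(ratios) / min(ratios) < 1.5
```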
Following [@BV17], for $\overline{\xi} \in \Lambda^+$, define a directed and rescaled Dirichlet kernel by $$\begin{aligned}
\eta_{(\overline{\xi})}(t,x) = \eta_{\overline{\xi}, \lambda, \sigma, r, \mu}(t,x) = D_r(\lambda \sigma (\overline{\xi} \cdot x + \mu t, A_{\overline{\xi}} \cdot x, (\overline{\xi} \times A_{\overline{\xi}}) \cdot x)),\end{aligned}$$ and for $\overline{\xi} \in \Lambda^-$, define $$\begin{aligned}
\eta_{(\overline{\xi})}(t,x) = \eta_{(-\overline{\xi})}(t,x).\end{aligned}$$ Note the important identity $$\begin{aligned}
\frac{1}{\mu} {\partial}_t \eta_{(\overline{\xi})}(t,x) = \pm (\overline{\xi} \cdot \nabla) \eta_{(\overline{\xi})}(t,x), \quad \overline{\xi} \in \Lambda^{\pm}. \label{eq:Dt-eta}\end{aligned}$$ Since the map $x \mapsto \lambda \sigma (\overline{\xi} \cdot x + \mu t, A_{\overline{\xi}} \cdot x, (\overline{\xi} \times A_{\overline{\xi}}) \cdot x)$ is the composition of a rotation by a rational orthogonal matrix mapping $\{e_1, e_2, e_3\}$ to $\{ \overline{\xi}, A_{\overline{\xi}}, \overline{\xi} \times A_{\overline{\xi}} \}$, a translation, and a rescaling by integers, for $1 < p \leq \infty$, we have $$\begin{aligned}
\fint_{ \mathbb{T}^3} \eta_{(\overline{\xi})}^2(t,x) dx = 1, \quad \|\eta_{(\overline{\xi})}\|_{L^{\infty}_t L^p_x(\mathbb{T}^3)} \lesssim r^{3/2 - 3/p}.\end{aligned}$$
Let $W_{(\overline{\xi})}$ be the Beltrami plane wave at frequency $\lambda$, $$\begin{aligned}
W_{(\overline{\xi})} = W_{\overline{\xi}, \lambda}(x) = B_{\overline{\xi}} e^{i \lambda \overline{\xi} \cdot x}.\end{aligned}$$ Define the intermittent Beltrami wave $\mathbb{W}_{(\overline{\xi})}$ as $$\begin{aligned}
\mathbb{W}_{(\overline{\xi})}(t,x) := \mathbb{W}_{\overline{\xi},\lambda,\sigma,r,\mu}(t,x) = \eta_{(\overline{\xi})}(t,x)W_{(\overline{\xi})}(x).\end{aligned}$$ It follows from the definitions and \[est-xi+xi'\] that $$\begin{aligned}
\mathbb{P}_{[\frac{\lambda}{2}, 2 \lambda)} \mathbb{W}_{(\overline{\xi})} &= \mathbb{W}_{(\overline{\xi})}, \label{est-supp-frequency-W}\\
\mathbb{P}_{[\frac{\lambda}{5}, 4 \lambda)} (\mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi}')}) &= \mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi}')}, \quad \overline{\xi}' \neq -\overline{\xi} \label{est-supp-frequency-W2}.\end{aligned}$$ The following properties are immediate from the definitions.
([@BV17 Proposition 3.4]) \[Prop:Property-W\] Let $a_{\overline{\xi}} \in \mathbb{C}$ be constants with $a_{\overline{\xi}}^* = a_{-\overline{\xi}}$. Let $$\begin{aligned}
W(x) = \sum_{\overline{\xi} \in \Lambda} a_{\overline{\xi}} \mathbb{W}_{(\overline{\xi})}(x).
\end{aligned}$$ Then $W(x)$ is real valued. Moreover, for each $R \in B_{\varepsilon_{\gamma}}(\mathrm{Id})$ we have $$\begin{aligned}
\sum_{\overline{\xi} \in \Lambda} \left(\gamma_{(\overline{\xi})}(R)\right)^2 \fint_{ \mathbb{T}^3} \mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(-\overline{\xi})} = \sum_{\overline{\xi} \in \Lambda} \left(\gamma_{(\overline{\xi})}(R)\right)^2 B_{\overline{\xi}} \otimes B_{-\overline{\xi}} = R.
\end{aligned}$$
([@BV17 Proposition 3.5]) For any $1 < p \leq \infty, N \geq 0, K \geq 0$: $$\begin{aligned}
\| \nabla^N {\partial}_t^K \mathbb{W}_{(\overline{\xi})} \|_{L^{\infty}_t L^p_x} &\lesssim \lambda^N (\lambda \sigma r \mu)^K r^{3/2 - 3/p}, \label{est-W-xi-L^p}
\\
\| \nabla^N {\partial}_t^K \eta_{(\overline{\xi})} \|_{L^{\infty}_t L^p_x} &\lesssim (\lambda \sigma r)^N (\lambda \sigma r \mu)^K r^{3/2 - 3/p}. \label{est-eta-xi-L^p}
\end{aligned}$$
Perturbations
-------------
Let $\psi(t)$ be a smooth cut-off function such that $$\begin{aligned}
\psi(t) = 1 \text{ on } \operatorname{supp}_t R_q,\quad \operatorname{supp}\psi(t) \subset N_{\delta_{q+1}}(\operatorname{supp}_t R_q), \quad |\psi'(t)| \leq 2 \delta_{q+1}^{-1}. \label{eta_0}\end{aligned}$$ Take a smooth increasing function $\chi$ such that $$\begin{aligned}
\chi(s) = \begin{cases}
1, & 0 \leq s < 1\\
s, & s \geq 2
\end{cases},\end{aligned}$$ and set $$\begin{aligned}
\rho(t,x) = \varepsilon_{\gamma}^{-1}\delta_{q+1} \chi(\delta_{q+1}^{-1}|R_q(t,x)|) \psi^2(t),\end{aligned}$$ where $\varepsilon_{\gamma}$ is the constant in Proposition \[Prop:convex-representation\]. Then clearly $$\begin{aligned}
\operatorname{supp}_t \rho &\subset N_{\delta_{q+1}}(\operatorname{supp}_t R_q). \label{est-supp-rho}\end{aligned}$$ It follows from the above definition that $$\begin{aligned}
&|R_q|/\rho = \varepsilon_{\gamma}\frac{|R_q|}{\delta_{q+1} \chi(\delta_{q+1}^{-1}|R_q(t,x)|) \psi^2} \leq \varepsilon_{\gamma} \implies
\mathrm{Id} - R_q/\rho \in B_{\varepsilon_{\gamma}}(\mathrm{Id}) \text{ on } \operatorname{supp}R_q.\end{aligned}$$ Therefore, the amplitude functions $$\begin{aligned}
a_{(\overline{\xi})}(t,x) := \rho^{1/2}(t,x) \gamma_{(\overline{\xi})}(\mathrm{Id} - \rho(t,x)^{-1}R_q(t,x) )\end{aligned}$$ are well-defined and smooth. Define the velocity perturbation to be $w = w_{q+1}$: $$\begin{aligned}
w &= w^{(p)} + w^{(c)} + w^{(t)}, \\
w^{(p)} &= \sum_{\overline{\xi} \in \Lambda} a_{(\overline{\xi})} \mathbb{W}_{(\overline{\xi})} = \sum_{\overline{\xi} \in \Lambda} a_{(\overline{\xi})}(t,x) \eta_{(\overline{\xi})}(t,x) B_{\overline{\xi}} e^{i \lambda \overline{\xi} \cdot x},\\
w^{(c)} &= \frac{1}{\lambda_{q+1}} \sum_{\overline{\xi} \in \Lambda} \nabla \left( a_{(\overline{\xi})}\eta_{(\overline{\xi})} \right) \times W_{(\overline{\xi})},\\
w^{(t)} &= \frac{1}{\mu}\sum_{\overline{\xi} \in \Lambda^+} \mathbb{P}_{LH} \mathbb{P}_{\neq 0}\left( a_{(\overline{\xi})}^2 \eta_{(\overline{\xi})}^2 \overline{\xi} \right),\end{aligned}$$ where $\mathbb{P}_{LH} = \mathrm{Id} - \nabla \Delta^{-1} \operatorname{div}$ is the Leray-Helmholtz projection onto divergence-free vector fields, and $\mathbb{P}_{\neq 0} f = f - \fint_{ \mathbb{T}^3} f dx$. It is well-known that $\mathbb{P}_{LH}$ is bounded on $L^p, 1< p < \infty$ (see, e.g., [@Grafakos]). It follows from Proposition \[Prop:Property-W\] that $$\begin{aligned}
\sum_{\overline{\xi} \in \Lambda} a_{(\overline{\xi})}^2 \fint_{ \mathbb{T}^3} \mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(-\overline{\xi})} dx
&= \rho \mathrm{Id} - R_q. \label{eq:rho-stress}\end{aligned}$$
Estimates for perturbations
---------------------------
The following bounds hold: $$\begin{aligned}
\|\rho\|_{L^{\infty}_t L^1_x} &\leq C \delta_{q+1}, \label{est-rho-L1}\\
\|\rho^{-1}\|_{C^0(\operatorname{supp}R_q)} & \lesssim \delta_{q+1}^{-1}, \label{est-rho-inverse}\\
\|\rho\|_{C^N_{t,x}} & \leq C(\delta_{q+1}, \|R_q\|_{C^N}), \label{est-rho-C^N}\\
\|a_{(\overline{\xi})}\|_{L^{\infty}_t L^2_x} &\lesssim \|\rho\|_{L^{\infty}_t L^1_x}^{1/2} \lesssim \delta_{q+1}^{1/2}, \label{est-a-L2}\\
\|a_{(\overline{\xi})}\|_{C^N_{t,x}} & \leq C(\delta_{q+1}, \|R_q\|_{C^N}). \label{est-a-C^N}
\end{aligned}$$
It follows from the definition of $\rho$ that $$\begin{aligned}
\|\rho(t,\cdot)\|_{L^1_x} &=\int_{|R_q| \leq \delta_{q+1}} \rho + \int_{|R_q| > \delta_{q+1}} \rho \lesssim \delta_{q+1} + \int_{|R_q| > \delta_{q+1}} |R_q|\\
& \leq C \delta_{q+1}.
\end{aligned}$$ It is direct to verify \[est-rho-inverse\] and \[est-rho-C^N\], while \[est-a-L2\] and \[est-a-C^N\] follow from \[est-rho-L1\] and \[est-rho-C^N\].
Now we can estimate the time support of $w_{q+1}$: $$\begin{aligned}
\operatorname{supp}_t w_{q+1} \subset \operatorname{supp}_t \rho \subset \operatorname{supp}\psi \subset N_{\delta_{q+1}}(\operatorname{supp}_t R_q). \label{est-supp-w}\end{aligned}$$
We need the following Lemma, which is a variant of [@BV17 Lemma 3.6].
([@MS17 Lemma 2.1])\[Lemma:improved-Holder\] Let $f,g \in C^{\infty}(\mathbb{T}^3)$, where $g$ is $(\mathbb{T}/N)^3$-periodic, $N \in \mathbb{N}$. Then for $1 \leq p \leq \infty$, $$\begin{aligned}
\|f g\|_{L^p} \leq \|f\|_{L^p} \|g\|_{L^p} + C_p N^{-1/p} \|f\|_{C^1} \|g\|_{L^p}.
\end{aligned}$$
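A one-dimensional numerical illustration of this decoupling (not part of the proof): for a smooth $f$ and a $(2\pi/N)$-periodic $g$, the lemma bounds the gap between $\|fg\|_{L^2}$ and $\|f\|_{L^2}\|g\|_{L^2}$ by $C N^{-1/2}\|f\|_{C^1}\|g\|_{L^2}$; for the smooth $f$ chosen here the actual gap is far smaller still.

```python
import numpy as np

n = 2**14
x = 2 * np.pi * np.arange(n) / n
f = 2.0 + np.cos(x)                    # a fixed smooth function

def L2(h):
    # L^2 norm with the normalized measure dx/(2*pi)
    return np.sqrt(np.mean(h**2))

for N in (4, 16, 64, 256):
    g = np.sqrt(2) * np.sin(N * x)     # (2*pi/N)-periodic, ||g||_{L^2} = 1
    gap = abs(L2(f * g) - L2(f) * L2(g))
    # the lemma guarantees gap <= C N^{-1/2} ||f||_{C^1} ||g||_{L^2}
    assert gap <= 2.0 / np.sqrt(N)
```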
Let us denote $$\begin{aligned}
\mathcal{C}_N =C(\sup_{\overline{\xi} \in \Lambda}\|a_{(\overline{\xi})}\|_{C^N_{t,x}})\end{aligned}$$ to denote a constant depending polynomially on $\sup_{\overline{\xi} \in \Lambda}\|a_{(\overline{\xi})}\|_{C^N_{t,x}}$.
\[Lemma:est-w\] Suppose the parameters satisfy \[ineq:parameters\] and $$\begin{aligned}
r^{3/2} \leq \mu .\label{ass} \end{aligned}$$ Then the following estimates for the perturbations hold: $$\begin{aligned}
\|w_{q+1}^{(p)}\|_{L^{\infty}_t L^2_x} & \lesssim \delta_{q+1}^{1/2} + (\lambda_{q+1} \sigma)^{-1/2} \mathcal{C}_1,\\
\|w_{q+1}\|_{L^{\infty}_t L^p_x} & \lesssim r^{3/2-3/p}\mathcal{C}_1, \label{est-w-Lp}\\
\|w_{q+1}^{(c)}\|_{L^{\infty}_t L^p_x} + \|w_{q+1}^{(t)}\|_{L^{\infty}_t L^p_x} & \lesssim (\sigma r + \mu^{-1}r^{3/2})r^{3/2-3/p}\mathcal{C}_1, \\
\|{\partial}_t w_{q+1}^{(p)}\|_{L^{\infty}_t L^p_x} + \|{\partial}_t w_{q+1}^{(c)}\|_{L^{\infty}_t L^p_x} & \lesssim \lambda_{q+1} \sigma \mu r^{5/2 - 3/p}\mathcal{C}_2, \label{est-w1p}\\
\| |\nabla|^N w_{q+1}\|_{L^{\infty}_t L^p_x} & \lesssim r^{3/2-3/p} \lambda_{q+1}^N \mathcal{C}_{N+1}, \label{est-nabla-s-w}
\end{aligned}$$ for $1 < p < \infty, N \geq 1$.
Since $\mathbb{W}_{(\overline{\xi})}$ is $(\mathbb{T}/\lambda \sigma)^3$ periodic, it follows from \[est-a-L2\], \[est-W-xi-L^p\], and Lemma \[Lemma:improved-Holder\] that $$\begin{aligned}
\|w_{q+1}^{(p)}\|_{L^{\infty}_t L^2_x} & \lesssim \sum_{\overline{\xi} \in \Lambda} (\|a_{(\overline{\xi})}\|_{L^{\infty}_t L^2_x} + (\lambda_{q+1} \sigma)^{-1/2} \|a_{(\overline{\xi})}\|_{C^1}) \|\mathbb{W}_{(\overline{\xi})}\|_{L^{\infty}_t L^2_x}\\
& \lesssim \delta_{q+1}^{1/2} + (\lambda_{q+1} \sigma)^{-1/2}\mathcal{C}_1.
\end{aligned}$$ In view of the definition of $w_{q+1}$, estimates \[est-W-xi-L^p\] and \[est-eta-xi-L^p\] yield that $$\begin{aligned}
\|w_{q+1}^{(p)}\|_{L^{\infty}_t L^p_x} & \lesssim \sum_{\overline{\xi} \in \Lambda} \|a_{(\overline{\xi})}\|_{C^0}\|\mathbb{W}_{(\overline{\xi})}\|_{L^{\infty}_t L^p_x}\lesssim r^{3/2-3/p}\mathcal{C}_0,\\
\|w_{q+1}^{(c)}\|_{L^{\infty}_t L^p_x} & \lesssim \lambda_{q+1}^{-1} \sum_{\overline{\xi} \in \Lambda} \left(\| \eta_{(\overline{\xi})} \|_{L^{\infty}_t L^p_x} + \| \nabla \eta_{(\overline{\xi})} \|_{L^{\infty}_t L^p_x}\right)\|a_{(\overline{\xi})}\|_{C^1}\|\mathbb{W}_{(\overline{\xi})}\|_{L^{\infty}_t L^p_x}\\
& \lesssim (\sigma r) r^{3/2-3/p}\mathcal{C}_1,\\
\|w_{q+1}^{(t)}\|_{L^{\infty}_t L^p_x} & \lesssim \mu^{-1}\sum_{\overline{\xi} \in \Lambda^+} \|a_{(\overline{\xi})}^2 \eta_{(\overline{\xi})}^2 \overline{\xi} \|_{L^{\infty}_t L^p_x} \lesssim \mu^{-1} \sum_{\overline{\xi} \in \Lambda^+}\|a_{(\overline{\xi})}^2\|_{C^0} \| \eta_{(\overline{\xi})} \|_{L^{\infty}_t L^{2p}_x}^2 \\
&\lesssim \mu^{-1}r^{3-3/p}\mathcal{C}_0,
\end{aligned}$$ where the boundedness of $\mathbb{P}_{LH}$ and $\mathbb{P}_{\neq 0}$ on $L^p$, for $1 < p < \infty$, is used in the first inequality of the estimate for $\|w_{q+1}^{(t)}\|_{L^{\infty}_t L^p_x}$. In the same way, we can estimate $$\begin{aligned}
\|{\partial}_t w_{q+1}^{(p)}\|_{L^{\infty}_t L^p_x} & \lesssim \sum_{\overline{\xi} \in \Lambda} \|{\partial}_t a_{(\overline{\xi})}\|_{C^0} \| \mathbb{W}_{(\overline{\xi})}\|_{L^{\infty}_t L^p_x} + \|a_{(\overline{\xi})}\|_{C^0} \|{\partial}_t \mathbb{W}_{(\overline{\xi})}\|_{L^{\infty}_t L^p_x}\\
& \lesssim \lambda_{q+1} \sigma \mu r^{5/2 - 3/p} \mathcal{C}_1,\\
\|{\partial}_t w_{q+1}^{(c)}\|_{L^{\infty}_t L^p_x} & \lesssim \lambda_{q+1}^{-1} \sum_{\overline{\xi} \in \Lambda} \| a_{(\overline{\xi})}\|_{C^2_{t,x}} \Big( \| \eta_{(\overline{\xi})} \|_{L^{\infty}_t L^p_x} + \| \nabla \eta_{(\overline{\xi})} \|_{L^{\infty}_t L^p_x} + \|{\partial}_t \eta_{(\overline{\xi})} \|_{L^{\infty}_t L^p_x} \\
& \qquad + \| {\partial}_t \nabla \eta_{(\overline{\xi})}\|_{L^{\infty}_t L^p_x}\Big) \lesssim \sigma r\lambda_{q+1} \sigma \mu r^{5/2 - 3/p} \mathcal{C}_2 \lesssim \lambda_{q+1} \sigma \mu r^{5/2 - 3/p}\mathcal{C}_2.
\end{aligned}$$ For $N \geq 1$, using \[est-W-xi-L^p\] and \[est-eta-xi-L^p\], we obtain that $$\begin{aligned}
\|\nabla^N w_{q+1}^{(p)}\|_{L^{\infty}_t L^p_x} & \lesssim \sum_{\overline{\xi} \in \Lambda} \sum_{k=0}^N \|\nabla^k a_{(\overline{\xi})}\|_{C^0} \| \nabla^{N-k} \mathbb{W}_{(\overline{\xi})}\|_{L^{\infty}_t L^p_x} \\
& \lesssim \lambda_{q+1}^N r^{3/2 - 3/p}\mathcal{C}_N,\\
\|\nabla^N w_{q+1}^{(c)}\|_{L^{\infty}_t L^p_x} & \lesssim \lambda_{q+1}^{-1} \sum_{\overline{\xi} \in \Lambda} \sum_{m=0}^{N}\sum_{k=0}^m \lambda_{q+1}^{N-m} \|\nabla^{k+1} a_{(\overline{\xi})}\|_{C^0} \| \nabla^{m-k} \eta_{(\overline{\xi})}\|_{L^{\infty}_t L^p_x} \\
& \quad + \lambda_{q+1}^{-1} \sum_{\overline{\xi} \in \Lambda} \sum_{m=0}^{N}\sum_{k=0}^m \lambda_{q+1}^{N-m} \|\nabla^{k} a_{(\overline{\xi})}\|_{C^0} \| \nabla^{m-k+1} \eta_{(\overline{\xi})}\|_{L^{\infty}_t L^p_x}\\
& \lesssim \lambda_{q+1}^N r^{3/2 - 3/p} \mathcal{C}_{N+1},\\
\|\nabla^N w_{q+1}^{(t)}\|_{L^{\infty}_t L^p_x} & \lesssim \mu^{-1}\sum_{\overline{\xi} \in \Lambda} \sum_{m=0}^{N} \| \nabla^{N-m}(a_{(\overline{\xi})}^2)\|_{C^0} \sum_{k=0}^m \|\nabla^{k}\eta_{(\overline{\xi})}\|_{L^{\infty}_t L^{2p}_x} \|\nabla^{m-k}\eta_{(\overline{\xi})}\|_{L^{\infty}_t L^{2p}_x}\\
&\lesssim \lambda_{q+1}^N r^{3/2 - 3/p}\frac{(\sigma r)^N r^{3/2}}{\mu} \mathcal{C}_N \lesssim \lambda_{q+1}^N r^{3/2 - 3/p}\mathcal{C}_N,
\end{aligned}$$ where we use \[ineq:parameters\] and \[ass\].
Estimates for the stress
------------------------
Let us recall the following operator in [@dLSz4].
\[Lemma:symm-anti-div\] There exists a linear operator $\mathcal{R}$, of order $-1$, mapping vector fields to symmetric matrices such that $$\begin{aligned}
\nabla \cdot \mathcal{R}(u) = u - \fint_{ \mathbb{T}^3} u, \label{eq:mcR}
\end{aligned}$$ with standard Calderon-Zygmund estimates, for $1 < p < \infty$, $$\| \mathcal{R} \|_{L^p \to W^{1,p}} \lesssim 1, \quad \| \mathcal{R} \|_{C^0 \to C^0} \lesssim 1,\quad
\| \mathcal{R} \mathbb{P}_{\neq 0} u \|_{L^p} \lesssim \| |\nabla|^{-1}\mathbb{P}_{\neq 0} u\|_{L^p}. \label{est-mcR}$$
Suppose $u \in C^{\infty}(\mathbb{T}^3, \mathbb{R}^3)$ is a smooth vector field. Define $$\begin{aligned}
\mathcal{R}(u) = \frac{1}{4}\left(\nabla \mathbb{P}_{LH} v + (\nabla \mathbb{P}_{LH} v)^T\right) + \frac{3}{4}\left(\nabla v + (\nabla v)^T\right) - \frac{1}{2}(\nabla \cdot v) \mathrm{Id}
\end{aligned}$$ where $v \in C^{\infty}(\mathbb{T}^3, \mathbb{R}^3)$ is the unique solution to $\Delta v = u - \fint_{ \mathbb{T}^3} u$ with $\fint_{ \mathbb{T}^3} v = 0$.
It is direct to verify that $\mathcal{R}(u)$ is a symmetric matrix field depending linearly on $u$ and satisfying \[eq:mcR\]. Note that $\mathcal{R}$ is a constant-coefficient elliptic operator of order $-1$. We refer to [@Grafakos] for the Calderón-Zygmund estimates $\| \mathcal{R} \|_{L^p \to W^{1,p}} \lesssim 1$ and $\| \mathcal{R} \mathbb{P}_{\neq 0} u \|_{L^p} \lesssim \| |\nabla|^{-1}\mathbb{P}_{\neq 0} u\|_{L^p}$. Combining these with Sobolev embeddings, we have $\| \mathcal{R} u\|_{C^{\alpha}} \lesssim \|\mathcal{R} u\|_{W^{1,4}} \lesssim \| u\|_{L^4} \lesssim \| u\|_{C^0}$, with $\alpha = 1/4$.
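The explicit formula for $\mathcal{R}$ can also be checked numerically. The NumPy sketch below (illustrative only) implements $\mathcal{R}$ spectrally on a grid of $\mathbb{T}^3$, identified with $[0,2\pi)^3$, and verifies that $\mathcal{R}(u)$ is symmetric and that $\nabla \cdot \mathcal{R}(u) = u - \fint u$ for a sample smooth zero-mean field $u$:

```python
import numpy as np

n = 16
k = np.fft.fftfreq(n, 1.0 / n)                      # integer frequencies
K = np.array(np.meshgrid(k, k, k, indexing="ij"))   # shape (3, n, n, n)
K2 = (K**2).sum(axis=0)
invK2 = np.divide(1.0, K2, out=np.zeros_like(K2), where=K2 > 0)

def jac(v):
    """Jacobian J[i, j] = d_j v_i of a vector field v of shape (3, n, n, n)."""
    vh = np.fft.fftn(v, axes=(1, 2, 3))
    return np.real(np.fft.ifftn(1j * K[None, :] * vh[:, None], axes=(2, 3, 4)))

def anti_div(u):
    """Symmetric anti-divergence R(u): solve Delta v = u - mean spectrally, then
    assemble (1/4)(grad Pv + grad Pv^T) + (3/4)(grad v + grad v^T) - (1/2)(div v) Id,
    with P the Leray-Helmholtz projection."""
    uh = np.fft.fftn(u, axes=(1, 2, 3))
    uh[:, 0, 0, 0] = 0.0                            # remove the mean
    vh = -uh * invK2                                # Delta v = u - mean
    ph = vh - K * (K * vh).sum(axis=0) * invK2      # Leray projection of v
    Jv = jac(np.real(np.fft.ifftn(vh, axes=(1, 2, 3))))
    Jp = jac(np.real(np.fft.ifftn(ph, axes=(1, 2, 3))))
    divv = np.trace(Jv)                             # trace over the two matrix axes
    I3 = np.eye(3)[:, :, None, None, None]
    return (0.25 * (Jp + Jp.transpose(1, 0, 2, 3, 4))
            + 0.75 * (Jv + Jv.transpose(1, 0, 2, 3, 4))
            - 0.5 * divv * I3)

def div_mat(R):
    """Row-wise divergence (div R)_i = sum_j d_j R[i, j], computed spectrally."""
    Rh = np.fft.fftn(R, axes=(2, 3, 4))
    return np.real(np.fft.ifftn((1j * K[None] * Rh).sum(axis=1), axes=(1, 2, 3)))

# A smooth zero-mean test field on [0, 2*pi)^3.
x = 2 * np.pi * np.arange(n) / n
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.stack([np.sin(X) * np.cos(2 * Y), np.cos(Y + Z), np.sin(2 * Z) * np.cos(X)])

R = anti_div(u)
assert np.allclose(R, R.transpose(1, 0, 2, 3, 4))   # R(u) is symmetric
assert np.allclose(div_mat(R), u, atol=1e-10)       # div R(u) = u - mean = u
```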
We have the following variant of [@BV17 Lemma B.1].
\[Lemma:commutator-est\] Let $a\in C^2(\mathbb{T}^3)$. For $1 < p < \infty$, and any smooth function $f \in L^p(\mathbb{T}^3)$, we have $$\begin{aligned}
\| |\nabla|^{-1}\mathbb{P}_{\neq 0}(a \mathbb{P}_{\geq k} f)\|_{L^p(\mathbb{T}^3)} \lesssim k^{-1} \|\nabla^2 a\|_{L^{\infty}(\mathbb{T}^3)} \|f\|_{L^p(\mathbb{T}^3)}. \label{est-commutator-1}
\end{aligned}$$
We follow the proof in [@BV17]. Note that $$\begin{aligned}
|\nabla|^{-1}\mathbb{P}_{\neq 0}(a \mathbb{P}_{\geq k} f) = |\nabla|^{-1}\mathbb{P}_{\geq k/2}(\mathbb{P}_{\leq k/2} a \mathbb{P}_{\geq k} f) + |\nabla|^{-1}\mathbb{P}_{\neq 0}(\mathbb{P}_{\geq k/2} a \mathbb{P}_{\geq k} f).
\end{aligned}$$ As direct consequences of the Littlewood-Paley decomposition and Schauder estimates we have the bounds for $1 < p < \infty$ (see, for example, [@Grafakos]) $$\begin{aligned}
\|\mathbb{P}_{\leq k/2}\|_{L^p \to L^p} \lesssim 1, \quad \| |\nabla|^{-1}\mathbb{P}_{\geq k/2} \|_{L^p \to L^p} \lesssim k^{-1}, \quad \| |\nabla|^{-1}\mathbb{P}_{\neq 0} \|_{L^p \to L^p} \lesssim 1.
\end{aligned}$$ Combining these bounds with Hölder’s inequality and the embedding $W^{1,4}(\mathbb{T}^3) \subset L^{\infty}(\mathbb{T}^3)$, we obtain $$\begin{aligned}
& \quad \| |\nabla|^{-1}\mathbb{P}_{\neq 0}(a \mathbb{P}_{\geq k} f)\|_{L^p} \lesssim k^{-1} \| \mathbb{P}_{\leq k/2} a \mathbb{P}_{\geq k} f \|_{L^p} + \| \mathbb{P}_{\geq k/2} a \mathbb{P}_{\geq k} f \|_{L^p} \\
& \lesssim k^{-1} (\|\mathbb{P}_{\leq k/2} a\|_{L^{\infty}} + k\| \mathbb{P}_{\geq k/2} a\|_{L^{\infty}}) \|f\|_{L^p} \lesssim k^{-1} (\|\nabla \mathbb{P}_{\leq k/2} a\|_{L^4} + k\|\nabla \mathbb{P}_{\geq k/2} a\|_{L^4}) \|f\|_{L^p}\\
& \lesssim k^{-1} (\|\mathbb{P}_{\leq k/2} \nabla a\|_{L^4} + k\||\nabla|^{-1} \mathbb{P}_{\geq k/2} |\nabla|\nabla \mathbb{P}_{\geq k/2} a\|_{L^4}) \|f\|_{L^p} \\
&\lesssim k^{-1} (\| \nabla a\|_{L^4} + \|\nabla^2 \mathbb{P}_{\geq k/2} a\|_{L^4}) \|f\|_{L^p} \lesssim k^{-1} \| \nabla^2 a\|_{L^4} \|f\|_{L^p}.
\end{aligned}$$
It follows from the definition of $w_{q+1}$ that $$\begin{aligned}
\int_{\mathbb{T}^3} w_{q+1} dx = \int_{\mathbb{T}^3} \frac{1}{\lambda_{q+1}} \sum_{\overline{\xi} \in \Lambda} \nabla \left( a_{(\overline{\xi})}\eta_{(\overline{\xi})} W_{(\overline{\xi})} \right) dx + \int_{\mathbb{T}^3} \frac{1}{\mu}\sum_{\overline{\xi} \in \Lambda^+}{P}_{LH} \mathbb{P}_{\neq 0}\left( a_{(\overline{\xi})}^2 \eta_{(\overline{\xi})}^2 \overline{\xi} \right)dx = 0.\end{aligned}$$ Hence $\int_{\mathbb{T}^3} \nu (-\Delta)^{\theta} w_{q+1}dx = 0$ and $\dfrac{d}{dt} \int_{\mathbb{T}^3} w_{q+1} dx = 0$. We obtain $R_{q+1}$ by plugging $v_{q+1} = v_q + w_{q+1}$ in , using and the assumption that $(v_q,R_q)$ solves : $$\begin{aligned}
\nabla \cdot R_{q+1} &= \nabla \cdot \left[\mathcal{R}( \nu(-\Delta)^{\theta} w_{q+1} + {\partial}_t w_{q+1}^{(p)} + {\partial}_t w_{q+1}^{(c)}) + v_q \otimes w_{q+1} + w_{q+1} \otimes v_q \right]\\
& \quad + \nabla \cdot \left[(w_{q+1}^{(c)} + w_{q+1}^{(t)}) \otimes w_{q+1} + w_{q+1}^{(p)} \otimes (w_{q+1}^{(c)} + w_{q+1}^{(t)})\right]\\
& \quad + \left[\nabla \cdot (w_{q+1}^{(p)} \otimes w_{q+1}^{(p)} - R_{q}) + {\partial}_t w_{q+1}^{(t)}\right] + \nabla (p_{q+1} - p_q)\\
& := \nabla \cdot (\widetilde{R}_{linear} + \widetilde{R}_{corrector} + \widetilde{R}_{oscillation}) + \nabla(p_{q+1}-p_q).\end{aligned}$$
It follows from Lemma \[Lemma:est-w\] that $$\begin{aligned}
\| \widetilde{R}_{corrector} \|_{L^{\infty}_t L^p_x} &\lesssim \left(\|w_{q+1}^{(c)}\|_{L^{\infty}_t L^{2p}_x} + \|w_{q+1}^{(t)}\|_{L^{\infty}_t L^{2p}_x} \right) \left(\|w_{q+1}\|_{L^{\infty}_t L^{2p}_x} + \|w_{q+1}^{(p)}\|_{L^{\infty}_t L^{2p}_x}\right) \\
&\lesssim (\sigma r + \mu^{-1}r^{3/2})r^{3-3/p}\mathcal{C}_1.\end{aligned}$$
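Before estimating $\widetilde{R}_{linear}$, it is worth recording the exponent bookkeeping behind the threshold $\theta < 5/4$: the fractional viscosity contributes a term of size $\lambda_{q+1}^{2\theta-1} r^{3/2-3/p}$, so with $p$ close to $1$ and $r = \lambda_{q+1}^{a}$ the power of $\lambda_{q+1}$ is $2\theta - 1 + a(3/2 - 3/p)$, which is negative in the borderline regime $a \to 1$, $p \to 1$ exactly when $\theta < 5/4$. A sketch of this computation (the parameterization $r = \lambda_{q+1}^a$ is for illustration; the actual choice of parameters is made later):

```python
from fractions import Fraction as F

def viscosity_exponent(theta, a, p):
    """Power of lambda_{q+1} carried by lambda^{2*theta-1} * r^{3/2-3/p}
    when r = lambda^a; a negative value means this error term vanishes
    as lambda_{q+1} -> infinity."""
    return 2 * theta - 1 + a * (F(3, 2) - F(3, 1) / p)

# In the limiting regime p -> 1, a -> 1 the exponent is 2*theta - 5/2:
assert viscosity_exponent(F(6, 5), F(1), F(1)) < 0    # theta = 6/5 < 5/4: controllable
assert viscosity_exponent(F(5, 4), F(1), F(1)) == 0   # theta = 5/4: borderline, fails
assert viscosity_exponent(F(4, 3), F(1), F(1)) > 0    # theta = 4/3 > 5/4: uncontrollable
```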
Noting that $\nabla \times \dfrac{w_{q+1}^{(p)}}{\lambda_{q+1}} = w_{q+1}^{(p)} + w_{q+1}^{(c)}$, Lemma \[Lemma:est-w\] and \[est-mcR\] yield that $$\begin{aligned}
&\|\widetilde{R}_{linear}\|_{L^{\infty}_t L^p_x} \nonumber\\
&\lesssim \lambda_{q+1}^{-1}\|{\partial}_t \mathcal{R} \nabla \times (w_{q+1}^{(p)})\|_{L^{\infty}_t L^p_x} + \|\mathcal{R}(\nu(-\Delta)^{\theta}w_{q+1})\|_{L^{\infty}_t L^p_x} \nonumber\\
&\quad + \|v_q \otimes w_{q+1} + w_{q+1} \otimes v_q\|_{L^{\infty}_t L^p_x} \nonumber\\
&\lesssim \lambda_{q+1}^{-1}\|{\partial}_t w_{q+1}^{(p)}\|_{L^{\infty}_t L^p_x} + \| |\nabla|^{2\theta - 1} w_{q+1}\|_{L^{\infty}_t L^p_x} + \|v_q\|_{C^0} \|w_{q+1}\|_{L^{\infty}_t L^p_x}\nonumber \\
& \lesssim
\sigma \mu r^{5/2-3/p} \mathcal{C}_2
+ r^{3/2-3/p}(\lambda_{q+1}^{2\theta-1} + \|v_q\|_{C^0})\mathcal{C}_3. \label{est-R-linear}\end{aligned}$$ This is the crucial estimate to control the fractional viscosity. If we take $p$ close to $1$ and $r$ close to $\lambda_{q+1}$, we must have $\theta < 5/4$ in order that the second term in \[est-R-linear\] be small for $\lambda_{q+1}$ sufficiently large. It remains to estimate $\widetilde{R}_{oscillation}$, which can be handled in the same way as in [@BV17]. It follows from \[eq:rho-stress\] that $$\begin{aligned}
& \nabla \cdot (w_{q+1}^{(p)} \otimes w_{q+1}^{(p)} - R_{q}) = \nabla \cdot (\sum_{\overline{\xi}, \overline{\xi}' \in \Lambda} a_{(\overline{\xi})} a_{(\overline{\xi}')} \mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi}')} - R_q)\\
&= \nabla \cdot (\sum_{\overline{\xi}, \overline{\xi}' \in \Lambda} a_{(\overline{\xi})} a_{(\overline{\xi}')} \mathbb{P}_{\geq \lambda_{q+1} \sigma/2}\mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi}')} ) + \nabla \rho\\
&:= \sum_{\overline{\xi}, \overline{\xi}' \in \Lambda} E_{(\overline{\xi}, \overline{\xi}')} + \nabla \rho.\end{aligned}$$ Since $E_{(\overline{\xi}, \overline{\xi}')}$ has zero mean, we can split it as $$\begin{aligned}
E_{(\overline{\xi}, \overline{\xi}')} + E_{(\overline{\xi}', \overline{\xi})} &= \mathbb{P}_{\neq 0} \left( \nabla (a_{(\overline{\xi})} a_{(\overline{\xi}')}) \cdot (\mathbb{P}_{\geq \lambda_{q+1} \sigma/2}(\mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi}')} + \mathbb{W}_{\overline{\xi'}} \otimes \mathbb{W}_{(\overline{\xi})}) ) \right)\\
& \quad + \mathbb{P}_{\neq 0} \left( a_{(\overline{\xi})} a_{(\overline{\xi}')} \nabla \cdot (\mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi}')} + \mathbb{W}_{\overline{\xi'}} \otimes \mathbb{W}_{(\overline{\xi})}) \right) \\
&:= E_{(\overline{\xi}, \overline{\xi}',1)} + E_{(\overline{\xi}, \overline{\xi}',2)}.\end{aligned}$$
Using , and , we obtain $$\begin{aligned}
\|\mathcal{R} E_{(\overline{\xi}, \overline{\xi}', 1)}\|_{L^{\infty}_t L^p_x} & \lesssim \| |\nabla|^{-1} E_{(\overline{\xi}, \overline{\xi}', 1)}\|_{L^{\infty}_t L^p_x} \\
& \lesssim (\lambda_{q+1} \sigma)^{-1} \|a_{(\overline{\xi})} a_{(\overline{\xi}')}\|_{C^3} \|\mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi}')}\|_{L^{\infty}_t L^p_x}\\
& \lesssim (\lambda_{q+1} \sigma)^{-1} \|a_{(\overline{\xi})} a_{(\overline{\xi}')}\|_{C^3} \|\mathbb{W}_{(\overline{\xi})}\|_{L^{\infty}_t L^{2p}_x} \| \mathbb{W}_{(\overline{\xi}')}\|_{L^{\infty}_t L^{2p}_x}\\
& \lesssim (\lambda_{q+1} \sigma)^{-1} r^{3-3/p} \mathcal{C}_3.\end{aligned}$$
Recall the vector identity $A \cdot \nabla B + B \cdot \nabla A = \nabla (A \cdot B) - A \times (\nabla \times B) - B \times (\nabla \times A)$. For $\overline{\xi}, \overline{\xi}' \in \Lambda$, using the anti-symmetry of the cross product, we can write $$\begin{aligned}
& \nabla \cdot (\mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi}')} + \mathbb{W}_{(\overline{\xi}')} \otimes \mathbb{W}_{(\overline{\xi})} )\\
&= \left( W_{(\overline{\xi})} \otimes W_{(\overline{\xi}')} + W_{(\overline{\xi}')} \otimes W_{(\overline{\xi})} \right) \nabla \left( \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \right) + \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \left( W_{(\overline{\xi})} \cdot \nabla W_{(\overline{\xi}')} + W_{(\overline{\xi}')} \cdot \nabla W_{(\overline{\xi})} \right)\\
&= \left( W_{(\overline{\xi}')} \cdot \nabla \left( \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \right) \right) W_{(\overline{\xi})} + \left( W_{(\overline{\xi})} \cdot \nabla \left( \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \right) \right) W_{\overline{\xi}' } + \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \nabla \left( W_{(\overline{\xi})} \cdot W_{(\overline{\xi}')} \right).\end{aligned}$$
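The vector identity invoked above can be sanity-checked symbolically. The following sketch (assuming SymPy is available; the two smooth fields are arbitrary illustrative choices, not the actual building blocks of the construction) verifies $A \cdot \nabla B + B \cdot \nabla A = \nabla (A \cdot B) - A \times (\nabla \times B) - B \times (\nabla \times A)$ componentwise.

```python
# Symbolic check of the identity
# A·∇B + B·∇A = ∇(A·B) − A×(∇×B) − B×(∇×A)
import sympy as sp
from sympy.vector import CoordSys3D, Vector, curl, gradient

N = CoordSys3D('N')

# Two arbitrary smooth vector fields (illustrative choices only).
A = sp.sin(N.y) * N.i + N.x * N.z * N.j + sp.exp(N.x) * N.k
B = N.y**2 * N.i + sp.cos(N.z) * N.j + N.x * N.k

def advect(U, V):
    """(U·∇)V, computed componentwise."""
    return sum((U.dot(gradient(V.dot(e))) * e for e in (N.i, N.j, N.k)),
               Vector.zero)

lhs = advect(A, B) + advect(B, A)
rhs = gradient(A.dot(B)) - A.cross(curl(B)) - B.cross(curl(A))
assert sp.simplify((lhs - rhs).to_matrix(N)) == sp.zeros(3, 1)
```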
For the term $E_{(\overline{\xi}, \overline{\xi}',2)}$, first consider the case $\overline{\xi} + \overline{\xi'} \neq 0$. It follows from the above identity and that $$\begin{aligned}
& \quad a_{(\overline{\xi})} a_{(\overline{\xi}')} \nabla \cdot (\mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi}')} + \mathbb{W}_{(\overline{\xi}')} \otimes \mathbb{W}_{(\overline{\xi})})\\
&= a_{(\overline{\xi})} a_{(\overline{\xi}')} \nabla \cdot \mathbb{P}_{\geq \lambda_{q+1}/10} \left(\eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \left( W_{(\overline{\xi})} \otimes W_{(\overline{\xi}')} + W_{(\overline{\xi}')} \otimes W_{(\overline{\xi})} \right) \right)\\
& = a_{(\overline{\xi})} a_{(\overline{\xi}')} \mathbb{P}_{\geq \lambda_{q+1}/10} \left( \nabla \left( \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \right) \cdot \left( W_{(\overline{\xi})} \otimes W_{(\overline{\xi}')} + W_{(\overline{\xi}')} \otimes W_{(\overline{\xi})} \right) \right)\\
& \quad + a_{(\overline{\xi})} a_{(\overline{\xi}')} \mathbb{P}_{\geq \lambda_{q+1}/10} \left( \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \nabla \left(W_{(\overline{\xi})} \cdot W_{(\overline{\xi}')}\right) \right) \\
& = a_{(\overline{\xi})} a_{(\overline{\xi}')} \mathbb{P}_{\geq \lambda_{q+1}/10} \left( \nabla \left( \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \right) \cdot \left( W_{(\overline{\xi})} \otimes W_{(\overline{\xi}')} + W_{(\overline{\xi}')} \otimes W_{(\overline{\xi})} \right) \right)\\
& \quad + \nabla \left( a_{(\overline{\xi})} a_{(\overline{\xi}')} \mathbb{W}_{(\overline{\xi})} \cdot \mathbb{W}_{(\overline{\xi}')} \right) - \nabla \left(a_{(\overline{\xi})} a_{(\overline{\xi}')}\right) \mathbb{P}_{\geq \lambda_{q+1}/10}\left(\mathbb{W}_{(\overline{\xi})} \cdot \mathbb{W}_{(\overline{\xi}')} \right)\\
& \quad - a_{(\overline{\xi})} a_{(\overline{\xi}')} \mathbb{P}_{\geq \lambda_{q+1}/10} \left( \left(W_{(\overline{\xi})} \cdot W_{(\overline{\xi}')}\right) \nabla\left(\eta_{(\overline{\xi})} \eta_{(\overline{\xi}')}\right) \right),\end{aligned}$$ where the second term is a pressure and the third can be estimated analogously to $E_{(\overline{\xi}, \overline{\xi}', 1)}$. Note also that the first and fourth terms can be estimated analogously to each other. Using , and , we obtain $$\begin{aligned}
& \quad \|\mathcal{R} \left( a_{(\overline{\xi})} a_{(\overline{\xi}')} \mathbb{P}_{\geq \lambda_{q+1}/10} \left( \nabla \left( \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \right) \cdot \left( W_{(\overline{\xi})} \otimes W_{(\overline{\xi}')} + W_{(\overline{\xi}')} \otimes W_{(\overline{\xi})} \right) \right) \right)\|_{L^{\infty}_t L^p_x}\\
& \lesssim \lambda_{q+1}^{-1} \|a_{(\overline{\xi})} a_{(\overline{\xi}')}\|_{C^3} \|\nabla \left( \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \right)\|_{L^{\infty}_t L^p_x}\\
& \lesssim \sigma r^{4-3/p} \mathcal{C}_3.\end{aligned}$$
Now consider $E_{(\overline{\xi}, -\overline{\xi},2)}$. We can write $$\begin{aligned}
&\nabla \cdot (\mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(-\overline{\xi})} + \mathbb{W}_{(-\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi})} ) = \left( W_{(-\overline{\xi})} \cdot \nabla \eta_{(\overline{\xi})}^2 \right) W_{(\overline{\xi})} + \left( W_{(\overline{\xi})} \cdot \nabla \eta_{(\overline{\xi})}^2 \right) W_{(-\overline{\xi})}\\
&= (A_{\overline{\xi}} \cdot \nabla \eta_{(\overline{\xi})}^2) A_{\overline{\xi}} + ((\overline{\xi} \times A_{\overline{\xi}}) \cdot \nabla \eta_{(\overline{\xi})}^2) (\overline{\xi} \times A_{\overline{\xi}}) = \nabla \eta_{(\overline{\xi})}^2 - (\overline{\xi} \cdot \nabla \eta_{(\overline{\xi})}^2) \overline{\xi} = \nabla \eta_{(\overline{\xi})}^2 - \frac{\overline{\xi}}{\mu}{\partial}_t \eta_{(\overline{\xi})}^2,\end{aligned}$$ where we use and the fact that $\{ \overline{\xi}, A_{\overline{\xi}}, \overline{\xi} \times A_{\overline{\xi}} \}$ forms an orthonormal basis of $\mathbb{R}^3$. Therefore, we can write $$\begin{aligned}
E_{(\overline{\xi}, -\overline{\xi},2)} &= \mathbb{P}_{\neq 0}\left( a_{(\overline{\xi})}^2 \nabla \mathbb{P}_{\geq \lambda_{q+1} \sigma/2} \eta_{(\overline{\xi})}^2 - a_{(\overline{\xi})}^2 \frac{\overline{\xi}}{\mu}{\partial}_t \eta_{(\overline{\xi})}^2 \right)\\
&= \nabla \left( a_{(\overline{\xi})}^2 \mathbb{P}_{\geq \lambda_{q+1} \sigma/2} \eta_{(\overline{\xi})}^2 \right) - \mathbb{P}_{\neq 0}\left(\mathbb{P}_{\geq \lambda_{q+1} \sigma/2}(\eta_{(\overline{\xi})}^2) \nabla a_{(\overline{\xi})}^2 \right)\\
& \quad - \mu^{-1} {\partial}_t \mathbb{P}_{\neq 0}\left(a_{(\overline{\xi})}^2 \eta_{(\overline{\xi})}^2 \overline{\xi} \right) + \mu^{-1} \mathbb{P}_{\neq 0} \left( {\partial}_t\left(a_{(\overline{\xi})}^2\right) \eta_{(\overline{\xi})}^2 \overline{\xi} \right).\end{aligned}$$ Using the identity $\mathrm{Id} - \mathbb{P}_{LH} = \nabla \Delta^{-1} \operatorname{div}$ , we obtain $$\begin{aligned}
&\sum_{\overline{\xi}} E_{(\overline{\xi}, -\overline{\xi},2)} + {\partial}_t w_{q+1}^{(t)} = \nabla \sum_{\overline{\xi}} \left( a_{(\overline{\xi})}^2 \mathbb{P}_{\geq \lambda_{q+1} \sigma/2} \eta_{(\overline{\xi})}^2 \right) - \nabla \sum_{\overline{\xi}} \mu^{-1} \Delta^{-1} \nabla \cdot {\partial}_t \left(a_{(\overline{\xi})}^2 \eta_{(\overline{\xi})}^2 \overline{\xi}\right)\\
& \quad - \sum_{\overline{\xi}} \mathbb{P}_{\neq 0}\left(\mathbb{P}_{\geq \lambda_{q+1} \sigma/2}(\eta_{(\overline{\xi})}^2) \nabla a_{(\overline{\xi})}^2 \right) + \mu^{-1} \sum_{\overline{\xi}} \mathbb{P}_{\neq 0} \left( {\partial}_t\left(a_{(\overline{\xi})}^2\right) \eta_{(\overline{\xi})}^2 \overline{\xi} \right),\end{aligned}$$ where the first and second terms are pressure terms. Using , and , we obtain $$\begin{aligned}
\|\mathcal{R} \mathbb{P}_{\neq 0}\left(\mathbb{P}_{\geq \lambda_{q+1} \sigma/2}(\eta_{(\overline{\xi})}^2) \nabla a_{(\overline{\xi})}^2 \right)\|_{L^{\infty}_t L^p_x} &\lesssim (\lambda_{q+1} \sigma)^{-1} \| \eta_{(\overline{\xi})} \|_{L^{\infty}_t L^{2p}_x}^2\mathcal{C}_3 \\
&\lesssim (\lambda_{q+1} \sigma)^{-1} r^{3-3/p} \mathcal{C}_3.\end{aligned}$$ It follows from and that $$\begin{aligned}
\mu^{-1}\|\mathcal{R} \mathbb{P}_{\neq 0} \left( {\partial}_t\left(a_{(\overline{\xi})}^2\right) \eta_{(\overline{\xi})}^2 \overline{\xi} \right) \|_{L^{\infty}_t L^p_x} &\lesssim \mu^{-1}\| {\partial}_t\left(a_{(\overline{\xi})}^2\right) \eta_{(\overline{\xi})}^2 \overline{\xi} \|_{L^{\infty}_t L^p_x} \\
&\lesssim \mu^{-1} r^{3-3/p} \mathcal{C}_1.\end{aligned}$$
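The orthonormal-basis computation used above reduces to the pointwise linear-algebra fact that, for an orthonormal basis $\{\overline{\xi}, A_{\overline{\xi}}, \overline{\xi} \times A_{\overline{\xi}}\}$ of $\mathbb{R}^3$, the projections onto the last two directions sum to the identity minus the projection onto $\overline{\xi}$. A minimal numerical sketch (the concrete unit vectors are illustrative choices, and a random vector stands in for $\nabla \eta_{(\overline{\xi})}^2$):

```python
import numpy as np

xi = np.array([3.0, 4.0, 0.0]) / 5.0     # unit vector (illustrative)
A = np.array([-4.0, 3.0, 0.0]) / 5.0     # unit vector orthogonal to xi
B = np.cross(xi, A)                      # completes the orthonormal basis

rng = np.random.default_rng(0)
v = rng.standard_normal(3)               # stands in for the gradient ∇η²

# (A·v)A + ((ξ×A)·v)(ξ×A) = v − (ξ·v)ξ
lhs = np.dot(A, v) * A + np.dot(B, v) * B
rhs = v - np.dot(xi, v) * xi
assert np.allclose(lhs, rhs)
```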
Let us now give the explicit definition of $\widetilde{R}_{oscillation}$: $$\begin{aligned}
&\widetilde{R}_{oscillation} = \sum_{\overline{\xi}, \overline{\xi}' \in \Lambda} \mathbb{P}_{\neq 0} \left( \nabla (a_{(\overline{\xi})} a_{(\overline{\xi}')}) \cdot (\mathbb{P}_{\geq \lambda_{q+1} \sigma/2}(\mathbb{W}_{(\overline{\xi})} \otimes \mathbb{W}_{(\overline{\xi}')} + \mathbb{W}_{\overline{\xi'}} \otimes \mathbb{W}_{(\overline{\xi})}) ) \right)\\
&+ \sum_{\overline{\xi}, \overline{\xi}' \in \Lambda, \overline{\xi} \neq \overline{\xi}' } a_{(\overline{\xi})} a_{(\overline{\xi}')} \mathbb{P}_{\geq \lambda_{q+1}/10} \left( \nabla \left( \eta_{(\overline{\xi})} \eta_{(\overline{\xi}')} \right) \cdot \left( W_{(\overline{\xi})} \otimes W_{(\overline{\xi}')} + W_{(\overline{\xi}')} \otimes W_{(\overline{\xi})} \right) \right)\\
& - \sum_{\overline{\xi}, \overline{\xi}' \in \Lambda, \overline{\xi} \neq \overline{\xi}' } \nabla \left(a_{(\overline{\xi})} a_{(\overline{\xi}')}\right) \mathbb{P}_{\geq \lambda_{q+1}/10}\left(\mathbb{W}_{(\overline{\xi})} \cdot \mathbb{W}_{(\overline{\xi}')} \right)\\
& - \sum_{\overline{\xi}, \overline{\xi}' \in \Lambda, \overline{\xi} \neq \overline{\xi}' } a_{(\overline{\xi})} a_{(\overline{\xi}')} \mathbb{P}_{\geq \lambda_{q+1}/10} \left( \left(W_{(\overline{\xi})} \cdot W_{(\overline{\xi}')}\right) \nabla\left(\eta_{(\overline{\xi})} \eta_{(\overline{\xi}')}\right) \right)\\
& - \sum_{\overline{\xi} \in \Lambda} \mathbb{P}_{\neq 0}\left(\mathbb{P}_{\geq \lambda_{q+1} \sigma/2}(\eta_{(\overline{\xi})}^2) \nabla a_{(\overline{\xi})}^2 \right) + \mu^{-1} \sum_{\overline{\xi} \in \Lambda} \mathbb{P}_{\neq 0} \left( {\partial}_t\left(a_{(\overline{\xi})}^2\right) \eta_{(\overline{\xi})}^2 \overline{\xi} \right).\end{aligned}$$
Finally, we estimate the time support of $R_{q+1}$. Using we obtain $$\begin{aligned}
\operatorname{supp}_t R_{q+1} \subset \operatorname{supp}_t w_{q+1} \cup \operatorname{supp}_t R_{q} \subset N_{\delta_{q+1}}(\operatorname{supp}_t R_q).\end{aligned}$$
Now we choose the parameters $r, \sigma, \mu$. Fix $\alpha$ so that $$\begin{aligned}
\max\{0,\frac{2}{3}(2\theta - 1)\} < \alpha < 1,\end{aligned}$$ which is possible since $\theta \in (-\infty,5/4)$. Fix $$\begin{aligned}
r = \lambda^{\alpha}_{q+1}, \quad \sigma = \lambda^{-(\alpha + 1)/2}_{q+1}, \quad \mu = \lambda^{(5\alpha + 1)/4}_{q+1}.\end{aligned}$$ Clearly is satisfied. Choose $p > 1$ sufficiently close to $1$ so that $$\begin{aligned}
-\frac{\alpha+1}{2} + \frac{5\alpha + 1}{4} +\left(\frac{5}{2} - \frac{3}{p}\right)\alpha < 0
, \quad \left(\frac{3}{2} - \frac{3}{p}\right)\alpha + \max(0,2\theta - 1) < 0, \\
-\frac{5\alpha + 1}{4} + \left(\frac{9}{2} - \frac{3}{p}\right)\alpha < 0, \quad -\frac{1-\alpha}{2} + \left(3 - \frac{3}{p}\right)\alpha < 0.\end{aligned}$$ Note that $\mathcal{C}_N$ is independent of $\lambda_{q+1}$, due to . Combining the above estimates with Lemma \[Lemma:est-w\], it is easy to check that, by taking $\lambda_{q+1}$ sufficiently large, we arrive at , and . This completes the proof of Lemma \[Lemma:Iteration\].
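The admissibility of this parameter choice can be checked numerically. The sketch below takes the illustrative values $\theta = 1.2$, $\alpha = 0.95$, $p = 1.001$ (any values satisfying the stated constraints would do) and verifies that all four exponent inequalities are strictly negative.

```python
theta = 1.2    # any theta < 5/4
alpha = 0.95   # in (max(0, (2/3)(2*theta - 1)), 1) = (14/15, 1) here
p = 1.001      # > 1, close to 1

assert max(0.0, (2.0 / 3.0) * (2 * theta - 1)) < alpha < 1

# The four exponent inequalities from the parameter choice.
inequalities = [
    -(alpha + 1) / 2 + (5 * alpha + 1) / 4 + (5 / 2 - 3 / p) * alpha,
    (3 / 2 - 3 / p) * alpha + max(0.0, 2 * theta - 1),
    -(5 * alpha + 1) / 4 + (9 / 2 - 3 / p) * alpha,
    -(1 - alpha) / 2 + (3 - 3 / p) * alpha,
]
assert all(v < 0 for v in inequalities)
```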
0.125in
[**Acknowledgement**]{} The authors would like to thank H. Ibdah and the anonymous referee for carefully reading the paper and for their constructive suggestions. The authors would also like to thank the “The Institute of Mathematical Sciences", Chinese University of Hong Kong, for the warm and kind hospitality during which part of this work was completed. The work of T.L. is supported in part by NSFC Grants 11601258. The work of E.S.T. is supported in part by the ONR grant N00014-15-1-2333, the Einstein Stiftung/Foundation - Berlin, through the Einstein Visiting Fellow Program, and by the John Simon Guggenheim Memorial Foundation.
[99]{}
C. Bardos and E.S. Titi, *Onsager’s conjecture for the incompressible Euler equations in bounded domains*, Arch. Ration. Mech. Anal. **228** (2018), 197-207.
T. Buckmaster, C. De Lellis, P. Isett, L. Székelyhidi, Jr., *Anomalous dissipation for $1/5$-Hölder Euler flows*, Ann. of Math. **182:1** (2015), 127–172.
T. Buckmaster, C. De Lellis, L. Székelyhidi, *Dissipative Euler flows with Onsager-critical spatial regularity*, Comm. Pure Appl. Math. **69:9** (2016), 1613–1670.
T. Buckmaster, C. De Lellis, L. Székelyhidi, V. Vicol, *Onsager’s conjecture for admissible weak solutions*, Comm. Pure Appl. Math. 72 (2019), no. 2, 229–274.
T. Buckmaster, V. Vicol, *Nonuniqueness of weak solutions to the Navier-Stokes equation*, Ann. of Math. (2) 189 (2019), no. 1, 101–144.
L. Caffarelli, R. Kohn, L. Nirenberg, *Partial regularity of suitable weak solutions to the Navier–Stokes equations*, Comm. Pure Appl. Math. **35** (1982), 771–831.
A. Cheskidov, X. Luo, *Stationary and discontinuous weak solutions of the Navier-Stokes equations*, arXiv:1901.07485.
M. Colombo, C. De Lellis, L. De Rosa, *Ill-posedness of Leray solutions for the hypodissipative Navier–Stokes equations*, Comm. Math. Phys. **362:2** (2018), 659–688.
P. Constantin, W. E, E. S. Titi, [*Onsager’s conjecture on the energy conservation for solutions of Euler’s equation*]{}, Comm. Math. Phys. **165:1** (1994), 207–209.
S. Daneri, L. Székelyhidi, Jr., *Non-uniqueness and h-principle for Hölder-continuous weak solutions of the Euler equations*, Arch. Ration. Mech. Anal. **224:2** (2017), 471–514.
C. De Lellis, L. Székelyhidi, Jr., *The [E]{}uler equations as a differential inclusion*, Ann. of Math. **170:3** (2009), 1417–1436.
C. De Lellis, L. Székelyhidi, Jr., *Dissipative continuous Euler flows*, Invent. Math. **193:2** (2013), 377–407.
L. De Rosa, *Infinitely many Leray–Hopf solutions for the fractional Navier–Stokes equations*, Comm. Partial Differential Equations **44:4** (2019).
L. Grafakos, *Classical Fourier Analysis*, second ed., Grad. Texts in Math. 249, Springer-Verlag, New York, 2008
P. Isett, *Hölder continuous Euler flows with compact support in time*, Thesis (Ph.D.)–Princeton University, 2013.
P. Isett, *A Proof of Onsager’s Conjecture*, Ann. of Math. **188:3** (2018), 1–93. [arXiv:1608.08301v1](https://arxiv.org/abs/1608.08301v1).
Q. Jiu, Y. Wang, *On possible time singular points and eventual regularity of weak solutions to the fractional Navier-Stokes equations*. Dyn. Partial Differ. Equ. **11:4** (2014), 321–343.
N. H. Katz, N. A. Pavlović, *A cheap Caffarelli-Kohn-Nirenberg inequality for the Navier-Stokes equation with hyper-dissipation*. Geom. Funct. Anal. **12:2** (2002), 355–379.
J. Leray, *Sur le mouvement d’un liquide visqueux emplissant l’espace*, Acta Math. **63:1** (1934), 193–248.
J. L. Lions, *Quelques résultats d’existence dans des équations aux dérivées partielles non linéaires*. Bull. Soc. Math. France **87** (1959), 245–273.
J. L. Lions, *Quelques Méthodes de Resolution des Problémes aux Limites Non linéaires*, Vol 1. Dunod, Paris, 1969.
X. Luo, *Stationary solutions and nonuniqueness of weak solutions for the Navier-Stokes equations in high dimensions*, Arch. Ration. Mech. Anal. **233** (2019), 701–747.
S. Modena, G. Sattig, *Convex integration solutions to the transport equation with full dimensional concentration*, arXiv:1902.08521.
S. Modena, L. Székelyhidi, *Non-uniqueness for the transport equation with Sobolev vector fields*, Ann. PDE **4** (2018), Art. 18.

S. Modena, L. Székelyhidi, *Non-renormalized solutions to the continuity equation*, Calc. Var. Partial Differential Equations **58** (2019), Art. 208.
E. Olson, E. S. Titi, *Viscosity versus vorticity stretching: Global well-posedness for a family of Navier–Stokes-alpha-like models*, Nonlinear Analysis: Theory, Methods & Applications **66:11** (2007), 2427–2458.
V. Scheffer, *An inviscid flow with compact support in space-time*, J. Geom. Anal. **3:4** (1993), 343–401.
J. Wu, *Generalized MHD equations*, J. Differential Equations **195** (2003), 284–312.
T. Tao, *Global regularity for a logarithmically supercritical hyperdissipative Navier-Stokes equation*, Analysis & PDE **2:3** (2009), 361–366.
---
abstract: 'For a graph $G$, let ${\chi}(G)$ and ${\lambda}(G)$ denote the chromatic number of $G$ and the maximum local edge connectivity of $G$, respectively. A result of Dirac [@Dirac53] implies that every graph $G$ satisfies ${\chi}(G)\leq {\lambda}(G)+1$. In this paper we characterize the graphs $G$ for which ${\chi}(G)={\lambda}(G)+1$. The case ${\lambda}(G)=3$ was already solved by Aboulker [*et al.*]{} [@AlboukerV2016]. We show that a graph $G$ with ${\lambda}(G)=k\geq 4$ satisfies ${\chi}(G)=k+1$ if and only if $G$ contains a block which can be obtained from copies of $K_{k+1}$ by repeated applications of the Hajós join.'
author:
- '[[Michael Stiebitz]{} [^1] [^2]]{}'
- '[[Bjarne Toft]{} [^3] ]{}'
title: '**A Brooks type theorem for the maximum local edge connectivity**'
---
[ 05C15]{}
[ Graph coloring, Connectivity, Critical graphs, Brooks’ theorem.]{}
Introduction and main result
============================
The paper deals with the classical vertex coloring problem for graphs. The term graph refers to a finite undirected graph without loops and without multiple edges. The [*chromatic number*]{} of a graph $G$, denoted by ${\chi}(G)$, is the least number of colors needed to color the vertices of $G$ such that each vertex receives a color and adjacent vertices receive different colors. There are several degree bounds for the chromatic number. For a graph $G$, let ${\delta}(G)=\min_{v\in V(G)}d_G(v)$ and ${\Delta}(G)=\max_{v\in V(G)}d_G(v)$ denote the [*minimum degree*]{} and the [*maximum degree*]{} of $G$, respectively. Furthermore, let $${{\rm col}}(G)=1+\max_{H\subseteq G}{\delta}(H)$$ denote the [*coloring number*]{} of $G$, and let $${{\rm mad}}(G)=\max_{{\varnothing}\not=H\subseteq G} \frac{2|E(H)|}{|V(H)|}$$ denote the [*maximum average degree*]{} of $G$. By $H\subseteq G$ we mean that $H$ is a subgraph of $G$. If $G$ is the [*empty graph*]{}, that is, $V(G)={\varnothing}$, we briefly write $G={\varnothing}$ and define ${\delta}(G)={\Delta}(G)={{\rm mad}}(G)=0$ and ${{\rm col}}(G)=1$. A simple sequential coloring argument shows that ${\chi}(G)\leq {{\rm col}}(G)$, which implies that every graph $G$ satisfies $${\chi}(G)\leq {{\rm col}}(G)\leq \lfloor {{\rm mad}}(G)\rfloor+1\leq {\Delta}(G)+1.$$ These inequalities were discussed in a paper by Jensen and Toft [@JensenT95]. Brooks’ famous theorem provides a characterization for the class of graphs $G$ satisfying ${\chi}(G)={\Delta}(G)+1$. Let $k\geq 0$ be an integer. For $k\not=2$, let ${{\cal B}}_k$ denote the class of complete graphs having order $k+1$; and let ${{\cal B}}_2$ denote the class of odd cycles. A graph in ${{\cal B}}_k$ has maximum degree $k$ and chromatic number $k+1$. Brooks’ theorem [@Brooks41] is as follows.
\[Brooks 1941\] Let $G$ be a non-empty graph. Then ${\chi}(G)\leq {\Delta}(G)+1$ and equality holds if and only if $G$ has a connected component belonging to the class ${{\cal B}}_{{\Delta}(G)}$. \[Th:Brooks\]
In this paper we are interested in connectivity parameters of graphs. Let $G$ be a graph with at least two vertices. The [*local connectivity*]{} ${\kappa}_G(v,w)$ of distinct vertices $v$ and $w$ is the maximum number of internally vertex disjoint $v$-$w$ paths of $G$. The [*local edge connectivity*]{} ${\lambda}_G(v,w)$ of distinct vertices $v$ and $w$ is the maximum number of edge-disjoint $v$-$w$ paths of $G$. The [*maximum local connectivity*]{} of $G$ is $${\kappa}(G)=\max{\{{\kappa}_G(v,w) \;|\; v,w\in V(G), v\not=w \}},$$ and the [*maximum local edge connectivity*]{} of $G$ is $${\lambda}(G)=\max{\{{\lambda}_G(v,w) \;|\; v,w\in V(G), v\not=w \}}.$$ For a graph $G$ having only one vertex, we define ${\kappa}(G)={\lambda}(G)=0$. Clearly, the definition implies that ${\kappa}(G)\leq {\lambda}(G)$ for every graph $G$. By a result of Mader [@Mader73] it follows that ${\delta}(G)\leq {\kappa}(G)$. Since ${\kappa}$ is a monotone graph parameter in the sense that $H\subseteq G$ implies ${\kappa}(H)\leq {\kappa}(G)$, it follows that every graph $G$ satisfies ${{\rm col}}(G)\leq {\kappa}(G)+1$. Consequently, every graph $G$ satisfies $$\label{Equ:lambda}
{\chi}(G)\leq {{\rm col}}(G) \leq {\kappa}(G)+1\leq {\lambda}(G)+1\leq {\Delta}(G)+1.$$ Our aim is to characterize the class of graphs $G$ for which ${\chi}(G)={\lambda}(G)+1$. For such a characterization we use the fact that if we have an optimal coloring of each block of a graph $G$, then we can combine these colorings to an optimal coloring of $G$ by permuting colors in the blocks if necessary. For every non-empty graph $G$, we thus have $$\label{Equ:Block}
{\chi}(G)=\max{\{{\chi}(H) \;|\; H \mbox{ is a block of } G \}}.$$ We also need a famous construction, first used by Hajós [@Hajos61]. Let $G_1$ and $G_2$ be two vertex-disjoint graphs and, for $i=1,2$, let $e_i=v_iw_i$ be an edge of $G_i$. Let $G$ be the graph obtained from $G_1$ and $G_2$ by deleting the edges $e_1$ and $e_2$ from $G_1$ and $G_2$, respectively, identifying the vertices $v_1$ and $v_2$, and adding the new edge $w_1w_2$. We then say that $G$ is the [*Hajós join*]{} of $G_1$ and $G_2$ and write $G=(G_1,v_1,w_1)\bigtriangleup (G_2,v_2,w_2)$ or briefly $G=G_1 \bigtriangleup G_2$.
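As an illustration of the construction just defined (not part of the original text), the following sketch builds the Hajós join of two copies of $K_5$ and verifies by brute force that the result again has chromatic number $5$; graphs are represented as dictionaries of adjacency sets, and all names are ad hoc.

```python
from itertools import product

def complete_graph(n, tag):
    V = [(tag, i) for i in range(n)]
    return {u: {v for v in V if v != u} for u in V}

def chromatic_number(G):
    """Smallest k admitting a proper k-coloring, by exhaustive search."""
    V = list(G)
    idx = {v: i for i, v in enumerate(V)}
    edges = [(idx[u], idx[v]) for u in G for v in G[u] if idx[u] < idx[v]]
    for k in range(1, len(V) + 1):
        for f in product(range(k), repeat=len(V)):
            if all(f[i] != f[j] for i, j in edges):
                return k

def hajos_join(G1, v1, w1, G2, v2, w2):
    """(G1, v1, w1) △ (G2, v2, w2): delete v1w1 and v2w2,
    identify v1 with v2, and add the new edge w1w2."""
    G = {u: set(nb) for u, nb in list(G1.items()) + list(G2.items())}
    G[v1].discard(w1); G[w1].discard(v1)
    G[v2].discard(w2); G[w2].discard(v2)
    G[v1] |= G.pop(v2)               # identify v2 with v1
    for u in G:
        if v2 in G[u]:
            G[u].discard(v2)
            G[u].add(v1)
    G[w1].add(w2); G[w2].add(w1)
    return G

G1 = complete_graph(5, 'a')
G2 = complete_graph(5, 'b')
H = hajos_join(G1, ('a', 0), ('a', 1), G2, ('b', 0), ('b', 1))
assert len(H) == 9 and chromatic_number(H) == 5
```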
For an integer $k\geq 0$ we define a class ${{\cal H}}_k$ of graphs as follows. If $k\leq 2$, then ${{\cal H}}_k={{\cal B}}_k$. The class ${{\cal H}}_3$ is the smallest class of graphs that contains all odd wheels and is closed under taking Hajós joins. Recall that an [*odd wheel*]{} is a graph obtained from an odd cycle by adding a new vertex and joining this vertex to all vertices of the cycle. If $k\geq 4$, then ${{\cal H}}_k$ is the smallest class of graphs that contains all complete graphs of order $k+1$ and is closed under taking Hajós joins. Our main result is the following counterpart of Brooks’ theorem. In fact, Brooks’ theorem may easily be deduced from it.
Let $G$ be a non-empty graph. Then ${\chi}(G)\leq {\lambda}(G)+1$ and equality holds if and only if $G$ has a block belonging to the class ${{\cal H}}_{{\lambda}(G)}$. \[Th:local\]
For the proof of this result, let $G$ be a non-empty graph with ${\lambda}(G)=k$. By (\[Equ:lambda\]), we obtain ${\chi}(G)\leq k+1$. By an observation of Hajós [@Hajos61] it follows that every graph in ${{\cal H}}_k$ has chromatic number $k+1$. Hence if some block of $G$ belongs to ${{\cal H}}_k$, then (\[Equ:Block\]) implies that ${\chi}(G)=k+1$. So it only remains to show that if ${\chi}(G)=k+1$, then some block of $G$ belongs to ${{\cal H}}_k$. For proving this, we shall use the critical graph method, see [@StiebitzT2015].
A graph $G$ is [*critical*]{} if every proper subgraph $H$ of $G$ satisfies ${\chi}(H)<{\chi}(G)$. We shall use the following two properties of critical graphs. As an immediate consequence of (\[Equ:Block\]) we obtain that if $G$ is a critical graph, then $G={\varnothing}$ or $G$ contains no separating vertex, implying that $G$ is its only block. Furthermore, every graph contains a critical subgraph with the same chromatic number.
Let $G$ be a non-empty graph with ${\lambda}(G)=k$ and ${\chi}(G)=k+1$. Then $G$ contains a critical subgraph $H$ with chromatic number $k+1$, and we obtain that ${\lambda}(H)\leq {\lambda}(G)=k$. So the proof of Theorem \[Th:local\] is complete if we can show that $H$ is a block of $G$ which belongs to ${{\cal H}}_k$. For an integer $k\geq 0$, let ${{\cal C}}_k$ denote the class of graphs $H$ such that $H$ is a critical graph with chromatic number $k+1$ and with ${\lambda}(H)\leq k$. We shall prove that the two classes ${{\cal C}}_k$ and ${{\cal H}}_k$ are the same.
Connectivity of critical graphs
===============================
In this section we shall review known results about the structure of critical graphs. First we need some notation. Let $G$ be an arbitrary graph. For an integer $k\geq 0$, let ${{\cal CO}}_k(G)$ denote the set of all colorings of $G$ with color set $\{1,2, \ldots, k\}$. Then a function $f:V(G) \to \{1,2, \ldots, k\}$ belongs to ${{\cal CO}}_k(G)$ if and only if $f^{-1}(c)$ is an independent vertex set of $G$ (possibly empty) for every color $c\in \{1,2, \ldots, k\}$. A set $S\subseteq V(G) \cup E(G)$ is called a [*separating set*]{} of $G$ if $G-S$ has more components than $G$. A vertex $v$ of $G$ is called a [*separating vertex*]{} of $G$ if $\{v\}$ is a separating set of $G$. An edge $e$ of $G$ is called a [*bridge*]{} of $G$ if $\{e\}$ is a separating set of $G$. For a vertex set $X\subseteq V(G)$, let ${\partial_G}(X)$ denote the set of all edges of $G$ having exactly one end in $X$. Clearly, if $G$ is connected and ${\varnothing}\not= X \varsubsetneq V(G)$, then $F={\partial_G}(X)$ is a separating set of edges of $G$. The converse is not true. However if $F$ is a minimal separating edge set of a connected graph $G$, then $F={\partial_G}(X)$ for some vertex set $X$. As a consequence of Menger’s theorem about edge connectivity, we obtain that if $v$ and $w$ are two distinct vertices of $G$, then $${\lambda}_G(v,w)=\min{\{|{\partial_G}(X)| \;|\; X\subseteq V(G), v\in X, w\not\in X \}}.$$
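The min-cut formulation of ${\lambda}_G(v,w)$ above can be checked by exhaustive search on a small example. The sketch below (an illustration, with ad hoc names) verifies that every pair of distinct vertices of the cycle $C_5$ has local edge connectivity $2$.

```python
from itertools import combinations

def boundary(G, X):
    """|∂_G(X)|: number of edges with exactly one end in X."""
    return sum(1 for u in X for v in G[u] if v not in X)

n = 5
C5 = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
V = set(C5)

def local_edge_connectivity(v, w):
    """min |∂(X)| over all X with v in X and w not in X."""
    rest = V - {v, w}
    return min(boundary(C5, set(X) | {v})
               for r in range(len(rest) + 1)
               for X in combinations(rest, r))

assert all(local_edge_connectivity(v, w) == 2
           for v in V for w in V if v != w)
```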
Color critical graphs were first introduced and investigated by Dirac in the 1950s. He established the basic properties of critical graphs in a series of papers [@Dirac52], [@Dirac53] and [@Dirac57]. Some of these basic properties are listed in the next theorem.
\[Dirac 1952\] Let $G$ be a critical graph with chromatic number $k+1$ for an integer $k\geq 0$. Then the following statements hold:
- ${\delta}(G)\leq k$
- If $k=0,1$, then $G$ is a complete graph of order $k+1$; and if $k=2$, then $G$ is an odd cycle.
- No separating vertex set of $G$ is a clique of $G$. As a consequence, $G$ is connected and has no separating vertex, i.e., $G$ is a block.
- If $v$ and $w$ are two distinct vertices of $G$, then ${\lambda}_G(v,w)\geq k$. As a consequence $G$ is $k$-edge-connected.
\[Th:Dirac\]
Theorem \[Th:Dirac\](a) leads to a very natural way of classifying the vertices of a critical graph into two classes. Let $G$ be a critical graph with chromatic number $k+1$. The vertices of $G$ having degree $k$ in $G$ are called [*low vertices*]{} of $G$, and the remaining vertices are called [*high vertices*]{} of $G$. So any high vertex of $G$ has degree at least $k+1$ in $G$. Furthermore, let $G_L$ be the subgraph of $G$ induced by the low vertices of $G$, and let $G_H$ be the subgraph of $G$ induced by the high vertices of $G$. We call $G_L$ the [*low vertex subgraph*]{} of $G$ and $G_H$ the [*high vertex subgraph*]{} of $G$. This classification is due to Gallai [@Gallai63a] who proved the following theorem. Note that statements (b) and (c) of Gallai’s theorem are simple consequences of statement (a), which is an extension of Brooks’ theorem.
\[Gallai 1963\] Let $G$ be a critical graph with chromatic number $k+1$ for an integer $k\geq 1$. Then the following statements hold:
- Every block of $G_L$ is a complete graph or an odd cycle
- If $G_H={\varnothing}$, then $G$ is a complete graph of order $k+1$ if $k\not=2$, and $G$ is an odd cycle if $k=2$.
- If $|V(G_H)|=1$, then either $G$ has a separating vertex set of two vertices or $k=3$ and $G$ is an odd wheel.
\[Th:Gallai\]
As observed by Dirac, a critical graph is connected and contains no separating vertex. Dirac [@Dirac52] and Gallai [@Gallai63a] characterized critical graphs having a separating vertex set of size two. In particular, they proved the following theorem, which shows how to decompose a critical graph having a separating vertex set of size two into smaller critical graphs.
\[Dirac 1952 and Gallai 1963\] Let $G$ be a critical graph with chromatic number $k+1$ for an integer $k\geq 3$, and let $S\subseteq V(G)$ be a separating vertex set of $G$ with $|S|\leq 2$. Then $S$ is an independent vertex set of $G$ consisting of two vertices, say $v$ and $w$, and $G-S$ has exactly two components $H_1$ and $H_2$. Moreover, if $G_i=G[V(H_i) \cup S]$ for $i=1,2$, we can adjust the notation so that for some coloring $f_1\in {{\cal CO}}_k(G_1)$ we have $f_1(v)=f_1(w)$. Then the following statements hold:
- Every coloring $f\in {{\cal CO}}_k(G_1)$ satisfies $f(v)=f(w)$ and every coloring $f\in {{\cal CO}}_k(G_2)$ satisfies $f(v)\not= f(w)$.
- The subgraph $G_1'=G_1+vw$ obtained from $G_1$ by adding the edge $vw$ is critical and has chromatic number $k+1$.
- The vertices $v$ and $w$ have no common neighbor in $G_2$ and the subgraph $G_2'=G_2/S$ obtained from $G_2$ by identifying $v$ and $w$ is critical and has chromatic number $k+1$.
\[Th:2Conn\]
Dirac [@Dirac64] and Gallai [@Gallai63a] also proved the converse theorem: $G$ is critical and has chromatic number $k+1$ provided that $G_1'$ is critical with chromatic number $k+1$, and that the graph $G_2$, obtained from a critical graph $G_2'$ with chromatic number $k+1$ by splitting a vertex into $v$ and $w$, has chromatic number $k$.
Hajós [@Hajos61] invented his construction to characterize the class of graphs with chromatic number at least $k+1$. Another advantage of the Hajós join is the well-known fact that it preserves not only the chromatic number, but also criticality. It may be viewed as a special case of the Dirac–Gallai construction described above.
\[Hajós 1961\] Let $G=G_1 \bigtriangleup G_2$ be the Hajós join of two graphs $G_1$ and $G_2$, and let $k\geq 3$ be an integer. Then $G$ is critical and has chromatic number $k+1$ if and only if both $G_1$ and $G_2$ are critical and have chromatic number $k+1$. \[Th:Hajos\]
If $G$ is the Hajós join of two graphs that are critical and have chromatic number $k+1$, where $k\geq 3$, then $G$ is critical and has chromatic number $k+1$. Moreover, $G$ has a separating set consisting of one edge and one vertex. Theorem \[Th:2Conn\] implies that the converse statement also holds.
Let $G$ be a critical graph with chromatic number $k+1$ for an integer $k\geq 3$. If $G$ has a separating set consisting of one edge and one vertex, then $G$ is the Hajós join of two graphs. \[Th:2Sep=Hajos\]
Next we discuss a decomposition result for critical graphs having chromatic number $k+1$ and having a separating edge set of size $k$. Let $G$ be an arbitrary graph. By an [*edge cut*]{} of $G$ we mean a triple $(X,Y,F)$ such that $X$ is a non-empty proper subset of $V(G)$, $Y=V(G){\setminus}X$, and $F={\partial_G}(X)={\partial_G}(Y)$. If $(X,Y,F)$ is an edge cut of $G$, then we denote by $X_F$ (respectively, $Y_F$) the set of vertices of $X$ (respectively, $Y$) which are incident to some edge of $F$. An edge cut $(X,Y,F)$ of $G$ is [*non-trivial*]{} if $|X_F|\geq 2$ and $|Y_F|\geq 2$. The following decomposition result was proved independently by T. Gallai and Toft [@Toft70].
\[Toft 1970\] Let $G$ be a critical graph with chromatic number $k+1$ for an integer $k\geq 3$, and let $F\subseteq E(G)$ be a separating edge set of $G$ with $|F|\leq k$. Then $|F|=k$ and there is an edge cut $(X,Y,F)$ of $G$ satisfying the following properties:
- Every coloring $f\in {{\cal CO}}_k(G[X])$ satisfies $|f(X_F)|=1$ and every coloring $f\in {{\cal CO}}_k(G[Y])$ satisfies $|f(Y_F)|=k$.
- The subgraph $G_1$ obtained from $G[X \cup Y_F]$ by adding all edges between the vertices of $Y_F$, so that $Y_F$ becomes a clique of $G_1$, is critical and has chromatic number $k+1$.
- The subgraph $G_2$ obtained from $G[Y]$ by adding a new vertex $v$ and joining $v$ to all vertices of $Y_F$ is critical and has chromatic number $k+1$.
\[Th:Toft\]
A particularly nice proof of this result is due to T. Gallai (oral communication to the second author). Recall that the [*clique number*]{} of a graph $G$, denoted by ${\omega}(G)$, is the largest cardinality of a clique in $G$. A graph $G$ is [*perfect*]{} if every induced subgraph $H$ of $G$ satisfies ${\chi}(H)={\omega}(H)$. For the proof of the next lemma, due to Gallai, we use the fact that complements of bipartite graphs are perfect.
Let $H$ be a graph and let $k\geq 3$ be an integer. Suppose that $(A,B,F')$ is an edge cut of $H$ such that $|F'|\leq k$ and $A$ as well as $B$ are cliques of $H$ with $|A|=|B|=k$. If ${\chi}(H)\geq k+1$, then $|F'|=k$ and $F'={\partial_H}(\{v\})$ for some vertex $v$ of $H$. \[Le:perfect\]
The graph $H$ is perfect and so ${\omega}(H)={\chi}(H)\geq k+1$. Consequently, $H$ contains a clique $X$ with $|X|=k+1$. Let $s=|A\cap X|$ and hence $k+1-s=|B\cap X|$. Since $|A|=|B|=k$, this implies that $s\geq 1$ and $k+1-s\geq 1$. Since $X$ is a clique of $H$, the set $E'$ of edges of $H$ joining a vertex of $A\cap X$ with a vertex of $B\cap X$ satisfies $E'\subseteq F'$ and $|E'|=s(k+1-s)$. Clearly, $g''(s)=-2$, which implies that the function $g(s)=s(k+1-s)$ is strictly concave on the real interval $[1,k]$. Since $g(1)=g(k)=k$, we conclude that $g(s)>k$ for all $s\in (1,k)$. Since $g(s)=|E'|\leq |F'|\leq k$, this implies that $s=1$ or $s=k$. In both cases we obtain that $|E'|=|F'|=k$, and hence $E'=F'={\partial_H}(\{v\})$ for some vertex $v$ of $H$.
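The concavity step of the proof is easy to confirm numerically; the following minimal check (ours, not part of the original argument) verifies that $g(s)=s(k+1-s)$ equals $k$ at the endpoints $s=1$ and $s=k$ and strictly exceeds $k$ in between:

```python
# g(s) = s(k+1-s) is strictly concave (g'' = -2) with g(1) = g(k) = k, so
# g(s) > k for every s strictly between 1 and k; together with
# |E'| = g(s) <= k this forces s = 1 or s = k.
for k in range(3, 25):
    g = lambda s: s * (k + 1 - s)
    assert g(1) == g(k) == k
    assert all(g(s) > k for s in range(2, k))                        # integer interior points
    assert all(g(1 + t * (k - 1) / 10.0) > k for t in range(1, 10))  # non-integer grid
```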
Based on Lemma \[Le:perfect\] it is easy to give a proof of Theorem \[Th:Toft\]; see also the paper by Dirac, S[ø]{}rensen, and Toft [@DiracT74]. Theorem \[Th:Toft\] is a reformulation of a result by Toft in his Ph.D. thesis. Toft gave a complete characterization of the class of critical graphs having chromatic number $k+1$ and containing a separating edge set of size $k$. The characterization involves critical hypergraphs.
Figure \[Fig:A1\] shows three critical graphs with ${\chi}=4$. The first graph is an odd wheel and the second graph is the Hajós join of two $K_4$’s; both graphs belong to the class ${{\cal C}}_3$. The third graph does not belong to ${{\cal C}}_3$; it has a separating edge set of size 3, but ${\lambda}=4$.
![Three critical graphs with chromatic number ${\chi}=4$.[]{data-label="Fig:A1"}](Ext){height="3cm"}
Proof of the main result
========================
Let $k\geq 0$ be an integer. Then the two graph classes ${{\cal C}}_k$ and ${{\cal H}}_k$ coincide. \[Th:Ck=Hk\]
That the two classes ${{\cal C}}_k$ and ${{\cal H}}_k$ coincide if $0\leq k \leq 2$ follows from Theorem \[Th:Dirac\](a). In this case both classes consist of all critical graphs with chromatic number $k+1$. In what follows we therefore assume that $k\geq 3$. The proof of the following claim is straightforward and left to the reader.
The odd wheels belong to the class ${{\cal C}}_3$ and the complete graphs of order $k+1$ belong to the class ${{\cal C}}_k$. \[Cl:A1\]
Let $k\geq 3$ be an integer, and let $G=G_1 \bigtriangleup G_2$ be the Hajós join of two graphs $G_1$ and $G_2$. Then $G$ belongs to the class ${{\cal C}}_k$ if and only if both $G_1, G_2$ belong to the class ${{\cal C}}_k$. \[Cl:A2\]
[*Proof:*]{} We may assume that $G=(G_1,v_1,w_1) \bigtriangleup (G_2,v_2,w_2)$ and $v$ is the vertex of $G$ obtained by identifying $v_1$ and $v_2$. First suppose that $G_1, G_2\in {{\cal C}}_k$. From Theorem \[Th:Hajos\] it follows that $G$ is critical and has chromatic number $k+1$. So it suffices to prove that ${\lambda}(G)\leq k$. To this end let $u$ and $u'$ be distinct vertices of $G$ and let $p={\lambda}_G(u,u')$. Then there is a system ${{\cal P}}$ of $p$ edge disjoint $u$-$u'$ paths in $G$. If $u$ and $u'$ both belong to $G_1$, then only one path $P$ of ${{\cal P}}$ may contain vertices not in $G_1$. In this case $P$ contains the vertex $v$ and the edge $w_1w_2$. If we replace in $P$ the subpath $vPw_1$ by the edge $v_1w_1$, we obtain a system of $p$ edge disjoint $u$-$u'$ paths in $G_1$, and hence $p\leq {\lambda}_{G_1}(u,u')\leq k$. If $u$ and $u'$ belong to $G_2$, a similar argument shows that $p\leq k$. It remains to consider the case that one vertex, say $u$, belongs to $G_1$ and the other vertex $u'$ belongs to $G_2$. By symmetry we may assume that $u\not=v$. Again at most one path $P$ of ${{\cal P}}$ uses the edge $w_1w_2$ and the remaining paths of ${{\cal P}}$ all use the vertex $v(=v_1=v_2)$. If we replace $P$ by the path $uPw_1+w_1v_1$, then we obtain $p$ edge disjoint $u$-$v_1$ paths in $G_1$, and hence $p\leq {\lambda}_{G_1}(u,v_1)\leq k$. This shows that ${\lambda}(G)\leq k$ and so $G\in {{\cal C}}_k$.
Suppose conversely that $G\in {{\cal C}}_k$. From Theorem \[Th:Hajos\] it follows that $G_1$ and $G_2$ are critical graphs, both with chromatic number $k+1$. So it suffices to show that ${\lambda}(G_i)\leq k$ for $i=1,2$. By symmetry it suffices to show that ${\lambda}(G_1)\leq k$. To this end let $u$ and $u'$ be distinct vertices of $G_1$ and let $p={\lambda}_{G_1}(u,u')$. Then there is a system ${{\cal P}}$ of $p$ edge disjoint $u$-$u'$ paths in $G_1$. At most one path $P$ of ${{\cal P}}$ can contain the edge $v_1w_1$. Clearly, there is a $v_2$-$w_2$ path $P'$ in $G_2$ not containing the edge $v_2w_2$. So if we replace the edge $v_1w_1$ of $P$ by the path $P'$, we get $p$ edge disjoint $u$-$u'$ paths of $G$, and hence $p\leq {\lambda}_G(u,u')\leq k$. This shows that ${\lambda}(G_1)\leq k$ and by symmetry ${\lambda}(G_2)\leq k$. Hence $G_1, G_2\in {{\cal C}}_k$.
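As a concrete illustration of Claim \[Cl:A2\], one can compute ${\lambda}(G)$ for the Hajós join of two $K_4$'s directly: ${\lambda}_G(u,u')$ is the maximum number of edge-disjoint $u$-$u'$ paths, which equals a unit-capacity max-flow by Menger's theorem. A sketch (vertex labels and function names are ours):

```python
from collections import deque
from itertools import combinations

def edge_disjoint_paths(n, edges, s, t):
    """lambda_G(s,t): the number of pairwise edge-disjoint s-t paths, computed
    as a unit-capacity max-flow with BFS augmentation (Edmonds-Karp)."""
    cap = {}
    adj = [[] for _ in range(n)]
    for u, v in edges:
        cap[(u, v)] = cap[(v, u)] = 1
        adj[u].append(v)
        adj[v].append(u)
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:   # push one unit of flow along the path
            cap[(parent[v], v)] -= 1
            cap[(v, parent[v])] += 1
            v = parent[v]
        flow += 1

# Hajós join of two K4's: in each copy delete the edge 01, identify the two
# 0-vertices (vertex 0 below), and join the two 1-vertices (vertices 1 and 4).
E = [e for e in combinations(range(4), 2) if e != (0, 1)]
E += [(0, 5), (0, 6), (4, 5), (4, 6), (5, 6), (1, 4)]
lam = max(edge_disjoint_paths(7, E, u, v) for u, v in combinations(range(7), 2))
assert lam == 3   # the join of two members of C_3 stays in C_3
```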
As a consequence of Claim \[Cl:A1\] and Claim \[Cl:A2\] and the definition of the class ${{\cal H}}_k$ we obtain the following claim.
Let $k\geq 3$ be an integer. Then the class ${{\cal H}}_k$ is a subclass of ${{\cal C}}_k$. \[Cl:A3\]
Let $k\geq 3$ be an integer, and let $G$ be a graph belonging to the class ${{\cal C}}_k$. If $G$ is 3-connected, then either $k=3$ and $G$ is an odd wheel, or $k\geq 4$ and $G$ is a complete graph of order $k+1$. \[Cl:A4\]
[*Proof:*]{} The proof is by contradiction, where we consider a counterexample $G$ whose order $|G|$ is minimum. Then $G\in {{\cal C}}_k$ is a 3-connected graph, and either $k=3$ and $G$ is not an odd wheel, or $k\geq 4$ and $G$ is not a complete graph of order $k+1$. First we claim that $|G_H|\geq 2$. If $G_H={\varnothing}$, then Theorem \[Th:Gallai\](b) implies that $G$ is a complete graph of order $k+1$, a contradiction. If $|G_H|=1$, then Theorem \[Th:Gallai\](c) implies that $k=3$ and $G$ is an odd wheel, a contradiction. This proves the claim that $|G_H|\geq 2$. Then let $u$ and $v$ be distinct high vertices of $G$. Since $G\in {{\cal C}}_k$, Theorem \[Th:Dirac\](d) implies that ${\lambda}_G(u,v)=k$ and, therefore, $G$ contains a separating edge set $F$ of size $k$ which separates $u$ and $v$. From Theorem \[Th:Toft\] it then follows that there is an edge cut $(X,Y,F)$ satisfying the three properties of that theorem. Since $F$ separates $u$ and $v$, we may assume that $u\in X$ and $v\in Y$. By Theorem \[Th:Toft\](a), $|Y_F|=k$ and hence each vertex of $Y_F$ is incident to exactly one edge of $F$. Since $Y$ contains the high vertex $v$, we conclude that $|Y_F|<|Y|$. Now we consider the graph $G'$ obtained from $G[X \cup Y_F]$ by adding all edges between the vertices of $Y_F$, so that $Y_F$ becomes a clique of $G'$. By Theorem \[Th:Toft\](b), $G'$ is a critical graph with chromatic number $k+1$. Clearly, every vertex of $Y_F$ is a low vertex of $G$ and every vertex of $X$ has in $G'$ the same degree as in $G$. Since $X$ contains the high vertex $u$ of $G$, this implies that $|X_F|<|X|$. Since $G$ is 3-connected, we conclude that $|X_F|\geq 3$ and that $G'$ is 3-connected.
Now we claim that ${\lambda}(G')\leq k$. To prove this, let $x$ and $y$ be distinct vertices of $G'$. If $x$ or $y$ is a low vertex of $G'$, then ${\lambda}_{G'}(x,y)\leq k$ and there is nothing to prove. So assume that both $x$ and $y$ are high vertices of $G'$. Then both vertices $x$ and $y$ belong to $X$. Let $p={\lambda}_{G'}(x,y)$ and let ${{\cal P}}$ be a system of $p$ edge disjoint $x$-$y$ paths in $G'$. We may choose ${{\cal P}}$ such that the number of edges in ${{\cal P}}$ is minimum. Let ${{\cal P}}_1$ be the paths in ${{\cal P}}$ which use edges of $F$. Since $|Y_F|=k$ and each vertex of $Y_F$ is incident with exactly one edge of $F$, this implies that each path $P$ in ${{\cal P}}_1$ contains exactly two edges of $F$. Since $|X_F|<|X|$ and $|Y_F|<|Y|$, there are vertices $u'\in X{\setminus}X_F$ and $v'\in Y{\setminus}Y_F$. By Theorem \[Th:Dirac\](d) it follows that ${\lambda}_G(u',v')=k$ and, therefore, there are $k$ edge disjoint $u'$-$v'$ paths in $G$. Since $|Y_F|=k$, for each vertex $z\in Y_F$, there is a $v'$-$z$ path $P_z$ in $G[Y]$ such that these paths are edge disjoint. Now let $P$ be an arbitrary path in ${{\cal P}}_1$. Then $P$ contains exactly two vertices of $Y_F$, say $z$ and $z'$, and we can replace the edge $zz'$ of the path $P$ by a $z$-$z'$ path contained in $P_z \cup P_{z'}$. In this way we obtain a system of $p$ edge disjoint $x$-$y$ paths in $G$, which implies that $p\leq {\lambda}_G(x,y)\leq k$. This proves the claim that ${\lambda}(G')\leq k$. Consequently $G'\in {{\cal C}}_k$. Clearly, $|G'|<|G|$ and either $k=3$ and $G'$ is not an odd wheel, or $k\geq 4$ and $G'$ is not a complete graph of order $k+1$. This, however, is a contradiction to the choice of $G$. Thus the claim is proved.
Let $k\geq 3$ be an integer, and let $G$ be a graph belonging to the class ${{\cal C}}_k$. If $G$ has a separating vertex set of size 2, then $G=G_1\bigtriangleup G_2$ is the Hajós join of two graphs $G_1$ and $G_2$, which both belong to ${{\cal C}}_k$. \[Cl:A5\]
[[*Proof:*]{} If $G$ has a separating set consisting of one edge and one vertex, then Theorem \[Th:2Sep=Hajos\] implies that $G$ is the Hajós join of two graphs $G_1$ and $G_2$. By Claim \[Cl:A2\] it then follows that both $G_1$ and $G_2$ belong to ${{\cal C}}_k$ and we are done. It remains to consider the case that $G$ does not contain a separating set consisting of one edge and one vertex. By assumption, there is a separating vertex set of size 2, say $S=\{u,v\}$. Then Theorem \[Th:2Conn\] implies that $G-S$ has exactly two components $H_1$ and $H_2$ such that the graphs $G_i=G[V(H_i) \cup S]$ with $i=1,2$ satisfy the three properties of that theorem. In particular, we have that $G_1'=G_1+uv$ is critical and has chromatic number $k+1$. By Theorem \[Th:Dirac\](c), it then follows that ${\lambda}_{G_1'}(u,v)\geq k$ implying that ${\lambda}_{G_1}(u,v)\geq k-1$. Since $G\in {{\cal C}}_k$, we then conclude that ${\lambda}_{G_2}(u,v)\leq 1$. Since $G_2$ is connected, this implies that $G_2$ has a bridge $e$. Since $k\geq 3$, we conclude that $\{u,e\}$ or $\{v,e\}$ is a separating set of $G$, a contradiction. ]{}
As a consequence of Claim \[Cl:A4\] and Claim \[Cl:A5\], we conclude that the class ${{\cal C}}_k$ is a subclass of the class ${{\cal H}}_k$. Together with Claim \[Cl:A3\] this yields ${{\cal H}}_k={{\cal C}}_k$ as wanted.
[*Proof of Theorem \[Th:local\]:*]{} For the proof of this theorem let $G$ be a non-empty graph with ${\lambda}(G)=k$. By (\[Equ:lambda\]) we obtain that ${\chi}(G)\leq k+1$. If one block $H$ of $G$ belongs to ${{\cal H}}_k$, then $H\in {{\cal C}}_k$ (by Theorem \[Th:Ck=Hk\]) and hence ${\chi}(G)=k+1$ (by (\[Equ:Block\])).
Assume conversely that ${\chi}(G)=k+1$. Then $G$ contains a subgraph $H$ which is critical and has chromatic number $k+1$. Clearly, ${\lambda}(H)\leq {\lambda}(G)\leq k$, and, therefore, $H\in {{\cal C}}_k$. By Theorem \[Th:Dirac\](b), $H$ contains no separating vertex. We claim that $H$ is a block of $G$. For otherwise, $H$ would be a proper subgraph of a block $G'$ of $G$. This implies that there are distinct vertices $u$ and $v$ in $H$ which are joined by a path $P$ of $G$ with $E(P)\cap E(H)={\varnothing}$. Since ${\lambda}_H(u,v)\geq k$ (by Theorem \[Th:Dirac\](c)), this implies that ${\lambda}_G(u,v)\geq k+1$, which is impossible. This proves the claim that $H$ is a block of $G$. By Theorem \[Th:Ck=Hk\], ${{\cal C}}_k={{\cal H}}_k$ implying that $H\in {{\cal H}}_k$. This completes the proof of the theorem. [$\Box$]{}
The case ${\lambda}=3$ of Theorem \[Th:local\] was obtained earlier by Aboulker [*et al.*]{} [@AlboukerV2016]; their proof is similar to our proof. Let ${{\cal L}}_k$ denote the class of graphs $G$ satisfying ${\lambda}(G)\leq k$. It is well known that membership in ${{\cal L}}_k$ can be tested in polynomial time. It is also easy to show that there is a polynomial-time algorithm that, given a graph $G\in {{\cal L}}_k$, decides whether $G$ or one of its blocks belongs to ${{\cal H}}_k$. So it can be tested in polynomial time whether a graph $G\in {{\cal L}}_k$ satisfies ${\chi}(G)\leq k$. Moreover, the proof of Theorem \[Th:local\] yields a polynomial-time algorithm that, given a graph $G\in {{\cal L}}_k$, finds a coloring in ${{\cal CO}}_k(G)$ when such a coloring exists. This result provides a positive answer to a conjecture made by Aboulker [*et al.*]{} [@AlboukerV2016 Conjecture 1.8]. The case $k=3$ was solved by Aboulker [*et al.*]{} [@AlboukerV2016].
For fixed $k\geq 1$, there is a polynomial-time algorithm that, given a graph $G\in {{\cal L}}_k$, finds a coloring in ${{\cal CO}}_k(G)$ or a block belonging to ${{\cal H}}_k$. \[Th:Algorithm\]
[*Sketch of Proof:*]{} The theorem is evident if $k=1,2$, and the case $k=3$ was solved by Aboulker [*et al.*]{} [@AlboukerV2016]. Hence we assume that $k\geq 4$ and $G\in {{\cal L}}_k$. If we find for each block $H$ of $G$ a coloring in ${{\cal CO}}_k(H)$, we can piece these colorings together by permuting colors to obtain a coloring in ${{\cal CO}}_k(G)$. Hence we may assume that $G$ is a block. First, we check whether $G$ has a separating set $S$ consisting of one vertex and one edge. Suppose we find such a set, say $S=\{v,e\}$ with $v\in V(G)$ and $e\in E(G)$. Then $G-e$ is the union of two connected graphs $G_1$ and $G_2$ having only vertex $v$ in common, where $e=w_1w_2$ and $w_i\in V(G_i)$ for $i=1,2$. Both blocks $G_1'=G_1+vw_1$ and $G_2'=G_2+vw_2$ belong to ${{\cal L}}_k$. Now we check whether these blocks belong to ${{\cal H}}_k$. If both blocks $G_1'$ and $G_2'$ belong to ${{\cal H}}_k$, then $vw_i\not\in E(G_i)$ for $i=1,2$, and hence $G$ belongs to ${{\cal H}}_k$ and we are done. If one of the blocks, say $G_1'$, does not belong to ${{\cal H}}_k$, we can construct a coloring $f_1\in {{\cal CO}}_k(G_1')$. Moreover, no block of $G_2$ belongs to ${{\cal H}}_k$; hence we can construct a coloring $f_2\in {{\cal CO}}_k(G_2)$. Then $f_1\in {{\cal CO}}_k(G_1)$ and $f_1(v)\not=f_1(w_1)$. Since $k\geq 4$, we can permute colors in $f_2$ such that $f_1(v)=f_2(v)$ and $f_1(w_1)\not=f_2(w_2)$. Consequently, $f=f_1 \cup f_2$ belongs to ${{\cal CO}}_k(G)$ and we are done.
It remains to consider the case that $G$ contains no separating set consisting of one vertex and one edge. Then let $p$ denote the number of vertices of $G$ whose degree is greater than $k$. If $p\leq 1$, then let $v$ be a vertex of maximum degree in $G$. Color $v$ with color $1$ and let $L$ be a list assignment for $H=G-v$ satisfying $L(u)=\{2,3, \ldots ,k\}$ if $vu\in E(G)$ and $L(u)=\{1,2, \ldots, k\}$ otherwise. Then $H$ is connected and $|L(u)|\geq d_H(u)$ for all $u\in V(H)$. Now we can use the degree version of Brooks’ theorem, see [@StiebitzT2015 Theorem 2.1]. Either we find a coloring $f$ of $H$ such that $f(u)\in L(u)$ for all $u\in V(H)$, yielding a coloring in ${{\cal CO}}_k(G)$, or $|L(u)|=d_H(u)$ for all $u\in V(H)$ and each block of $H$ is a complete graph or an odd cycle. In this case, $d_H(u)\in \{k,k-1\}$ for all $u\in V(H)$ and, since $k\geq 4$, each block of $H$ is a $K_k$ or a $K_2$. Since $G$ contains no separating set consisting of one vertex and one edge, this implies that $H=K_k$ and so $G=K_{k+1}\in {{\cal H}}_k$ and we are done. If $p\geq 2$, then we choose two vertices $u$ and $u'$ whose degrees are greater than $k$. Then we construct an edge cut $(X,Y,F)$ with $u\in X$, $u'\in Y$, and $|F|={\lambda}_G(u,u')$. We may assume that $a=|X_F|$ and $b=|Y_F|$ satisfy $a\leq b\leq k$. If $b\leq k-1$, then both graphs $G[X]$ and $G[Y]$ belong to ${{\cal L}}_k$ and there are colorings $f_X\in {{\cal CO}}_k(G[X])$ and $f_Y\in {{\cal CO}}_k(G[Y])$. Note that no block of these two graphs can belong to ${{\cal H}}_k$. By permuting colors in $f_Y$, we can combine the two colorings $f_X$ and $f_Y$ to obtain a coloring $f\in {{\cal CO}}_k(G)$ (by Lemma \[Le:perfect\]). If $a<b=k$, then we consider the graph $G_1$ obtained from $G[X \cup Y_F]$ by adding all edges between the vertices of $Y_F$, so that $Y_F$ becomes a clique of $G_1$.
Then $G_1$ belongs to ${{\cal L}}_k$ (see the proof of Claim \[Cl:A4\]) and, since $G$ contains no separating set consisting of one vertex and one edge, the block $G_1$ does not belong to ${{\cal H}}_k$. Hence there are colorings $f_1\in {{\cal CO}}_k(G_1)$ and $f_Y\in {{\cal CO}}_k(G[Y])$. Then the restriction of $f_1$ to $X$ yields a coloring $f_X\in {{\cal CO}}_k(G[X])$ such that $|f_X(X)|\geq 2$. By permuting colors in $f_Y$, we can combine the two colorings $f_X$ and $f_Y$ to obtain a coloring $f\in {{\cal CO}}_k(G)$ (by Lemma \[Le:perfect\]). It remains to consider the case $a=b=k$. Then let $G_2$ be the graph obtained from $G[Y \cup X_F]$ by adding all edges between the vertices of $X_F$, so that $X_F$ becomes a clique of $G_2$. Then we find colorings $f_1\in {{\cal CO}}_k(G_1)$ and $f_2\in {{\cal CO}}_k(G_2)$ and, hence, colorings $f_X\in {{\cal CO}}_k(G[X])$ and $f_Y\in {{\cal CO}}_k(G[Y])$ such that $|f_X(X)|\geq 2$ and $|f_Y(Y)|\geq 2$. By permuting colors in $f_Y$, we can combine the two colorings $f_X$ and $f_Y$ to obtain a coloring $f\in {{\cal CO}}_k(G)$ (by Lemma \[Le:perfect\]). [$\Box$]{}
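The color-permutation gluing used repeatedly in the sketch can be illustrated in its simplest form, two blocks meeting in a cut vertex: transpose two colors in one block's coloring so that the colorings agree at the shared vertex, then take the union. A minimal sketch (our notation, not from the paper):

```python
def glue_at_cutvertex(f1, f2, v):
    """Combine proper colorings f1, f2 (dicts vertex -> color) of two blocks
    that share only the cut vertex v: transpose two colors of f2 so that
    f2[v] == f1[v]; the union is then proper on the whole graph, since every
    edge lies entirely inside one of the two blocks."""
    a, b = f1[v], f2[v]
    if a != b:
        swap = {a: b, b: a}
        f2 = {u: swap.get(c, c) for u, c in f2.items()}
    combined = dict(f2)
    combined.update(f1)
    return combined

# toy example: two triangles {0,1,2} and {0,3,4} sharing the cut vertex 0
f1 = {0: 0, 1: 1, 2: 2}
f2 = {0: 2, 3: 0, 4: 1}
f = glue_at_cutvertex(f1, f2, 0)
assert f[0] == 0
assert all(f[x] != f[y] for x, y in [(0, 1), (0, 2), (1, 2), (0, 3), (0, 4), (3, 4)])
```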
[99]{}
P. Aboulker, N. Brettell, F. Havet, D. Marx, and N. Trotignon, Colouring graphs with constraints on connectivity, arXiv:1505.01616v1 \[math.CO\] 7 May 2015.
R. L. Brooks, On colouring the nodes of a network. [*Proc. Cambridge Philos. Soc.*]{} [**37**]{} (1941), 194–197.
G. A. Dirac, A property of 4-chromatic graphs and some remarks on critical graphs. [*J. London Math. Soc.*]{} [**27**]{} (1952), 85–92.
G. A. Dirac, The structure of $k$-chromatic graphs. [*Fund. Math.*]{} [**40**]{} (1953), 42–55.
G. A. Dirac, A theorem of R. L. Brooks and a conjecture of H. Hadwiger. [*Proc. London Math. Soc.*]{} (3) [**7**]{} (1957), 161–195.
G. A. Dirac, On the structure of 5- and 6-chromatic graphs. [*J. Reine Angew. Math.*]{} [**214/215**]{} (1964), 43–52.
G. A. Dirac, B. A. S[ø]{}rensen, and B. Toft, An extremal result for graphs with an application to their colorings. [*J. Reine Angew. Math.*]{} [**268/269**]{} (1974), 216–221.
T. Gallai, Kritische Graphen I. [*Publ. Math. Inst. Hungar. Acad. Sci.*]{} [**8**]{} (1963), 165–192.
G. Hajós, Über eine Konstruktion nicht $n$-färbbarer Graphen. [*Wiss. Z. Martin Luther Univ. Halle-Wittenberg, Math.-Natur. Reihe*]{} [**10**]{} (1961), 116–117.
T. Jensen and B. Toft, Choosability versus chromaticity. [*Geombinatorics*]{} [**5**]{} (1995), 45–64.
W. Mader, Grad und lokaler Zusammenhang in endlichen Graphen. [*Math. Ann.*]{} [**205**]{} (1973), 9–11.
M. Stiebitz and B. Toft, Brooks’s theorem. In: L. W. Beineke and R. Wilson, eds., [*Graph Coloring Theory*]{}, pp. 41–65, Cambridge Press, 2015.
B. Toft, Some contributions to the theory of colour-critical graphs. Ph.D. thesis, University of London 1970. Published as No. 14 in Various Publication Series, Matematisk Institut, Aarhus Universitet 1970.
B. Toft, Colour-critical graphs and hypergraphs. [*J. Combin. Theory (B)*]{} [**16**]{} (1974), 145–161.
B. Toft, Critical subgraphs of colour critical graphs. [*Discrete Math.*]{} [**7**]{} (1974), 377–392.
[^1]: The authors thank the Danish Research Council for support through the program Algodisc.
[^2]: Technische Universität Ilmenau, Inst. of Math., PF 100565, D-98684 Ilmenau, Germany. E-mail address: [email protected]
[^3]: University of Southern Denmark, IMADA, Campusvej 55, DK-5320 Odense M, Denmark E-mail address: [email protected]
---
abstract: 'Resonant $e^+e^-$ pair production by an electron in a magnetic field near the process threshold has been studied analytically. Using Nikishov’s theorem, an estimate of the number of events has been made for a magnetic field equivalent to the laser wave in the SLAC experiment \[D. Burke *et al.*, Phys. Rev. Lett. **79**, 1626 (1997)\]. The obtained estimate is in reasonable agreement with the experimental data.'
author:
- 'O. P. Novak'
- 'R. I. Kholodov'
title: |
Electron-positron pair production by an electron\
in a magnetic field in the resonant case
---
Introduction
============
Fundamental processes in intense external electromagnetic fields are of great interest due to the existence of strongly magnetized neutron stars and the construction of high-power laser systems. Known physical processes are modified and new ones occur in strong field environments [@Harding91]. For instance, second-order processes become more substantial, e.g., double photon emission [@Lotstedt; @Fomin]. Thus, a quantum electrodynamic treatment of such processes is necessary when the field strength is comparable to the critical one ($B_c=m^2c^3/e\hbar \approx {4.4\cdot 10^{13}}$ G).
A strong enough constant magnetic field is not feasible in the laboratory at the present time. Nevertheless, it is possible to observe quantum electrodynamic (QED) processes in a strong magnetic field in experiments on heavy ion collisions. If the impact parameter is of the order of ${\sim10^{-10}}$ cm, then the magnetic field of the moving ions can reach ${\sim10^{12}}$ G in the region between the ions, while their electric fields compensate each other.
At the present time, FAIR (Facility for Antiproton and Ion Research) is under construction at the GSI Helmholtz Centre for Heavy Ion Research, Darmstadt, Germany. One of the goals of the FAIR project is to test QED in strong electromagnetic fields. Experiments on the observation of QED processes in strong magnetic fields in ion collisions are possible within the framework of the FAIR project.
Note that the process of pair production by an electron has been experimentally observed in an intense laser field at SLAC National Accelerator Laboratory [@Burke]. After the SLAC experiment, pair creation in laser-proton collisions or in counterpropagating laser beams was studied in a number of works, e. g. [@Muller03]–[@Muller11].
Electron-positron pair production by an electron in an intense laser wave was studied numerically in Ref. [@Hu]. In particular, the authors considered both the resonant and nonresonant regimes of the process. In Ref. [@Ilderton11] the trident pair production amplitude in a strong laser background was calculated.
Pair production by an electron in a magnetic field was first studied by T. Erber [@Erber66]. In Ref. [@Erber66], the rate of a cascade consisting of photon emission followed by photoproduction in a magnetic field was estimated for both cases of a real and a virtual intermediate photon.
In the high-energy limit, the considered process in an arbitrary constant homogeneous electromagnetic field was studied in Ref. [@Baier72].
In Ref. [@Novak10] the kinematics of pair production in a magnetic field was considered and expressions for the total process rate containing integrals over the orbit-center coordinates were obtained.
The purpose of the present paper is to calculate the integrals and obtain the explicit analytical expressions for the process rate. The resonant case is studied, when the rate factorizes and can be expressed via the product of the rates of the corresponding first-order processes. It is assumed that all final particles occupy the ground Landau level. The explicit analytical expressions for the total rate are obtained for subcritical magnetic field strength, ${B \lesssim B_c}$.
Using Nikishov’s theorem [@Nikishov64; @FIAN], the obtained result has been compared with the experiment on the observation of pair production by an electron in a laser field [@Burke].
Relativistic units ($\hbar = c = 1$) are used throughout the paper.
Process rate
============
Feynman diagrams of the considered process are shown in Fig. \[fig1\], where the double lines represent the solutions of Dirac equations in a magnetic field.
The process is studied near the threshold, when the final particles occupy the ground Landau level. A Lorentz transformation to a reference frame moving along the field does not change the magnetic field. Thus, without loss of generality the longitudinal momentum of the initial electron can be chosen equal to zero, $p_z=0$.
The corresponding probability amplitude can be written as $$\begin{gathered}
\displaystyle
S_{fi}=i\alpha \iint d^4x \, d^4x' \times\\
\times \left[
(\bar\Psi_2 \gamma^\mu \Psi)D_{\mu\nu}
(\bar\Psi_1' \gamma^\nu \Psi_+') -
\right. \\
\left.
-(\bar\Psi_1 \gamma^\mu \Psi)D_{\mu\nu}
(\bar\Psi_2' \gamma^\nu \Psi_+') \right],\end{gathered}$$ where $\alpha$ is the fine structure constant and $D_{\mu\nu}$ is the photon propagator, $$D_{\mu\nu}=\frac{g_{\mu\nu}}{(2\pi)^4}
\int d^4k\,e^{-ik(x-x')}\frac{4\pi}{k^\lambda k_\lambda},$$ and $g_{\mu\nu}$ is the metric tensor.
The process rate is defined by the following equation: $$dW=\frac12 |S_{fi}|^2
\frac{S d^2p_1}{(2\pi)^2}
\frac{S d^2p_2}{(2\pi)^2}
\frac{S d^2p_+}{(2\pi)^2}.$$ Here, $S$ is the normalizing area, $d^2p=dp_{y}dp_{z}$.
The general expressions for the process rate read [@Novak10] $$\begin{gathered}
\label{Wtotal+}
W^+\approx\frac{\alpha^2m}{3\pi^2\sqrt{3} l!}Y,\\
\label{Wtotal-}
W^-\sim b W^+,\end{gathered}$$ where the superscript denotes initial electron spin projection, $l$ is the Landau level number of the initial electron and ${b=B/B_c}$, $B$ is magnetic field strength and $B_c$ is critical field strength. The integral $Y$ has the form $$\label{Y13}
Y=\iint ds\,du\left|e^{-s^2}D\right|^2,$$ where $$\label{intX}
D=\int{\frac{(s+iq)^l}{r^2-q^2}e^{-q^2-2iuq}dq}.$$ Here, the following notations are used: $$\label{ab}
\begin{array}{l}
s=m\Omega(x_0-x_{01}),\\
u=m\Omega(x_0-x_{02}),\\
q=k_x/m\sqrt{2b},\\
r^2=\Omega^2-s^2, \\
\Omega^2=2/b.
\end{array}$$ $x_0$, $x_{01}$, and $x_{02}$ are the $x$ coordinates of the classical orbit centers of the initial and final electrons, respectively, and $k_x$ is the $x$ component of intermediate photon momentum.
The purpose of this paper is to carry out integration in Eqs. (\[intX\]) – (\[Y13\]) and obtain the explicit analytical expressions for the process rate in the resonant case. The integral $D$ in Eq. (\[intX\]) can be expressed in the form $$\label{sum}
D=\sum_{k=0}^l C_k^l s^{l-k} i^k D_k$$ where $C_k^l=l!/k!(l-k)!$ are binomial coefficients and $$D_k=
\int_{-\infty}^\infty q^k \frac{e^{-q^2-2iuq}}{r^2-q^2}dq.$$ The integrand has a singularity when ${r^2<0}$, and the value of the integral (\[intX\]) is small if $r^2$ is positive. Thus, it is necessary to consider the case $r^2<0$, in which the inequality $-\Omega<s<\Omega$ holds.
The resonant divergence results in the infinite value of the process rate. To eliminate the divergence, one should introduce a width of the intermediate state $\Delta$ in accordance with Breit-Wigner prescription [@Graziani] and replace $$\label{x0}
r^2\rightarrow \rho^2=r^2+ig, \qquad g=\frac{\Delta}{mb}.$$
The integration in $D_k$ can be carried out analytically (see the Appendix) and the result is represented by Eq. (\[app-Xkres\]).
As noted in the Appendix, the quantity $D_0$ contains a divergence in the point $s=\Omega$. Thus, when substituting Eq. (\[sum\]) to Eq. (\[Y13\]), the summands with $k\geqslant 1$ can be neglected: $$\begin{gathered}
\label{X}
D=\frac{s^l\pi e^{-\rho^2}}{2i\rho}
\left\{ e^{-2iu\rho} \mbox{erfc}({u-i\rho})+\right.\\
\left. +e^{2iu\rho} \mbox{erfc}({-u-i\rho})\right\}.\end{gathered}$$
Taking into account, that the width $\Delta$ is small, the integration over $ds$, $du$ in Eq. (\[Y13\]) can be carried out analytically too. After the corresponding calculations \[Eqs. (\[app-intdb\])–(\[app-Yres\])\] the expression $Y$ takes the form $$Y=b\pi^2\sqrt{\pi}\frac{\Omega^{2l}e^{-2\Omega^2}}{\Delta/m}
\frac{\Gamma(l+1/2)}{l!},$$ where $\Gamma(l+1/2)$ is the gamma function.
Averaging the rate over the initial electron spin projection, finally we obtain (in CGS units) $$\label{Wppe}
W=\alpha^2\left(\frac{mc^2}{\hbar}\right)
\frac{b\sqrt{\pi}}{6\sqrt{3}}
\frac{\Omega^{2l}e^{-2\Omega^2}}{\Delta/m}
\frac{\Gamma(l+1/2)}{(l!)^2}.$$
The quantity $\Delta$ in Eq. (\[Wppe\]) should be considered as the total width of the intermediate state. The main contribution to the width is made by the total radiation rate of the initial electron. There are a number of works related to this problem, e.g., [@Novak09]–[@Pavlov].
As an example, let us calculate the rate (\[Wppe\]) when the field strength is $b=0.1$ ($B\approx4.4\cdot 10^{12}$ G). In this case the threshold Landau level number is $l=40$ and $$\begin{gathered}
\label{Gamma-rad}
\Delta\approx 3.9 \cdot 10^{17} \quad (\mbox{s}^{-1}), \\
\label{Westim1}
W=1.2\cdot10^4 \quad (\mbox{s}^{-1}).\end{gathered}$$
The dependence of the rate (\[Wppe\]) on the magnetic field strength is shown in Fig. \[fig:rate\].
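Eq. (\[Wppe\]) is straightforward to evaluate numerically if it is done in logarithmic form, which avoids overflow of $\Omega^{2l}$ and the factorials. The sketch below (our script; $m$ denotes the frequency $mc^2/\hbar$, the width $\Delta$ is the value quoted in Eq. (\[Gamma-rad\]), and the physical constants are the standard ones) reproduces the estimate $W\approx 1.2\cdot10^4$ s$^{-1}$:

```python
import math

alpha = 1 / 137.036                      # fine structure constant
m = 8.187e-14 / 1.0546e-34               # m c^2 / hbar ~ 7.76e20 s^-1
b, l = 0.1, 40                           # B/B_c and the Landau level number
Delta = 3.9e17                           # width of the intermediate state, s^-1
Omega2 = 2.0 / b                         # Omega^2 = 2/b

lnW = (2 * math.log(alpha) + math.log(m)
       + math.log(b * math.sqrt(math.pi) / (6 * math.sqrt(3)))
       + l * math.log(Omega2) - 2 * Omega2             # Omega^(2l) e^(-2 Omega^2)
       - math.log(Delta / m)                           # divided by Delta/m
       + math.lgamma(l + 0.5) - 2 * math.lgamma(l + 1))  # Gamma(l+1/2)/(l!)^2
W = math.exp(lnW)
assert 1e4 < W < 2e4                     # matches the quoted W ~ 1.2e4 s^-1
```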
Factorization
=============
One can see that the main contribution to the process rate is made by the resonant mode. In this case, the total rate factorizes and can be expressed via the product of the rates of the first-order processes of magneto-bremsstrahlung and $e^+e^-$ pair production by a single photon [@Novak09; @Novak08]. Taking into account the threshold condition $E\approx3m$, and consequently $bl=4$ and $l\gg1$, the following expression can be found: $$\label{factor}
W=\frac{\sqrt{\delta E / m}}{3\sqrt{6}}
\frac{W_{e\rightarrow\gamma e}W_{\gamma \rightarrow ee^+}}{\Delta}.$$ Here, $W_{e\rightarrow\gamma e}$ and $W_{\gamma \rightarrow ee^+}$ are the rates of the corresponding first-order processes, cyclotron radiation and pair photoproduction, respectively: $$\label{Wrad}
W_{e\rightarrow\gamma e}=\alpha m \;\sqrt{\pi}
\frac{\Omega^{2l}e^{-\Omega^2}}{\Gamma(l+1/2)l},$$ $$\label{Wpp}
W_{\gamma \rightarrow ee^+}=\alpha m
\frac{be^{-\Omega^2}}
{\sqrt{2\:\delta E/m}},$$ where $\delta E=E-3m$ and $E = m\sqrt{1 + 2lb}$ is the incident electron energy.
Note that Eq. (\[Wpp\]) does not take the state widths into account and diverges if $\delta E$ goes to zero. In order that the final particles not be allowed to occupy excited energy levels, the condition $\delta E < mb$ should be fulfilled. For example, let $\delta E$ be $\frac 1 2 mb$ and $l = 40$; then $b = 0.10375$ and Eqs. (\[Wrad\]), (\[Wpp\]) give the following numerical values: $$\begin{gathered}
W_{e\rightarrow\gamma e}= 2.1\cdot10^{13} \quad (\mbox{s}^{-1}), \\
W_{\gamma\rightarrow ee^+}=7.9\cdot10^{9} \quad (\mbox{s}^{-1}).\end{gathered}$$
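The quoted values can be reproduced by evaluating Eqs. (\[Wrad\]) and (\[Wpp\]) directly; a short numeric check (our script, with the standard values of the physical constants):

```python
import math

alpha = 1 / 137.036
m = 8.187e-14 / 1.0546e-34               # m c^2 / hbar in s^-1
l, b = 40, 0.10375
Omega2 = 2.0 / b
dE_over_m = b / 2                        # delta E = m b / 2, as in the text

# Eq. (Wrad), in logarithmic form to avoid overflow of Omega^(2l)
lnW_rad = (math.log(alpha) + math.log(m) + 0.5 * math.log(math.pi)
           + l * math.log(Omega2) - Omega2
           - math.lgamma(l + 0.5) - math.log(l))
W_rad = math.exp(lnW_rad)
# Eq. (Wpp); note sqrt(2 * dE_over_m) = sqrt(b) for this choice of delta E
W_pp = alpha * m * b * math.exp(-Omega2) / math.sqrt(2 * dE_over_m)

assert abs(W_rad / 2.1e13 - 1) < 0.1     # quoted: 2.1e13 s^-1
assert abs(W_pp / 7.9e9 - 1) < 0.1       # quoted: 7.9e9 s^-1
```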
In the review [@Erber66] resonant pair production by an electron was considered as a cascade of synchrotron emission and photoproduction. However, the high-energy-limit rates of radiation and photoproduction were used, which imply that both the initial and the final states are ultrarelativistic. This approach is not applicable near the process threshold, when the final particles occupy the ground Landau level.
Moreover, Ref. [@Erber66] does not take the resonant width into account either. Instead, the decay time of the virtual state is assumed to be equal to half of the observation time.
As a result, the approach of Ref. [@Erber66] overestimates the process rate near the threshold. For the above parameters and an observation time equal to twice the radiative decay time, it yields $4.7 \cdot 10^7$ s$^{-1}$, while Eq. (\[Wppe\]) gives about $1.2 \cdot 10^3$ s$^{-1}$.
Discussion
==========
As stated in the introduction, critical or subcritical magnetic fields are not feasible in laboratory conditions. On the other hand, QED processes have already been observed in SLAC experiments involving the interaction of an intense laser with an electron beam [@Burke; @Bula; @Bamber]. In Ref. [@Burke], the observation of $e^+e^-$ pair production by an electron in a laser field was reported. About 100 positrons were observed in 21 962 collisions of a ${46.6}$ GeV electron beam with green ($\lambda = 527$ nm) terawatt laser pulses, for which $\eta = 0.36$, where $\eta = e\sqrt{A^{\mu}A_{\mu}} / mc^2$ and $A_\mu$ is the four-vector potential of the laser wave.
The positrons were interpreted as arising from Compton backscattering followed by the multiphoton Breit-Wheeler reaction, $$\begin{gathered}
\label{compton}
e^- + n\omega_0 \rightarrow e^- +\omega',\\
\label{breit}
\omega' + n'\omega_0 \rightarrow e^- +e^+,\end{gathered}$$ where $\omega_0$ denotes laser photons. Such a two-step process was distinguished from the less probable trident reaction $$\label{trident}
e^- +n''\omega_0\rightarrow e^- +e^-e^+.$$
Nevertheless, it is impossible to observe the intermediate photon $\omega'$ without destroying the whole process. The photon should be represented by an internal line in the Feynman diagram and by a photon propagator in the probability amplitude (not by a wave vector). Consequently, to develop a consistent theory, one should consider the more general trident reaction, Eq. (\[trident\]).
However, when kinematics allows an on-shell intermediate state (a so-called resonance), the Feynman diagram of the trident process (\[trident\]) decomposes into two first-order diagrams corresponding to the processes (\[compton\]) and (\[breit\]). In this case the total rate can be expressed via the rates of the processes (\[compton\]), (\[breit\]) with an additional coefficient that can be obtained only in the framework of the full theory.
It is necessary to note that Nikishov and Ritus [@Nikishov64] have proven that the expression for the process rate has the same form for any external field, provided the rate is expressed in terms of gauge invariants and the velocity of the incident particle is ultrarelativistic. In Ref. [@Nikishov64] the rates of one-vertex processes were obtained in the case of a laser field. If the variability of the laser field is irrelevant, the obtained expressions reduce to the rates of the processes in crossed electric and magnetic fields, when ${\vec{\mathcal{E}} \perp \vec B}$ and $ \mathcal{E} = B$. The total rates of such processes depend on the single invariant parameter $e^2(F_{\mu\nu}p_\nu)^2/m^6$, where $F_{\mu\nu}$ is the electromagnetic tensor and $p_\nu$ is the 4-momentum. This allows one to pass to the general case of an arbitrary constant field, in which the rates also depend on two other parameters, $e^2F^2_{\mu\nu}/m^4$ and $ie^2\varepsilon_{\mu\nu\lambda\sigma}F^{\mu\nu}F^{\lambda\sigma}$ (both equal to zero if $\vec{\mathcal{E}} \perp \vec B$ and $\mathcal{E} = B$).
However, since feasible fields are much less than the critical one, $m^2/e$, these additional parameters are much less than unity. On the other hand, if the particle energy is high enough, these parameters are also much less than the first one and can be omitted. Therefore, the obtained rates are applicable in the case of an arbitrary constant field, provided the incident particle has relativistic energy.
In particular, considering $F_{\mu\nu}$ as a magnetic field, Nikishov and Ritus have obtained the results of Klepikov [@Klepikov] for intensity of a photon emission by an electron and for the rate of pair production by a photon in a magnetic field.
The physical reason is that, due to the Lorentz transformation, an arbitrary electromagnetic field transforms into almost equal and almost perpendicular electric and magnetic fields in the rest frame of a relativistic particle.
Thus, it is possible to compare the analytical result for the case of a magnetic field with the experimental data of Ref. [@Burke].
If a relativistic electron propagates opposite to an electromagnetic wave of field strength $\mathcal{E}_L$, then it experiences a field strength ${\mathcal{E}_0=2\gamma \mathcal{E}_L}$ in its rest frame, where $\gamma$ is the gamma factor. On the other hand, if an electron moves perpendicular to a magnetic field $B_{eq}$, then the field strength in the rest frame is approximately ${\mathcal{E}_{0eq}=\gamma B_{eq}}$. Comparing $\mathcal{E}_0$ and $\mathcal{E}_{0eq}$, one can see that the strength of the equivalent magnetic field in the lab frame is $$B_{eq}=2\mathcal{E}_L.$$ Note that the factor 2 arises because the equivalent magnetic field should account for both the electric and magnetic fields of the electromagnetic wave.
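The factor of 2 follows from the Lorentz transformation of the transverse fields for a head-on boost; a minimal numerical check (Gaussian units, plane wave with $|\vec{\mathcal{E}}|=|\vec B|$):

```python
import math

def boosted_transverse_E(e_wave, b_wave, beta):
    """Transverse-field Lorentz transformation for a head-on boost:
    E' = gamma (E + beta * B), for E perpendicular to B and the wave
    propagating opposite to the electron (Gaussian units)."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return gamma * (e_wave + beta * b_wave)

# For a plane wave |E| = |B|, so E' / (gamma E_L) = 1 + beta -> 2
# as beta -> 1, i.e. E' -> 2 gamma E_L as stated in the text.
E_L = 1.0
for beta in (0.9, 0.99, 0.999):
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    print(beta, boosted_transverse_E(E_L, E_L, beta) / (gamma * E_L))
```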
In order to pass to the case of the alternating field of an electromagnetic wave, the rate $W$ (\[Wppe\]) for the process in a magnetic field should be averaged over the wave period to obtain the equivalent process rate in a laser field, $W_{eq}$ [@Nikishov64; @FIAN]: $$\label{nikishov}
W_{eq}=\frac{2}{\pi}
\int\limits_0^{\pi/2}W(B_{eq}\sin\phi)\,d\phi.$$ Equation (\[nikishov\]) allows us to compare the rates of processes in a magnetic field and in an intense laser wave.
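The averaging in Eq. (\[nikishov\]) is a simple quadrature; a minimal sketch, using a toy power-law rate only to check the numerics (for $W\propto B^2$ the cycle average is exactly $W(B_{eq})/2$, since $(2/\pi)\int_0^{\pi/2}\sin^2\phi\,d\phi = 1/2$):

```python
import math

def cycle_average(W, b_eq, n=2001):
    """Eq. (nikishov): W_eq = (2/pi) * int_0^{pi/2} W(B_eq sin phi) dphi,
    evaluated by composite trapezoidal quadrature."""
    h = (math.pi / 2.0) / (n - 1)
    total = 0.0
    for i in range(n):
        w = W(b_eq * math.sin(i * h))
        total += 0.5 * w if i in (0, n - 1) else w
    return (2.0 / math.pi) * total * h

# Sanity check with a toy rate W(B) = B^2:
print(cycle_average(lambda B: B * B, 3.0))   # -> approx 4.5 = W(3.0)/2
```

In an application, `W` would be the magnetic-field rate of Eq. (\[Wppe\]) evaluated at the instantaneous field value.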
However, Eq. (\[Wppe\]) is valid near the process threshold only, when the condition $E\approx 3m$ is fulfilled. Therefore, it is necessary to calculate the rate in the moving “threshold” frame, where the electron energy is $E\approx 3m$ and the threshold conditions are fulfilled explicitly. The amplitude of the equivalent magnetic field in the threshold frame is $B_{eq} \approx 6.1 \cdot 10^{12}$ G and, consequently, $b\approx 0.14$.
It should be noted that in the SLAC experiment pair production was also observed near the threshold [@Burke]. Although the electron beam energy was 46.6 GeV, the major part of this energy was the energy of the rectilinear motion of the center of mass.
It is possible to estimate the electron-laser interaction time in the laboratory frame, $\Delta t_L$, and the number of electrons in the interaction region, $N_{int}$, using the data from Ref. [@Burke]: the electron beam size is $\sim 25\times40\;\mu\mbox{m}^2$, the bunches contained $\sim 7\cdot10^9$ electrons, the laser beam focal area is $30 \;\mu\mbox{m}^2$, and the beam crossing angle is $17^\circ$. Thus, $\Delta t_L\approx 50$ fs and $N_{int}\sim 2.8\cdot10^8$.
Note that to calculate the rate, Eq. (\[Wppe\]), it is necessary to take into account the limited interaction time as well as the radiative width (\[Gamma-rad\]). Therefore, the intermediate-state width is the sum of the radiative width and the quantity $1/\Delta t_T$, where $\Delta t_T=\Delta t_L/\gamma$ is the laser-electron interaction time in the threshold frame.
The number of produced pairs can be estimated according to the expression $$N_{e^+e^-}=k\cdot N_{int}(1-e^{-W_{eq} \Delta t_T}),$$ where $k=21\:962$ is the number of collisions of the electron and laser beams [@Burke].
The corresponding value of $\sim 80$ events is in reasonable agreement with the experimental result of $106 \pm 14$ indicated in Ref. [@Burke].
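The event-count arithmetic can be reproduced schematically. The per-electron conversion probability $W_{eq}\Delta t_T$ below is back-solved from the quoted $\sim 80$ events rather than taken from the text, so it is an illustrative placeholder:

```python
import math

def expected_pairs(k, n_int, w_eq, dt_t):
    """Expected pair count N = k * N_int * (1 - exp(-W_eq * dt_T)).
    expm1 keeps precision for the very small exponent involved here."""
    return k * n_int * -math.expm1(-w_eq * dt_t)

# k = 21 962 beam crossings and N_int ~ 2.8e8 are quoted in the text;
# W_eq * dt_T ~ 1.3e-11 is an illustrative value chosen to reproduce
# the ~80 expected events.
print(expected_pairs(21962, 2.8e8, w_eq=1.3e-11, dt_t=1.0))
```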
Note that the authors of Ref. [@Burke] pointed out a possible residual background of about $2\times 10^{-3}$ positrons per laser shot due to interactions of Compton backscattered photons with beam gas. If the data are restricted to events with $\eta > 0.216$, one finds $69 \pm 9$ positrons, and the agreement of their number with the theoretical estimate improves.
Thus, in the present work an analytical expression for the rate of electron-positron pair production by an electron in a magnetic field near the process threshold was obtained. The number of $e^+e^-$ pairs created in the SLAC experiment was estimated using Nikishov’s theorem. The obtained value is in reasonable agreement with the experimental results as well as with the numerical calculation of Ref. [@Hu].
We thank V. Yu. Storizhko and S. P. Roshchupkin for useful discussions.
Calculation of the integrals
=========================
To evaluate an integral of the form $$\label{app-X}
D_0=\int\limits_{-\infty}^\infty
\frac{e^{-q^2-2iuq}}{\rho^2-q^2}dq$$ it is convenient to use the identity $$\int\limits_0^1 e^{t(\rho^2-q^2)}dt=\frac{e^{\rho^2-q^2}}{\rho^2-q^2}-
\frac{1}{\rho^2-q^2}.$$ The integral (\[app-X\]) takes on the form $$\label{app-X1}
D_0=e^{-\rho^2}\int\limits_{-\infty}^\infty \frac{e^{-2iuq}}{\rho^2-q^2} dq +
e^{-\rho^2}\int_0^1 \sqrt{\frac{\pi}{t}}e^{t\rho^2-\frac{u^2}{t}}dt.$$ The first integral in (\[app-X1\]) can be found using Jordan’s lemma. The result is $$\label{app-X11}
\int\limits_{-\infty}^\infty \frac{e^{-2iuq}}{\rho^2-q^2} dq =
\frac{\pi}{i\rho}e^{2i|u|\rho}.$$ To find the second integral one should use the substitutions $$\begin{array}{l}
\sigma_+=\rho\sqrt{t}+{i|u|}/{\sqrt t},\\
\sigma_-=\rho\sqrt{t}-{i|u|}/{\sqrt t}.
\end{array}$$ After simple calculations the result of integration takes on the form $$\label{app-X2}
\frac{\pi}{2i\rho}\left[e^{-2i\rho|u|} \mbox{erfc}(|u|-i\rho)-
e^{2i\rho|u|}\mbox{erfc}(|u|+i\rho)\right].$$ Finally, substituting Eqs. (\[app-X11\]), (\[app-X2\]) to Eq. (\[app-X1\]) the result for $D_0$ can be expressed as $$\label{app-Xres}
D_0=\frac{\pi e^{-\rho^2}}{2i\rho}
\left[e^{-2i\rho u} \mbox{erfc}(u-i\rho)+
e^{2i\rho u}\mbox{erfc}(-u-i\rho)\right].$$ The above expression is valid for both $u>0$ and $u<0$ cases.
Integrals containing $q^k$ can be reduced to the considered one using the derivative with respect to the parameter $u$: $$\label{app-Xk}
D_k=
\int_{-\infty}^\infty q^k \frac{e^{-q^2-2iuq}}{\rho^2-q^2}dq=
\frac{1}{(-2i)^k} \frac{\partial^k}{\partial u^k} D_0.$$ Taking into account the relation $$H_n(x)=(-1)^n e^{x^2} \frac{d^n}{dx^n}e^{-x^2}$$ where $H_n(x)$ is Hermite polynomial, the explicit form of $D_k$ can be expressed as $$\begin{gathered}
\label{app-Xkres}
D_k=
\frac{\pi e^{-\rho^2}}{2i\rho}\rho^k
\left[e^{-2iu\rho}\mbox{erfc}(u-i\rho)+ \right.\\
+\left.(-1)^k e^{2iu\rho}\mbox{erfc}(-u-i\rho)\right] + \\
+\frac{\sqrt{\pi}}{i\rho}\frac{e^{-u^2}}{(2i)^k}
\sum_{m=1}^k C_m^k(2i\rho)^{k-m}
\left[ H_{m-1}(u-i\rho) +\right. \\
\left.+(-1)^k H_{m-1}(-u-i\rho) \right].\end{gathered}$$
Note that the value $D_0$ is inversely proportional to $\rho$ and contains a divergence at the point $s=\Omega$. On the contrary, the value $D_k$ is finite for $k\geqslant 1$. Indeed, the summands in Eq. (\[app-Xkres\]) contain factors $\rho^{k-1}$ and $\rho^{m-k-1}$ and apparently do not diverge when $k > m$. When the conditions $m=k$ and $\rho=0$ hold, the second summand contains the expression $$\left[ H_{k-1}(u)+(-1)^kH_{k-1}(-u)\right]=0$$ where the relation $H_n(-x)=(-1)^nH_n(x)$ is used.
Let us proceed to calculating the integral $Y$ in Eq. (\[Y13\]). Taking into account that the quantity $D$ in Eq. (\[X\]) is an even function of $b$, the integral over $u$ in Eq. (\[Y13\]) can be expressed as $$\label{app-intdb}
\int_{-\infty}^{\infty}|e^{-s^2}D|^2\,du=
\frac{\pi^2 s^{2l} e^{-2\Omega^2} }{2|\rho^2|}[J_1+J_2]$$ where $$\label{J1def}
J_1=\int_{-\infty}^\infty |e^{-2iu\rho}\mbox{erfc}(u-i\rho)|^2du,$$ $$\label{J2def}
J_2=\int_{-\infty}^\infty e^{-4iru}\mbox{erfc}(u-ir)\mbox{erfc}(-u+ir) du.$$
After integration by parts the quantity $J_1$ takes the form $$\begin{gathered}
\label{J1}
J_1=\frac{1}{\sqrt{\pi}\Im{\rho}}
\Re\left[
e^{-2ig}j(\rho)
\right],
\\%
j(\rho)=\int_{-\infty}^{\infty}e^{-(u+i\rho)^2}\mbox{erfc}(u-i\rho)du.\end{gathered}$$ The parameter $\rho$ can be eliminated from the argument of the exponent by introducing the new variable $t=u+i\rho$: $$j(\rho)=\int_{-\infty}^{\infty}e^{-t^2}\mbox{erfc}(t-2i\rho)dt.$$ The derivative of $j(\rho)$ with respect to $\rho$ reduces to the Poisson integral and takes the form $$j'(\rho)=2\sqrt{2}ie^{2\rho^2}.$$ This differential equation can be easily solved. Finally, after substituting the result into Eq. (\[J1\]) the quantity $J_1$ takes on the form $$J_1=\frac{\Re\left[e^{-2ig}
\mbox{erfc}(-i\rho\sqrt{2})\right]}{\Im(\rho)}.$$ Note that $J_1$ can be expressed as $$\label{J1mod}
J_1\approx \frac{1}{\Im (\rho)}+
\frac{\Re[e^{-2ig}\mbox{erf}(i\rho\sqrt{2})]}{\Im (\rho)}.$$
The integral $J_2$ in Eq. (\[J2def\]) can be calculated in the same way and looks like $$J_2=\frac{\mbox{erf}(ir\sqrt{2})}{ir}.$$
The value of the integral over $s$ is determined by the region in the vicinity of the point $s=\Omega$ due to the presence of the factor $s^{2l}$ in the integrand in Eq. (\[app-intdb\]). At the points $s=\pm \Omega$ the first summand in $J_1$, Eq. (\[J1mod\]), goes to $\sqrt{2/g}$, while the second one and the quantity $J_2$ go to $\pm\sqrt{8/\pi}$. Thus, $$Y=\pi^2e^{-2\Omega^2}
\int_{-\Omega}^{\Omega} \frac{s^{2l}}{|\rho^2|} \frac{ds}{\Im(\rho)}.$$ Taking into account that $$\Im(\rho)=\frac{1}{\sqrt{2}}\sqrt{\sqrt{\rho^4+g^2}-\rho^2}$$ and introducing a new variable $x=s/\Omega$, the quantity $Y$ can be transformed to $$\begin{gathered}
Y=\pi^2\sqrt{2}\Omega^{2l}e^{-2\Omega^2}\frac{1}{g}\times\\
\times
\int_0^1 x^{2l}\sqrt{
\frac{\sqrt{(1-x^2)^2+\delta^2}+(1-x^2)}
{(1-x^2)^2+\delta^2}
}dx\end{gathered}$$ where $\delta = g^2/\Omega$. When $\delta$ goes to zero the integral over $x$ in the above expression converges to $$\frac{1}{\sqrt{2}}
\frac{\Gamma(1/2)\Gamma(l+1/2)}{\Gamma(l+1)}.$$ Thus, $$\label{app-Yres}
Y=
\pi^2\sqrt{\pi}\Omega^{2l}e^{-2\Omega^2}\frac{1}{g}\frac{\Gamma(l+1/2)}{l!}.$$
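The $\delta\to 0$ limit quoted above is a Beta-function integral, $\int_0^1 x^{2l}\sqrt{2/(1-x^2)}\,dx = \tfrac{1}{\sqrt{2}}\,\Gamma(1/2)\Gamma(l+1/2)/\Gamma(l+1)$, and can be checked numerically; the substitution $x=\sin\theta$ removes the endpoint singularity:

```python
import math

def limit_integral(l, n=20001):
    """delta -> 0 limit of the x-integral above, after x = sin(theta):
    int_0^1 x^{2l} sqrt(2/(1-x^2)) dx
      = sqrt(2) * int_0^{pi/2} sin^{2l}(theta) dtheta
    (trapezoidal quadrature; the substitution makes it smooth)."""
    h = (math.pi / 2.0) / (n - 1)
    total = 0.0
    for i in range(n):
        w = math.sin(i * h) ** (2 * l)
        total += 0.5 * w if i in (0, n - 1) else w
    return math.sqrt(2.0) * total * h

def gamma_form(l):
    """Closed form (1/sqrt(2)) Gamma(1/2) Gamma(l+1/2) / Gamma(l+1)."""
    return (math.gamma(0.5) * math.gamma(l + 0.5)
            / (math.sqrt(2.0) * math.gamma(l + 1.0)))

for l in (1, 5, 40):
    print(l, limit_integral(l), gamma_form(l))
```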
[50]{}
A. K. Harding, Science, **251**, 1033 (1991).
E. Lötstedt and U. D. Jentschura, Phys. Rev. A **80**, 053419 (2009).
P. I. Fomin and R. I. Kholodov, Zh. Éksp. Teor. Fiz. **123**, 356 (2003), \[JETP **96**, 315 (2003)\].
D. L. Burke et al., Phys. Rev. Lett. **79**, 1626 (1997).
C. Müller, A. B. Voitkiv, and N. Grün, Phys. Rev. Lett. **91**, 223601 (2003).
A. I. Milstein, C. Müller, K. Z. Hatsagortsyan, U. D. Jentschura, and C. H. Keitel, Phys. Rev. A **73**, 062106 (2006).
J. Z. Kaminśki, K. Krajewska, and F. Ehlotzky, Phys. Rev. A **74**, 033402 (2006).
A. Ringwald, Phys. Lett. B **510**, 107 (2001).
N. B. Narozhny, S. S. Bulanov, V. D. Mur, and V. S. Popov, JETP Lett. **80**, 382 (2004).
F. Ehlotzky, K. Krajewska, and J. Z. Kaminśki, Rep. Prog. Phys. **72**, 046401 (2009).
A. Di Piazza, A. I. Milstein, and C. Muller, Phys. Rev. A **82**, 062110 (2010).
T.-O. Müller and C. Müller, Phys. Lett. B **696**, 201 (2011).
H. Hu, C. Müller, and C. H. Keitel, Phys. Rev. Lett. **105**, 080401 (2010).
A. Ilderton, Phys. Rev. Lett. **106**, 020404 (2011).
T. Erber, Rev. Mod. Phys. **38**, 626 (1966).
V. N. Baier, V. M. Katkov, and V. M. Strakhovenko, Sov. J. Nucl. Phys. **14**, 572 (1972).
O. P. Novak, R. I. Kholodov, and P. I. Fomin, JETP **110**, 978 (2010).
A. I. Nikishov and V. I. Ritus, Sov. Phys. JETP **19**, 529 (1964).
A. I. Nikishov, Tr. Fiz. Inst. im. P. N. Lebedeva, Akad. Nauk SSSR **111**, 152 (1979).
C. Graziani, A. K. Harding, and R. Sina, Phys. Rev. D **51**, 7097 (1995).
O. P. Novak and R. I. Kholodov, Phys. Rev. D **80**, 025025 (2009).
N. P. Klepikov, Zh. Éksp. Teor. Fiz. **26**, 19 (1954).
H. Herold, H. Ruder, and G. Wunner, Astron. Astrophys. **115**, 90 (1982).
A. A. Sokolov and I. M. Ternov, *Synchrotron Radiation from Relativistic Electrons* (American Institute of Physics, New York, 1986).
A. K. Harding and R. Preece, Astrophys. J. **319**, 939 (1987).
G. G. Pavlov, V. G. Bezchastnov, P. Meszaros, and S. G. Alexander, Astrophys. J. **380**, 541 (1991).
A. P. Novak and R. I. Kholodov, Ukr. Phys. J. **53**, 185 (2008).
C. Bula et al., Phys. Rev. Lett. **76**, 3116 (1996).
C. Bamber et al., Phys. Rev. D **60**, 092004 (1999).
---
abstract: 'High-energy photons from cosmological emitters suffer attenuation due to pair production interactions with the extragalactic background light (EBL). The collective emission of any high-energy emitting cosmological population will exhibit an absorption feature at the highest energies. We calculate this absorption feature in the collective emission of blazars for various models of the blazar gamma-ray luminosity function (GLF) and the EBL. We find that models of the blazar GLF that predict higher relative contributions of high-redshift blazars to the blazar collective spectrum result in emission that is more susceptible to attenuation by the EBL, and hence result in more prominent absorption features, allowing for better differentiation amongst EBL models. We thus demonstrate that observations of such an absorption feature will contain information regarding both the blazar GLF and the EBL, and we discuss tests for EBL models and the blazar GLF that will become possible with upcoming *Fermi* observations.'
author:
- 'Tonia M. Venters, Vasiliki Pavlidou & Luis C. Reyes'
title: 'The Extragalactic Background Light Absorption Feature in the Blazar Component of the Extragalactic Gamma-ray Background'
---
Introduction
============
The Energetic Gamma-ray Experiment Telescope (EGRET) aboard the *Compton Gamma-ray Observatory* observed the gamma-ray sky between 1991 and 2000 at energies between 30 MeV and $\sim$ 10 GeV. The EGRET gamma-ray sky consisted of 271 resolved gamma-ray sources included in the third EGRET Catalog of Point Sources (Hartman et al. 1999) and the diffuse gamma-ray emission, comprising emission from the Galaxy and from the extragalactic gamma-ray background (EGRB). The origins of the EGRB are, as yet, unknown; however, since EGRET observed a number of resolved, extragalactic point sources, it is expected that unresolved sources of the same populations contribute sizably to the EGRB.
Of the 271 resolved point sources observed by EGRET, 93 were identified, either confidently or potentially, as blazars (gamma-ray-loud active galactic nuclei) and in its first few months of observations, the *Fermi Gamma-ray Space Telescope* has already identified 108 blazars (Abdo et al. 2009). Thus, blazars comprise the largest class of identified gamma-ray emitters. As such, unresolved blazars are expected to have a significant contribution to the EGRB. The exact amount of this contribution remains undetermined due to the uncertainty in the distribution of blazars in redshift and luminosity space, the blazar gamma-ray luminosity function (GLF; Padovani et al. 1993; Stecker et al. 1993; Salamon & Stecker 1994; Chiang et al. 1995; Stecker & Salamon 1996, hereafter SS96; Kazanas & Perlman 1997; Chiang & Mukherjee 1998; Mukherjee & Chiang 1999; M[ü]{}cke & Pohl 2000; Kneiske & Mannheim 2005; Dermer 2007; Giommi et al. 2006; Narumoto & Totani 2006, hereafter NT06). Hence, to this day, it is still unclear whether the collective unresolved blazar emission comprises the bulk of the EGRB or only a small fraction of it.
In addition to the dependence on the blazar GLF, the blazar contribution to the EGRB also depends on the distribution of blazar spectral indices at GeV energies. The spread in the blazar spectral index distribution (SID) determines the fraction of blazars with hard spectra, which will contribute most significantly at high energies and hence will introduce curvature in the shape of the unresolved blazar emission (SS96; Venters & Pavlidou 2007, hereafter VP07; Pavlidou & Venters 2008, hereafter PV08). However, as with the blazar GLF, the blazar SID is also uncertain due to the low number of EGRET blazars. The uncertainty in the blazar SID results in an uncertainty in the shape of the collective unresolved blazar spectrum. Thus, it remains unclear whether blazars can simultaneously account for the high-energy emission and the low-energy emission.
The extragalactic background light (EBL) is composed of photons from starlight (at optical and ultraviolet wavelengths) and reprocessed starlight (at infrared wavelengths). At observed energies beyond the EGRET energy range, photons suffer significant attenuation due to pair production interactions with the soft photons of the EBL (Salamon & Stecker 1998; Chen, Reyes & Ritz 2004; Kneiske et al. 2004; Stecker et al. 2006, 2007; Franceschini et al. 2008; Primack et al. 2008; Gilmore et al. 2009). Thus, any cosmological population emitting high-energy gamma rays will exhibit an absorption feature at the highest energies in its collective spectrum. The strength of such an absorption feature will depend on the distribution of sources with respect to redshift and luminosity. If the relative contribution of high-redshift, high-energy emitters to the collective emission is significant, the feature will be quite prominent.
Through the absorption of high-energy photons, interactions with EBL photons will produce pairs of electrons and positrons, which will inverse-Compton scatter EBL photons to high energies. The upscattered photons will, in turn, pair produce off of other EBL photons, and the process continues until the energies of the resulting photons are low enough that pair production is no longer efficient. For the collective emission of a cosmological, gamma-ray-emitting population, this “electromagnetic cascade” emission results in a suppression at high energies and an enhancement at lower energies. In the case of blazars, which could comprise a sizable contribution to the EGRB, predictions of the resulting enhancement at lower energies could overproduce the EGRB if the high-energy emission is high and/or the EBL emission is high (Coppi & Aharonian 1997). Thus, in order to fully appreciate the blazar contribution to the EGRB, we need to include the effects of high-energy attenuation.
While the effects of the full cascade are outside the scope of this paper (see T. M. Venters 2009, in preparation), we revisit the absorption feature of the blazar cumulative emission. Specifically, we seek to determine the possible implications of the observation of the blazar absorption feature for our understanding of blazars as a cosmological population and the EBL. As previously indicated, the absorption feature depends on the GLF, the SID, and the EBL model. However, all of these inputs (including the EBL model) remain quite uncertain. Thus, we seek to demonstrate that the study of the absorption feature could be used to constrain the inputs of the collective emission and the information they can provide about blazars and the EBL.
Finally, the study of the blazar absorption feature could provide insight into the intrinsic blazar spectrum or the possible participation of multiple populations in producing the EGRB. In the era of *Fermi*, improved blazar number statistics will provide much stronger constraints on GLF models. If the absorption feature in the collective unresolved blazar emission is sensitive to the GLF, then future studies of the *Fermi* observations of the EGRB could have implications for blazar spectra or the relative contributions of multiple populations. For instance, if the observed absorption feature is more prominent than expected from the favored GLF, then one might suspect that blazar spectra break above some energy (e.g., spectral cutoffs, which as noted in Abdo et al. (2009), have been observed in some cases by *Fermi*). On the other hand, if the observed feature is less prominent than expected, then one might suspect that other gamma-ray sources play a significant role in the production of the EGRB.
In this paper, we revisit the absorption feature of the collective unresolved blazar emission at high energies in investigating the effects of the blazar GLF and SID and the EBL model. In Section 2, we present the formalism of the calculation of the collective unresolved blazar emission. In Section 3, we discuss the inputs of the calculation and their uncertainties. In Section 4, we present the results of the calculation, and we discuss these results in Section 5.
Formalism
=========
We define the blazar GLF per unit comoving density of blazars at GeV energies, $\rho_{\gamma}(L_{\gamma},z)$, through the following expression: $$\label{fe}
\rho_{\gamma}(L_{\gamma},z)p_{L}(\alpha)= \frac{d^3N}{dL_{\gamma}dV_{\rm com}d\alpha}\,,$$ where $L_\gamma$ is the gamma-ray luminosity in ${\rm erg \, \, s^{-1}}$ at a fiducial energy $E_f$ (or, equivalently, $E_f^2$ times the differential photon luminosity $dN_{\gamma}/dtdE$ measured at $E_f$), $V_{\rm com}$ is the comoving volume, and $p_{L}(\alpha)$ is the luminosity-independent distribution of blazar spectral indices (spectral index distribution, SID). In order to determine the blazar luminosity-independent SID, one must correct the blazar SID for errors in measurement in gamma-ray spectral indices (VP07) and for biases introduced through determining the SID from a flux-limited sample of blazars. The resulting $p_L(\alpha)$ is given by $$p_{L}(\alpha) = \frac{\hat{p}(\alpha)}{\hat{M}(\alpha)},$$ where $\hat{p}(\alpha)$ is the SID corrected for measurement error in the spectral indices, $$\hat{M}(\alpha) \propto \int_{F_{\gamma,\rm min}}^{\infty} dF_\gamma
\frac{1}{F_\gamma}\int_{z=0}^{\infty} dz \hat{\rho}_\gamma (\alpha,z,F_\gamma)
\frac{dV_{\rm com}}{dz}(z)$$ is the correction for the sample bias (see Appendix \[appendixa\]), $$\begin{aligned}
\hat{\rho}_\gamma &=& L_\gamma \times \rho_\gamma(L_\gamma,z) \nonumber \\
&& \!\!\!\! \!\! = 4\pi D^2 (\alpha - 1) (1+z)^\alpha E_f F_\gamma \nonumber \\
&& \times \rho_\gamma [4\pi D^2 (\alpha - 1) (1+z)^\alpha E_f F_\gamma, z]\,,\end{aligned}$$ and $D$ is the distance measure for the standard $\Lambda$CDM cosmology. For an isotropic distribution of sources, the number of objects with luminosities between $L_\gamma$ and $L_\gamma+dL_\gamma$ and spectral indices between $\alpha$ and $\alpha + d\alpha$ residing within a spherical shell at redshift $z$ with radial extent $dz$ is $$dN = \rho_\gamma (L_\gamma, z)p_L(\alpha)\,dL_\gamma\frac{dV_{\rm com}}{dz}\,dz \,d\alpha.$$ A blazar of gamma-ray luminosity $L_\gamma$ at a redshift $z$ with a power-law source spectrum defined by the spectral index $\alpha$ has a photon flux of (see Appendix \[appendixb\]) $$\begin{aligned}
\label{fluxoneblazr}
F_{\rm 1,ph} (E_0, z, L_\gamma, \alpha) &=& \frac{L_\gamma}{4 \pi E_f^2
[d_L(z)]^2}(1+z)^{2-\alpha}\left(\frac{E_0}{E_f}\right)^{-\alpha}
\nonumber \\
&& \times \exp\left[-\tau(E_0,z)\right]\!,\end{aligned}$$ where $d_L(z)$ is the luminosity distance, defined in the concordance cosmology by $$d_L(z) = \frac{c}{H_0} (1+z) \!\! \int_0^z \!\! \left[\Omega_\Lambda+\Omega_m(1+z')^3\right]^{-1/2}\,dz'\,,$$ and $\tau(E_0,z)$ is the optical depth due to pair production on the EBL for gamma rays of observer-frame energy $E_0$ originating at redshift $z$. The total contribution of blazars to the gamma-ray background [*if we ignore secondary emission from cascades of primary gammas due to interactions with the EBL*]{} can be expressed as $$\label{theone}
I_{E}(E_0)\! = \!\!\! \int \!\!\!\! \int \!\!\!\! \int \!\! F_{\rm ph,1}(E_0,z,L_{\gamma},\alpha)\rho_{\gamma}p_L(\alpha)\frac{d^2V_{\rm com}}{dzd\Omega}\,dL_{\gamma}\,dz\,d\alpha,$$ where $I_{E}(E_0)$ is the intensity of the collective unresolved blazar emission given in units of photons $\mbox{s}^{-1} \mbox{cm}^{-2} \mbox{sr}^{-1} \mbox{GeV}^{-1}$.
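The structure of Eq. (\[theone\]) can be illustrated with a deliberately simplified numerical sketch: a single power-law spectral index, the GLF collapsed to a constant (arbitrary normalization), and a placeholder optical depth. None of these inputs are the models used in this paper; the sketch only shows how the $\exp[-\tau]$ factor steepens the collective spectrum at high energies:

```python
import math

OM, OL = 0.3, 0.7             # flat LCDM density parameters

def dz_weight(z):
    """Proportional to dl/dz (and to the comoving-volume kernel up to
    constants): 1 / E(z) with E(z) = sqrt(OM (1+z)^3 + OL)."""
    return 1.0 / math.sqrt(OM * (1.0 + z) ** 3 + OL)

def tau(e0, z):
    """Placeholder optical depth, rising with energy and redshift.
    e0 in GeV.  NOT one of the EBL models discussed in the text."""
    return (e0 / 100.0) ** 1.5 * z

def intensity(e0, alpha=2.2, zmax=3.0, nz=300):
    """Relative collective intensity: power-law sources E^-alpha with
    K-correction (1+z)^(2-alpha), integrated over z with exp(-tau)
    absorption.  Overall normalization is arbitrary."""
    dz = zmax / nz
    total = 0.0
    for i in range(nz):
        z = (i + 0.5) * dz
        total += ((1.0 + z) ** (2.0 - alpha) * dz_weight(z)
                  * math.exp(-tau(e0, z)) * dz)
    return e0 ** (-alpha) * total

# Absorption steepens the spectrum: the decade-to-decade ratio shrinks.
r1 = intensity(10.0) / intensity(1.0)
r2 = intensity(100.0) / intensity(10.0)
print(r1, r2)
```

A GLF that weights high-redshift sources more heavily would make the high-energy suppression stronger, which is the effect exploited in this paper.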
Inputs
======
Models for the Extragalactic Background Light
---------------------------------------------
The EBL intensity at the present epoch $\left(z=0\right)$ provides an integral constraint on the history of electromagnetic energy release in the universe since recombination. Measurement of this cumulative output, however, cannot address its evolution and thus cannot be related to issues such as the history of star and element formation. For this reason, several models have been developed to calculate the EBL luminosity density as a function of redshift, $\mathcal{L}\left(\nu,z\right)$, from fundamental astrophysical principles. These models were composed with varying degrees of complexity, observational constraints, and data inputs.
The calculation of the high-energy absorption feature in the blazar component of the EGRB requires a model of the EBL and its evolution over cosmic time. The great degree of uncertainty associated with the EBL models and their predictions renders selecting the “best” candidate model impossible. Therefore, we use several models with widely ranging predictions in order to bracket the range of possible EBL realizations.
Kneiske et al. (2004) treat the EBL-modeling problem using separate approaches at the UV–optical and infrared wavelengths. In determining the EBL at various wavelengths, they make use of a cosmic chemical evolution model at the UV–optical wavelengths and a backwards evolution model for the infrared (see Hauser & Dwek (2001) for a complete review of the different types of models). Additionally, this hybrid model was parameterized in terms of the main observational uncertainties such as the redshift dependence of the cosmic star formation rate and the fraction of UV radiation released from star forming regions. Thus, the Kneiske et al. EBL model consists of several [*flavors*]{} that allow for the inclusion of various EBL scenarios that are consistent with the available data. Specifically, the [*Best-Fit*]{} model best interpolates the data with the important caveat that the assumed complete UV absorption by interstellar gas introduces a sharp cutoff at $0.1\mu {\rm m}$. In the [*Stellar-UV*]{} model, all the UV radiation produced by the stellar populations escapes to the intergalactic medium after reprocessing by the interstellar gas, with the [*High-Stellar-UV*]{} model allowing for a strong UV-field at high redshifts. Since the $\gamma$-ray sources likely responsible for the EGRB at GeV energies are particularly sensitive to the EBL density at UV wavelengths, for the purposes of this analysis, the [*Best-Fit*]{} and [*High-Stellar-UV*]{} models are used to bracket the possible ranges of attenuation.
Primack et al. (2008) have pioneered the use of semianalytical models that attempt to reproduce the process of structure formation and evolution through simulations. Recent iterations of this model incorporate highly precise knowledge of the local luminosity density at optical–UV (Gilmore et al. 2009) and NIR (Primack et al. 2008) wavelengths and a well-established cosmological model. The key parameters in their approach (those that govern the rate of star formation, supernova feedback, and metallicity) have been adjusted to fit the local galaxy data. With respect to estimates by Primack et al. from previous years, this version of the model yields a lower luminosity density at optical wavelengths, thereby resulting in a reduced EBL density. Recent TeV observations of nearby blazars seem to support such low values (Aharonian et al. 2006).
Finally, we consider the most recent EBL model by Stecker et al. (2006). In this model, Stecker et al. calculate the EBL at infrared and optical–UV wavelengths separately. At infrared wavelengths, they use a backwards evolution model based on observational knowledge of: (1) luminosity-dependent galaxy SEDs, (2) galaxy luminosity functions, and (3) parameterized functions for luminosity evolution. For optical–UV wavelengths, they consider the redshift evolution of stellar populations with an analytical approximation to the more sophisticated SEDs used in Salamon & Stecker (1998). The SEDs, adapted from Bruzual & Charlot (1993), reflect stellar population synthesis models for galaxy evolution and the observational fact that star forming galaxies are “bluer” (brighter in the blue part of the optical spectrum) at $z>0.7$. Notably, the UV spectra for all SEDs are assumed to cut off at the Lyman limit, and the effects of extinction by dust are not included in the model. The former is a matter of debate since it is not really known how much UV radiation short of the Lyman limit can escape from the star-forming regions, while the latter would result inexorably in an overprediction of the UV photon density and, consequently, the optical depth at higher redshifts. In a similar vein, Franceschini et al. (2008) also employ a backwards evolution model based on rather detailed observations. However, their determination of the EBL departs significantly from that of Stecker et al., particularly at the optical and UV wavelengths to which GeV photons are most sensitive. The differences in these models are likely due to differences in the treatment of galaxy evolution.
EBL attenuation is a function of the observed $\gamma$-ray energy $E_0$ and the redshift $z$ of the emitting source. The attenuation is generally parameterized by the optical depth $\tau\left(E_0,z\right)$, which is defined as the number of e-fold reductions of the observed flux, $F_{\mathrm{obs}}$, as compared with the emitted source flux, $F_{\mathrm{emitted}}$, at redshift $z$:$$F_{\mathrm{obs}}=e^{-\tau\left(E_0,z\right)}F_{\mathrm{emitted}}\label{eq:ebl_attenuation}\,.$$
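As a minimal numerical sketch of this relation (the optical-depth value below is a placeholder, not the output of any EBL model), the attenuation can be applied as:

```python
import math

def attenuate(flux_emitted, tau):
    """EBL attenuation: F_obs = exp(-tau) * F_emitted."""
    return math.exp(-tau) * flux_emitted

# tau = 1 corresponds to one e-fold reduction of the observed flux.
print(attenuate(1.0, 1.0))  # ≈ 0.368
```

A source seen through $\tau = 1$ is dimmed by a factor of $e \approx 2.72$, while $\tau = 0$ leaves the flux unchanged.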
The optical depth is calculated from physical principles. Using the cross section for pair production $\sigma$, and assuming isotropic background radiation with spectral density $n\left(\epsilon\right)$ at energy $\epsilon$, the absorption probability of $\gamma$-rays per unit path is given by $$\frac{d\tau}{dl}=\int_{0}^{\pi}\frac{1-\cos\theta}{2}\sin\theta d\theta\int_{\epsilon_{\rm th}}^{\infty}n\left(\epsilon\right)\sigma\left(E_0,\epsilon,\theta\right)d\epsilon\label{eq:dtau_dl}$$ where $\theta$ is the scattering angle for the $\gamma-\gamma$ collision, the factor $\left(1-\cos\theta\right)/2$ accounts for the flux of target photons incident at angle $\theta$, $\epsilon_{\rm th}=2m^{2}c^{4}/\left[E_0\left(1-\cos\theta\right)\right]$ is the energy threshold for the reaction, and $m$ is the electron mass. Since the sources under consideration are blazars at cosmological distances, redshift is the natural distance variable, with the total path length being the look-back time multiplied by the speed of light, $c$:$$\begin{aligned}
L & = & \int_{0}^{z}dz\frac{dl}{dz}\nonumber \\
& = & \int_{0}^{z} \!\!\! dz\, \frac{c}{H_{0}\left(1+z\right)}\! \left[\left(1+z\right)^{2}\left(1+\Omega_{M}z\right)-z\left(2+z\right)\Omega_{\Lambda}\right]^{-1/2}\label{eq:dtau_dz}\nonumber \\ \end{aligned}$$ where $H_{0}$, $\Omega_{M}$, and $\Omega_{\Lambda}$ are the well known cosmological parameters.
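The look-back path length is straightforward to evaluate numerically. The sketch below assumes illustrative cosmological parameters ($H_0 = 70$ km/s/Mpc, $\Omega_M = 0.3$, $\Omega_\Lambda = 0.7$), which are not specified in the text:

```python
import math

C_KM_S = 2.998e5             # speed of light [km/s]
H0 = 70.0                    # Hubble constant [km/s/Mpc] (assumed value)
OMEGA_M, OMEGA_L = 0.3, 0.7  # assumed cosmological parameters

def dl_dz(z):
    """Integrand c/[H0(1+z)] [(1+z)^2 (1+Om z) - z(2+z) OL]^(-1/2), in Mpc."""
    root = (1 + z) ** 2 * (1 + OMEGA_M * z) - z * (2 + z) * OMEGA_L
    return C_KM_S / (H0 * (1 + z) * math.sqrt(root))

def lookback_distance(z, n=2000):
    """Trapezoidal integral of dl/dz from 0 to z (light-travel distance, Mpc)."""
    h = z / n
    s = 0.5 * (dl_dz(0.0) + dl_dz(z)) + sum(dl_dz(i * h) for i in range(1, n))
    return s * h
```

At low redshift this reduces to the Hubble law, $L \approx cz/H_0$, while at higher redshift the integrand falls off and the distance saturates below the Hubble distance $c/H_0$.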
Using the expressions above, the optical depth can be written as a function of the observed energy $E_0$ and the redshift of the emitting source$$\begin{aligned}
\tau\left(E_0,z\right) & = & \int_{0}^{L}\frac{d\tau}{dl}dl=\int_{0}^{z}dz'\frac{dl}{dz'}\frac{d\tau\left(E',z'\right)}{dl}\label{eq:optical_depth}\\
& = & \int_{0}^{z}\!\!\! dz'\frac{dl}{dz'}\int_{0}^{\pi}\!\!\! \frac{1-\cos\theta'}{2}\sin\theta'd\theta'\int_{\epsilon'_{\rm th}}^{\infty} \!\!\!\! d\epsilon'\,\, n\left(\epsilon',z'\right)\,\sigma\left(E',\epsilon',\theta'\right)\,, \nonumber \end{aligned}$$ where the primed variables ($E'$, $\epsilon'$, $n\left(\epsilon',z'\right)$, $\theta'$) refer to the values calculated in the comoving frame at $z=z'$.
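To make the integrand concrete, the following sketch implements the standard $\gamma\gamma \rightarrow e^{+}e^{-}$ cross section (the Gould & Schréder form; the explicit expression for $\sigma$ is not written out in the text) and scans head-on collisions, $s = 4E_0\epsilon$, for the target photon energy at which the cross section peaks:

```python
import math

SIGMA_T = 6.6524e-25   # Thomson cross section [cm^2]
M_E_GEV = 0.511e-3     # electron rest energy m c^2 [GeV]

def sigma_gg(s):
    """gamma-gamma -> e+e- cross section [cm^2] vs. center-of-mass s [GeV^2]."""
    if s <= 4 * M_E_GEV ** 2:
        return 0.0  # below the pair-production threshold
    beta = math.sqrt(1 - 4 * M_E_GEV ** 2 / s)
    return (3 * SIGMA_T / 16) * (1 - beta ** 2) * (
        (3 - beta ** 4) * math.log((1 + beta) / (1 - beta))
        - 2 * beta * (2 - beta ** 2))

def best_target_energy_eV(E0_GeV):
    """Scan head-on collisions (s = 4 E0 eps) for the eps maximizing sigma."""
    best_eps, best_sig = 0.0, 0.0
    for i in range(1, 20001):
        eps_eV = i * 0.01                   # scan 0.01 - 200 eV
        s = 4 * E0_GeV * eps_eV * 1e-9      # eV -> GeV for the target photon
        sig = sigma_gg(s)
        if sig > best_sig:
            best_eps, best_sig = eps_eV, sig
    return best_eps
```

For $E_0 = 50$ GeV the scan peaks near 10 eV, i.e., an ultraviolet background photon — which is why the UV portion of the EBL controls the absorption of tens-of-GeV gamma rays.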
The Blazar Gamma-ray Luminosity Function
----------------------------------------
While the blazar GLF is probably one of the most studied and debated properties of the blazar population, to this day, it remains uncertain. Model GLFs are typically constructed from luminosity functions in other wavelengths (most notably, radio or X-ray) exploiting a possible association of gamma-ray emission with emission in these wavelengths while correcting for possible differences in sizes of emission regions between wavelengths. However, since blazar gamma-ray emission is also, as yet, not well understood, it is unclear which lower energy luminosity function(s) could be applicable to gamma-ray–loud blazars. Typically, a model for blazar emission is assumed and, hence, a functional form of the luminosity function is adopted. For instance, if one assumes the synchrotron self-Compton model for blazar gamma-ray emission, then one would expect that the low-energy synchrotron emission in blazars would be closely related to the gamma-ray emission. The unknown parameters (e.g., normalization due to relativistic beaming, the faint-end slope of the luminosity function) are subsequently fitted to gamma-ray data (see e.g., SS96; NT06; Giommi et al. 2006).
While such a procedure represents the best that can be done with current data, a considerable degree of uncertainty remains, as several issues are unresolved. Since fainter objects are more difficult to observe (and they are crucial in the calculation of the collective unresolved blazar emission), any flux-limited sample will be biased toward brighter objects; the faint-end slope of the luminosity function will therefore always carry the largest uncertainties. Additionally, there is some indication that not all blazars are necessarily explained by the same emission process and that different types of blazars (i.e., BL Lacertae-like objects and flat-spectrum radio quasars) could form separate populations with respect to emission (Sikora et al. 2002; Böttcher 2007) and, hence, require separate luminosity functions (M[ü]{}cke & Pohl 2000; Dermer 2007). There is also the possibility that flaring blazars (and different *types* of flaring blazars) could form separate emission populations and require separate luminosity functions (SS96). Finally, due to the large positional error circles, there are many unidentified EGRET sources which could also make a sizable contribution to the EGRB (Pavlidou et al. 2008). A number of these unidentified sources could, in fact, be (and are fairly likely to be) *unidentified blazars*. Thus, failing to account for resolved but unidentified blazars could lead to an underestimate of the normalization of gamma-ray blazars with respect to low-energy blazars; in addition, the existence of resolved but unidentified blazars would introduce uncertainties into the redshift distribution of resolved blazars, which is one of the constraints that luminosity functions are typically required to satisfy.
With the availability of *Fermi* data, many more blazars will be observed, and at least some of the aforementioned uncertainty will be alleviated. However, for now, with these caveats in mind, for the purposes of this paper, we perform our calculations using the best-guess pure luminosity evolution (PLE) and the luminosity-dependent density evolution (LDDE) models of NT06. It should be noted that the functional form of the PLE model of the blazar GLF originated from radio data, while the functional form of the LDDE model originated from X-ray data. Thus, if the calculated blazar absorption feature is sensitive to the blazar GLF, then the observation of such a feature could be used as an additional constraint to the preferred luminosity functional form and, by extension, to blazar emission models.
The Spectral Index Distribution
-------------------------------
The unresolved blazar contribution to the EGRB is not just a question of magnitude, but also of spectral shape, and the spectral shape is sensitive to the distribution of blazar spectral indices at GeV energies. If all blazars had the same spectral index, then the spectrum of unresolved emission would simply be a power law. If, on the other hand, the SID has some spread, the spectrum will have some curvature (SS96; PV08). If the spread is small, then even if the blazar contribution dominates the EGRB at lower energies, it may not be enough to explain the emission at higher energies (PV08). Thus, in order to determine the unresolved blazar contribution to the EGRB, the blazar SID has to be measured and the resulting spectral shape calculated.
Obtaining the SID of blazars is complicated by the presence of large errors in the measurement of individual blazar spectral indices. If these errors are not properly taken into account, the sampling of the underlying *intrinsic* spectral index distribution (ISID) will be contaminated, exaggerating its spread and leading, in turn, to an exaggerated curvature of the unresolved collective emission (VP07). Furthermore, the presence of spectrally distinct populations of blazars (e.g., flaring versus quiescent, BL Lac objects versus flat-spectrum radio quasars) can also contaminate the ISID. In order to determine the ISID of the collective blazar population while minimizing the contamination due to measurement errors, VP07 performed a likelihood analysis fitting the third EGRET data set of confident blazars to a Gaussian ISID. They determined that the maximum-likelihood Gaussian ISID can be characterized by a mean ($\alpha_0$) of 2.27 and a spread ($\sigma_0$) of 0.20. They also repeated the analysis after dividing the sample of confident blazars into subpopulations: flaring versus quiescent, and BL Lac objects versus FSRQs. In the case of flaring and quiescent blazars, they found no evidence that the subpopulations are spectrally distinct (though the lack of adequate time resolution made dividing the subpopulations difficult). In the case of BL Lac objects and FSRQs, they found a marginal $1 \sigma$ separation between BL Lac objects ($\alpha_0 = 2.15$, $\sigma_0 = 0.28$) and FSRQs ($\alpha_0 = 2.3$, $\sigma_0 = 0.19$).
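The spectral curvature induced by a Gaussian ISID can be written in closed form: averaging $x^{-\alpha}$ over a Gaussian of mean $\alpha_0$ and spread $\sigma_0$ gives $x^{-\alpha_0}\exp\left(\sigma_0^{2}\ln^{2}x/2\right)$, a standard Gaussian integral. The sketch below checks this numerically using the maximum-likelihood values quoted above:

```python
import math

ALPHA0, SIGMA0 = 2.27, 0.20   # maximum-likelihood Gaussian ISID (VP07)

def collective_shape(x, n=4001, half_width=6.0):
    """Numerically average x**(-alpha) over the Gaussian ISID."""
    lo = ALPHA0 - half_width * SIGMA0
    h = 2 * half_width * SIGMA0 / (n - 1)
    total = 0.0
    for i in range(n):
        a = lo + i * h
        weight = math.exp(-0.5 * ((a - ALPHA0) / SIGMA0) ** 2)
        total += weight * x ** (-a)
    return total * h / (SIGMA0 * math.sqrt(2 * math.pi))

def collective_shape_closed(x):
    """Closed form of the same average: x**(-alpha0) * exp(sigma0^2 ln^2 x / 2)."""
    L = math.log(x)
    return math.exp(-ALPHA0 * L + 0.5 * (SIGMA0 * L) ** 2)
```

The $\exp\left(\sigma_0^{2}\ln^{2}x/2\right)$ factor grows with $\left|\ln x\right|$, so the collective spectrum hardens relative to a pure $x^{-\alpha_0}$ power law away from the fiducial energy — the curvature discussed above.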
In PV08, the shapes of the unresolved emission were calculated for the collective blazar population and BL Lac objects and FSRQs. In the cases of the collective population and FSRQs, the curvatures of the shapes were not enough to allow the populations to explain all of the EGRB, though in the case of BL Lac objects, the curvature was enough to, in principle, allow BL Lac objects to explain the EGRB. However, in all cases, the normalizations of the emission were not determined, and the uncertainties in the shapes resulting from the uncertainties in the likelihood analysis are considerable.
It should be noted that since the PLE and LDDE GLF models do not distinguish between the subpopulations of blazars, for the purposes of self-consistency, we also do not distinguish between them with regards to their SIDs. Thus, for the purposes of this analysis, we include only the collective blazar population ISID of VP07 (correcting for biases introduced in sampling a flux-limited catalog). Notably, *Fermi* has already provided evidence that BL Lac objects and FSRQs are spectrally distinct (Abdo et al. 2009). However, since the *Fermi* blazar catalog is not yet complete, it is currently premature to construct luminosity functions (especially those that distinguish between BL Lac objects and FSRQs) from *Fermi* data. In recognizing the importance of correctly treating spectral distinctions among subpopulations of blazars, we will return to this issue in a future publication.
Results
=======
The blazar contributions to the EGRB as calculated assuming two separate models of the blazar GLF and several models of the EBL are plotted in Figure 1. The black lines represent contributions determined assuming the LDDE model of the blazar GLF, while the gray lines represent contributions determined assuming the PLE model of the blazar GLF. For comparison, the blazar contributions assuming no absorption (solid lines) and the Strong et al. (2004) determination of the EGRET EGRB (data points with statistical error bars) are also plotted. Since the GLFs used include the maximum-likelihood parameters determined by NT06, the blazar contributions comprise $\sim 50$% of the overall background[^1]. As demonstrated in SS96 and PV08, there is considerable curvature in the unresolved blazar emission due to the spread in the blazar SID indicating the increasing importance of blazars with harder spectral indices at higher energies[^2].
The most striking feature in Figure 1 is that of the suppression at high energies due to the considerable amount of absorption by pair production interactions with EBL photons. Certain EBL models are quite distinguishable from the others. Most notably, the Kneiske et al. (2004) high UV model and the Stecker et al. (2006) model predict a greater degree of absorption than the other three models. This is due to the fact that the Kneiske high UV model and the Stecker model predict a higher amount of UV background radiation than the others. Since the pair production cross section as a function of the center-of-mass energy peaks at the electron mass, one would expect that gamma-ray photons of energy $\sim$ tens of GeV are most likely to interact with UV background photons. Thus, unsurprisingly, models with high UV backgrounds will result in more suppression at high energies.
Another striking observation from Figure 1 is that the high-energy suppressions for the PLE model are consistently steeper than those of the LDDE model. The different appearances of the features can be explained by considering that the blazar GLF is the distribution of blazars in luminosity and redshift space. Since the PLE model suppressions are steeper than those of the LDDE model, one would conclude that high-redshift blazars contribute more to the high-energy emission in the PLE model than in the LDDE model. In investigating this possibility, we plot the unresolved emission evaluated at several energies as a function of redshift (Figure 2). As in Figure 1, the black lines represent the emission assuming the LDDE model of the blazar GLF, and the gray lines represent the emission assuming the PLE model of the blazar GLF. As can be seen in Figure 2, the emission for the LDDE model peaks at lower redshifts ($z \sim 0.05$) than that for the PLE model ($z \sim 0.6$)[^3]. Thus, high-energy photons in the PLE model suffer more attenuation due to interactions with the EBL photons than in the LDDE model. Additionally, the emission is more sharply peaked in the LDDE model than in the PLE model indicating that the participating blazars in the LDDE model are more concentrated to a particular epoch while those in the PLE model are more spread out over epoch.
Discussion and Conclusions
==========================
We have calculated the blazar contribution to the EGRB including attenuation of high-energy photons due to interactions with the EBL. We have found that (a) the EBL attenuation of high-energy gamma rays results in a suppression in the spectrum of the unresolved emission; (b) the shape of the high-energy suppression depends on the blazar GLF model and the EBL model; and (c) the high-energy suppression for the PLE model is steeper than that for the LDDE model indicating that higher-redshift blazars contribute more to the emission at high energies in the PLE model than in the LDDE model.
As demonstrated in this paper, the suppression at high energies can be a probe for the underlying redshift–luminosity distribution of gamma-ray blazars. If, in fact, blazars do comprise the bulk of the EGRB, then this suppression should be observed in the EGRB that will be measured by *Fermi*. Thus, the observation of the EBL absorption feature at high energies could provide information about the relative contribution of blazars to the EGRB.
However, it is conceivable that any features observed in the EGRB could also be attributable to other effects. Multi-wavelength observations of blazars reveal the characteristic doubly-peaked synchrotron/Compton structures in blazar spectra. Thus, while it is true that when viewed at small energy intervals in the GeV band blazar spectra appear as single power laws, we do not expect this observation to remain true as the high-energy end of the observational energy interval increases. Instead, blazar spectra are expected (and, as noted in Abdo et al. (2009), in certain cases have already been observed by [*Fermi*]{}) to break beyond GeV energies. Such spectral breaks will also manifest as a feature in the collective emission of blazars. In this case, the feature would be sensitive to the nature of blazar spectra at high energies, which, in turn, encode information about blazar emission mechanisms. There is also the possibility that blazars could account for the bulk of the emission at lower energies, but not at higher energies. If the collective gamma-ray emission of another gamma-ray source *peaks* at high energies, then we would also expect to observe a high-energy feature due to the transition to a different population. An intriguing example of this scenario would include high-energy emission from dark matter annihilation, which peaks near the mass of the dark matter particle (e.g., Ando et al. 2007; Siegal-Gaskins & Pavlidou 2009). In order to distinguish between the EBL suppression feature and features arising from other possible scenarios, it will be essential to constrain (to the extent possible) the expected shape and strength of the high-energy absorption feature in the blazar EGRB contribution, as well as to quantify the associated theoretical uncertainties.
Due to its substantially increased sensitivity with respect to that of EGRET, *Fermi* will resolve more than an order of magnitude more blazars than EGRET did. The number of blazars that *Fermi* will observe will also provide insight into the blazar GLF (see, e.g., SS96, NT06). The measured GLF will allow us to not only determine the shape of the blazar absorption feature, but also the collective blazar emission as a function of redshift. With the *Fermi*-measured blazar GLF, we can compare the redshift-dependent gamma-ray emission with that of blazar GLFs determined from lower-energy observations. In so doing, we can determine how closely the gamma-ray emission in blazars is tied with the emission at lower energies.
*Fermi* measurements of blazar spectral indices would also allow further investigation into the possible existence of a spectral subpopulation of blazars with harder spectral indices such as high-frequency-peaked BL Lac objects (HBLs). There is some speculation that the harder spectral indices of the HBLs could be indicative of a greater contribution due to a particular emission mechanism relative to the contribution due to the emission mechanism that is more prevalent for blazars with softer spectral indices[^4]. So far, HBLs have mainly been observed at low redshifts, and as such, one would expect the absorption feature of the collective spectrum to be not very prominent. However, if *Fermi* measurements reveal that there are more hard blazars at high redshifts relative to soft blazars than indicated by EGRET data, then the effect of their intrinsic spectra would be to flatten the overall collective spectra of blazars. Moreover, the collective spectrum of their subpopulation would exhibit quite a prominent absorption feature, and hence, their high-energy emission would further impact the collective blazar spectrum through electromagnetic cascade radiation (T. M. Venters 2009, in preparation). Furthermore, investigating the absorption features of distinct spectral subpopulations of blazars could provide intriguing insight into blazar gamma-ray emission and their evolution with cosmic time.
Thus, the study of the collective unresolved blazar emission from *Fermi* observations should provide important constraints about both blazars and other possible sources of gamma-ray emission.
In this paper, we computed only the attenuation of the high-energy emission due to interactions with the EBL. However, when high-energy photons interact with EBL photons, they initiate electromagnetic cascades, which generate emission at lower energies. An accurate inclusion of such secondary emission requires detailed Monte Carlo simulations of the EBL-induced cascades and is outside the scope of this paper. We plan to return to this problem in a future publication (T. M. Venters 2009, in preparation). Furthermore, while the *Fermi Gamma-ray Space Telescope* has begun taking data and has already observed many more blazars than EGRET did, a complete catalog of blazars has not yet been released by the Fermi Collaboration. Thus, the determination of the blazar GLF(s) from *Fermi* data is, as yet, premature. With this in mind, rather than making a solid prediction for the anticipated *Fermi* results, we have demonstrated the sensitivity of the shape of the spectral suppression due to EBL absorption to various model inputs, thus making explicit the information content of such a feature.
Abdo, A., et al. (Fermi LAT Collaboration) 2009, ApJ, 700, 597

Aharonian, F., et al. 2006, Nature, 440, 1018

Ando, S., Komatsu, E., Narumoto, T., & Totani, T. 2007, Phys. Rev. D, 75, 063519

Böttcher, M. 2007, Ap&SS, 309, 95

Bruzual, A. G., & Charlot, S. 1993, ApJ, 405, 538

Chiang, J., Fichtel, C. E., von Montigny, C., Nolan, P. L., & Petrosian, V. 1995, ApJ, 452, 156

Chiang, J., & Mukherjee, R. 1998, ApJ, 496, 752

Chen, A., Reyes, L. C., & Ritz, S. 2004, ApJ, 608, 686

Coppi, P., & Aharonian, F. A. 1997, ApJ, 487, L9

Dermer, C. D. 2007, ApJ, 659, 958

Franceschini, A., Rodighiero, G., & Vaccari, M. 2008, A&A, 487, 837

Giommi, P., Colafrancesco, S., Cavazzuti, E., Perri, M., & Pittori, C. 2006, A&A, 445, 843

Gilmore, R. C., Madau, P., Primack, J. R., Somerville, R. S., & Haardt, F. 2009, arXiv:0905.1144

Hartman, R. C., et al. 1999, ApJS, 123, 79

Hauser, M. G., & Dwek, E. 2001, ARA&A, 39, 249

Kazanas, D., & Perlman, E. 1997, ApJ, 476, 7

Kneiske, T. M., Bretz, T., Mannheim, K., & Hartmann, D. H. 2004, A&A, 413, 807

Kneiske, T. M., & Mannheim, K. 2005, Proc. 29th Int. Cosmic Ray Conf., 4, 1

M[ü]{}cke, A., & Pohl, M. 2000, MNRAS, 312, 177

Mukherjee, R., & Chiang, J. 1999, Astropart. Phys., 11, 213

Narumoto, T., & Totani, T. 2006, ApJ, 643, 81

Padovani, P., Ghisellini, G., Fabian, A. C., & Celotti, A. 1993, MNRAS, 260, L21

Pavlidou, V., Siegal-Gaskins, J. M., Fields, B. D., Olinto, A. V., & Brown, C. 2008, ApJ, 677, 27

Pavlidou, V., & Venters, T. M. 2008, ApJ, 673, 114

Primack, J. R., Gilmore, R. C., & Somerville, R. S. 2008, arXiv:0811.3230

Salamon, M. H., & Stecker, F. W. 1994, ApJ, 430, L21

Salamon, M. H., & Stecker, F. W. 1998, ApJ, 493, 547

Siegal-Gaskins, J. M., & Pavlidou, V. 2009, Phys. Rev. Lett., 102, 241301

Sikora, M., B[ł]{}a[ż]{}ejowski, M., Moderski, R., & Madejski, G. M. 2002, ApJ, 577, 78

Stecker, F. W., Salamon, M. H., & Malkan, M. A. 1993, ApJ, 410, L71

Stecker, F. W., & Salamon, M. H. 1996, ApJ, 464, 600

Stecker, F. W., Malkan, M. A., & Scully, S. T. 2006, ApJ, 648, 774

—. 2007, ApJ, 658, 1392

Strong, A. W., Moskalenko, I. V., & Reimer, O. 2004, ApJ, 613, 956

Venters, T. M., & Pavlidou, V. 2007, ApJ, 666, 128

Venters, T. M. 2009, [*in preparation*]{}
A. Sample Bias Correction $\hat{M}(\alpha)$ {#appendixa}
===========================================
The ISID $\hat{p}(\alpha)$ obtained by VP07 using EGRET data is measured using a roughly flux-limited data set (to the extent that we can postulate that EGRET resolved all blazars of integrated gamma-ray flux greater than some value $F_{\gamma, \rm min}$ and none with smaller $F_\gamma$), so that $$\label{a1}
\hat{p}(\alpha) = \frac{1}{N_{\rm tot}}
\int_{F_\gamma=F_{\gamma,\rm min}}^\infty \frac{d^2N}{dF_\gamma d\alpha} dF_\gamma\,$$ where $N_{\rm tot}$ is the total number of objects in the sample. However, this is not the SID $p_L(\alpha)$ that enters Eq. \[theone\]. The latter is defined by Eq. (\[fe\]) and is the distribution of spectral indices for blazars in a luminosity interval between $L_\gamma$ and $L_\gamma+dL_\gamma$. A flux-limited sample will be biased toward harder spectral indices than a fixed gamma-ray luminosity interval, because not all blazars with the same $L_\gamma$ have the same flux: harder blazars have higher fluxes in the high-energy band and are easier to detect. A relation can be derived between $\hat{p}(\alpha)$ and $p_L(\alpha)$, starting from a relation between $d^2N/dF_\gamma d\alpha$ and $d^3N/dL_\gamma dV_{\rm com} d\alpha$: $$\label{step1}
\frac{d^2N}{dF_\gamma d\alpha} = \int_{z=0}^{\infty} dz\frac{d^3N}{dL_\gamma dV_{\rm com}d\alpha}\left|\frac{\partial(L_\gamma,V_{\rm com},\alpha)}{\partial(F_\gamma,z,\alpha)}\right|\,.$$ $L_\gamma$ is proportional to $F_\gamma$ multiplied by a function of $z$ and $\alpha$ (see Eq. \[lfrelation\]). In addition, in writing Eq. (\[fe\]), we have assumed that $\alpha$ is independent of $L_\gamma$ and $z$. Thus, we obtain $$\label{step2}
\left|\frac{\partial(L_\gamma,V_{\rm com},\alpha)}{\partial(F_\gamma,z,\alpha)}\right| =
\left|\frac{\partial L_\gamma}{\partial F_\gamma}\frac{dV_{\rm com}}{dz}\right| =
\frac{L_\gamma}{F_\gamma}\left|\frac{dV_{\rm com}}{dz}\right|\,,$$ since $$L_{\gamma} = 4\pi d_L^2 (\alpha-1)(1+z)^{\alpha-2} E_f F_{\gamma}\,.$$ Eq. (\[step2\]) combined with Eqs. (\[step1\]) and (\[fe\]), gives $$\frac{d^2N}{dF_\gamma d\alpha} = p_L(\alpha) \frac{1}{F_\gamma}\int_{z=0}^{\infty} dz L_\gamma \rho_{\gamma}(L_\gamma,z)\frac{dV_{\rm com}}{dz}\,.$$ Substituting into Eq. (\[a1\]) we obtain $$\hat{p}(\alpha) = \frac{1}{N_{\rm tot}}p_L(\alpha)
\int_{F_{\gamma,\rm min}}^{\infty} dF_\gamma
\frac{1}{F_\gamma}\int_{z=0}^{\infty} dz L_\gamma \rho_{\gamma}(L_\gamma,z)\frac{dV_{\rm com}}{dz} \equiv
p_L(\alpha) \hat{M}(\alpha)$$ where the last equality gives the definition of the sample bias correction $\hat{M}(\alpha)$; the normalization, $N_{\rm tot}$, is obtained by requiring that $p_L(\alpha)$ integrates to 1.
B. Derivation of the Blazar Contribution to the Extragalactic Gamma-ray Background {#appendixb}
==================================================================================
The isotropic gamma-ray luminosity of a blazar at some fiducial rest-frame energy, $E_f$ (the energy emitted in photons of energy $E_f$ per unit time, assuming that the blazar emits isotropically), is related to its integrated photon flux, $F_\gamma$ (the number of photons emitted in energies above [*observer*]{} frame energy $E_f$ per unit time per unit area), through $$\label{lfrelation}
L_{\gamma} = 4\pi d_L^2 (\alpha-1)(1+z)^{\alpha-2} E_f F_{\gamma},$$ where $d_L$ is the luminosity distance, and we have assumed that the blazar has a single–power-law energy spectrum ($dN_{\gamma}/dE_{\gamma} \propto E_{\gamma}^{-\alpha}$). In turn, the differential single-blazar flux, $F_{\rm ph,1}(E_0)$ (number of photons per unit energy per unit time per unit area emitted at observer-frame energy, $E_0$), is related to $F_\gamma$ through $$F_{\gamma} = \int_{E_f}^{\infty} F_{\rm ph,1}(E_0)\,dE_0.$$ Having assumed a power-law spectrum, $F_{\gamma}$ becomes $$F_{\gamma} = \int_{E_f}^{\infty} F_{\rm ph,1}(E_f) \left(\frac{E_0}{E_f} \right)^{-\alpha}\,dE_0 = \frac{E_f F_{\rm ph,1}(E_f)}{\alpha - 1}\,.$$ Substituting the above into the equation for $L_{\gamma}$ and solving for $F_{\rm ph,1}(E_f)$, we get $$F_{\rm ph,1}(E_f) = \frac{L_{\gamma}}{4\pi d_L^2 E_f^2}(1+z)^{2-\alpha}.$$ Thus, neglecting absorption, $F_{\rm ph,1}(E_0)$ is given by $$F_{\rm ph,1}(E_0) = F_{\rm ph,1}(E_f)\left(\frac{E_0}{E_f}\right)^{-\alpha} = \frac{L_{\gamma}}{4\pi d_L^2 E_f^2}(1+z)^{2-\alpha}\left(\frac{E_0}{E_f}\right)^{-\alpha}.$$ Including the effects of absorption, we arrive at Eq. (\[fluxoneblazr\]) of Section 2: $$F_{\rm ph,1}(E_0) = \frac{L_{\gamma}}{4\pi d_L^2 E_f^2}(1+z)^{2-\alpha}\left(\frac{E_0}{E_f}\right)^{-\alpha} e^{-\tau (E_0,z)}.$$ The intensity of emission is defined as $$I_E(E) = \frac{d^4N_{\gamma}}{dtdAdEd\Omega} = \frac{d}{d\Omega} \int \!\! dN F_{\rm ph,1},$$ where $N_{\gamma}$ is the number of photons, $N$ is the number of objects, and $F_{\rm ph,1}$ is the flux from a single contributing object. 
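The normalization $F_{\gamma} = E_f F_{\rm ph,1}(E_f)/(\alpha-1)$ used above can be checked numerically; the sketch below integrates the power law with an arbitrary normalization and an illustrative $\alpha = 2.3$:

```python
import math

def integrated_flux(F_at_Ef, alpha, Ef=0.1, n=50000, Emax_factor=1e6):
    """Integrate F(E0) = F_at_Ef * (E0/Ef)**(-alpha) from Ef to Ef*Emax_factor.

    Uses the substitution u = ln(E0/Ef), so dE0 = Ef * exp(u) du, on a
    trapezoid grid; Emax_factor truncates the (convergent) upper limit.
    """
    umax = math.log(Emax_factor)
    h = umax / n
    total = 0.0
    for i in range(n + 1):
        u = i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * F_at_Ef * math.exp((1.0 - alpha) * u) * Ef * h
    return total
```

For $\alpha = 2.3$ and $E_f = 0.1$ the integral converges to $E_f F(E_f)/(\alpha-1) \approx 0.077\,F(E_f)$, confirming the power-law normalization.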
Making the dependencies explicit, the intensity can be expressed as $$I_E(E_0) = \frac{d}{d\Omega} \int F_{\rm ph,1}(E_0,z,L_{\gamma},\alpha) \frac{d^3 N}{dL_{\gamma} dV_{\rm com} d\alpha}\,dL_{\gamma}\,dV_{\rm com}\,d\alpha.$$ The differential number of objects can be expressed in terms of the GLF and the ISID: $$\frac{d^3N}{dL_{\gamma}dV_{\rm com} d\alpha} = \rho_{\gamma}(L_{\gamma},z)p_L(\alpha).$$ The comoving volume element is given by (assuming $\Lambda$CDM cosmology) $$\frac{d^2V_{\rm com}}{dzd\Omega} = \frac{c}{H_0}D^2[\Omega_{\Lambda}+\Omega_m(1+z)^3]^{-1/2},$$ where $D = d_L/(1+z)$ is the distance measure. Substituting for $F_{\rm ph,1}$, $dV_{\rm com}$, and the differential number of objects, we finally arrive at the sought-after expression for the emission [*without attenuation*]{}: $$I_E(E_0) = \frac{c}{H_0} \frac{1}{4\pi E_f^2} \!\! \int \! d\alpha\,p_L(\alpha)\left(\frac{E_0}{E_f}\right)^{-\alpha}\!\!\! \int \! dz\,(1+z)^{-\alpha}[\Omega_{\Lambda}+\Omega_m(1+z)^3]^{-1/2} \!\! \int \! dL_{\gamma}L_{\gamma}\rho_{\gamma}.$$ The above expression for the collective unresolved blazar emission is equivalent to Equation (10) of SS96, corrected for cosmology and assuming that blazars consist of a single spectral population.
Including the attenuation factor, $\exp\left[-\tau(E_0,z)\right]$, in the expression for the single-blazar flux, we obtain: $$I_E(E_0) = \frac{c}{H_0} \frac{1}{4\pi E_f^2} \!\! \int \! d\alpha\,p_L(\alpha)\left(\frac{E_0}{E_f}\right)^{-\alpha}\!\!\! \int \! dz\,e^{-\tau(E_0,z)}(1+z)^{-\alpha}[\Omega_{\Lambda}+\Omega_m(1+z)^3]^{-1/2} \!\! \int \! dL_{\gamma}L_{\gamma}\rho_{\gamma}.$$
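As an illustration of the final expression, the sketch below evaluates the nested integrals on simple grids. The luminosity-density term $\int dL_{\gamma}\,L_{\gamma}\rho_{\gamma}$ and the optical depth $\tau(E_0,z)$ are hypothetical stand-ins (not the NT06 GLFs or any actual EBL model), chosen only to exhibit the qualitative high-energy suppression:

```python
import math

OMEGA_M, OMEGA_L = 0.3, 0.7   # assumed cosmological parameters
ALPHA0, SIGMA0 = 2.27, 0.20   # Gaussian ISID (VP07)
E_F = 0.1                     # fiducial energy [GeV] (assumed)

def lum_density(z):
    """Hypothetical stand-in for the integral of L_gamma * rho_gamma over L."""
    return (1 + z) ** 3 * math.exp(-z / 1.5)

def tau(E0, z):
    """Hypothetical optical depth, growing with energy and redshift."""
    return (E0 / 100.0) * z

def intensity(E0, attenuated=True, nz=200, na=121):
    """Evaluate I_E(E0) (up to overall constants) on simple grids."""
    da = 6 * SIGMA0 / (na - 1)
    dz = 3.0 / nz
    total = 0.0
    for i in range(na):
        a = ALPHA0 - 3 * SIGMA0 + i * da
        p = math.exp(-0.5 * ((a - ALPHA0) / SIGMA0) ** 2) / (SIGMA0 * math.sqrt(2 * math.pi))
        zsum = 0.0
        for j in range(nz):
            z = (j + 0.5) * dz  # midpoint rule in redshift
            hub = math.sqrt(OMEGA_L + OMEGA_M * (1 + z) ** 3)
            absorb = math.exp(-tau(E0, z)) if attenuated else 1.0
            zsum += absorb * (1 + z) ** (-a) / hub * lum_density(z) * dz
        total += p * (E0 / E_F) ** (-a) * zsum * da
    return total

def suppression(E0):
    """Ratio of attenuated to unattenuated unresolved emission."""
    return intensity(E0, True) / intensity(E0, False)
```

With these toy inputs the suppression is negligible at 1 GeV but strong at 200 GeV, reproducing the qualitative shape of the absorption feature discussed in the Results section.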
[^1]: However, as discussed in Section 3.2, the determinations of the GLFs and their parameters remain highly uncertain. In the likelihood analysis performed by NT06, the most likely level of unresolved emission from blazars is $\sim 25\%-50$% of the EGRB. Nevertheless, in several of the cases presented in NT06, parameters for which unresolved blazars can account for $100$% of the background are within the $1\sigma$ contours.
[^2]: There is some uncertainty in the determination of the parameters of the blazar spectral index, which will result in uncertainty in the overall shape of the spectrum of the unresolved blazar emission. As indicated in PV08, this uncertainty in the spectral shape is quite large for EGRET blazars, but will improve considerably with *Fermi* observations. For this reason, and because *Fermi* is currently taking data, we simply calculate the EBL absorption for the best guess spectra.
[^3]: In both cases, the redshift where the emission peaks increases as the minimum luminosity included in the integration increases. This is to be expected since high-luminosity objects that contribute to the EGRB will be distributed toward higher redshifts. The inclusion of low-luminosity blazars allows more low-redshift objects to participate.
[^4]: For instance, in the leptonic emission scenarios, the harder spectral indices of the HBLs could possibly indicate the greater importance of external Compton emission relative to the synchrotron self-Compton emission that could dominate the high-energy emission in blazars with softer spectral indices, such as the FSRQs.
---
abstract: 'We modify the Chapman sandpile model (Chapman *et al.*, *Physical Review Letters* 86, 2814 (2001)) to form comparisons with pellet pacing, which is used to reduce or eliminate ELMs in a fusion plasma. We employ a variation of that model in which a pedestal with feedback is introduced (Bowie and Hole, *Phys. Plasmas* 25, 012511 (2018)), which we further modify to provide for dual fuelling: sand is added both at the centre of the sandpile and near the edge. We observe that when the additional sand is added at the top of the pedestal, MLEs are largely suppressed. While this suppression comes at a cost by way of reduction in total energy confinement, that reduction is lower than the reduction in MLE size. The trade-off between MLE suppression and reduction in energy confinement depends not only on the amount of extra sand, but also on its precise location relative to the top of the pedestal. We suggest that the approach of constant dual fuelling may be equally applicable to plasmas, and may suggest a strategy for ELM suppression in fusion plasmas. We observe that when the proposed amount of extra sand is added in ‘pellets’, using frequencies and amounts based on those proposed for ELM suppression for ITER, MLEs are similarly suppressed, although MLEs are not significantly suppressed when the pellet rate does not substantially exceed the MLE frequency. This suggests that pellet injection at the top of the pedestal at small pellet size and high frequency may represent a reasonable physical proxy for our proposed scheme. However, our results suggest that it is not the synchronisation of pellets to ELM frequencies which is the key factor for ELM suppression in this regime, but rather the introduction of additional fuelling at the top of the pedestal.'
author:
- 'C. A. Bowie'
- 'M. J. Hole'
title: Sandpile modelling of dual location fuelling in fusion plasmas
---
Introduction \[sec:Introduction\]
=================================
Nuclear fusion, if it can be effectively controlled, may be critical to our future energy needs. The primary approach to fusion power uses a plasma magnetically confined in a torus known as a tokamak. The goal of fusion research is to increase the fusion triple product of temperature, plasma density, and energy confinement time. A step towards this goal, known as H-mode, occurs when the plasma enters a higher confinement mode, via a mechanism which is not yet fully understood, but which results in the production of a ‘pedestal’ at the edge of the plasma, in which energy confinement rises sharply over a distance of approximately 3% of the toroidal minor radius [@Beurskens2009]. However, with H-mode comes a plasma instability known as an edge localised mode, or ELM, which triggers a loss of confinement [@ASDEX1989]. A large ELM may result in a loss of confinement of 5% [@ASDEX1989], or 10-40% of the pedestal energy [@Beurskens2009], and can cause damage to the first wall of the tokamak [@Igitkhanov2013]. For ITER, an upper tolerable limit for ELMs of $\sim$1% of the pedestal energy has been suggested [@Beurskens2009; @Zhitlukhin2007]. Controlling ELMs in H-mode is therefore a key objective of fusion plasma research.
Injection of fuel ‘pellets’ has been extensively studied as a candidate for ELM control and reduction in fusion plasmas, using pellets to trigger ELMs to increase ELM frequency ($f_{ELM}$) and consequently decrease their maximum size ($W_{ELM}$), on the basis that $f_{ELM}\times W_{ELM}=constant$ [@Hong1999; @Baylor2005; @Baylor2007; @Baylor2013; @Baylor2015; @Lang2004; @Lang2014; @Lang2015; @Pegourie2007; @Rhee2012]. Pellet size, frequency, and location have all been tested experimentally on ASDEX Upgrade [@Lang2004; @Lang2015; @Lang2018], DIII-D [@Baylor2005; @Baylor2013], JET [@Baylor2015; @Lang2011; @Lang2015], and EAST [@Li2014; @Yao2017], and ELM control using pellets is being considered for use in ITER [@Doyle2007; @Baylor2015].
Injection of pellets to the top of the pedestal has been suggested to produce ELM pacing with reduced energy loss in modelling by Hayashi [@Hayashi2013], using the code TOPICS-IB. That modelling suggested that pellets with $\sim$1% of the pedestal particle content, with speed sufficient to reach the pedestal top, will reduce energy loss significantly. The penetration depth of the pellet depends both on its size and speed, as smaller pellets do not penetrate as far into the plasma before ablation. Experiments at JET determined a minimum threshold pellet size which was necessary to reach the top of the pedestal in order to trigger ELMs [@Lang2011], where the pellet frequency exceeded the natural ELM frequency. For example, Lang [@Lang2015] discusses the use of pellets of $1.5-3.7\times10^{20}$D, introduced into a plasma with particle inventory of $6\times10^{20}$D, i.e. $25-60\%$ of the total plasma inventory. It has also been observed that in a 2016 series of discharges in JET, the highest fusion performance was observed using a particle fuelling scheme consisting of pellet injection combined with low gas puffing [@Kim2018]. Lang [@Lang2015] discussed pellets added at lower frequencies (higher $\Delta t_P$) with pellet timing aligned to ELM onset; these pellets triggered ELMs. Lang [@Lang2015] observes that as pellets increase the plasma density, this in turn increases the L-H threshold. At DIII-D, pellet injection has been observed to trigger synchronous ELMs with a frequency of $12$ times the natural $f_{ELM}$ [@Huijsmans2015; @Baylor2013]. It is proposed that a dual pellet injection system will be used in ITER, with large pellets to provide fuelling and smaller pellets to trigger ELMs [@Baylor2015], and it has been suggested that a pellet frequency of $\sim45$ times the natural $f_{ELM}$ will be required to provide the necessary reduction in heat flux.
One way of understanding the impact of pellet injection on both confinement and ELM behaviour is to seek to identify a physical system whose relaxation processes have characteristics similar to those of the ELMing process under consideration. Of particular interest is the sandpile [@Bak1987], whose relevance to fusion plasmas is well known [@Chapman1999; @Dendy1997].
Sandpile models generate avalanches, which may be internal or result in loss of particles from the system. These avalanches are the response to steady fuelling of a system which relaxes through coupled near-neighbour transport events that occur whenever a critical gradient is locally exceeded. The possibility that, in some circumstances, ELMing may resemble avalanching was raised [@Chapman2001A] in studies of the specific sandpile model of Chapman [@Chapman2000]. This simple one-dimensional N-cell sandpile model [@Chapman2000; @Chapman2001A] incorporates other established models [@Bak1987; @Dendy1998A] as limiting cases. It is centrally fuelled at cell $n = 1$ (here $N = 500$), and its distinctive feature is the rule for local redistribution of sand near a cell (say at $n = k$) at which the critical gradient $Z_{c}$ is exceeded. The sandpile is conservatively flattened around the unstable cell over a fast redistribution lengthscale $L_{f}$, which spans the cells $n = k - (L_{f} - 1), k - (L_{f} - 2), ... , k+1$, so that the total amount of sand in the fluidization region before and after the flattening is unchanged. Because the value at cell $n = k+1$ prior to the redistribution is lower than the value of the cells behind it (at $n<k+1$), the redistribution results in the relocation of sand from the fluidization region to the cell at $n = k + 1$. If redistributions are sequentially triggered outwards across neighbouring cells, leading to sand ultimately being output at the edge of the sandpile, an avalanche is said to have occurred. The sandpile is then fuelled again, after the sandpile has iterated to stability so that sand ceases to escape from the system.
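The fuelling, redistribution, and relaxation rules above can be sketched in a few lines of Python. This is a schematic reconstruction under stated assumptions, not the authors' code: the function name is ours, the open boundary is implemented with a ghost cell, and the feedback pedestal (variable $L_f$ near the edge) is omitted.

```python
import numpy as np

def fuel_and_relax(h, dx, Zc=1.0, Lf=10):
    """Add dx units of sand at the central cell, then conservatively
    flatten around any cell whose gradient exceeds the critical value
    Zc, until the pile is stable.  Sand pushed past the last cell
    leaves the system; the amount lost is returned (the MLE size)."""
    N = len(h)
    hh = np.append(h, 0.0)                 # ghost cell: open boundary
    hh[0] += dx                            # central fuelling
    lost = 0.0
    while True:
        grad = hh[:-1] - hh[1:]
        unstable = np.where(grad > Zc)[0]
        if unstable.size == 0:
            h[:] = hh[:-1]
            return lost
        k = unstable[0]                    # leftmost unstable cell
        lo = max(0, k - (Lf - 1))          # flatten cells lo .. k+1
        hh[lo:k + 2] = hh[lo:k + 2].mean() # conserves total sand
        lost += hh[N]                      # sand on the ghost cell is lost
        hh[N] = 0.0

# drive a small pile to steady state
h = np.zeros(50)
total_lost = 0.0
for step in range(2000):
    total_lost += fuel_and_relax(h, dx=1.2)
E_p = (h ** 2).sum()                       # system energy, as in the text
```

In steady state the time-averaged loss equals the fuelling rate, and sand is conserved exactly: the total added ($2000 \times 1.2$) equals the sand retained in the pile plus `total_lost`.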
The lengthscale $L_{f}$, normalized to the system scale $N$, is typically [@Chapman1999; @Chapman2001A; @Chapman2001B; @Chapman2003; @Chapman2004] treated as the model’s primary control parameter $L_{f}/N$, which governs different regimes of avalanche statistics and system dynamics. Here, we employ a modification to the classic model in which the lengthscale is variable over a distance from the edge, which itself depends upon the energy of the system [@Bowie2018]. As $L_f$ reduces near the edge, the gradient increases at the edge, resulting in a pedestal which is subject to feedback due to the dependence of the distance on the energy. The resulting pedestal was introduced as a proxy for the H-mode pedestal of a fusion plasma [@Bowie2018]. The feedback loops were seen to be analogous to the feedback effects intrinsic to the H-mode pedestal in a fusion plasma [@Bowie2018]. It was suggested that reduction of feedback in the pedestal could result in ELM suppression within a H-mode plasma [@Bowie2018].
Typically, the model is centrally fuelled only. Here, we introduce a new feature, being dual fuelling, in which the sandpile is constantly fuelled concurrently at two locations, in order to observe the effect on energy confinement and mass loss event (MLE) size. We observe that by adding $\sim$2.5% of the sand at a location near the top of the pedestal (near the edge of the plasma), the maximum amount of sand lost in an MLE ($\Delta S_{max}$) is significantly reduced.
Dual-fuelled sandpile
=====================
We begin with a feedback model in which sand is added only at the core (as is typical for other implementations of the model). We add sand at a constant rate ($dx_{fc}=1.2$) until the sandpile builds up and enters a ‘steady state’ in which the time averaged amount of sand lost in MLEs equals the amount of sand added. The median waiting time, $\Delta t_n$, between MLEs is $\sim$$135000$, and $\Delta S_{max}$ is $\sim$$630000$. The energy of the system ($E_p$), measured by the sum of the squares of the values of the cells, is $\sim$$2.7\times10^9$. The parameters chosen are based on Bowie and Hole [@Bowie2018].
For the sandpile chosen, the width of the pedestal, $P_w$, is $\sim$$15/500$ cells, meaning that the top of the pedestal is located at $n=485$. Due to the feedback effects built into the model, the pedestal edge moves with time, approximately synchronously with $E_p$. The resulting shape of the sandpile is shown in Figure \[fig:Sandpile, Ep, and Max MLE\](a), with the values of $E_p$ and $P_w$ over 2 million iterations shown in Figure \[fig:Sandpile, Ep, and Max MLE\](b) and (c).
![(L to R) Sandpile, $E_p$, and $P_w$ plots for base case ($dx_{fe} = 0$) (top); $dx_{fe} = 0.03$, added at $n=487$ (bottom)[]{data-label="fig:Sandpile, Ep, and Max MLE"}](./"Sandpile_-_Pellet_-_Model_3_-Cell_480-492__RLstart300_LFstart_100_Time_1_NO_BONUS_SAND"){width="\linewidth"}
![(L to R) Sandpile, $E_p$, and $P_w$ plots for base case ($dx_{fe} = 0$) (top); $dx_{fe} = 0.03$, added at $n=487$ (bottom)[]{data-label="fig:Sandpile, Ep, and Max MLE"}](./"Potential_Energy_-_Pellet_-_Model_3_-Cell_490__RLstart300_LFstart_100_Time_NO_BONUS_SAND"){width="\linewidth"}
![(L to R) Sandpile, $E_p$, and $P_w$ plots for base case ($dx_{fe} = 0$) (top); $dx_{fe} = 0.03$, added at $n=487$ (bottom)[]{data-label="fig:Sandpile, Ep, and Max MLE"}](./"Larmor_Radius_-Pellet_-_Model_3_-Cell_480-492__RLstart300_LFstart_100_Time_1_NO_ADDED_SAND"){width="\linewidth"}
![(L to R) Sandpile, $E_p$, and $P_w$ plots for base case ($dx_{fe} = 0$) (top); $dx_{fe} = 0.03$, added at $n=487$ (bottom)[]{data-label="fig:Sandpile, Ep, and Max MLE"}](./"Sandpile_-_Pellet_-_Model_3_-Cell_487__RLstart300_LFstart_100_Time_1_Pellet_0_03"){width="\linewidth"}
![(L to R) Sandpile, $E_p$, and $P_w$ plots for base case ($dx_{fe} = 0$) (top); $dx_{fe} = 0.03$, added at $n=487$ (bottom)[]{data-label="fig:Sandpile, Ep, and Max MLE"}](./"Potential_Energy_-_Pellet_-_Model_3_-Cell_487__RLstart300_LFstart_100_Time_1_Pellet_0_03"){width="\linewidth"}
![(L to R) Sandpile, $E_p$, and $P_w$ plots for base case ($dx_{fe} = 0$) (top); $dx_{fe} = 0.03$, added at $n=487$ (bottom)[]{data-label="fig:Sandpile, Ep, and Max MLE"}](./"Larmor_radius_-_Pellet_-_Model_3_-Cell_487__RLstart300_LFstart_100_Time_1_Pellet_0_03"){width="\linewidth"}
We then modify the model by adding most of the sand ($dx_{fc}=1.2$) at the core ($n=1$) and some of the sand ($dx_{fe}$) at another location within the sandpile, $n_{fe}$. Sand is added continuously at both the core and $n_{fe}$, representing dual fuelling rather than time-separated pellets. We record the average value of $E_p$ and $\Delta S_{max}$. We then repeat the process for a number of values in the range from $n_{fe}=2$ to $n_{fe}=500$. The sandpile, and values of $E_p$ and $P_w$, using this dual fuelling model, are shown in Figure \[fig:Sandpile, Ep, and Max MLE\](d-f). We observe that, consistent with the reduction in $\Delta S_{max}$, the $E_p$ and $P_w$ traces are much smoother where dual fuelling is employed. Figure \[fig:Sandpile, Ep, and Max MLE\](f) shows us that for $dx_{fe}=0.03$, $P_w$ is about $13$ when $n_{fe}=487$, i.e. the sand is added at about the top of the pedestal.
Figure \[fig:pellet-size-0\_03\] shows how $E_p$ and $\Delta S_{max}$ vary as we change $n_{fe}$, for $dx_{fe}=0.03$. Both $E_p$ and $\Delta S_{max}$ are minimised when $n_{fe}$ is located within the pedestal. MLEs are maximally suppressed when $n_{fe}$ is in the range $487-497$, with the maximum $E_p$ in that range at $n_{fe}=487$ (i.e. the top of the pedestal). When $n_{fe}$ is located at the top of the pedestal, $E_p$ declines by about 30%, with a concurrent $\sim$93% reduction in $\Delta S_{max}$. If $n_{fe}$ is located just outside the pedestal, a reduction in $\Delta S_{max}$ of $\sim$50% can be achieved with little effect on $E_p$. By contrast, dual fuelling significantly outside the pedestal has little effect on either $E_p$ or $\Delta S_{max}$, as shown in Figure \[fig:pellet-size-0\_03\](a).
Essentially, what is observed is that $n_{fe}$, when located at the top of, or within, the pedestal, sets a maximum value for $P_w$, by suppressing further growth of $P_w$. This in turn prevents the sandpile from becoming sufficiently large that it collapses.
The trade-off between reduction in $\Delta S_{max}$ and $E_p$ can also be seen if $dx_{fe}$ is varied. In Figure \[fig:variable-pellet-sizes—max-mle-and-pe\], we show $\Delta S_{max}$ and $E_p$ for a range of pellet sizes, added at $n_{fe}=490$, which is near the top of the pedestal. We see that as we increase $dx_{fe}$, $\Delta S_{max}$ and $E_p$ both decline. $\Delta S_{max}$ has been reduced by an order of magnitude at $dx_{fe}=0.03$ and remains relatively steady after that, while $E_p$ continues to decrease as we increase $dx_{fe}$.
In addition, generally speaking, for values of $dx_{fe}$ below $0.03$, the ‘dip’ in $E_p$ and $\Delta S_{max}$ is smaller, and occurs over a smaller range of values of $n_{fe}$. For higher values, the dip is larger over a $\sim17$ cell range, representing an approximate radial width of $17/500=0.034$ of the plasma. The ‘sweet spot’ appears where the dip is over a wide enough range such that extreme precision in adding $dx_{fe}$ is not required, without resulting in a large decrease in $E_p$.
Taking these factors into account, we suggest that the optimal value for $dx_{fe}$ is about $0.03$, or $2.5\%$ of $dx_{fc}$. As noted above, for $dx_{fe}=0.03$, maximal suppression of MLEs, coupled with minimal reduction in $E_p$, occurs at about $n_{fe}=487$, being the top of the pedestal.
Discussion
==========
To date, pellet fuelling in fusion plasmas has been aimed at the triggering of an ELM immediately following the introduction of a pellet, so as to increase $f_{ELM}$, and consequently decrease $W_{ELM}$, on the basis that $f_{ELM}\times W_{ELM}=constant$ [@Hong1999; @Baylor2005; @Baylor2007; @Baylor2013; @Baylor2015; @Lang2004; @Lang2014; @Lang2015; @Pegourie2007; @Rhee2012]. Here we suggest a potentially different path to ELM reduction, as the dual fuelling proposed here is constant, rather than pelletized, and therefore does not produce MLEs synchronised with the introduction of additional fuelling. Instead, the constant injection of fuel at or about the top of the pedestal in a feedback modified sandpile, when coupled with the feedback mechanism, triggers MLEs more regularly, but still with a waiting time of at least several thousand time steps.
We observe that MLE suppression does not occur when $n_{fe}$ is significantly outside the pedestal in which feedback occurs. MLE suppression also does not occur for dual fuelling in the classic sandpile model, in which no feedback occurs. This suggests that MLE suppression by dual fuelling is directly related to modification of feedback in the pedestal.
The feedback model, including a pedestal, has been suggested to be analogous to a fusion plasma, including a H-mode pedestal in which feedback effects occur[@Bowie2018], perhaps because a common underlying dynamical behaviour occurs in both the model and the fusion plasma. As a result, we suggest that dual fuelling in a fusion plasma may similarly lead to ELM suppression. Specifically, it may be advantageous to operate a fusion plasma in a mode in which most of the fuelling occurs at the core, while 2.5% of the fuelling occurs at the top of the pedestal. If our conjecture is correct, and the fuelling properties/insights of the MLE model are portable to a tokamak, such an operating mode will result in the suppression of ELMs at a low energy density and temperature cost.
Notwithstanding that existing pellet fuelling schemes have been aimed at the triggering of an ELM immediately following the introduction of a pellet, there may nonetheless be a relationship between the proposal here and pellet fuelling schemes employed to date. Minimum pellet sizes have been suggested for production of ELMs in experiments, as a consequence of the practical requirement that pellets be large enough to reach the top of the pedestal. The minimum size is also a function of pellet velocity, as the pellet size necessary to reach the top of the pedestal decreases as pellet velocity increases. These minimum sizes are coupled with the maximum practically achievable injection frequency in each experiment. If our analogy is correct, the minimum necessary size to reach the top of the pedestal will couple with the injection frequency to produce an optimal injection frequency, which may be less than the maximum achievable injection frequency.
In order to make a comparison with the proposed ITER scheme, we have ‘pelletized’ $dx_{fe}$ by adding sand every $4000$ time steps (being approximately the natural waiting time in the model, divided by $45$, based on the assumption that the pellet frequency in ITER will be $45$ Hz [@Baylor2015]), with $f_{ELM}=1$ Hz [@Baylor2015]. The amount of sand added in total is equal to the amount added continuously, i.e. $4000\times0.03=120$. On the assumption that pellets take effect over their ablation time, rather than instantaneously, we have delivered the pellet over $400$ time steps, adopting an observed ablation time for a MAST pellet of $13\times200\,\mu\textrm{s} = 2.6\,\textrm{ms}$ [@Garzotti2010], which equates to $\sim 400$ time steps in our model. The result is that at each time step during pellet injection, $dx=1.2$ and $dx_{fe}=0.3$, while for all other time steps $dx=1.2$ and $dx_{fe}=0$. We also observe that the amount of sand in the pedestal in the model is about $11,000$ units, so that a pellet size of $120$ units is $\sim 1\%$ of the particles in the pedestal, which is consistent with modelling by Hayashi [@Hayashi2013], suggesting that the pellet size should be 1% of the pedestal particle content. With these parameter settings, $E_p\sim1.9\times10^9$ (a reduction of $\sim30\%$ from the base case), and $\Delta S_{max}\sim13000$ (a reduction of $\sim98\%$).
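The pelletized delivery schedule just described can be written down directly. This is a minimal sketch (the function name and parameter names are ours): each pellet carries 120 units of sand, spread uniformly over the 400-step ablation window at the start of each 4000-step period, so that the time-averaged edge fuelling rate equals the continuous value $dx_{fe}=0.03$.

```python
def edge_rate(t, period=4000, ablation=400, pellet_sand=120.0):
    """Edge fuelling rate dx_fe at time step t for pelletized delivery:
    pellet_sand units spread uniformly over the ablation window at the
    start of each period; zero otherwise."""
    return pellet_sand / ablation if (t % period) < ablation else 0.0

# during ablation dx_fe = 0.3; summed over one period the pellet
# delivers the same 120 units as continuous fuelling at 0.03
per_period = sum(edge_rate(t) for t in range(4000))
```

The core rate stays at $dx=1.2$ throughout; only the edge term is pulsed.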
By contrast, if pellets are injected at a rate equal to the natural MLE frequency, consistent with pellet pacing experiments at JET, then while $E_p\sim1.9\times10^9$ (the same as for the reduction from the base case of $\sim30\%$), $\Delta S_{max}\sim99000$ (a reduction of only $\sim75\%$). The continuing occurrence of significant MLEs is consistent with the result observed at JET in which ELMs still occurred during pellet pacing, rather than being fully suppressed.
This suggests that a series of pellets, such as those to be used in ITER, represents a good approximation to the continuous edge fuelling proposed here, particularly with regard to the practical limitations of implementing such a scheme. Our model also suggests that the relevant consideration for pellet pacing is whether the total amount of particles delivered reaches the ELMing threshold, whether delivered continuously, or over several pellets or gas puffs. This result contrasts with pellet pacing schemes in which pellet timing is aligned to ELM onset [@Lang2015]; our result suggests that it is not synchronisation of the pellets which is relevant in this regime, but instead the total amount of fuelling delivered (at least quasi-continuously) at the top of the pedestal.
The scheme may alternatively be implemented by gas puffing, to the extent that gas puffs can be controllably injected at the top of the pedestal as part of a dual fuelling scheme in the proportions suggested here.
Conclusion
==========
We have implemented a feedback modified sandpile model, to which we have added dual fuelling. The sandpile model incorporates feedback effects within an edge pedestal. We have observed that when additional fuelling is added at the top of the pedestal, MLEs are almost entirely suppressed while $E_p$ is reduced to a lesser extent.
We observe that optimal MLE suppression, with minimal $E_p$ reduction, occurs when edge fuelling represents approximately 2.5% of core fuelling, and the edge fuelling is added at the top of the pedestal. We conjecture that this MLE suppression results from suppression of feedback in the pedestal of the model. We suggest that a similar scheme employed in a fusion plasma may result in the suppression of ELMs at a low particle density and temperature cost.
We have shown that this scheme is related to a scheme of pellet injection at frequencies up to 45 times the natural $f_{ELM}$ proposed for use in ITER [@Baylor2015], and tested in DIII-D [@Baylor2013], and to a scheme modelled by Hayashi [@Hayashi2013], who suggests that small pellets of the order of 1% of the pedestal particle content, which are fully ablated at the top of the pedestal, may be sufficient to trigger ELMs, and thereby reduce their size. However, significant ELM suppression may not occur unless the pellet rate significantly exceeds $f_{ELM}$. Our result suggests that it is not the synchronisation of pellets to ELMs which is relevant for ELM suppression in this regime, but rather the total amount of fuel delivered (at least quasi-continuously) at the top of the pedestal.
Gas puffing which provides relatively constant edge fuelling may also suppress ELMs at the same ratio of core to edge fuelling.
We suggest that others may wish to implement the scheme proposed here in a fusion plasma, to determine whether edge fuelling can suppress ELMs at a particle density and temperature cost which is considered acceptable for the experiment in question, consistent with the results of our model.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work was jointly funded by the Australian Research Council through grant FT0991899 and the Australian National University. One of the authors, C. A. Bowie, is supported through an ANU PhD scholarship, an Australian Government Research Training Program (RTP) Scholarship, and an Australian Institute of Nuclear Science and Engineering Postgraduate Research Award.
---
abstract: 'We study the Gross-Pitaevskii equation with an attractive delta function potential and show that in the high velocity limit an incident soliton is split into reflected and transmitted soliton components plus a small amount of dispersion. We give explicit analytic formulas for the reflected and transmitted portions, while the remainder takes the form of an error. Although the existence of a bound state for this potential introduces difficulties not present in the case of a repulsive potential, we show that the proportion of the soliton which is trapped at the origin vanishes in the limit.'
address: |
Mathematics Department, University of California\
Evans Hall, Berkeley, CA 94720, USA
author:
- Kiril Datchev
- Justin Holmer
title: Fast soliton scattering by attractive delta impurities
---
Introduction {#in}
============
The nonlinear Schrödinger equation (NLS) or Gross-Pitaevskii equation (GP) $$\label{E:fNLS}
i\partial_t u + \tfrac12\partial_x^2 u + |u|^2u=0 \, ,$$ where $u=u(x,t)$ and $x\in \mathbb{R}$, possesses a family of soliton solutions $$u(x,t) = e^{i\gamma} e^{ivx}e^{-\frac12itv^2}\lambda {\textnormal{sech}}(\lambda(x-x_0-vt))$$ parameterized by the constants of phase $\gamma\in \mathbb{R}$, velocity $v\in \mathbb{R}$, initial position $x_0\in \mathbb{R}$, and scale $\lambda>0$. Given that these solutions are exponentially localized, they very nearly solve the perturbed equation $$\label{E:pNLS}
i\partial_t u + \tfrac12\partial_x^2 u - q\delta_0(x)u + |u|^2u=0\, ,$$ when the center of the soliton satisfies $|x_0+vt| \gg 1$. In fact, if we consider initial data $$u(x,0) = e^{ixv}{\textnormal{sech}}(x-x_0)$$ for $x_0 \ll -1$, then we expect the solution to essentially remain the rightward propagating soliton $e^{ixv}e^{-\frac12 iv^2t} {\textnormal{sech}}(x-x_0-vt)$ until time $t\sim |x_0|/v$, at which point a substantial amount of mass “sees” the delta potential. It is of interest to examine the subsequent behavior of the solution, as this arises as a model problem in nonlinear optics and condensed matter physics (see Cao-Malomed [@CM] and Goodman-Holmes-Weinstein [@GHW]). In the case $|q| \ll 1$, Holmer-Zworski [@HZ1] find that the soliton remains intact and the evolution of the center of the soliton approximately obeys Hamilton’s equations of motion for a suitable effective Hamiltonian. This result applies to both the repulsive ($q>0$) case and the attractive ($q<0$) case, and identifies the $|q|\ll 1$ setting as a *semi-classical* regime.
On the other hand, *quantum* effects dominate for high velocities $|v|\gg 1$. In Holmer-Marzuola-Zworski [@HMZ][@HMZ2], the case of $q>0$ and $v \gg 1$ is studied (most interesting is the regime $q\sim v$), and it is proved that the incoming soliton is split into a transmitted component and a reflected component. The transmitted component continues to propagate to the right at velocity $v$ and the reflected component propagates back to the left at velocity $-v$, see Fig. \[F:snapshots\]. The transmitted mass and reflected mass are determined as well as the detailed asymptotic form of the transmitted and reflected waves. The rigorous analysis in [@HMZ] is rooted in the heuristic that at high velocities, the time of interaction of the solution with the delta potential is short, and thus the solution is well-approximated in $L^2$ by the solution to the corresponding linear problem $$\label{E:plin}
i\partial_t u + \tfrac12\partial_x^2 u -q\delta_0(x) u =0 \,.$$ This heuristic is typically valid provided the problem is $L^2$ subcritical with respect to scaling. In this case, it is shown to hold using Strichartz estimates for solutions to this linear problem and its inhomogeneous counterpart, with bounds independent of $q$. The Strichartz estimates are also used in a perturbative analysis comparing the incoming solution (pre-interaction) and outgoing solution (post-interaction) with the solution to the free NLS equation . One then proceeds with an analysis of the linear problem to understand the interaction. Let $H_q = -\frac12\partial_x^2 + q\delta_0(x)$ and consider a general plane wave solution to $(H_q-\frac12\lambda^2)w=0$, $$w(x) =
\left\{
\begin{aligned}
&A_+ e^{-i\lambda x} + B_- e^{i\lambda x} & \text{for } x>0\\
&A_- e^{-i\lambda x} + B_+ e^{i\lambda x} & \text{for } x<0
\end{aligned}
\right.$$ The matrix $$S(\lambda): \begin{bmatrix} A_+ \\ B_+ \end{bmatrix} \mapsto \begin{bmatrix} A_- \\ B_- \end{bmatrix}$$ sending incoming ($+$) coefficients to outgoing ($-$) coefficients is called the scattering matrix, and in this case it can be easily computed as $$S(\lambda) = \begin{bmatrix} t_q(\lambda) & r_q(\lambda) \\ r_q(\lambda) & t_q(\lambda) \end{bmatrix} \, ,$$ where $t_q(\lambda)$ and $r_q(\lambda)$ are the transmission and reflection coefficients $$t_q(\lambda) = \frac{i\lambda}{i\lambda-q} \quad \text{and} \quad r_q(\lambda) = \frac{q}{i\lambda -q}\, .$$ We have that at high velocities and for $x_1\ll -1$, $$\label{E:linscat}
e^{-itH_q}[e^{ixv}{\textnormal{sech}}(x-x_1)] \approx t(v)e^{-itH_0}[e^{ixv}{\textnormal{sech}}(x-x_1)] + r(v) e^{-itH_0}[e^{-ixv}{\textnormal{sech}}(x+x_1)].$$ From this we can infer that the transmitted mass $$T_q(v) = \frac{\|u(t)\|_{L_{x>0}^2}^2}{\|u(t)\|_{L_x^2}^2} = \tfrac12\|u(t)\|_{L_{x>0}^2}^2$$ matches the quantum transmission rate at velocity $v$, i.e. the square of the transmission coefficient $$T_q(v) \approx |t_q(v)|^2 = \frac{v^2}{v^2+q^2} \, .$$ This is confirmed by a numerical analysis of this problem in Holmer-Marzuola-Zworski [@HMZ2], where it is reported that for $q/v$ fixed, $$T_q(v) = \frac{v^2}{v^2+q^2} + \mathcal{O}(v^{-2}), \quad \text{as }v\to +\infty \, .$$ Further, \[E:linscat\] gives approximately the form of the solution just after the interaction, and one can then model the post-interaction evolution by the free nonlinear equation \[E:fNLS\] and apply the inverse scattering method to yield a detailed asymptotic. The results of [@HMZ] are valid up to time $\log v$, at which point the errors accumulated in the perturbative analysis become large.
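The coefficients above are simple to verify numerically. A minimal sketch (function name ours), confirming unitarity, $|t_q|^2+|r_q|^2=1$, and the even split $T_q=1/2$ at the values $q=-3$, $v=3$ used in the simulation of Fig. \[F:snapshots\]:

```python
def delta_scattering(q, lam):
    """Transmission and reflection coefficients t_q, r_q of the
    potential q*delta_0 at frequency lam (formulas from the text)."""
    t = 1j * lam / (1j * lam - q)
    r = q / (1j * lam - q)
    return t, r

t, r = delta_scattering(q=-3.0, lam=3.0)
T = abs(t) ** 2          # quantum transmission rate |t_q(v)|^2
```

For $q=-3$, $v=3$ this gives $T = v^2/(v^2+q^2) = 1/2$: the regime $|q|\sim v$ of near-even splitting.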
![\[F:snapshots\] Numerical simulation of the case $q=-3$, $v=3$, $x_0=-10$, at times $t=0.0, 2.7, 3.3, 4.0$. Each frame is a plot of amplitude $|u|$ versus $x$.](snap1a "fig:")![\[F:snapshots\] Numerical simulation of the case $q=-3$, $v=3$, $x_0=-10$, at times $t=0.0, 2.7, 3.3, 4.0$. Each frame is a plot of amplitude $|u|$ versus $x$.](snap1b "fig:") ![\[F:snapshots\] Numerical simulation of the case $q=-3$, $v=3$, $x_0=-10$, at times $t=0.0, 2.7, 3.3, 4.0$. Each frame is a plot of amplitude $|u|$ versus $x$.](snap1c "fig:")![\[F:snapshots\] Numerical simulation of the case $q=-3$, $v=3$, $x_0=-10$, at times $t=0.0, 2.7, 3.3, 4.0$. Each frame is a plot of amplitude $|u|$ versus $x$.](snap1d "fig:")
When $q<0$, the nonlinear equation \[E:pNLS\] has a one-parameter family of bound state solutions $$\label{E:bound}
u(x,t) = e^{it\lambda^2/2}\lambda {\textnormal{sech}}(\lambda|x|+\tanh^{-1}(|q|/\lambda)), \quad 0<|q|<\lambda \, .$$ The numerical simulations in [@HMZ2] show that at high velocities, the incoming soliton is still split into a rightward propagating transmitted component and a leftward propagating reflected component, although in addition some mass is left behind at the origin, ultimately resolving to a bound state of the form \[E:bound\]. However, the amount of mass trapped at the origin diminishes exponentially as $v\to +\infty$, and the observed mass of the transmitted and reflected waves is consistent with the assumption that the outgoing solution is still initially well-modelled by \[E:linscat\].
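One way to see where the offset $\tanh^{-1}(|q|/\lambda)$ comes from: away from $x=0$ the sech profile solves the stationary NLS, while integrating the equation across the delta yields the derivative-jump condition $\tfrac12[u'(0^+)-u'(0^-)] = q\,u(0)$. A minimal numerical check of this jump condition (function name ours):

```python
import math

def jump_residual(q, lam):
    """Residual of the jump condition (1/2)[u'(0+) - u'(0-)] = q*u(0)
    for the bound-state profile u(x) = lam*sech(lam*|x| + a), with
    a = artanh(|q|/lam); requires q < 0 and |q| < lam."""
    a = math.atanh(abs(q) / lam)
    sech = 1.0 / math.cosh(a)
    u0 = lam * sech                            # u(0)
    du_right = -lam**2 * sech * math.tanh(a)   # u'(0+); by symmetry u'(0-) = -u'(0+)
    return 0.5 * (2.0 * du_right) - q * u0

res = jump_residual(q=-0.5, lam=1.0)
```

The residual vanishes exactly when $\tanh a = |q|/\lambda$, which is precisely the offset appearing in the bound-state formula.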
In this paper, we undertake a rigorous analysis of the case $q<0$ and $v$ large. This analysis is complicated by the presence of an eigenstate solution $u(x,t) = e^{\frac12 itq^2}e^{-|q||x|}$ to the linear problem \[E:plin\]. Therefore, the Strichartz estimates, which involve global time integration, cannot be valid for general solutions to \[E:plin\]. However, they can be shown to hold for the *dispersive* component of the solution $e^{-itH_q}(\phi - P\phi)$, where $P$ is the orthogonal projection onto the eigendirection $e^{-|q||x|}$. In the pre-interaction, interaction, and post-interaction perturbative analyses, this eigenstate must be separately analyzed. This introduces the most difficulty in the post-interaction analysis, although (as explained in more detail below) we are able to obtain suitable estimates by introducing a more refined decomposition of the outgoing waves and invoking some nonlinear energy estimates. We thus obtain the following:
\[th:1\] Fix $0 < {\varepsilon}\ll 1$. If $u(x,t)$ is the solution of \[E:pNLS\] with initial condition $u(x,0)=e^{ixv}{\textnormal{sech}}(x-x_0)$ and $x_0\leq -v^{\varepsilon}$, then for $|q|\gtrsim 1$ and $$\label{E:vlarge}
v \ge C(\log |q|)^{1/{\varepsilon}} + C|q|^{\frac{13}{14}(1+2{\varepsilon})} + C_{{\varepsilon},n} {\langle}q {\rangle}^{\frac 1n}$$ we have $$\label{eq:th}
\frac{1}{2} \int_{x> 0 } | u ( x , t ) |^2 dx = \frac{v^2}{ v^2 + q^2 } +
{\mathcal O}( |q|^\frac13v^{-\frac76(1-2{\varepsilon})})+\mathcal{O}(v^{-(1-2{\varepsilon})})$$ uniformly for “post-interaction” times $$\frac{|x_0|}{v}+v^{-1+{\varepsilon}}\leq t \leq {\varepsilon}\log v.$$ Here the constant $C$ and the constants in $\mathcal{O}$ are independent of $q,v,{\varepsilon}$, while $C_{{\varepsilon},n}$ is a constant depending on ${\varepsilon}$ and $n$ which goes to infinity as ${\varepsilon}\to 0$ or $n \to \infty$.
The proof is outlined in §\[proof\]. It is decomposed into estimates for the pre-interaction phase (Phase 1), interaction phase (Phase 2), and post-interaction phase (Phase 3). The details of the estimates for each of the phases are then given in §\[S:phase1\] (Phase 1), §\[S:phase2\] (Phase 2), and §\[S:phase3\] (Phase 3).
The assumption that $v \gtrsim |q|^{\frac{13}{14}(1+2{\varepsilon})}$ is new to our $q<0$ analysis; no assumption of this strength was required in the $q>0$ case treated in [@HMZ]. It is needed in order to iterate over unit-sized time intervals in the post-interaction phase. The perturbative equation in that analysis has a forcing term whose size can be at most comparable to the size of the initial error. The condition that emerges is $|q|^{3/2}(\text{error})^2 \leq c$. Since the error bestowed upon us from the interaction phase analysis is $|q|^\frac13v^{-\frac76(1-{\varepsilon})}$, the condition $|q|^{3/2}(\text{error})^2 \leq c$ equates to $v \gtrsim |q|^{\frac{13}{14}(1+2{\varepsilon})}$. Provided this condition is satisfied, we can iterate over $\sim {\varepsilon}\log v$ unit-sized time intervals, with the error bound doubling over each interval, and incur a loss of size $v^{{\varepsilon}}$. This enables us to reach time ${\varepsilon}\log v$.
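Dropping the ${\varepsilon}$ corrections for clarity, the exponent $\frac{13}{14}$ follows by direct arithmetic: substituting the interaction-phase error $|q|^{1/3}v^{-7/6}$ into the iteration condition gives $$|q|^{3/2}\left(|q|^{1/3}v^{-7/6}\right)^2 = |q|^{13/6}\,v^{-7/3} \leq c \quad\Longleftrightarrow\quad v \gtrsim |q|^{(13/6)\cdot(3/7)} = |q|^{13/14}\,.$$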
The assumption $v \gtrsim |q|^{\frac{13}{14}(1+2{\varepsilon})}$ is not a serious limitation, however, since the most interesting phenomenon (even splitting or near even splitting) occurs for $|q|\sim v$. Furthermore, if the analysis is only carried through the interaction phase (ending at time $|x_0|/v+v^{-1+{\varepsilon}}$) and no further, then only the assumption $v \gtrsim |q|^{\frac12(1+{\varepsilon})}$ is needed. We believe that if our post-interaction arguments are amplified with a series of technical refinements, we could relax the restriction needed there from $v \gtrsim |q|^{\frac{13}{14}(1+2{\varepsilon})}$ to $v \gtrsim |q|^{\frac12(1+{\varepsilon})}$. On the other hand, the condition $v \gtrsim |q|^{\frac12(1+{\varepsilon})}$ shows up in a more serious way in the interaction analysis, and to relax this restriction even further (if it is possible) would require a more significant new idea.
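For orientation, the leading term of the mass-splitting formula can be tabulated directly: the transmitted mass fraction is $v^2/(v^2+q^2) = |t_q(v)|^2$, which passes through even splitting exactly at $|q| = v$. A minimal numerical sketch (plain Python; the particular values of $q$ and $v$ are illustrative only, not part of the analysis):

```python
def transmitted_fraction(q: float, v: float) -> float:
    # Leading term of Theorem 1: (1/2) ∫_{x>0} |u|^2 dx ≈ v^2/(v^2 + q^2) = |t_q(v)|^2
    return v**2 / (v**2 + q**2)

v = 100.0
for q in (-10.0, -100.0, -1000.0):
    # |q| << v: mostly transmitted; |q| = v: even splitting; |q| >> v: mostly reflected
    print(f"q = {q:8.1f}:  transmitted fraction = {transmitted_fraction(q, v):.4f}")
```

The even split at $|q| = v$ is why that regime is singled out as the most interesting one above.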
The condition $v \gtrsim |q|^{\frac12(1+{\varepsilon})}$ comes about as a result of applying Strichartz estimates to the flow $e^{-itH_q}\phi$ rather than just to the dispersive part $e^{-itH_q}(\phi-P\phi)$, and the additional error found in the $q<0$ case compared to the $q>0$ case arises in the same way. As discussed in Theorem \[th:3\] below, in the case $q>0$ we have $\mathcal{O}(v^{-1+})$ in place of ${\mathcal O}( |q|^\frac13v^{-\frac76+})+\mathcal{O}(v^{-1+})$ for $q<0$ (which in the crucial regime $|q| \sim v$ becomes $\mathcal{O}(v^{-\frac 56 +})$). However, the numerical study conducted in [@HMZ2] (see equation (2.4), Table 2, and Fig. 5 in that paper) suggests that the trapping at the origin should be exponentially small instead, indicating that this is probably only an artifact of our method of proof.
The proof of Theorem \[th:1\] is based entirely upon estimates for the perturbed and free linear propagators, and some nonlinear conservation laws (energy and mass); there is no use of the inverse scattering theory. However, as in [@HMZ], we can combine the inverse scattering theory with the proof of Theorem \[th:1\] to obtain a strengthened result giving more information about the behavior of the outgoing waves. This result we state as:
\[th:2\] Under the hypothesis of Theorem \[th:1\] and for $$\frac{|x_0|}{v}+1 \leq t \leq {\varepsilon}\log v,$$ we have $$\label{eq:th2}
u(x,t) =
\begin{aligned}[t]
&\phi_0(|t_q(v)|) e^{\frac{1}{2}i|\tilde T_q(v)|^2t}e^{i\arg t_q(v)} e^{ixv}e^{-itv^2}\tilde T_q(v) {\textnormal{sech}}(\tilde T_q(v)(x-x_0-tv))\\
&+\phi_0(|r_q(v)|) e^{\frac{1}{2}i|\tilde R_q(v)|^2t}e^{i\arg r_q(v)} e^{-ixv}e^{-itv^2}\tilde R_q(v) {\textnormal{sech}}(\tilde R_q(v)(x+x_0+tv))\\
& + \mathcal{O}_{L_x^\infty}\left(\left(t-\frac{|x_0|}{v}\right)^{-1/2}\right) +\mathcal{O}_{L_x^2}(|q|^{\frac13}v^{-\frac76(1-2{\varepsilon})})+\mathcal{O}(v^{-1+2{\varepsilon}})
\end{aligned}$$ where $$\label{eq:th3}
\tilde T_q(v)=[2|t_q(v)|-1]_+, \qquad \tilde R_q(v)=[2|r_q(v)|-1]_+\, ,$$ $$\phi_0(\alpha) = \int_0^\infty \log\left( 1 + \frac{\sin^2\pi \alpha}{\cosh^2\pi \zeta} \right) \frac{\zeta}{\zeta^2+(2\alpha-1)^2} \, d\zeta \,.$$ When $ 2 | t_q ( v ) | =1 $ or $ 2 | r_q ( v ) | = 1 $ the first error term in is modified to $ {\mathcal O}_{ L^\infty_x } \big( ( (\log ( t - |x_0|/v ))/( t - |x_0|/v ) )^{\frac12} \big) $.
The proof of Theorem \[th:2\] is not discussed in the main body of this paper, since all of the needed information is contained in §4 and Appendix B of [@HMZ]. The main point is that Theorem \[th:1\] in fact establishes that for times $|x_0|/v+1\leq t\leq {\varepsilon}\log v$, we have $$u(x,t) =
\begin{aligned}[t]
& e^{-itv^2/2}e^{it_2/2}e^{ixv}{\textnormal{NLS}_0}(t-t_2)[t(v){\textnormal{sech}}(x)](x-x_0-tv) \\
&+ e^{-itv^2/2}e^{it_2/2}e^{-ixv}{\textnormal{NLS}_0}(t-t_2)[r(v){\textnormal{sech}}(x)](x+x_0+tv) \\
&+\mathcal{O}(v^{-(1-{\varepsilon})}) + \mathcal{O}(|q|^\frac 13v^{-\frac 76(1-2{\varepsilon})})
\end{aligned}$$ where ${\textnormal{NLS}_0}(t)\phi$ denotes the free nonlinear flow according to . This is the starting point of the arguments provided in §4 and Appendix B of [@HMZ], which carry out an asymptotic (in time) description of the free nonlinear evolution of $\alpha{\textnormal{sech}}x$, for a constant $0\leq \alpha<1$.
Although the main point of the present paper is to handle the difficulties involved in the case $q<0$ stemming from the presence of a linear eigenstate, some of the refinements we introduce (specifically, cubic correction terms in the interaction phase analysis) improve the result of [@HMZ] in the case $q>0$. In fact, these refinements are simpler when carried out for $q>0$ directly, and we therefore write them out separately in that setting in §\[S:phase2pos\]. We summarize the results as:
\[th:3\] In the case $q>0$, the assumption in Theorem \[th:1\] can be replaced by the less restrictive $ v \ge C(\log |q|)^{1/{\varepsilon}} + C_{{\varepsilon},n} {\langle}q {\rangle}^{\frac 1n}$, and the conclusion holds with the first error term dropped (that is, $\mathcal{O}_{L_x^2}(|q|^{\frac13}v^{-\frac76(1-2{\varepsilon})})$ is dropped and only $\mathcal{O}(v^{-1+2{\varepsilon}})$ is kept). Also, the conclusion of Theorem 2 holds with $\mathcal{O}_{L_x^2}(|q|^{\frac13}v^{-\frac76(1-2{\varepsilon})})$ dropped from .
Thus in the $q > 0$ case we improve the $L^2$ error from $\mathcal{O}(v^{-1/2 +})$ to $\mathcal{O}(v^{-1 +})$. It may be possible to improve this error further to $\mathcal{O}(v^{-2 +})$ using an iterated integral expansion of the error in the spirit of Sections \[S:phase2\] and \[S:phase2pos\], although a more detailed analysis than the one given there would be needed.
We now outline the proof of Theorem \[th:1\], the main result of the paper, highlighting the modifications of the argument in [@HMZ] needed to address the case of $q<0$. We will use the following terminology: the *free linear* evolution is according to the equation $i\partial_t u + \tfrac12\partial_x^2 u =0$, the *perturbed linear* evolution is according to the equation $i\partial_t u + \tfrac12\partial_x^2 u -q\delta_0 u=0$, the *free nonlinear* evolution is according to the equation $i\partial_t u + \tfrac12\partial_x^2 u +|u|^2u=0$, and the *perturbed nonlinear* evolution is according to the equation $i\partial_t u + \tfrac12\partial_x^2 u -q\delta_0 u+|u|^2u=0$.
The analysis breaks into three separate time intervals: Phase 1 (pre-interaction), Phase 2 (interaction), and Phase 3 (post-interaction). The analysis of Phase 2, discussed in part earlier, is initially based on the principle that at high velocities, the time length of interaction is short $\sim v^{-1+}$, and thus the perturbed nonlinear flow is well-approximated by the perturbed linear flow. In [@HMZ], this was proved to hold for $q>0$ with a bound on the $L^2$ discrepancy of size $\sim v^{-\frac12+}$. In the case $q<0$, we suffer some loss in the strength of the estimates due to the flow along the eigenstate $|q|^\frac12e^{-|q||x|}$, and by directly following the approach of [@HMZ] the best error bound we could obtain is $\sim |q|^\frac13v^{-\frac23+}+v^{-\frac12+}$. In the important regime $|q|\sim v$, this gives an error bound of size $v^{-\frac13+}$, which does not suffice to carry through the Phase 3 post-interaction analysis discussed below. For this reason, we are forced to introduce a cubic correction term to the linear approximation analysis in Phase 2. The Strichartz based argument then shows that the $L^2$ size of the difference between the solution and the linear flow plus cubic correction is of size $\sim |q|^\frac13v^{-\frac76+} + v^{-1+}$. However, since the cubic correction term is fairly explicit, we can do a direct analysis of it (not using the Strichartz estimates) and show that it is also of size $v^{-1+}$. Thus, in the end, we learn that the solution itself is approximated by the perturbed linear flow with error $|q|^\frac13v^{-\frac76+}+v^{-1+}$.
We then carry out the analysis of the perturbed linear evolution, as discussed earlier, and show that by the end of the interaction phase, the solution is decomposed into a transmitted component (modulo a phase factor) $$\label{E:intro_trans}
t(v)e^{ixv}{\textnormal{sech}}(x-x_0-t_2v)$$ and a reflected component (again modulo a phase factor) $$\label{E:intro_refl}
r(v)e^{-ixv}{\textnormal{sech}}(x+x_0+t_2v) \,.$$ In the post-interaction analysis, we aim to argue that the solution is well-approximated by the free nonlinear flow of (that we denote $u_{{\operatorname{tr}}}$) plus the free nonlinear flow of (that we denote $u_{{\textnormal{ref}}})$. It is at this stage that the most serious difficulties beyond those in [@HMZ] are encountered. The approach employed in [@HMZ] was to model the solution $u$ as $u= u_{{\operatorname{tr}}}+u_{{\textnormal{ref}}}+w$, write the equation for $w$ induced by the equations for $u$, $u_{{\operatorname{tr}}}$, and $u_{{\textnormal{ref}}}$, and bound $\|w\|_{L_{[t_a,t_b]}^\infty L_x^2}$ over unit-sized time intervals $[t_a,t_b]$ in terms of the initial size $\|w(t_a)\|_{L^2}$ for that time interval. This was accomplished by using the Strichartz estimates. The Strichartz estimates provide a bound on a whole family of space-time norms $\|w\|_{L_{[t_a,t_b]}^pL_x^r}$ where $(p,r)$ are exponents satisfying an admissibility condition $\frac{2}{p}+\frac1{r}=\frac12$. This family includes the norm $L_{[t_a,t_b]}^\infty L_x^2$; the other norms (such as $L_{[t_a,t_b]}^6L_x^6$) are needed since they necessarily arise on the right-hand side of the estimates. From these estimates, we are able to conclude that the error at most doubles over unit-sized time intervals, and thus after $\sim {\varepsilon}\log v$ time intervals, we have incurred at most an error of size $v^{\varepsilon}$.
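The bookkeeping behind the "error at most doubles" iteration is elementary but worth recording: doubling over $N \approx {\varepsilon}\log v$ unit intervals multiplies the error by $2^N = v^{{\varepsilon}\log 2} \le v^{{\varepsilon}}$. A quick check (plain Python; the specific values of ${\varepsilon}$ and $v$ are illustrative):

```python
import math

eps, v = 0.1, 1.0e6          # illustrative values only
N = eps * math.log(v)        # number of unit-sized time intervals
loss = 2.0 ** N              # error bound doubles over each interval
# 2^(eps log v) = v^(eps log 2), which stays below the v^eps loss quoted in the text
assert math.isclose(loss, v ** (eps * math.log(2.0)), rel_tol=1e-9)
assert loss <= v ** eps
print(f"N = {N:.3f} intervals, accumulated factor = {loss:.3f}, budget v^eps = {v**eps:.3f}")
```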
This strategy presents a problem for the case $q<0$, since the linear eigenstate $|q|^{1/2}e^{-|q||x|}$ is well-controlled in $L_{[t_a,t_b]}^\infty L_x^2$ (of size $\sim 1$) but poorly controlled in $L_{[t_a,t_b]}^6L_x^6$ (of size $\sim |q|^\frac13$). We thus opt to model the post-interaction solution as $u=u_{{\operatorname{tr}}}+u_{{\textnormal{ref}}}+u_{{\textnormal{bd}}}+w$, where $u_{{\textnormal{bd}}}$ is the *perturbed nonlinear* evolution of the $L_x^2$ projection of $(u(t_a)-u_{{\textnormal{ref}}}(t_a)-u_{{\operatorname{tr}}}(t_a))$ onto the linear bound state $|q|^{\frac12}e^{-|q||x|}$. Then we can use nonlinear estimates based on mass conservation and energy conservation to control the growth of $u_{{\textnormal{bd}}}$ over the interval $[t_a,t_b]$. Then $w(t_a)$ is orthogonal to the linear eigenstate, and we can use the Strichartz estimates to control it over the interval $[t_a,t_b]$. In the estimates, we take care to only evaluate $u_{{\textnormal{bd}}}$ in one of the norms controlled by mass or energy conservation. This argument is carried out in detail in §\[S:phase3\].
[Acknowledgments.]{} We would like to thank Maciej Zworski for helpful discussions during the preparation of this paper. The first author was supported in part by NSF grant DMS-0654436 and the second author was supported in part by an NSF postdoctoral fellowship.
Scattering by a delta function {#ros}
==============================
Here we present some basic facts about scattering by a $ \delta $-function potential on the real line. Let $q < 0$ and put $$H_q = - \frac{1}2 \frac{d^2}{dx^2} + q \delta_0 ( x ), \qquad H_0 = -\frac 1 2 \frac{d^2}{dx^2} \, .$$ The operator $H_q$ is self-adjoint on the following domain: $$\mathcal{D}(H_q) = \{f \in H^2({{\mathbb R}}\setminus \{0\}): f'(0^+) - f'(0^-) = 2q f(0)\},$$ where $f(0)$ means $\lim_{x \to 0} f(x)$ and $f'(0^\pm)$ means $\lim_{x \to 0^\pm} f'(x)$. This can be seen by verifying that the operators $H_q \pm i$ are both symmetric and surjective on $\mathcal{D}(H_q)$. We define special solutions, $ e_\pm ( x , \lambda ) $, to $ ( H_q - \lambda^2 /2 ) e_\pm = 0 $, as follows $$\label{211}
e_{\pm}(x,{\lambda}) = t_q ({\lambda})e^{\pm i {\lambda}x} x_{\pm}^0
+ (e^{\pm i {\lambda}x} + r_q ({\lambda})e^{\mp i{\lambda}x}) x_{\mp}^0 \,,$$ where $ t_q $ and $ r_q $ are the transmission and reflection coefficients: $$\label{eq:tr}
t_q ( \lambda ) = \frac{ i \lambda } { i \lambda - q } \,, \ \
r_q ( \lambda ) = \frac{ q} {i \lambda - q } \,.$$ They satisfy two equations, one standard (unitarity) and one due to the special structure of the potential: $$\label{eq:trpr} | t_q ( \lambda ) |^2 + | r_q ( {\lambda}) |^2 = 1 \,, \ \
t_q ( \lambda ) = 1 + r_q ( \lambda ) \,.$$ Let $P$ denote the $L^2$-projection onto the eigenstate $e^{q|x|}$. Specifically, $$\label{E:Pdef}
P\phi(x) = |q|^{1/2}e^{q|x|} \int_y |q|^{1/2}e^{q|y|}\phi(y)\, dy$$ We have $e^{-itH_q}P\phi(x) = e^{\frac{1}{2}itq^2}P\phi(x)$. Note that $P \phi$ is defined for $\phi \in L_x^r$, $1\leq r \leq \infty$, and by the Hölder inequality, $$\label{E:PHolder}
\|P\phi\|_{L^{r_2}} \leq c|q|^{\frac{1}{r_1}-\frac{1}{r_2}}\|\phi\|_{L^{r_1}}, \quad 1\leq r_1, r_2 \leq \infty.$$
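The identities $|t_q|^2+|r_q|^2=1$ and $t_q = 1+r_q$, together with the $L^2$-normalization of the eigenstate $|q|^{1/2}e^{q|x|}$ that makes $P$ an honest projection ($P^2=P$), are easy to confirm numerically. A small sketch (plain Python; the test values of $q$ and $\lambda$, and the quadrature parameters, are arbitrary):

```python
import math

def t_q(lam: float, q: float) -> complex:
    # Transmission coefficient t_q(λ) = iλ/(iλ - q)
    return 1j * lam / (1j * lam - q)

def r_q(lam: float, q: float) -> complex:
    # Reflection coefficient r_q(λ) = q/(iλ - q)
    return q / (1j * lam - q)

q = -3.0
for lam in (0.1, 1.0, 7.5, 100.0):
    t, r = t_q(lam, q), r_q(lam, q)
    assert abs(abs(t)**2 + abs(r)**2 - 1.0) < 1e-12   # unitarity
    assert abs(t - (1.0 + r)) < 1e-12                 # t_q = 1 + r_q

# || |q|^{1/2} e^{q|x|} ||_{L^2}^2 = 2|q| ∫_0^∞ e^{2qy} dy = 1 for q < 0,
# so the rank-one operator P is a projection: P^2 = P.
h, X = 1e-4, 20.0
norm_sq = 2.0 * sum(abs(q) * math.exp(2.0 * q * (k + 0.5) * h) * h
                    for k in range(int(X / h)))
assert abs(norm_sq - 1.0) < 1e-6
```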
We use the representation of the propagator in terms of the generalized eigenfunctions – see the notes [@TZ] covering scattering by compactly supported potentials. The resolvent $$R_q ( \lambda ) {\stackrel{\rm{def}}{=}}( H_q - \lambda^2 / 2 )^{-1} \,,$$ is given by $$R_q ( {\lambda})(x,y) =
\begin{aligned}[t]
\frac{1}{i{\lambda}t_q ({\lambda})}\, \big(e_+(x,{\lambda})e_-(y,{\lambda})(x-y)^0_+ + e_+(y,{\lambda})e_-(x,{\lambda})(x-y)^0_-\big)
\end{aligned}$$ Using Stone’s theorem, this gives an explicit formula for the spectral projection, and hence the Schwartz kernel of the propagator: $$\label{eq:sppr}
e^{ - i t H_q } = \frac{1}{2\pi}\int^\infty_0 e^{- i t \lambda^2/2 } \left(e_+(x,{\lambda})\overline{e_+(y,{\lambda})} + e_-(x,{\lambda}) \overline{e_-(y,{\lambda})}\right) \,d{\lambda}+ e^{\frac{1}{2}itq^2}P \,.$$ We introduce the following notation for the dispersive part of $e^{ - i t H_q }$: $$U_q(t) {\stackrel{\rm{def}}{=}}e^{ - i t H_q } - e^{\frac{1}{2}itq^2}P.$$ The propagator for $ H_q $ is then described in the following
\[p:lin\] Suppose that $ \phi \in L^1 $ and that $ {\operatorname{supp}}\phi \subset ( -\infty , 0] $. Then $$\label{eq:prop}
e^{-itH_q}\phi(x) =
\begin{aligned}[t]
& e^{\frac 1 2 itq^2}P\phi(x) + e^{-itH_0}(\phi \ast \tau_q)(x)\, x_+^0 \\
& + (e^{-itH_0}\phi(x) + e^{-itH_0}(\phi\ast \rho_q)(-x))\, x_-^0
\end{aligned}$$ where $$\label{eq:ftr}
\tau_q ( x ) = \delta_0 ( x) + \rho_q ( x ), \qquad
\rho_q ( x) = qe^{qx} x_+^0$$
Observe that we have, using a deformation of contour, $$\begin{aligned}
\hat \rho_q ({\lambda}) &= q\int_0^\infty e^{x(q-i{\lambda})} dx = \frac q {q-i{\lambda}} \int_0^{-\infty} e^x dx = r_q({\lambda}) \\
\hat \tau_q ({\lambda}) &= 1 + r_q({\lambda}) = t_q({\lambda}).\end{aligned}$$ Observe also that $H_q R = R H_q $, where $R\phi(x) = \phi(-x)$, so that the restriction on the support of $\phi$ is not a serious one, and the formula will allow us to estimate operator norms of $U_q$ using $e^{-itH_0}$. Indeed, from the Hausdorff-Young inequality for $e^{-itH_0}$ we conclude $$\label{eq:HY}
\|U_q \phi \|_{L^p} \le \|\hat\phi\|_{L^{p'}},$$ where $p \in [2,\infty]$ and $p'= \frac p {p-1}$.
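The contour-deformation computation $\hat\rho_q = r_q$ can also be verified by direct quadrature, since for $q<0$ the integrand $q\,e^{(q-i\lambda)x}$ decays on $[0,\infty)$. A numerical sketch (plain Python, midpoint rule; the cutoff $X$ and step count $n$ are ad hoc choices, not part of the argument):

```python
import cmath

def rho_hat_numeric(lam: float, q: float, X: float = 40.0, n: int = 100_000) -> complex:
    # Midpoint-rule approximation of ∫_0^∞ q e^{(q - iλ)x} dx  (converges since q < 0)
    h = X / n
    return sum(q * cmath.exp((q - 1j * lam) * (k + 0.5) * h) * h for k in range(n))

q, lam = -1.5, 2.0
exact = q / (1j * lam - q)           # r_q(λ) from (eq:tr)
approx = rho_hat_numeric(lam, q)
assert abs(approx - exact) < 1e-5    # quadrature matches the closed form
```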
It is enough to show $$U_q(t)\phi(x) = \big[e^{-itH_0}\phi(x) + e^{-itH_0}(\phi * \rho_q)(-x)\big]x_-^0 + e^{-itH_0}(\phi * \tau_q)(x)x_+^0.$$ From the definition of the propagator we have $$U_q(t)\phi(x) = \frac 1 {2\pi} \int_0^\infty \!\! \int e^{-it\lambda^2/2} \left( e_+(x,\lambda) \overline{e_+(y,\lambda)} + e_-(x,\lambda)\overline{e_-(y,\lambda)}\right) \phi(y) dy d\lambda,$$ and so we must verify $$\begin{aligned}
\frac 1 {2\pi} \int_0^\infty e^{-it\lambda^2/2} &\left( e_+(x,\lambda)\int \overline{e_+(y,\lambda)}\phi(y)dy + e_-(x,\lambda)\int\overline{e_-(y,\lambda)} \phi(y) dy \right) d\lambda \\
&= \big[e^{-itH_0}\phi(x) + e^{-itH_0}(\phi * \rho_q)(-x)\big]x_-^0 + e^{-itH_0}(\phi * \tau_q)(x)x_+^0.\end{aligned}$$ We compute first $$\begin{aligned}
\int \overline{e_+(y,\lambda)}\phi(y)dy &= \int_{-\infty}^0 e^{-i\lambda y} \phi(y) d\lambda + \overline{r_q(\lambda)} \int_{-\infty}^0 e^{i\lambda y} \phi(y) d\lambda \\
&= \hat\phi(\lambda) + r_q(-\lambda) \hat\phi(-\lambda), \\
\int \overline{e_-(y,\lambda)}\phi(y)dy &= \overline{t_q(\lambda)}\int_{-\infty}^0 e^{i\lambda y} \phi(y) d\lambda = t_q(-\lambda) \hat\phi(-\lambda).\end{aligned}$$ We first verify the equation for positive $x$: $$\begin{aligned}
&\frac 1 {2\pi} \int_0^\infty e^{-it\lambda^2/2} \left( e_+(x,\lambda)\int \overline{e_+(y,\lambda)}\phi(y)dy + e_-(x,\lambda)\int\overline{e_-(y,\lambda)} \phi(y) dy \right) d\lambda \\
&=\frac 1 {2\pi} \int_0^\infty e^{-it\lambda^2/2} \left( e_+(x,\lambda)(\hat\phi(\lambda) + r_q(-\lambda) \hat\phi(-\lambda)) + e_-(x,\lambda) t_q(-\lambda) \hat\phi(-\lambda)\right) d\lambda \\
&=\frac 1 {2\pi} \int_0^\infty e^{-it\lambda^2/2}
\begin{aligned}[t]
\Big( & t_q(\lambda)e^{i\lambda x}(\hat\phi(\lambda) + r_q(-\lambda) \hat\phi(-\lambda)) \\
&+ (e^{-i\lambda x} + r_q(\lambda) e^{i\lambda x}) t_q(-\lambda) \hat\phi(-\lambda)\Big) d\lambda
\end{aligned}
\intertext{At this stage we use $t_q(\lambda) r_q(-\lambda) + r_q(\lambda) t_q(-\lambda) = 2 {\mathop{\rm Re}\nolimits}\left(t_q(\lambda) \overline{r_q(\lambda)}\right) = 0$:}
&=\frac 1 {2\pi} \int_0^\infty e^{-it\lambda^2/2} \left( t_q(\lambda)e^{i\lambda x}\hat\phi(\lambda) + e^{-i\lambda x} t_q(-\lambda)\hat\phi(-\lambda)\right) d\lambda \\
&=\frac 1 {2\pi} \int_{-\infty}^\infty e^{-it\lambda^2/2} t_q(\lambda) e^{i\lambda x} \hat\phi(\lambda) d\lambda = e^{-itH_0} (\tau_q * \phi) (x).\end{aligned}$$ The proof for negative $x$ is similar, except that it uses $r_q(\lambda) r_q(-\lambda) + t_q(\lambda) t_q(-\lambda) = |r_q(\lambda)|^2 + |t_q(\lambda)|^2=1$.
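The two coefficient identities invoked in this proof, $t_q(\lambda) r_q(-\lambda) + r_q(\lambda) t_q(-\lambda) = 0$ for positive $x$ and $r_q(\lambda) r_q(-\lambda) + t_q(\lambda) t_q(-\lambda) = 1$ for negative $x$, are quick to confirm numerically (plain Python; the test values of $q$ and $\lambda$ are arbitrary):

```python
def t_q(lam: float, q: float) -> complex:
    # Transmission coefficient t_q(λ) = iλ/(iλ - q)
    return 1j * lam / (1j * lam - q)

def r_q(lam: float, q: float) -> complex:
    # Reflection coefficient r_q(λ) = q/(iλ - q)
    return q / (1j * lam - q)

q = -2.0
for lam in (0.5, 3.0, 40.0):
    a = t_q(lam, q) * r_q(-lam, q) + r_q(lam, q) * t_q(-lam, q)   # positive-x identity
    b = r_q(lam, q) * r_q(-lam, q) + t_q(lam, q) * t_q(-lam, q)   # negative-x identity
    assert abs(a) < 1e-12
    assert abs(b - 1.0) < 1e-12
```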
We have two simple applications of Lemma \[p:lin\]: the Strichartz estimate (Proposition \[p:Str\]) and the asymptotics of the linear flow $e^{itH_q}$ as $v \to \infty$ (Proposition \[p:as\]). We start with the Strichartz estimate, which will be used several times in the various approximation arguments of §\[proof\].
\[p:Str\] Suppose $$\label{eq:eq}
i \partial_t u ( x , t ) + \tfrac{1}{2}\partial_{x}^2 u ( x, t )
- q \delta_0 ( x ) u ( x , t ) = f ( x , t ) \,, \ \ u ( x , 0 ) = \phi ( x )
\,.$$ Let the indices $p,r$, $\tilde p$, $\tilde r$ satisfy $$2 \leq p, r \leq \infty \,, \ \ 1 \leq \tilde p , \tilde r \leq 2 \,, \ \
\frac 2 p + \frac 1 r = \frac 12 \,, \ \ \ \frac 2 {\tilde p }
+ \frac 1 {\tilde r} = \frac 52$$ and fix a time $T>0$. Then $$\label{eq:Strneg}
\| u \|_{ L^p_{[0,T]} L^r_x } \leq c (\| \phi \|_{L^2} +T^\frac 1p \| P\phi\|_{L^r} + \|f\|_{L_{[0,T]}^{\tilde p} L_x^{\tilde r}}+T^{\frac 1p}\| Pf \|_{ L_{[0,T]}^1 L_x^r})$$ The constant $c$ is independent of $q$ and $T$. Moreover we can take $f(x,t) = g(t)\delta_0(x)$ and, on the right-hand side, replace $\| f \|_{ L_{[0,T]}^{\tilde p} L_x^{\tilde r} }$ with $\|g\|_{L_{[0,T]}^\frac{4}{3}}$ and replace $T^{\frac 1p}\| Pf \|_{ L_{[0,T]}^1 L_x^r}$ with $T^\frac{1}{p}|q|^{1-\frac{1}{r}}\|g\|_{L_{[0,T]}^1}$.
This will follow from $$\label{eq:Struq}
\left\|U_q(t)\phi(x) + \int_0^t U_q(t-s)f(x,s)ds \right\|_{L^p_{[0,T]}L^r_x} \le c(\| \phi \|_{L^2} + \|f\|_{L^{\tilde p}_{[0,T]}L^{\tilde r}_x }).$$ To prove this estimate, we observe that the case $q=0$ is the standard Euclidean Strichartz estimate (see [@KT] and [@HMZ Proposition 2.2]). The case $q \ne 0$ reduces to this case as follows. We write $\phi^- = x^0_- \phi$ and $\phi^+(x) = R x^0_+ \phi$ where again $R\phi(x) = \phi(-x)$. Note that $\phi = \phi^- + R \phi^+$, that $U_q(t) R = R U_q(t)$, and that $(Rf)*g = R(f*Rg)$. Now, from Lemma \[p:lin\], we have $$\begin{aligned}
U_q \phi = &\big[U_0(t)\phi^- + U_0(t)(\phi^- * \rho_q)\big]x_-^0
+ U_0(t)(\phi^- * \tau_q)x_+^0 \\
&+ \big[U_0(t)\phi^+ + U_0(t)(\phi^+ * R\rho_q)\big]x_+^0
+ U_0(t)(\phi^+ * R\tau_q)x_-^0. \end{aligned}$$ We must now show that $\|f^\pm * \sigma_q\|_{L^{\tilde p}_{[0,T]}L^{\tilde r}_x } \le c\|f^\pm\|_{L^{\tilde p}_{[0,T]}L^{\tilde r}_x}$, where $\sigma_q$ is either $\tau_q$ or $\rho_q$. This follows from applying Young’s inequality to the spatial integral, since $\|\rho_q\|_{L_x^1} = 1$.
This completes the proof of (\[eq:Struq\]). To obtain (\[eq:Strneg\]), we observe that $\|P\phi(x)\|_{L^p_{[0,T]}L^r_x} = T^{1/p}\|P\phi(x)\|_{L^r_x}$ and $\left\|\int_0^t P f(x,s) ds \right\|_{L^p_{[0,T]}L^r_x} \le T^{1/p}\|Pf(x,s)\|_{L^1_{[0,T]}L^r_x}$. The first is immediate, and the second follows from the generalized Minkowski inequality: $$\left\|\int_0^t P f(x,s) ds \right\|_{L^p_{[0,T]}L^r_x} \le \left\|\int_0^t \|P f(x,s) \|_{L^r_x}ds\right\|_{L^p_{[0,T]}} \le T^{1/p}\int_0^T \|P f(x,s) \|_{L^r_x}ds.$$
We now turn to the large velocity asymptotics of the linear flow $e^{-itH_q}$.
\[p:as\] Let $\theta(x)$ be a smooth function bounded, together with all of its derivatives, on $\mathbb{R}$. Let $\phi\in \mathcal{S}(\mathbb{R})$, $v>0$, and suppose ${\operatorname{supp}}[\theta(x)\phi(x-x_0)] \subset (-\infty,0]$. Then for $2|x_0|/v \leq t \leq 1$, $$\label{E:as2}
e^{-itH_q}[e^{ixv}\phi(x-x_0)] =
\begin{aligned}[t]
& t(v)e^{-itH_0}[e^{ixv}\phi(x-x_0)] + r(v) e^{-itH_0}[e^{-ixv}\phi(-x-x_0)] \\
&+ e(x,t)
\end{aligned}$$ where, for any $k\geq 0$, $$\|e(x,t)\|_{L_x^2} \leq
\begin{aligned}[t]
&\frac{1}{v}\|\partial_x [\theta(x)\phi(x-x_0)]\|_{L_x^2} +\frac{c_k}{(tv)^k} \| {\langle}x {\rangle}^k \phi(x) \|_{H_x^k}\\
&+4\|(1-\theta(x))\phi(x-x_0)\|_{L_x^2} + \|P[e^{ixv}\theta(x)\phi(x-x_0)]\|_{L_x^2}
\end{aligned}$$
In §\[proof\], Proposition \[p:as\] will be applied with $\theta(x)$ a smooth cutoff to $x<0$, and $\phi(x)={\textnormal{sech}}(x)$ with $x_0=-v^{\varepsilon}\ll 0$.
Before proving Proposition \[p:as\], we need the following
\[l:as\] Let $\psi\in \mathcal{S}(\mathbb{R})$ with ${\operatorname{supp}}\psi \subset (-\infty,0]$. Then $$\label{E:as1}
\begin{aligned}[t]
U_q(t)[e^{ixv}\psi(x)](x) &= e^{-itH_0}[e^{ixv}\psi(x)](x)x_-^0 + t(v)e^{-itH_0}[e^{ixv}\psi(x)](x)x_+^0 \\
& \qquad + r(v)e^{-itH_0}[e^{-ixv}\psi(-x)](x)x_-^0 + e(x,t),
\end{aligned}$$ where $$\|e(x,t)\|_{L_x^2} \leq \frac{1}{v}\|\partial_x \psi\|_{L^2}$$ uniformly in $t$.
We apply (\[eq:prop\]) with $\phi(x) = e^{ixv}\psi(x)$ to find that $$e(x,t) = e^{-itH_0}[\phi * (\rho_q - r_q(v)\delta_0)(-x)]x_-^0 + e^{-itH_0}[\phi * (\tau_q - t_q(v)\delta_0)(x)]x_+^0.$$ We pass to the Fourier transform using Plancherel’s theorem, $$\|e^{-itH_0} \varphi\|_{L^2} = c \|\hat\varphi\|_{L^2},$$ so that it suffices to verify $$\|\hat \psi(\lambda-v) (r_q(\lambda) - r_q(v))\|_{L^2} \le \frac c v \|\lambda\hat\psi(\lambda)\|_{L^2},$$ $$\|\hat \psi(\lambda-v) (t_q(\lambda) - t_q(v))\|_{L^2} \le \frac c v \|\lambda\hat\psi(\lambda)\|_{L^2}.$$ We now write out $$r_q(\lambda) - r_q(v) = \frac{-iq(\lambda - v)}{(i\lambda - q)(iv-q)}, \qquad t_q(\lambda) - t_q(v) = \frac{-iq(\lambda - v)}{(i\lambda - q)(iv-q)},$$ so that $$|r_q(\lambda) - r_q(v)| = |t_q(\lambda) - t_q(v)| = \frac{|\lambda - v|}{\sqrt{\lambda^2/q^2 + 1}\sqrt{v^2 + q^2}} \le \frac {|\lambda - v|} v.$$ We plug this in to obtain our estimate: $$\|\hat \psi(\lambda - v) (r_q(\lambda) - r_q(v))\|_{L^2} = \|\hat \psi(\lambda - v) (t_q(\lambda) - t_q(v))\|_{L^2} \le \frac 1 v \|\lambda \hat \psi(\lambda)\|_{L^2} \le \frac c v \|{{\partial}}_x \psi\|_{L^2}.$$
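The modulus computation at the heart of this lemma, $|r_q(\lambda)-r_q(v)| = |t_q(\lambda)-t_q(v)| \le |\lambda - v|/v$, can likewise be spot-checked numerically (plain Python; the test values of $q$, $v$, and $\lambda$ are arbitrary):

```python
def t_q(lam: float, q: float) -> complex:
    # Transmission coefficient t_q(λ) = iλ/(iλ - q)
    return 1j * lam / (1j * lam - q)

def r_q(lam: float, q: float) -> complex:
    # Reflection coefficient r_q(λ) = q/(iλ - q)
    return q / (1j * lam - q)

q, v = -5.0, 50.0
for lam in (1.0, 25.0, 49.0, 60.0, 500.0):
    dr = abs(r_q(lam, q) - r_q(v, q))
    dt = abs(t_q(lam, q) - t_q(v, q))
    assert abs(dr - dt) < 1e-12            # the two differences have equal modulus (t = 1 + r)
    assert dr <= abs(lam - v) / v + 1e-12  # Lipschitz-type bound used in the proof
```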
Now we turn to the proof of Proposition \[p:as\].
We will prove (\[E:as2\]) by showing that $$U_q(t)[e^{ixv}\phi(x-x_0)] = t(v)e^{-itH_0}[e^{ixv}\phi(x-x_0)] + r(v)e^{-itH_0}[e^{-ivx}\phi(-x-x_0)] + \tilde e(x,t),$$ where, for any $k \ge 0$, $$\|\tilde e(x,t)\|_{L^2_x} \le \frac 1 v \|{{\partial}}_x[\theta(x)\phi(x-x_0)]\|_{L^2_x} + \frac {c_k}{(tv)^k} \| \langle x \rangle^k \phi(x)\|_{H^k_x} + 4 \|(1 - \theta(x))\phi(x-x_0)\|_{L^2_x}.$$
We use (\[E:as1\]) with $\psi(x) = \theta(x)\phi(x-x_0)$, which gives (using $e_1$ for the error term arising from the lemma), $$\begin{aligned}
U_q(t)[e^{ixv}\theta(x)\phi(x-x_0)](x) = & e^{-itH_0}[e^{ixv}\theta(x)\phi(x-x_0)](x)x_-^0 \\
&+ t(v)e^{-itH_0}[e^{ixv}\theta(x)\phi(x-x_0)](x)x_+^0 \\
&+ r(v)e^{-itH_0}[e^{-ixv}\theta(x)\phi(-x-x_0)](x)x_-^0 \\
&+ e_1(x,t).\end{aligned}$$ We rewrite this equation with $\theta$ omitted at the cost of an additional error term: $$\begin{aligned}
U_q(t)[e^{ixv}\phi(x-x_0)](x) = &e^{-itH_0}[e^{ixv}\phi(x-x_0)](x)x_-^0 \\
&+ t(v)e^{-itH_0}[e^{ixv}\phi(x-x_0)](x)x_+^0 \\
&+ r(v)e^{-itH_0}[e^{-ixv}\phi(-x-x_0)](x)x_-^0 + e_1(x,t) + e_2(x,t).\end{aligned}$$ Using the notation $f(x) = e^{ixv}(1 - \theta(x))\phi(x-x_0)$, this error term is given by $$e_2(x,t) = U_q(t)f(x) - e^{-itH_0}f(x)x_-^0 - t(v)e^{-itH_0}f(x)x_+^0 - r(v)e^{-itH_0}f(-x)x_-^0.$$ Recall that $e^{-itH_0}$ is unitary, and that $\|U_q(t)\|_{L^2_x \to L^2_x} = 1$. This gives us a bound on $e_2$: $$\|e_2(x,t)\|_{L^2_x} \le 4 \|f\|_{L^2_x} = 4 \|(1 - \theta(x))\phi(x-x_0)\|_{L^2_x}.$$
We have a bound on $e_1$ from the lemma, so combining everything we have, we see that to prove the proposition it remains only to prove $$\begin{aligned}
&\|e^{-itH_0}[e^{ixv}\phi(x-x_0)](x)x_-^0\|_{L^2_x} + \|t(v)e^{-itH_0}[e^{ixv}\theta(x)\phi(x-x_0)](x)x_-^0\|_{L^2_x} \\
&+ \|r(v)e^{-itH_0}[e^{-ixv}\phi(-x-x_0)](x)x_-^0\|_{L^2_x} \le \frac {c_k}{(tv)^k} \| \langle x \rangle^k \phi(x)\|_{H^k_x},\end{aligned}$$ for every $k \ge 0$. However, because $|t(v)| \le 1$ and $|r(v)| \le 1$, and because $e^{-itH_0} R = R e^{-itH_0}$, where $R\phi(x) = \phi(-x)$, it suffices to prove $$\label{suff}
\|e^{-itH_0}[e^{ixv}\phi(x-x_0)](x)x_-^0\|_{L^2_x} \le \frac {c_k}{(tv)^k} \| \langle x \rangle^k \phi(x)\|_{H^k_x}.$$ We expand as follows, first using the definition of the propagator: $$e^{-itH_0}[e^{ixv}\phi(x-x_0)](x) = \frac 1 {2\pi} \int e^{ix{\lambda}- it{\lambda}^2/2} \int e^{-i{\lambda}y}e^{iyv}\phi(y-x_0)dyd{\lambda}.$$ Here we change variables $y \mapsto y + x_0$ to simplify the Fourier transform. $$\begin{aligned}
&= \frac 1 {2\pi} \int e^{ix{\lambda}- it{\lambda}^2/2} e^{ix_0(v - {\lambda})} \hat \phi({\lambda}- v) d{\lambda}.
\intertext{Here we change variables ${\lambda}\mapsto {\lambda}+ v$.}
&= \frac 1 {2\pi} e^{ixv}\int e^{ix{\lambda}- it({\lambda}+v)^2/2} e^{-ix_0{\lambda}} \hat \phi({\lambda}) d{\lambda}\\
&= \frac 1 {2\pi} e^{ixv}e^{-itv^2/2}\int e^{i{\lambda}(x-x_0 - tv)}e^{-it{\lambda}^2/2} \hat \phi({\lambda}) d{\lambda}.
\intertext{After $k$ integrations by parts in ${\lambda}$, this becomes}
&= \frac 1 {2\pi} \frac {e^{ixv-itv^2/2}} {(i(x-x_0 - tv))^k} \int e^{i{\lambda}(x-x_0 - tv)} {{\partial}}_{\lambda}^k\left(e^{-it{\lambda}^2/2} \hat \phi({\lambda})\right) d{\lambda}.\end{aligned}$$ By assumption we have $2|x_0|/v \le t \le 1$, so that $-x_0 -tv < 0$ and $|x-x_0 - tv| \ge |-x_0 - tv| \ge tv/2$ when $x < 0$. Hence $$\begin{aligned}
\left\| e^{-itH_0}[e^{ixv}\phi(x-x_0)](x)x^0_- \right\|_{L^2_x} &\le \frac {c_k}{(tv)^k} \left\| \int e^{i{\lambda}(x-x_0 - tv)} {{\partial}}_{\lambda}^k\left(e^{-it{\lambda}^2/2} \hat \phi({\lambda})\right) d{\lambda}\right\|_{L^2_x} \\
&= \frac {c_k}{(tv)^k} \left\|{{\partial}}_{\lambda}^k\left(e^{-it{\lambda}^2/2} \hat \phi({\lambda})\right) \right\|_{L^2_{\lambda}} \\
&\le \frac {c_k}{(tv)^k} \sum_{j=0}^k t^j\left\| \langle x \rangle^{k-j} \phi(x) \right\|_{H^j_x}.\end{aligned}$$ Since $t \le 1$, (\[suff\]) follows.
Soliton scattering {#proof}
==================
In this section, we outline the proof of Theorem 1, the details of which are executed in §\[S:phase1\]–\[S:phase3\]. We recall the notation for operators from §\[ros\] and introduce shorthand notation for the nonlinear flows:
- $H_0=-\frac{1}{2} \partial_x^2$. The flow $e^{-itH_0}$ is termed the “free linear flow”.
- $H_q = -\frac{1}{2} \partial_x^2+q\delta_0(x)$. The flow $e^{-itH_q}$ is termed the “perturbed linear flow”. We also use $U_q(t) {\stackrel{\rm{def}}{=}}e^{ - i t H_q } - e^{\frac{1}{2}itq^2}P$, the propagator corresponding to the continuous part of the spectrum of $H_q$.
- ${\textnormal{NLS}_q}(t)\phi$, termed the “perturbed nonlinear flow” is the evolution of initial data $\phi(x)$ according to the equation $i\partial_tu + \tfrac{1}{2}\partial_x^2 u - q\delta_0(x)u + |u|^2u=0$.
- ${\textnormal{NLS}_0}(t)\phi$, termed the “free nonlinear flow” is the evolution of initial data $\phi(x)$ according to the equation $i\partial_th + \tfrac{1}{2}\partial_x^2 h + |h|^2h=0$.
From §\[in\] and the statement of Theorem 1, we recall the form of the initial condition, $u_0(x) = e^{ixv}{\textnormal{sech}}(x-x_0)$, $v \gtrsim 1$, $x_0 \leq -v^{{\varepsilon}}$ where $0<{\varepsilon}<1$ is fixed, and put $u(x,t) = {\textnormal{NLS}_q}(t)u_0(x)$. We begin by outlining the scheme, and will then supply the details. In this section, the $\mathcal O$ notation always means $L_x^2$ difference, uniformly on the time interval specified, and up to a multiplicative factor that is independent of $q$, $v$, and ${\varepsilon}$.
**Phase 1 (Pre-interaction)**. Consider $0\leq t \leq t_1$, where $t_1 = |x_0|/v-v^{-(1-{\varepsilon})}$ so that $x_0+vt_1=-v^{\varepsilon}$. The soliton has not yet encountered the delta obstacle and propagates according to the free nonlinear flow. Indeed, there exists a small absolute constant $c$ such that ${\langle}q {\rangle}^3 e^{-cv^{\varepsilon}} \le c$ implies $$\label{E:approx1}
u(x,t) = e^{-itv^2/2}e^{it/2}e^{ixv}{\textnormal{sech}}(x-x_0-vt) + \mathcal{O}(|q|^\frac 32 {\langle}q {\rangle}^\frac 12e^{-v^{\varepsilon}}), \quad 0\leq t\leq t_1.$$ This is deduced as a consequence of Lemma \[L:approx1\] in §\[S:phase1\] below.
**Phase 2 (Interaction)**. Let $t_2 = t_1+v^{1-{\varepsilon}}$, so that $x_0+t_2v = v^{\varepsilon}$, and consider $t_1\leq t \leq t_2$. The incident soliton, beginning at position $-v^{\varepsilon}$, encounters the delta obstacle and splits into a transmitted component and a reflected component, which by $t=t_2$, are concentrated at positions $v^{\varepsilon}$ and $-v^{\varepsilon}$, respectively. More precisely, there exists a small absolute constant $c$ such that if $v {\langle}q {\rangle}^3 \left(e^{\frac{-|q|v^{\varepsilon}}2} + e^{-cv^{\varepsilon}}\right) + v^{- \frac 23(1-{\varepsilon})}|q|^\frac 13 \le c$, then at the conclusion of this phase (at $t=t_2$), $$\label{E:approx4}
u(x,t_2) =
\begin{aligned}[t]
&t(v)e^{-it_2v^2/2}e^{it_2/2}e^{ixv}{\textnormal{sech}}(x-x_0-vt_2)\\
&+r(v)e^{-it_2v^2/2}e^{it_2/2}e^{-ixv}{\textnormal{sech}}(x+x_0+vt_2) \\
&+ \mathcal{O}(v^{-\frac 76 (1-{\varepsilon})}|q|^{\frac 13}) + \mathcal O(v^{-(1-{\varepsilon})}).
\end{aligned}$$ This is proved as a consequence of Lemmas \[linapp\], \[L:approx2\], and \[L:dropint\] in §\[S:phase2\].
**Phase 3 (Post-interaction)**. Let $t_3=t_2+ {\varepsilon}\log v$, and consider $[t_2,t_3]$. Suppose $|q|^{10}{\langle}q {\rangle}^3 v^{-14(1-2{\varepsilon})} + v {\langle}q {\rangle}^3 \left(e^{\frac{-|q|v^{\varepsilon}}2} + e^{-cv^{\varepsilon}}\right) \le c$ and ${\langle}q{\rangle}v^{-n} \le c_{{\varepsilon},n}$, where $c$ is a small absolute constant and $c_{{\varepsilon},n}$ is a small constant, dependent only on ${\varepsilon}$ and $n$, which goes to zero as ${\varepsilon}\to 0$ or $n \to \infty$. The transmitted and reflected waves essentially do not encounter the delta potential and propagate according to the free nonlinear flow, $$\label{E:post}
u(x,t) =
\begin{aligned}[t]
& e^{-itv^2/2}e^{it_2/2}e^{ixv}{\textnormal{NLS}_0}(t-t_2)[t(v){\textnormal{sech}}(x)](x-x_0-tv) \\
&+ e^{-itv^2/2}e^{it_2/2}e^{-ixv}{\textnormal{NLS}_0}(t-t_2)[r(v){\textnormal{sech}}(x)](x+x_0+tv) \\
&+\mathcal{O}(v^{-(1-{\varepsilon})}) + \mathcal{O}(|q|^\frac 13v^{-\frac 76(1-2{\varepsilon})}), \qquad t_2\leq t \leq t_3.
\end{aligned}$$
Now we turn to the details.
Phase 1 {#S:phase1}
=======
Let $u_1(x,t) = {\textnormal{NLS}_0}(t)u_0(x)$ and note that $$u_1(x,t) = e^{-itv^2/2}e^{it/2}e^{ixv}{\textnormal{sech}}(x-x_0-tv) \,.$$ Recall that $t_1= |x_0|/v-v^{-(1-{\varepsilon})}$, so that at the conclusion of Phase 1 (when $t=t_1$), the position of the soliton is $x_0+vt_1=-v^{{\varepsilon}}$. Recall that $u(x,t)={\textnormal{NLS}_q}(t)u_0(x)$ and let $w= u-u_1$. We will need the following perturbation lemma.
\[L:approx1\] If $t_a < t_b \le t_1$, $t_b - t_a \le c_1 \le 1$, and $${\langle}q{\rangle}^\frac 12\|w(x,t_a)\|_{L^2_x} + |q|^\frac 32 {\langle}q {\rangle}^\frac 12\|u_1(0,t)\|_{L^1_{[t_a,t_b]}} \le c_2|q|^{-\frac 14},$$ then $$\|w\|_{L^p_{[t_a,t_b]}L^r_x} \le c_3 \left({\langle}q{\rangle}^\frac 12\|w(x,t_a)\|_{L^2_x} + |q|^\frac 32 {\langle}q {\rangle}^\frac 12\|u_1(0,t)\|_{L^1_{[t_a,t_b]}}\right),$$ where $(p,r)$ is either $(\infty,2)$ or $(4,\infty)$, and the constants $c_1, c_2$, and $c_3$ are independent of the parameters ${\varepsilon}, v,$ and $q$.
Before proving this lemma, we show how the phase 1 estimate follows from it. Let $k\geq 0$ be the integer such that $kc_1 \leq t_1 < (k+1)c_1$. (Note that $k=0$ if the soliton starts within a distance $v$ of the origin, i.e. $-v-v^{\varepsilon}< x_0\leq -v^{\varepsilon}$, and the inductive analysis below is skipped.) Apply Lemma \[L:approx1\] with $t_a=0$, $t_b=c_1$ to obtain (since $w(\cdot,0)=0$) $$\|w\|_{L_{[0,c_1]}^\infty L_x^2} \leq c_3|q|^\frac 32 {\langle}q {\rangle}^\frac 12\|u_1(0,t)\|_{L_{[0,c_1]}^\infty}\leq c_3|q|^\frac 32 {\langle}q {\rangle}^\frac 12{\textnormal{sech}}(x_0+vc_1).$$ Apply Lemma \[L:approx1\] again with $t_a=c_1$, $t_b=2c_1$ to obtain $$\begin{aligned}
\|w\|_{L_{[c_1,2c_1]}^\infty L_x^2} &\leq c_3({\langle}q {\rangle}^\frac12\|w(\cdot,c_1)\|_{L_x^2}+ |q|^\frac 32 {\langle}q {\rangle}^\frac 12\|u_1(0,t)\|_{L_{[c_1,2c_1]}^\infty}) \\
&\leq c_3^2|q|^\frac 32 {\langle}q {\rangle}{\textnormal{sech}}(x_0+vc_1)+c_3|q|^\frac 32 {\langle}q {\rangle}^\frac 12{\textnormal{sech}}(x_0+2vc_1).\end{aligned}$$ We continue inductively up to step $k$, and then collect all $k$ estimates to obtain the following bound on the time interval $[0,kc_1]$: $$\begin{aligned}
\|w\|_{L_{[0,kc_1]}^\infty L_x^2} &\leq c_3|q|^\frac 32 \sum_{j=1}^k c_3^{k-j} {\langle}q{\rangle}^\frac{k-j}2{\textnormal{sech}}(x_0+jvc_1).
\intertext{We use here the estimate ${\textnormal{sech}}\alpha \leq 2e^{-|\alpha|}$:}
& \le c_3^k|q|^\frac32 {\langle}q {\rangle}^\frac {k-1}2 e^{x_0+c_1v} \sum_{j=0}^{k-1}c_3^{-j}{\langle}q{\rangle}^{-\frac{j}{2}}e^{jvc_1}.
\intertext{We introduce here the assumption that $c_3^{-1}{\langle}q{\rangle}^{-\frac{1}{2}}e^{vc_1} \ge 2$, which allows us to estimate the geometric series by twice its final term, giving}
&\le c |q|^\frac32 e^{x_0+c_1vk}.\end{aligned}$$ Finally, applying Lemma \[L:approx1\] on $[k c_1,t_1]$, $$\|w\|_{L_{[0,t_1]}^\infty L_x^2} \leq c\left(|q|^\frac32{\langle}q {\rangle}^\frac 12 e^{x_0+c_1vk} + |q|^\frac 32 {\langle}q {\rangle}^\frac 12 {\textnormal{sech}}(x_0+t_1v)\right) \leq c|q|^\frac32{\langle}q {\rangle}^\frac 12e^{-v^{\varepsilon}},$$ where we have used $x_0+c_1vk \le x_0+t_1v = - v^{\varepsilon}$. The constant $c$ here is still independent of $q$, $v$, and ${\varepsilon}$. As a consequence, the Phase 1 estimate follows. So long as this last line is bounded by $c|q|^{-\frac 14}$, the repeated applications of Lemma \[L:approx1\] are justified.
Now we prove Lemma \[L:approx1\]:
We begin by writing a differential equation for $w$ in terms of $u_1$: $$\begin{aligned}
i{{\partial}}_t w + \frac 1 2 {{\partial}}_x^2 w - q\delta_0(x) w &= u_1|u_1|^2 - (w+u_1)|w + u_1|^2 + q \delta_0(x) u_1 \\
&= - w|w|^2 - 2 u_1 |w|^2 - {\overline{u_1}}w^2 - 2|u_1|^2w - u_1^2{\overline{w}}+ q \delta_0(x) u_1.\end{aligned}$$ We rewrite this as an integral equation in terms of the perturbed linear propagator $e^{-iH_q t}=U_q(t) + e^{itq^2/2}P$, regarding the right hand side as a forcing term. $$\begin{aligned}
&w(x,t) = \left(U_q(t-t_a) + e^{i(t-t_a)q^2/2}P\right)w(x,t_a) \\
&\qquad + \int_{t_a}^t U_q(t-s)\left(- w|w|^2 - 2 u_1 |w|^2 - {\overline{u_1}}w^2 - 2|u_1|^2w - u_1^2{\overline{w}}+ q \delta_0(x) u_1\right) ds \\
&\qquad + \int_{t_a}^t e^{i(t-s)q^2/2}P\left(- w|w|^2 - 2 u_1 |w|^2 - {\overline{u_1}}w^2 - 2|u_1|^2w - u_1^2{\overline{w}}+ q \delta_0(x) u_1\right) ds \\
&= \textrm{I} + \textrm{II} + \textrm{III}.\end{aligned}$$ We define $\|w\|_X = \|w\|_{L^\infty_{[t_a,t_b]}L^2_x} + \|w\|_{L^4_{[t_a,t_b]}L^\infty_x}$, and then proceed to estimate the $X$ norm of the right hand side term by term. In what follows, $(p,r)$ denotes either $(\infty,2)$ or $(4,\infty)$.
I. We observe from the Strichartz estimate that $$\|U_q(t-t_a) w\|_X \le c \|w(x,t_a)\|_{L^2_x}.$$ Next, using the Hölder estimate for $P$, we have $$\left\|e^{i(t-t_a)q^2/2}P w(x,t_a)\right\|_{L^p_{[t_a,t_b]}L^r_x} = (t_b-t_a)^{1/p}\|P w(x,t_a)\|_{L^r_x} \le c (t_b-t_a)^{1/p}|q|^{1/2 - 1/r}\|w(x,t_a)\|_{L^2_x}.$$ Taken together, and using $t_b - t_a \le c_1$, these bounds can be written as $$\left\|\left(U_q(t-t_a) + e^{i(t-t_a)q^2/2}P\right)w(x,t_a)\right\|_X \le c {\langle}q{\rangle}^{1/2}\|w(x,t_a)\|_{L^2_x}.$$ II. For the terms involving $U_q$ we will use the Strichartz estimate, which tells us that $$\left\|\int_{t_a}^t U_q(t-s)f(x,s)ds \right\|_X \le c\|f\|_{L^{\tilde p}_{[t_a,t_b]}L^{\tilde r}_x },$$ whenever $2/\tilde p + 1 / \tilde r = 5/2$. The first term, $w|w|^2$, is cubic, and we will use $\tilde p = 1$ and $\tilde r = 2$: $$\left\|\int_{t_a}^t U_q(t-s)w|w|^2(x,s)ds \right\|_X \le c \|w^3\|_{L^1_{[t_a,t_b]}L^2_x}.$$ We first pass to the $L^\infty_x$ norm for two of the factors. $$\begin{aligned}
&\le c \left\|\|w\|_{L^2_x}(t)\|w^2\|_{L^\infty_x}(t)\right\|_{L^1_{[t_a,t_b]}}.
\intertext{And then pass to the $L^\infty_{[t_a,t_b]}$ norm for the other factor.}
&\le c \|w\|_{L^\infty_{[t_a,t_b]}L^2_x}\|w^2\|_{L^1_{[t_a,t_b]}L^\infty_x} = c \|w\|_{L^\infty_{[t_a,t_b]}L^2_x}\|w\|^2_{L^2_{[t_a,t_b]}L^\infty_x}.
\intertext{Finally we use the boundedness of $t_b-t_a$ to pass to the $X$ norm.}
&\le c (t_b-t_a)^\frac 12 \|w\|^3_X.\end{aligned}$$ The quadratic and linear terms follow the same pattern. Observe that the distinction between $w$ and ${\overline{w}}$, and between $u_1$ and ${\overline{u_1}}$, does not play a role here: $$\begin{aligned}
\left\|\int_{t_a}^t U_q(t-s)u_1|w|^2(x,s)ds \right\|_X &\le c(t_b-t_a)^\frac 12\|u_1\|_X\|w\|^2_X \le c(t_b-t_a)^\frac 12\|w\|^2_X \\
\left\|\int_{t_a}^t U_q(t-s)w|u_1|^2(x,s)ds \right\|_X &\le c(t_b-t_a)^\frac 12\|w\|_{L^6_{[t_a,t_b]}L^6_x}. \end{aligned}$$ For the delta term we have no flexibility in the choice of exponents. $$\begin{aligned}
|q|\left\|\int_{t_a}^t U_q(t-s)\delta_0(x)u_1(0,s)ds \right\|_X \le c |q| \|u_1(0,t)\|_{L^{4/3}_{[t_a,t_b]}}.\end{aligned}$$
III\. The $P$ terms we will similarly estimate one by one: $$\bigg\|\int_{t_a}^t e^{i(t-s)q^2/2}P\big(- w|w|^2 - 2 u_1 |w|^2 - {\overline{u_1}}w^2 - 2|u_1|^2w - u_1^2{\overline{w}}+ q \delta_0(x) u_1\big)ds\bigg\|_{{L^p_{[t_a,t_b]}L^r_x}}$$ $$\le
\begin{aligned}[t]
c \Big( &\|P |w|^3\|_{L^1_{[t_a,t_b]}L^r_x} + \|P u_1 |w|^2\|_{L^1_{[t_a,t_b]}L^r_x} \\
&+\| P |u_1|^2w \|_{L^1_{[t_a,t_b]}L^r_x} + |q| \| P \delta_0(x)u_1(0,t) \|_{L^1_{[t_a,t_b]}L^r_x}\Big).
\end{aligned}$$ Here we used a generalized Minkowski inequality to pass the norm through the integral, just as we did in the proof of the Strichartz estimate. Note that the constant $c$ in the second line depends on $p$, but since $p$ only takes two different values we will not make this dependence explicit in our notation. For the delta term as before we have no flexibility in the choice of $L^{r_1}$ norm: $$|q| \| P \delta_0(x)u_1(0,t) \|_{L^1_{[t_a,t_b]}L^r_x} \le c |q|^{2 - 1/r} \|u_1(0,t)\|_{L^1_{[t_a,t_b]}} \le c |q|^\frac32{\langle}q{\rangle}^\frac12 \|u_1(0,t)\|_{L^1_{[t_a,t_b]}}.$$ For the cubic term in $w$ we proceed using the same Hölder estimate for $P$. Here we use $r_1 = 2$, giving this term a factor no worse than ${\langle}q{\rangle}^\frac12$: $$\|P |w|^3\|_{L^1_{[t_a,t_b]}L^r_x} \le c{\langle}q{\rangle}^\frac 12 \|w^3\|_{L^1_{[t_a,t_b]}L^2_x} = c{\langle}q{\rangle}^\frac 12 \|w\|^3_X.$$ The last step is the same as that in the $w^3$ term in II above. We now estimate the quadratic and linear terms in $w$. We have $$\begin{aligned}
\| P u_1w^2\|_{L^1_{[t_a,t_b]}L^r_x} &\le c\|w^2\|_{L^1_{[t_a,t_b]}L^r_x} = c\|w\|^2_{L^2_{[t_a,t_b]}L^{2r}_x} \le c \|w\|^2_X \\
\| P u_1^2w\|_{L^1_{[t_a,t_b]}L^r_x} &\le c \|w\|_{L^1_{[t_a,t_b]}L^r_x} \le c (t_b - t_a)^\frac 34 \|w\|_X. \end{aligned}$$ When $r=\infty$ this last step is achieved by passing from $L^2_{[t_a,t_b]}$ to $L^4_{[t_a,t_b]}$ and using the boundedness of the time interval. When $r=2$ we interpolate using Hölder’s inequality between $L^2_x$ and $L^\infty_x$.
Having estimated each of the terms individually, we combine our results. We use $t_b - t_a \le c_1 \le 1$ to pass from lower norms in time to higher ones, only tracking the power of $(t_b-t_a)$ for the linear term in $w$. $$\begin{aligned}
\|w\|_X \le c \bigg( &{\langle}q{\rangle}^\frac 12\|w(x,t_a)\|_{L^2_x} + |q|^\frac 32 {\langle}q {\rangle}^\frac 12\|u_1(0,t)\|_{L^1_{[t_a,t_b]}} \\
&+ (t_b-t_a)^{1/2}\|w\|_X + \|w\|_X^2 + |q|^{1/3} \|w\|^3_X\bigg).\end{aligned}$$ We now take $c_1$ sufficiently small (recall that $t_b - t_a \le c_1$) so that the linear term in $\|w\|_X$ can be absorbed into the left hand side: $$\|w\|_X \le c\left({\langle}q{\rangle}^\frac 12\|w(x,t_a)\|_{L^2_x} + |q|^\frac 32 {\langle}q {\rangle}^\frac 12\|u_1(0,t)\|_{L^1_{[t_a,t_b]}}+ \|w\|^2_X + |q|^{1/3} \|w\|^3_X\right),$$ with a slightly worse leading constant $c$. We rewrite this inequality schematically using $x = \|w\|_X$: $$0 \le A - x + Bx^2 + Cx^3.$$ We now consider $\|w\|_{X(t')} \stackrel{\textrm{def}}= \|w\|_{L^\infty_{[t_a,t']}L^2_x} + \|w\|_{L^4_{[t_a,t']}L^\infty_x}$ for $t' \in [t_a,t_b]$. This is a continuous function of $t'$, and, for each $t' \in [t_a,t_b]$, $\|w\|_{X(t')}$ obeys the above inequality. Therefore if we find a positive value $x_\ast > \|w\|_{X(t_a)}$ for which the inequality does not hold, then by the intermediate value theorem we will be able to conclude that $\|w\|_{X(t')} < x_\ast$ for every $t' \in [t_a,t_b]$, and hence also that $\|w\|_X < x_\ast$.

We will use $x_\ast = 2A$ (note that $\|w\|_{X(t_a)} = \|w(x,t_a)\|_{L^2_x} \le A < 2A$), and arrange $A$, $B$ and $C$ so that this gives a negative right hand side. In fact, we have $$A - 2A + 4BA^2 + 8CA^3 = -A + A(4BA + 8CA^2).$$ To make this negative we impose $4BA \le \frac 1 4$ and $8CA^2 \le \frac 1 4$. We thus obtain $x \le 2A$, or, in the language of $\|w\|_X$, $$\|w\|_X \le c \left({\langle}q{\rangle}^\frac 12\|w(x,t_a)\|_{L^2_x} + |q|^\frac 32 {\langle}q {\rangle}^\frac 12\|u_1(0,t)\|_{L^1_{[t_a,t_b]}}\right),$$ provided that ${\langle}q{\rangle}^\frac 12\|w(x,t_a)\|_{L^2_x} + |q|^\frac 32 {\langle}q {\rangle}^\frac 12\|u_1(0,t)\|_{L^1_{[t_a,t_b]}} \le c|q|^{-1/6}$.
Phase 2 {#S:phase2}
=======
We begin with a succession of three lemmas stating that the free nonlinear flow is approximated by the free linear flow, and that the perturbed nonlinear flow is approximated by the perturbed linear flow. The first lemma states that the nonlinear flows are well approximated by the corresponding linear flows, the second gives a better approximation by adding a cubic correction term, and the third shows that the improvement is retained even if the cubic term is omitted. In other words, we ‘add and subtract’ the cubic correction. Our estimates are consequences of the corresponding Strichartz estimates (Proposition \[p:Str\]). Crucially, the hypotheses and estimates of these lemmas depend only on the $L^2$ norm of the initial data $\phi$. Below, the lemmas are applied with $\phi(x)=u(x,t_1)$, and $\|u(x,t_1)\|_{L_x^2}=\|u_0\|_{L^2}$ is independent of $v$; thus $v$ does not enter adversely into the analysis. We first state the lemmas and show how they are applied, deferring the proofs to the end of the section.
\[linapp\] Let $\phi\in L^2$ and $0<t_b$. If $(t_b^{1/2} + t_b^{2/3} |q|^{1/3}) \le c_1 (\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^{-2}$, then $$\|{\textnormal{NLS}_q}(t)\phi - e^{-itH_q}\phi\|_{L^P_{[0,t_b]}L^R_x} \le c_2 t_b^{1/2}(1 + t_b^{1/P}|q|^{2/P})(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^3,$$ where $P$ and $R$ satisfy $\frac 2 {P} + \frac 1 {R} = \frac 1 2$, and $c_1$ and $c_2$ depend only on the constant appearing in the Strichartz estimates. We alert the reader that in our notation $P$ is used both as a Strichartz exponent and as the bound state projection.
\[L:approx2\] Under the same hypotheses as the previous lemma, $$\begin{aligned}
\label{E: approx2}
\|{\textnormal{NLS}_q}(t)\phi - g\|_{L^\infty_{[0,t_b]}L^2_x} \le c \bigg[&t_b^2\left(1 + t_b^{1/6}|q|^{1/3}\right)^3 (\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^9 \\
\nonumber &+ t_b^{3/2}\left(1 + t_b^{1/6}|q|^{1/3}\right)^2(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^6\|\phi\|_{L^2} \\
\nonumber &+ t_b \left(1 + t_b^{1/6}|q|^{1/3}\right)(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^3\|\phi\|^2_{L^2}\bigg],\end{aligned}$$ where $$g(t) = e^{-itH_q}\phi + \int_0^t e^{-i(t-s)H_q}|e^{-isH_q}\phi|^2e^{-isH_q}\phi ds.$$
\[L:dropint\] For $t_1 < t_2$ and $\phi = u(x,t_1)$, we have $$\begin{aligned}
\label{E:dropint}
{\hspace{0.3in}&\hspace{-0.3in}}\bigg\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi|^2e^{-isH_q}\phi ds \bigg\|_{L^2_x} \\
\nonumber &\leq c \left[(t_2-t_1) + (t_2-t_1)^{\frac 12}\left({\langle}q {\rangle}^\frac 32 |q|^\frac 32 e^{-v^{\varepsilon}} + {\langle}q {\rangle}^2 |q|^3 e^{-2v^{\varepsilon}} + {\langle}q {\rangle}^\frac 52 |q|^\frac 92 e^{-3v^{\varepsilon}}\right)\right],\end{aligned}$$ where $c$ is independent of the parameters of the problem.
In order to apply these lemmas, we need to estimate $\|P\phi\|_{L_x^6}$. As before, let $\phi_1(x) = e^{-itv^2/2}e^{it/2}e^{ixv}{\textnormal{sech}}(x-x_0-vt_1)$ and $\phi_2=\phi-\phi_1$. It suffices to estimate $\|P\phi_1\|_{L_x^6}$ and $\|P\phi_2\|_{L_x^6}$. By , $$\|\phi_2\|_{L_x^2} \leq c {\langle}q {\rangle}^\frac 32 |q|^\frac 12 e^{-v^{\varepsilon}},$$ and thus, by $$\| P\phi_2\|_{L_x^6} \leq c {\langle}q {\rangle}^\frac 32 |q|^\frac 56 e^{-v^{\varepsilon}}.$$ On the other hand, a direct computation gives $$\begin{aligned}
|P\phi_1(x)| &\le |q| e^{q|x|} \int e^{q|y|} {\textnormal{sech}}(y - x_0 - vt_1) dy \\
&= |q| e^{q|x|}
\begin{aligned}[t]
\Big( &\int_{-\infty}^{\frac {x_0+vt_1} 2} e^{q|y|} {\textnormal{sech}}(y - x_0 - vt_1) dy \\
&+ \int_{\frac {x_0 + vt_1}2}^\infty e^{q|y|} {\textnormal{sech}}(y - x_0 - vt_1) dy\Big)
\end{aligned}\end{aligned}$$ In the first integral we use the fact that $e^{q|y|}$ is uniformly small in the region of integration. For the second we use the fact that $e^{q|y|}$ is bounded by 1 and that ${\textnormal{sech}}\alpha \le 2 e^{-\alpha}$. $$|P\phi_1(x)| \le c|q| e^{q|x|} \left(e^{\frac{|q|}2(x_0+vt_1)} + e^{\frac 12 (x_0 + vt_1)}\right) \le c|q| e^{q|x|} \left(e^{\frac{-|q|v^{\varepsilon}}2} + e^{\frac {-v^{\varepsilon}} 2}\right).$$ This implies that $$\|P\phi_1\|_{L^6_x} \le c|q|^\frac 56 \left(e^{\frac{-|q|v^{\varepsilon}}2} + e^{\frac {-v^{\varepsilon}} 2}\right)$$ Combining, $$\label{E:pphi6}\|P\phi\|_{L_x^6} \le c {\langle}q {\rangle}^\frac 32 |q|^\frac 56 \left(e^{-\frac 12v^{\varepsilon}} + e^{\frac q2 v^{\varepsilon}}\right).$$
Set $t_2=t_1+2v^{1-{\varepsilon}}$ and $\phi(x) = u(x,t_1)$. We now give an interpretation of our three lemmas under the assumption $v {\langle}q {\rangle}^3 \left(e^{\frac{-|q|v^{\varepsilon}}2} + e^{\frac {-v^{\varepsilon}} 2}\right) + v^{- \frac 23(1-{\varepsilon})}|q|^\frac 13 \le 1$. This makes into $$\|u(x,t_2) - g(t_2)\|_{L^2_x} \le c \left(v^{-1} + v^{-\frac 76 (1-{\varepsilon})}|q|^{\frac 13}\right),$$ and into $$\bigg\|g(t_2) - e^{-i(t_2-t_1)H_q}u(x,t_1)\bigg\|_{L^2_x} \le c v^{-(1-{\varepsilon})}.$$ Overall this amounts to $$\begin{aligned}
u(\cdot,t_2) &= {\textnormal{NLS}_q}(t_2-t_1)[u(\cdot,t_1)] \\
&=
\begin{aligned}[t]
&e^{-i(t_2-t_1)H_q}[u(\cdot,t_1)]+ \mathcal{O}(v^{-\frac 76 (1-{\varepsilon})}|q|^{\frac 13}) + \mathcal O(v^{-(1-{\varepsilon})}).
\end{aligned}\end{aligned}$$ By combining this with we find that under our new assumption the errors from this phase are strictly larger, giving $$\label{E:approx2}
u(\cdot,t_2) =
\begin{aligned}[t]
&e^{-it_1v^2/2}e^{it_1/2}e^{-i(t_2-t_1)H_q}[e^{ixv}{\textnormal{sech}}(x-x_0-t_1v)] \\
&+ \mathcal{O}(v^{-\frac 76 (1-{\varepsilon})}|q|^{\frac 13}) + \mathcal O(v^{-(1-{\varepsilon})}).
\end{aligned}$$
By Proposition \[p:as\] with $\theta(x)=1$ for $x\leq -1$ and $\theta(x)=0$ for $x\geq 0$, $\phi(x) = {\textnormal{sech}}(x)$, and $x_0$ replaced by $x_0+t_1v$, $$\label{E:approx3}
\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}e^{-i(t_2-t_1)H_q}[e^{ixv}{\textnormal{sech}}(x-x_0-vt_1)](x) \\
&=
\begin{aligned}[t]
&t ( v ) e^{-i(t_2-t_1)H_0}[e^{ixv}{\textnormal{sech}}(x-x_0-vt_1)](x) \\
&+ r ( v ) e^{-i(t_2-t_1)H_0}[e^{-ixv}{\textnormal{sech}}(x+x_0+vt_1)](x) \\
&+ {\mathcal O} (v^{-1}),
\end{aligned}
\end{aligned}$$ where we have used the assumption that $e^{-v^{\varepsilon}} \le v^{-1}$. Now we use and to approximate $e^{-itH_0}$ by ${\textnormal{NLS}_0}$, picking up an error of ${\mathcal O}(v^{-(1-{\varepsilon})})$. By combining with and , we obtain $$u(\cdot,t) =
\begin{aligned}[t]
&t(v)e^{-it_1v^2/2}e^{it_1/2}{\textnormal{NLS}_0}(t_2-t_1)[e^{ixv}{\textnormal{sech}}(x-x_0-vt_1)](x) \\
&+r(v)e^{-it_1v^2/2}e^{it_1/2}{\textnormal{NLS}_0}(t_2-t_1)[e^{-ixv}{\textnormal{sech}}(x+x_0+vt_1)](x)\\
&+ \mathcal{O}(v^{-\frac 76 (1-{\varepsilon})}|q|^{\frac 13}) + \mathcal O(v^{-(1-{\varepsilon})}).
\end{aligned}$$ By noting that $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}{\textnormal{NLS}_0}(t_2-t_1)[e^{ixv}{\textnormal{sech}}(x-x_0-t_1v)] \\
&= e^{-i(t_2-t_1)v^2/2}e^{i(t_2-t_1)/2}e^{ixv}{\textnormal{sech}}(x-x_0-t_2v),\end{aligned}$$ and $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}{\textnormal{NLS}_0}(t_2-t_1)[e^{-ixv}{\textnormal{sech}}(x+x_0+t_1v)] \\
&= e^{-i(t_2-t_1)v^2/2}e^{i(t_2-t_1)/2}e^{-ixv}{\textnormal{sech}}(x+x_0+t_2v),\end{aligned}$$ we obtain .
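The two identities above are instances of the explicit moving-soliton solution of the free equation. As a reading aid (a routine check, not part of the argument), one can verify directly that $u(x,t)=e^{-itv^2/2}e^{it/2}e^{ixv}{\textnormal{sech}}(x-x_0-tv)$ solves $i\partial_t u + \frac 12 \partial_x^2 u + |u|^2u = 0$: writing $u = e^{i\phi}\operatorname{sech}(s)$ with $\phi = -tv^2/2 + t/2 + xv$ and $s = x-x_0-tv$,

```latex
\begin{aligned}
i\partial_t u &= \Big(\tfrac{v^2}{2}-\tfrac12\Big)e^{i\phi}\operatorname{sech}(s)
                + iv\,e^{i\phi}\operatorname{sech}(s)\tanh(s), \\
\tfrac12\partial_x^2 u &= -\tfrac{v^2}{2}e^{i\phi}\operatorname{sech}(s)
                - iv\,e^{i\phi}\operatorname{sech}(s)\tanh(s)
                + \tfrac12 e^{i\phi}\big(\operatorname{sech}(s)-2\operatorname{sech}^3(s)\big),
\end{aligned}
```

using $\operatorname{sech}'' = \operatorname{sech} - 2\operatorname{sech}^3$. Summing, every term cancels except $-e^{i\phi}\operatorname{sech}^3(s) = -|u|^2u$, as claimed; the two identities then follow by translating $x_0$ and adjusting the constant phase.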
Now we prove Lemma \[linapp\]:
Let $h(t) = {\textnormal{NLS}_q}(t)\phi$, so that $$i{{\partial}}_t h + \frac 1 2 {{\partial}}_x^2 h - q \delta_0(x)h + |h|^2h = 0.$$ Where in Phase 1 we used $L^4_tL^\infty_x$ as an auxiliary Strichartz norm, here we will use $L^6_tL^6_x$. We introduce the notation $\|h\|_{X'} = \|h\|_{L^\infty_{[0,t_b]}L^2_x} + \|h\|_{L^6_{[0,t_b]}L^6_x}$, and apply the Strichartz estimate $$\|h\|_{L^p_{[0,t_b]}L^r_x} \le c(\| \phi \|_{L^2} + t_b^{1/p} \| P \phi \|_{L^r} + \||h|^2h\|_{L^{\tilde p}_{[0,t_b]}L^{\tilde r}_x }+ t_b^{1/p} \|P |h|^2h\|_{L^1_{[0,t_b]}L^r_x})$$ once with $(p,r) = (\infty, 2)$ and $(\tilde p, \tilde r) = (6/5,6/5)$, and once with $(p,r) = (6,6)$ and $(\tilde p, \tilde r) = (6/5,6/5)$. We observe that Hölder’s inequality implies that $\|f\|^3_{L^p} \le \|f\|_{L^{p_1}}\|f\|_{L^{p_2}}\|f\|_{L^{p_3}}$ provided $\frac 1 p = \frac 1 {3p_1} + \frac 1 {3p_2} + \frac 1 {3p_3}$. This gives us $$\begin{aligned}
\|h\|^3_{L^{18/5}_{[0,t_b]}L^{18/5}_x} &\le \|h\|^2_{L^6_{[0,t_b]}L^6_x} \|h\|_{L^2_{[0,t_b]}L^2_x} \le c t_b^{1/2} \|h\|^3_{X'}.
\intertext{We also have}
\|P |h|^2h\|_{L^1_{[0,t_b]}L^2_x} &\le \|h\|^3_{L^3_{[0,t_b]}L^6_x} \le t_b^{1/2} \|h\|_{X'}^3, \\
t_b^{1/6}\|P |h|^2h\|_{L^1_{[0,t_b]}L^6_x} &\le c t_b^{2/3}|q|^{1/3} \|h\|_{X'}^3,\end{aligned}$$ yielding $$\|h\|_{X'} \le c(\|\phi\|_{L^2} + \|P\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6} + t_b^{1/2} \|h\|^3_{X'} + t_b^{2/3} |q|^{1/3} \|h\|_{X'}^3).$$ We then use the fact that $P$ is a projection on $L^2$ to write $$\|h\|_{X'} \le c(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6} + t_b^{1/2} \|h\|^3_{X'} + t_b^{2/3} |q|^{1/3} \|h\|_{X'}^3).$$ Using, as in Phase 1, the continuity of $\|h\|_{{X'}(t_b)}$, we conclude that $$\|h\|_{X'} \le 2c(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6}),$$ so long as $8c^2(t_b^{1/2} + t_b^{2/3} |q|^{1/3})(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^2 \le 1$.
We now apply the Strichartz estimate to the difference $h(t) - e^{-itH_q}\phi$, observing that its initial condition is zero and that the effective forcing term is $-|h|^2h$, to get $$\begin{aligned}
\|h(t) - e^{-itH_q}\phi\|_{L^P_{[0,t_b]}L^R_x} &\le c \||h|^2h\|_{L^{6/5}_{[0,t_b]}L^{6/5}_x} + t_b^{1/P}\|P |h|^2h\|_{L^1_{[0,t_b]}L^R_x}\\
\nonumber&\le ct_b^{1/2}\|h\|_{X'}^3 + ct_b^{1/P}|q|^{1/2 - 1/R}\|h^3\|_{L^1_{[0,t_b]}L^2_x} \\
\nonumber&\le c(t_b^{1/2} + t_b^{1/P + 1/2}|q|^{2/P})\|h\|_{X'}^3 \\
\nonumber&\le ct_b^{1/2}(1 + t_b^{1/P}|q|^{2/P})(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^3.\end{aligned}$$
Now we prove Lemma \[L:approx2\]:
A direct calculation shows $$h(t) - g(t) = \int_0^t e^{-i(t-s)H_q}\left(|h(s)|^2h(s) - |e^{-isH_q}\phi|^2e^{-isH_q}\phi\right)ds.$$ The Strichartz estimate gives us in this case $$\begin{aligned}
\|h - g\|_{L^\infty_{[0,t_b]}L^2_x} &\le \||h|^2h - |e^{-isH_q}\phi|^2e^{-isH_q}\phi\|_{L^{6/5}_{[0,t_b]}L^{6/5}_x} \\
&\qquad\qquad+ \|P\left(|h|^2h - |e^{-isH_q}\phi|^2e^{-isH_q}\phi\right)\|_{L^1_{[0,t_b]}L^2_x} \\
&= \qquad \textrm{I} \qquad + \qquad \textrm{II}.\end{aligned}$$ We introduce the notation $w(t) = h(t) - e^{-itH_q}\phi$, and use this to rewrite our difference of cubes: $$|h|^2h - |e^{-isH_q}\phi|^2e^{-isH_q}\phi =
\begin{aligned}[t]
&w|w|^2 + 2 e^{-isH_q}\phi |w|^2 + e^{isH_q}\overline{\phi} w^2 \\
&+ 2|e^{-isH_q}\phi|^2w + \left(e^{-isH_q}\phi\right)^2{\overline{w}}.
\end{aligned}$$ We proceed term by term, using Hölder estimates similar to the ones in the previous lemma and in Phase 1. Our goal is to obtain Strichartz norms of $w$, so that we can apply Lemma \[linapp\].
I. We have, for the cubic term, $$\begin{aligned}
\|w^3\|_{L^{6/5}_{[0,t_b]}L^{6/5}_x} &\le c t_b^{1/2}\|w\|_{L^\infty_{[t_a,t_b]}L^2_x}\|w\|^2_{L^6_{[t_a,t_b]}L^6_x} \\
&\le c t_b^2(1 + t_b^{1/6}|q|^{1/3})^2 (\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^9.
\intertext{For the first inequality we used H\"older, and for the second Lemma \ref{linapp}. Next we treat the quadratic and linear terms using the same strategy (observe that as before we ignore complex conjugates):}
\|e^{-isH_q}\phi |w|^2\|_{L^{6/5}_{[0,t_b]}L^{6/5}_x} &\le c t_b^{1/2}\|w\|_{L^\infty_{[t_a,t_b]}L^2_x}\|w\|_{L^6_{[t_a,t_b]}L^6_x}\|e^{-isH_q}\phi\|_{L^6_{[t_a,t_b]}L^6_x} \\
&\le c t_b^{3/2}(1 + t_b^{1/6}|q|^{1/3})(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^6\|\phi\|_{L^2}. \\
\||e^{-isH_q}\phi|^2 w\|_{L^{6/5}_{[0,t_b]}L^{6/5}_x} &\le c t_b^{1/2}\|w\|_{L^\infty_{[t_a,t_b]}L^2_x}\|e^{-isH_q}\phi\|^2_{L^6_{[t_a,t_b]}L^6_x} \\
&\le c t_b (\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^3\|\phi\|^2_{L^2}.\end{aligned}$$
II\. In this case we have $$\begin{aligned}
\|P |w|^2 w\|_{L^1_{[0,t_b]}L^2_x} &\le ct_b^{1/2}\|w\|^3_{L^6_{[0,t_b]}L^6_x} \\
&\le ct_b^2 (1 + t_b^{1/6}|q|^{1/3})^3(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^9, \\
\|P |w|^2 e^{-isH_q}\phi\|_{L^1_{[0,t_b]}L^2_x} &\le ct_b^{1/2}\|w\|^2_{L^6_{[0,t_b]}L^6_x} \|e^{-isH_q}\phi\|_{L^6_{[0,t_b]}L^6_x}\\
&\le ct_b^{3/2}(1 + t_b^{1/6}|q|^{1/3})^2 (\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^6\|\phi\|_{L^2}, \\
\|P w |e^{-isH_q}\phi|^2\|_{L^1_{[0,t_b]}L^2_x} &\le ct_b^{1/2}\|w\|_{L^6_{[0,t_b]}L^6_x} \|e^{-isH_q}\phi\|^2_{L^6_{[0,t_b]}L^6_x}\\
&\le ct_b(1 + t_b^{1/6}|q|^{1/3}) (\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^3\|\phi\|^2_{L^2}.\end{aligned}$$
Putting all this together, we see that $$\begin{aligned}
\|h - g\|_{L^\infty_{[0,t_b]}L^2_x} \le c \bigg[&t_b^2\left(1 + t_b^{1/6}|q|^{1/3}\right)^3 (\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^9 \\
&+ t_b^{3/2}\left(1 + t_b^{1/6}|q|^{1/3}\right)^2(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^6\|\phi\|_{L^2} \\
&+ t_b \left(1 + t_b^{1/6}|q|^{1/3}\right)(\|\phi\|_{L^2} + t_b^{1/6}\|P\phi\|_{L^6})^3\|\phi\|^2_{L^2}\bigg].\end{aligned}$$
Finally we prove Lemma \[L:dropint\]:
We write $\phi(x) = \phi_1(x) + \phi_2(x)$, where $\phi_1(x) = e^{-it_1v^2/2}e^{it_1/2}e^{ixv}{\textnormal{sech}}(x-x_0-vt_1)$, and estimate individually the eight resulting terms. We know that for large $v$, $\phi_2$ is exponentially small in $L^2$ norm from Lemma \[L:approx1\]. This makes the term which is cubic in $\phi_1$ the largest, and we treat this one first.
I. We claim $\left\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_1 ds \right\|_{L^2_x} \le c (t_2-t_1)$.
We begin with a direct computation $$\begin{aligned}
\label{t1t2}
{\hspace{0.3in}&\hspace{-0.3in}}\left\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_1 ds \right\|_{L^2_x} \\
&\le (t_2 - t_1) \left\| e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_1 \right\|_{L^\infty_{[t_1,t_2]}L^2_x} \\
\nonumber&\le c (t_2-t_1) \left\| |e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_1 \right\|_{L^\infty_{[t_1,t_2]}L^2_x}.\end{aligned}$$ It remains to show that this last norm is bounded by a constant. We use to express $e^{-itH_q}$ in terms of $e^{-itH_0}$, recalling the formula here for the reader’s convenience: $$e^{-itH_q}\phi_1(x) =
\begin{aligned}[t]
&\big[e^{-itH_0}\phi_1(x) + e^{-itH_0}(\phi_1 * \rho_q)(-x)\big]x_-^0 \\
&+ e^{-itH_0}(\phi_1 * \tau_q)(x)x_+^0 + e^{\frac 12 itq^2} P \phi_1(x).
\end{aligned}$$ This formula is only valid for functions supported in the negative half-line, but this will not cause serious difficulty and we ignore the problem for now. We first evaluate this expression with $e^{ixv} \psi(x)$ in place of $\phi_1(x)$. Here $\psi(x) = {\textnormal{sech}}(x-x_0-vt)$, and the other phase factors do not affect the norm. The first term uses the Galilean invariance of $e^{-itH_0}$ directly: $$\begin{aligned}
e^{-itH_0}e^{ixv}\psi(x) x_-^0 &= e^{-itv^2/2}e^{ixv}e^{-itH_0}\psi(x - vt) x_-^0.
\intertext{For the second and third terms we use in addition the fact that $e^{-itH_0}$ is a convolution operator, and convolution is associative:}
e^{-itH_0}(e^{ixv}\psi * \rho_q)(-x) x_-^0 &= \left[(e^{-itH_0}e^{ixv}\psi) * \rho_q\right](-x) x_-^0 \\
&= e^{-itv^2/2}\left[(e^{ixv}e^{-itH_0}\psi(x-vt)) * \rho_q\right](-x) x_-^0 \\
&= e^{-itv^2/2}e^{-ixv}\left[(e^{-itH_0}\psi(x-vt)) * (e^{-ixv}\rho_q)\right](-x) x_-^0 \\
e^{-itH_0}(e^{ixv}\psi * \tau_q)(x)x_+^0 &= e^{-itv^2/2} e^{ixv} \left[(e^{-itH_0}\psi(x-vt)) * (e^{-ixv}\tau_q)\right](x) x_+^0.\end{aligned}$$ The final term we leave as it is, so that we have $$\begin{aligned}
\label{phase}
e^{-itH_q}e^{ivx}\psi(x) = & e^{-itv^2/2}e^{ixv}\big[e^{-itH_0}\psi(x - vt) \\
&+ \left[(e^{-itH_0}\psi(x-vt)) * (e^{-ixv}\rho_q)\right](-x)\big]x_-^0 \\
\nonumber&+ e^{-itv^2/2} e^{ixv} \left[(e^{-itH_0}\psi(x-vt)) * (e^{-ixv}\tau_q)\right](x) x_+^0 \\
&+ e^{\frac 12 itq^2} P e^{ivx}\psi(x).\end{aligned}$$ Before proceeding to the estimate of (\[t1t2\]) we introduce the following notation: $$f_-(x) = f(x) x^0_-, \qquad f_+(x) = f(-x) x^0_-$$ so that $f = Rf_+ + f_-$, where $Rf(x) = f(-x)$, and (\[phase\]) will be applicable to $f_\pm$. We then write $$|e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_1 = R(|e^{-isH_q}\phi_{1+}|^2e^{-isH_q}\phi_{1+}) + \cdots + |e^{-isH_q}\phi_{1-}|^2e^{-isH_q}\phi_{1-},$$ with a total of 8 terms. After applying (\[phase\]) three times to each of them, we will have $8 \cdot 4^3$ terms. We will estimate these terms in groups. We observe first that the distinction between $\phi_{1+}$ and $\phi_{1-}$ will not play a role, and neither will the presence or absence of $R$. We accordingly write $\psi$ for ${\textnormal{sech}}(x-x_0-vt_1)_\pm$ and omit $R$ when it appears. In what follows $\sigma_q$ denotes either $e^{-ixv}\rho_q$, $e^{-ixv}\tau_q$, or $\delta_0$, each of which has $L^1$ norm $1$. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\left\| \left|(e^{-isH_0}\psi(x-vs)) * \sigma_q\right|^2(e^{-isH_0}\psi(x-vs))*\sigma_q \right\|_{L^\infty_{[t_1,t_2]}L^2_x}\end{aligned}$$ Now we pass from $L^2$ to $L^\infty$ for two of the factors, and use Young’s inequality $\|f * g\|_p \le \|f\|_p\|g\|_1$, once for $p=2$ and twice for $p = \infty$, and then use the Gagliardo–Nirenberg–Sobolev inequality, which states that the $L^\infty$ norm is controlled by the $H^1$ norm. Because the $H^1$ norm is preserved by $e^{-isH_0}$, we are home free: $$\begin{aligned}
&\le c \left\|(e^{-isH_0}\psi(x-vs))*\sigma_q \right\|_{L^\infty_{[t_1,t_2]}L^2_x}\left\|(e^{-isH_0}\psi(x-vs))*\sigma_q \right\|_{L^\infty_{[t_1,t_2]}L^\infty_x}^2 \\
&\le c \|{\textnormal{sech}}(x)\|_{L^\infty_{[t_1,t_2]}L^2_x}\|{\textnormal{sech}}(x)\|^2_{L^\infty_{[t_1,t_2]}H^1_x} \le c.\end{aligned}$$ Terms where one or more of the factors of $(e^{-isH_0}\psi(x-vs))$ are replaced by $e^{\frac 12 itq^2} P e^{ivx}\psi(x)$ are treated in the same way. We have $$\left\|e^{\frac 12 itq^2} P e^{ivx}\psi(x)\right\|_{L^\infty_{[t_1,t_2]}L^p_x} \le \|\psi(x)\|_{L^\infty_{[t_1,t_2]}L^p_x},$$ where $p$ is either $2$ or $\infty$.
II\. For the other terms the phases will play no role, because we will use Strichartz estimates. The smallness will come more from the smallness of $\phi_2$ than from the brevity of the time interval. $$\bigg\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_2 ds \bigg\|_{L^2_x} \le c \||e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_2\|_{L^1_{[t_1,t_2]}L^2_x}.$$ We have used the Strichartz estimate with $(\tilde p, \tilde r) = (1,2)$, and combined the resulting terms using \eqref{E:PHolder}. We use Hölder’s inequality so as to put ourselves in a position to reapply the Strichartz estimate. $$\begin{aligned}
&\le c (t_2-t_1)^{\frac 12}\|e^{-isH_q}\phi_1\|^2_{L^4_{[t_1,t_2]}L^\infty_x}\|e^{-isH_q}\phi_2\|_{L^\infty_{[t_1,t_2]}L^2_x} \\
&\le c (t_2-t_1)^{\frac 12} \left(\|\phi_1\|_{L^2_x} + (t_2-t_1)^{\frac 1 4}\|P\phi_1\|_{L^\infty_x}\right)^2\left(\|\phi_2\|_{L^2_x} + \|P\phi_2\|_{L^2_x}\right).
\intertext{Once again we use \eqref{E:PHolder} to combine terms, this time with a penalty in $|q|$.}
&\le c (t_2-t_1)^{\frac 12} {\langle}q{\rangle}\|\phi_1\|_{L^2_x}^2\|\phi_2\|_{L^2_x} \\
&\le c (t_2-t_1)^{\frac 12} {\langle}q {\rangle}^\frac 32 |q|^\frac 32 e^{-v^{\varepsilon}}.\end{aligned}$$ Similarly we find that $$\begin{aligned}
\bigg\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_2|^2e^{-isH_q}\phi_1 ds \bigg\|_{L^2_x} &\le c (t_2-t_1)^{\frac 12}{\langle}q {\rangle}^2 |q|^3 e^{-2v^{\varepsilon}} \\
\bigg\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_2|^2e^{-isH_q}\phi_2 ds \bigg\|_{L^2_x} &\le c (t_2-t_1)^{\frac 12}{\langle}q {\rangle}^\frac 52 |q|^\frac 92 e^{-3v^{\varepsilon}}.\end{aligned}$$
Phase 3 {#S:phase3}
=======
Let $t_3=t_2+{\varepsilon}\log v$. Label $$u_{\textnormal{tr}}(x,t) =e^{-itv^2/2}e^{it_2/2}e^{ixv}{\textnormal{NLS}_0}(t-t_2)[t(v){\textnormal{sech}}(x)](x-x_0-tv)$$ for the transmitted (right-traveling) component and $$u_{\textnormal{ref}}(x,t) = e^{-itv^2/2}e^{it_2/2}e^{-ixv}{\textnormal{NLS}_0}(t-t_2)[r(v){\textnormal{sech}}(x)](x+x_0+tv)$$ for the reflected (left-traveling) component. By Appendix A from [@HMZ], for each $k\in \mathbb{N}$ there exists a constant $c_k>0$ and an exponent $\sigma(k)>0$ such that $$\label{E:trans}
\|u_{\textnormal{tr}}(x,t)\|_{L_{x<0}^2} \leq \frac{c_k(\log v)^{\sigma(k)}}{v^{k{\varepsilon}}}, \qquad |u_{\textnormal{tr}}(0,t)| \leq \frac{c_k(\log v)^{\sigma(k)}}{v^{k{\varepsilon}}}$$ and $$\label{E:refl}
\|u_{\textnormal{ref}}(x,t)\|_{L_{x>0}^2} \leq \frac{c_k(\log v)^{\sigma(k)}}{v^{k{\varepsilon}}}, \qquad |u_{\textnormal{ref}}(0,t)| \leq \frac{c_k(\log v)^{\sigma(k)}}{v^{k{\varepsilon}}}$$ both uniformly on the time interval $[t_2,t_3]$.
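As a numerical sanity check on this picture (an illustration only, not part of the proof), the splitting of a fast soliton at an attractive delta into transmitted and reflected pieces is easy to observe with a split-step Fourier integration of $i\partial_t u + \frac12\partial_x^2 u - q\delta_0(x)u + |u|^2u = 0$. The delta is mollified by a narrow Gaussian, and all grid and parameter choices below are illustrative assumptions.

```python
import numpy as np

def split_step_nls_delta(q=-1.0, v=4.0, x0=-20.0, L=80.0, N=2048, dt=2e-3, T=10.0):
    """Lie split-step integration of i u_t + (1/2) u_xx - q*delta_0(x)*u + |u|^2 u = 0,
    with delta_0 mollified by a narrow Gaussian of unit integral (an assumption of
    this sketch, not of the analysis)."""
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    dx = L / N
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    sigma = 4 * dx                                   # mollification width for delta_0
    V = q * np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    u = np.exp(1j * v * x) / np.cosh(x - x0)         # incoming soliton e^{ixv} sech(x - x0)
    kinetic = np.exp(-0.5j * k**2 * dt)              # exact free propagator in Fourier space
    for _ in range(int(T / dt)):
        u *= np.exp(1j * dt * (np.abs(u)**2 - V))    # nonlinear + potential phase rotation
        u = np.fft.ifft(kinetic * np.fft.fft(u))     # kinetic step
    mass = dx * np.sum(np.abs(u)**2)                 # conserved (each substep is unitary)
    transmitted = dx * np.sum(np.abs(u[x > 0])**2)   # mass to the right of the barrier
    return mass, transmitted

mass, transmitted = split_step_nls_delta()
```

Since both substeps are exact phase rotations, pointwise or as Fourier multipliers, the scheme conserves the mass $\int|u|^2 = 2$ to rounding error, and for large $v$ most of the mass exits to the right, consistent with $|t(v)|\to 1$.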
Let us first give an outline of the argument in this section. We would like to control $\|w\|_{L_{[t_2,t_3]}^\infty L_x^2}$, where $w = u - u_{{\operatorname{tr}}} - u_{{\textnormal{ref}}}$. If, after subdividing the interval $[t_2,t_3]$ into unit-sized intervals, we could argue that $w(t)$ at most doubles (or more accurately, multiplies by a fixed constant independent of $q$ and $v$) over each interval, then we could take $t_3\sim \log v$ and conclude that $\|w(t)\|_{L_x^2}$ would grow by at most a small positive power of $v$ over $[t_2,t_3]$. This was the strategy employed in [@HMZ]. The equation for $w$ induced by the equations for $u$, $u_{{\operatorname{tr}}}$, and $u_{{\textnormal{ref}}}$ took the form $$\label{E:w10}
0=i\partial_t w + \tfrac12\partial_x^2 w - q\delta_0 w + F$$ where $F$ involved product terms of the form (omitting complex conjugates) $w^3$, $w^2u_{{\operatorname{tr}}}$, $w^2u_{{\textnormal{ref}}}$, $w u_{{\operatorname{tr}}}^2$, $w u_{{\operatorname{tr}}} u_{{\textnormal{ref}}}$, $w u_{{\textnormal{ref}}}^2$, $u_{{\textnormal{ref}}}^2u_{{\operatorname{tr}}}$, $u_{{\operatorname{tr}}}^2u_{{\textnormal{ref}}}$, $q\delta_0 u_{{\operatorname{tr}}}$, and $q\delta_0 u_{{\textnormal{ref}}}$. The Strichartz estimates were applied to \eqref{E:w10} to deduce a bound on $\|w\|_{L_{[t_a,t_b]}^p L_x^r}$ for all admissible pairs $(p,r)$ over unit-sized time intervals $[t_a,t_b]$. Although a bound for $(p,r)=(\infty,2)$ would have sufficed, the analysis forced the use of the full range of admissible pairs $(p,r)$ since such norms necessarily arose on the right-hand side of the estimates.
A direct implementation of this strategy does not work for $q<0$. The difficulty stems from the fact that the initial data $w(t_a)$ for the time interval $[t_a,t_b]$ has a nonzero projection onto the eigenstate $|q|^{1/2}e^{-|q||x|}$. While the perturbed linear flow of this component is adequately controlled in $L_{[t_a,t_b]}^\infty L_x^2$, it is equal to a positive power of $|q|$ when evaluated in other Strichartz norms. For example, $\| e^{\frac12itq^2}|q|^{1/2}e^{-|q||x|} \|_{L_{[t_a,t_b]}^6L_x^6} = |q|^{1/3}$. Thus each iterate over a unit-sized time interval $[t_a,t_b]$ will result in a multiple of $|q|^{1/3}$, and we cannot carry out more than two iterations.
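The power $|q|^{1/3}$ quoted in the example is a one-line computation (exact up to the harmless factor $3^{-1/6}$, which we suppress): over a time interval of unit length the phase $e^{\frac12 itq^2}$ is unimodular, so

```latex
\left\| e^{\frac12 itq^2}|q|^{1/2}e^{-|q||x|} \right\|_{L_{[t_a,t_b]}^6 L_x^6}
 = \left\| |q|^{1/2}e^{-|q||x|} \right\|_{L_x^6}
 = \left( |q|^{3}\int_{\mathbb R} e^{-6|q||x|}\,dx \right)^{1/6}
 = \Big( \tfrac{|q|^{2}}{3} \Big)^{1/6}
 = 3^{-1/6}\,|q|^{1/3}.
```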
Our remedy is to separate from $w = u - u_{{\operatorname{tr}}} - u_{{\textnormal{ref}}}$, at $t=t_a$, its projection onto the eigenstate $|q|^{1/2}e^{-|q||x|}$, and to evolve this piece by the ${\textnormal{NLS}_q}$ flow. Specifically, we set $$u_{\textnormal{bd}}(t) = {\textnormal{NLS}_q}(t-t_a)\big[ P\big(u(t_a)-u_{{\operatorname{tr}}}(t_a)-u_{{\textnormal{ref}}}(t_a)\big) \big]$$ and then model $u(t)$ as $$u(t) = u_{{\operatorname{tr}}}(t)+u_{{\textnormal{ref}}}(t)+u_{{\textnormal{bd}}}(t)+w(t)\,.$$ This equation *redefines* $w(t)$ from that discussed above, and it now has the property that $w(t_a)$ is orthogonal to the eigenstate $|q|^{1/2}e^{-|q||x|}$. We will, over the interval $[t_a,t_b]$, estimate $w(t)$ in the full family of Strichartz norms but will only put the norms $L_{[t_a,t_b]}^\infty L_x^2$ and $L_{[t_a,t_b]}^\infty \dot H_x^1$ on $u_{{\textnormal{bd}}}(t)$. These norms are controlled by *nonlinear* information: the $L^2$ conservation and energy conservation of the ${\textnormal{NLS}_q}$ flow. The use here of nonlinear information is the key new ingredient; perturbative linear estimates for $u_{{\textnormal{bd}}}$ are too weak to complete the argument.
The equation for $w(t)$ induced by the equations for $u(t)$, $u_{{\textnormal{bd}}}(t)$, $u_{{\operatorname{tr}}}(t)$, and $u_{{\textnormal{ref}}}(t)$ takes the form $$0=i\partial_t w + \tfrac12\partial_x^2 w -q\delta_0(x)w + F$$ where $F$ contains terms of the following types (ignoring complex conjugates):
- (delta terms) $u_{{\operatorname{tr}}}\delta_0$, $u_{{\textnormal{ref}}}\delta_0$
- (cubic in $w$) $w^3$
- (quadratic in $w$) $w^2u_{{\operatorname{tr}}}$, $w^2u_{{\textnormal{ref}}}$, $w^2u_{{\textnormal{bd}}}$
- (linear in $w$) $wu_{{\operatorname{tr}}}^2$, $wu_{{\textnormal{ref}}}^2$, $wu_{{\textnormal{bd}}}^2$, $wu_{{\textnormal{ref}}}u_{{\operatorname{tr}}}$, $w u_{{\textnormal{bd}}}u_{{\operatorname{tr}}}$, $wu_{{\textnormal{bd}}}u_{{\textnormal{ref}}}$
- (interaction) $u_{{\textnormal{bd}}}u_{{\operatorname{tr}}}^2$, $u_{{\textnormal{bd}}}u_{{\textnormal{ref}}}^2$, $u_{{\textnormal{ref}}}u_{{\textnormal{bd}}}^2$, $u_{{\textnormal{ref}}}u_{{\operatorname{tr}}}^2$, $u_{{\operatorname{tr}}}u_{{\textnormal{bd}}}^2$, $u_{{\operatorname{tr}}}u_{{\textnormal{ref}}}^2$
The integral equation form of $w$ is $$w(t) =
\begin{aligned}[t]
& U_q(t-t_a)[(1-P)(u(t_a)-u_{{\operatorname{tr}}}(t_a)-u_{{\textnormal{ref}}}(t_a))] \\
&-i \int_{t_a}^t \Big( e^{\frac12i(t-t')q^2}P F(t') + U_q(t-t')(1-P)F(t') \Big) \, dt'
\end{aligned}$$ We estimate $w$ in the full family of Strichartz norms, and encounter the most adverse powers of $|q|$ in the $PF$ component of the Duhamel term. Of the terms making up $F$, the most difficult are the “interaction” terms listed above that involve at least one $u_{{\textnormal{bd}}}$. Let $A {\stackrel{\rm{def}}{=}}\|u(t_a) - u_{{\operatorname{tr}}}(t_a) - u_{{\textnormal{ref}}}(t_a)\|_{L_x^2}$. Then we find from the $L^2$ conservation and energy conservation (see Lemma \[L:energycon\]) of the ${\textnormal{NLS}_q}$ flow, that $$\|u_{{\textnormal{bd}}}\|_{L_{[t_a,t_b]}^\infty L_x^2} \leq A, \quad \|\partial_x u_{{\textnormal{bd}}}\|_{L_{[t_a,t_b]}^\infty L_x^2} \leq c |q|A\, .$$ Using these bounds, we are able to control the interaction terms $u_{{\operatorname{tr}}}u_{{\textnormal{bd}}}$ and $u_{{\textnormal{ref}}}u_{{\textnormal{bd}}}$ as $$\|u_{{\operatorname{tr}}}u_{{\textnormal{bd}}}\|_{L_{[t_a,t_b]}^\infty L_x^2} + \|u_{{\textnormal{ref}}}u_{{\textnormal{bd}}}\|_{L_{[t_a,t_b]}^\infty L_x^2} \leq |q|^{-\frac12}A+|q|^1A^3$$ (see Lemma \[L:interaction\]). The estimate of the interaction terms comes with a factor $|q|^{1/2}$, and thus the bound on the interaction terms with at least one copy of $u_{{\textnormal{bd}}}$ is of size $A+|q|^\frac32A^3$. We want this to be at most comparable to $A$, so we need $|q|^\frac32A^2 \lesssim 1$. But the estimates of Phase 2 leave us starting Phase 3 with an error of size $|q|^\frac13 v^{-\frac76(1-{\varepsilon})}$. Let us assume, as a bootstrap assumption, that we are able to maintain a control of size $|q|^\frac13 v^{-\frac76(1-2{\varepsilon})}$ on the error. Then, even in the worst case in which $A\sim |q|^\frac13 v^{-\frac76(1-2{\varepsilon})}$, the condition $|q|^\frac32A^2 \lesssim 1$ is implied by $v \gtrsim |q|^{\frac{13}{14}(1+2{\varepsilon})}$. 
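For the reader's convenience, here is the exponent arithmetic behind the last implication, carried out at ${\varepsilon}=0$ (tracking ${\varepsilon}$ only perturbs the exponents at order ${\varepsilon}$): with $A \sim |q|^\frac13 v^{-\frac76}$, $$|q|^\frac32 A^2 \sim |q|^{\frac32+\frac23}v^{-\frac73} = |q|^{\frac{13}{6}}v^{-\frac73},$$ and $|q|^{\frac{13}{6}}v^{-\frac73} \lesssim 1$ precisely when $v \gtrsim |q|^{\frac{13}{6}\cdot\frac37} = |q|^{\frac{13}{14}}$.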
So, if we impose the assumption $v \gtrsim |q|^{\frac{13}{14}(1+2{\varepsilon})}$, we can carry out the iterations, with the error at most doubling over each iterate. If it begins at $|q|^\frac13 v^{-\frac76(1-{\varepsilon})}$, then after $\sim {\varepsilon}\log v$ iterations, it is no more than $|q|^\frac13 v^{-\frac76(1-2{\varepsilon})}$, the size of the bootstrap assumption.
Now, to prove the error bound of size $|q|^\frac13 v^{-\frac76(1-{\varepsilon})}$ in the Phase 2 analysis required the introduction of the cubic correction refinement; the shorter argument in [@HMZ] would only have provided a bound of size $|q|^\frac16v^{-\frac12+{\varepsilon}}$. The bound $|q|^\frac16v^{-\frac12+{\varepsilon}}$ combined with the condition $|q|^\frac32A^2 \lesssim 1$ gives the requirement $v\gtrsim |q|^{\frac{11}{6}+{\varepsilon}}$, which is unacceptable since the most interesting phenomena occur for $|q|\sim v$. This is the reason we *needed* the cubic correction refinement in Phase 2.
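The same arithmetic at ${\varepsilon}=0$ explains the unacceptable requirement: with $A\sim |q|^\frac16 v^{-\frac12}$, $$|q|^\frac32 A^2 \sim |q|^{\frac32+\frac13}v^{-1} = |q|^{\frac{11}{6}}v^{-1},$$ so $|q|^\frac32 A^2 \lesssim 1$ forces $v \gtrsim |q|^{\frac{11}{6}}$, which fails in the regime $|q|\sim v$ since $\frac{11}{6}>1$.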
We now give the full details of the argument. We begin with an energy estimate for our differential equation:
\[L:energycon\] If $u$ satisfies $$i{{\partial}}_t u + \frac 1 2 {{\partial}}_x^2 u - q \delta_0 u + |u|^2u = 0,$$ then $$\label{E:energy}
\|{{\partial}}_x u(x,t)\|_{L^2_x} \le 2\|{{\partial}}_x u(x,0)\|_{L^2_x} + 2|q| \|u(x,0)\|_{L^2_x} + \|u(x,0)\|^3_{L^2_x}.$$
Multiplying the equation by ${{\partial}}_t\overline{u}$, integrating in space, and taking the real part, we see that $$\begin{aligned}
\textrm{Re} \left(\frac 1 2 \int {{\partial}}_x^2 u {{\partial}}_t \overline{u} dx - q u(0,t){{\partial}}_t\overline{u}(0,t) + \int |u|^2u {{\partial}}_t\overline{u} dx\right) &= 0.
\intertext{Here we multiply by 4 and integrate by parts in the first term:}
-{{\partial}}_t\int |{{\partial}}_x u|^2 dx - 2q{{\partial}}_t|u(0,t)|^2 + {{\partial}}_t\int |u|^4 dx &= 0\end{aligned}$$ Integrating from $0$ to $t$ and solving for $\|{{\partial}}_x u(t)\|^2_{L^2_x}$, we find that $$\|{{\partial}}_x u(x,t)\|^2_{L^2_x} =
\begin{aligned}[t]
&\|{{\partial}}_x u(x,0)\|^2_{L^2_x} + 2 q |u(0,0)|^2 - 2q |u(0,t)|^2 \\
&+ \|u(x,t)\|^4_{L^4_x} - \|u(x,0)\|^4_{L^4_x}.
\end{aligned}$$ Dropping the terms with a favorable sign from the right hand side, we see that $$\|{{\partial}}_x u(x,t)\|^2_{L^2_x} \le \|{{\partial}}_x u(x,0)\|^2_{L^2_x} + 2|q| |u(0,t)|^2 + \|u(x,t)\|^4_{L^4_x}$$ Next, using $\|u\|^2_{L^4_x} \le \|u\|_{L^2_x}\|u\|_{L^\infty_x}$, together with $\|u\|^2_{L^\infty_x} \le \|u\|_{L^2_x}\|{{\partial}}_xu\|_{L^2_x}$, we have $$\|{{\partial}}_x u(x,t)\|^2_{L^2_x} \le
\begin{aligned}[t]
&\|{{\partial}}_x u(x,0)\|^2_{L^2_x} + 2|q| \|u(x,t)\|_{L^2_x}\|{{\partial}}_xu(x,t)\|_{L^2_x} \\
&+ \|u(x,t)\|^3_{L^2_x}\|{{\partial}}_x u(x,t)\|_{L^2_x}
\end{aligned}$$ Here we use $\|u(x,t)\|_{L^2_x} = \|u(x,0)\|_{L^2_x}$: $$\|{{\partial}}_x u(x,t)\|^2_{L^2_x} \le \|{{\partial}}_x u(x,0)\|^2_{L^2_x} + \left(2|q| \|u(x,0)\|_{L^2_x} + \|u(x,0)\|^3_{L^2_x} \right)\|{{\partial}}_xu(x,t)\|_{L^2_x}$$ Using $ab \le \frac 12 a^2 + \frac 1 2 b^2$ to solve for $\|{{\partial}}_x u(x,t)\|$, we obtain $$\|{{\partial}}_x u(x,t)\|^2_{L^2_x} \le 2\|{{\partial}}_x u(x,0)\|^2_{L^2_x} + \left(2|q| \|u(x,0)\|_{L^2_x} + \|u(x,0)\|^3_{L^2_x} \right)^2.$$ This implies the desired result.
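For completeness, we recall the standard proof of the bound $\|u\|^2_{L^\infty_x} \le \|u\|_{L^2_x}\|{{\partial}}_x u\|_{L^2_x}$ used above. By the fundamental theorem of calculus, $$|u(x)|^2 = 2\,\textrm{Re}\int_{-\infty}^x \overline{u}\,{{\partial}}_y u \, dy \qquad \text{and} \qquad |u(x)|^2 = -2\,\textrm{Re}\int_x^{\infty} \overline{u}\,{{\partial}}_y u \, dy;$$ averaging the two identities and applying the Cauchy--Schwarz inequality gives $|u(x)|^2 \le \|u\|_{L^2_x}\|{{\partial}}_x u\|_{L^2_x}$.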
We now give an approximation lemma analogous to that in Phase 1, but with the error divided into two parts.
\[L:approx6\] Suppose $t_a < t_b$ and $t_b - t_a \le c_1$. Let $u_{\textnormal{bd}}(t)$ be the flow of ${\textnormal{NLS}_q}$ with initial condition $$\label{ubd}
u_{{\textnormal{bd}}}(t_a) = P[u(t_a) - u_{\textnormal{tr}}(t_a) - u_{\textnormal{ref}}(t_a)],$$ and put $$\label{E:wdef}
w = u - u_{\textnormal{tr}}- u_{\textnormal{ref}}- u_{\textnormal{bd}}\,.$$ Suppose $|q|^{\frac 12} \|u_{{\textnormal{bd}}}(t_a)\|_{L^2_x} \le 1$ and, for some $k \in {{\mathbb N}}$, $$\|w(t_a)\|_{L^2_x} + c(k){\langle}q{\rangle}^2\frac{(\log v)^{\sigma(k)}}{v^{k{\varepsilon}}} + \|u_{\textnormal{bd}}(t_a)\|_{L^2_x} + |q| {\langle}q {\rangle}^\frac 12 \|u_{{\textnormal{bd}}}(t_a)\|^3_{L^2_x} \le c_2 \, .$$ Then $$\|w\|_{L^\infty_{[t_a,t_b]}L^2_x} \le c_3
\begin{aligned}[t]
\bigg( &\|w(t_a)\|_{L^2_x} + c(k){\langle}q{\rangle}^2\frac{(\log v)^{\sigma(k)}}{v^{k{\varepsilon}}} \\
&+ \|u_{\textnormal{bd}}(t_a)\|_{L^2_x} + |q| {\langle}q {\rangle}^\frac 12 \|u_{{\textnormal{bd}}}(t_a)\|^3_{L^2_x}\bigg).\end{aligned}$$ The constants $c_1$, $c_2$, and $c_3$ are independent of $q$, $v$, and ${\varepsilon}$.
Once again we apply our lemma before proving it. Suppose now that for some large $c$ and $k$, where $c$ is absolute and $k$ depends on ${\varepsilon}$, we have $|q|{\langle}q{\rangle}^\frac 12 \|w(t_2)\|^2\le c$ and $c(k){\langle}q{\rangle}^2\frac{(\log v)^{\sigma(k)}}{v^{k{\varepsilon}}}\le c \|w(t_2)\|_{L^2_x}$. These additional assumptions cause the conclusion of the lemma to become $\|w\|_{L^\infty_{[t_a,t_b]}L^2_x} \le c\|w(t_a)\|_{L^2_x}$, and they will follow from assuming $|q|^{10}{\langle}q {\rangle}^3 v^{-14(1-2{\varepsilon})} \le c$ and ${\langle}q {\rangle}v^{-n} \le c_{{\varepsilon},n}$, where $c$ is an absolute constant and $c_{{\varepsilon},n}$ is a small constant, dependent only on ${\varepsilon}$ and $n$, which goes to zero when ${\varepsilon}\to 0$ or when $n \to \infty$. Let $m$ be the integer such that $mc_1< {\varepsilon}\log v < (m+1)c_1$. We apply Lemma \[L:approx6\] successively on the intervals $[t_2,t_2+c_1], \ldots, [t_2+(m-1)c_1,t_2+mc_1]$ as follows. On $[t_2,t_2+c_1]$, we obtain $$\|w(\cdot,t)\|_{L_{[t_2,t_2+c_1]}^\infty L_x^2} \leq c_3\|w(\cdot,t_2)\|_{L_x^2}.$$ Applying Lemma \[L:approx6\] on $[t_2+c_1,t_2+2c_1]$ and combining with the above estimate, $$\|w(\cdot,t)\|_{L_{[t_2+c_1,t_2+2c_1]}^\infty L_x^2} \leq c_3^2\|w(\cdot,t_2)\|_{L_x^2}.$$ Continuing up to the $m$-th step and then collecting all of the above estimates, $$\|w(\cdot,t)\|_{L_{[t_2,t_3]}^\infty L_x^2} \leq c_3^{m+1}\|w(\cdot,t_2)\|_{L_x^2} \le c v^{\varepsilon}\|w(\cdot,t_2)\|_{L_x^2} \le c\left(v^{-1+2{\varepsilon}} + |q|^{\frac 13} v^{-\frac 76 (1-2{\varepsilon})} \right).$$
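Let us also record the bookkeeping in the step $c_3^{m+1} \le cv^{\varepsilon}$: since $m \le {\varepsilon}(\log v)/c_1$, $$c_3^{m+1} \le c_3 \exp\Big( \frac{{\varepsilon}\log v}{c_1}\,\log c_3 \Big) = c_3\, v^{{\varepsilon}\log c_3/c_1},$$ and since $c_1$ and $c_3$ are absolute constants while ${\varepsilon}>0$ is at our disposal, the harmless renaming of ${\varepsilon}\log c_3/c_1$ as ${\varepsilon}$ yields the stated factor $cv^{\varepsilon}$.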
Notice that $u_{{\textnormal{bd}}}(t)$ and $w(t)$ are *redefined* by \eqref{ubd} and \eqref{E:wdef} as we move from one interval $I_j{\stackrel{\rm{def}}{=}}[t_2+(j-1)c_1,t_2+jc_1]$ to the next interval $I_{j+1}{\stackrel{\rm{def}}{=}}[t_2+jc_1,t_2+(j+1)c_1]$ in the above iteration argument. That is, on $I_j$, $u_{{\textnormal{bd}}}(t)$ is the ${\textnormal{NLS}_q}$ flow of an initial condition at $t=t_2+(j-1)c_1$, and on $I_{j+1}$, $u_{{\textnormal{bd}}}(t)$ is the ${\textnormal{NLS}_q}$ flow of an initial condition at $t=t_2+jc_1$ that does not necessarily match the value of the previous flow at that point. In other words, at the interface of these two intervals, $$\lim_{t\nearrow t_2+jc_1} u_{\textnormal{bd}}(t) \neq \lim_{t\searrow t_2+jc_1} u_{\textnormal{bd}}(t)\,.$$
It remains to prove Lemma \[L:approx6\]. On the way we will need to estimate the overlap of $u_{\textnormal{bd}}$ with $u_{\textnormal{tr}}$ and $u_{\textnormal{ref}}$.
\[L:interaction\] Let $t_a < t_b$, let $u_{\textnormal{tr}}$ and $u_{\textnormal{ref}}$ be as above, and let $u_{\textnormal{bd}}$ be the flow under ${\textnormal{NLS}_q}$ of an initial condition proportional to $e^{q|x|}$. Then $$\label{E:bdtr}
\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\|u_{\textnormal{tr}}u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} + \|u_{{\textnormal{ref}}}u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} \\
&\le c \left(|q|^{-\frac 12}\|u_{\textnormal{bd}}(t_a)\|_{L^2_x} + |q|^\frac 12 {\langle}q {\rangle}^\frac 12 \|u_{{\textnormal{bd}}}(t_a)\|^3_{L^2_x} \right).
\end{aligned}$$
We prove the inequality only for $\|u_{\textnormal{tr}}u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x}$, the argument for $\|u_{{\textnormal{ref}}}u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x}$ being identical. We will use the following formula for $u_{{\textnormal{bd}}}$: $$u_{{\textnormal{bd}}}(x,t) = c_u e^{\frac i 2 (t-t_a) q^2} |q|^{\frac 1 2} e^{q|x|} + \int_{t_a}^t e^{-i(t-s)H_q} |u_{{\textnormal{bd}}}(x,s)|^2 u_{{\textnormal{bd}}}(x,s) ds,$$ where $c_u$ is the overlap of $u_{\textnormal{bd}}(t_a)$ with the linear eigenstate, and is a constant bounded by the $L^2_x$ norm of $u_{\textnormal{bd}}(t_a)$. We then have $$\|u_{\textnormal{tr}}u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} \le
\begin{aligned}[t]
& c_u|q|^{\frac 1 2}\left\|u_{\textnormal{tr}}e^{q|x|}\right\|_{L^\infty_{[t_a,t_b]}L^2_x} \\
&+ \left\|u_{\textnormal{tr}}\int_{t_a}^t e^{-i(t-s)H_q} |u_{{\textnormal{bd}}}(s)|^2 u_{{\textnormal{bd}}}(s) ds \right\|_{L^\infty_{[t_a,t_b]}L^2_x}
\end{aligned}$$
For the first term it is enough that the linear eigenstate has $L^1$ norm proportional to $|q|^{-\frac 12}$. We remark that a better estimate is possible using the fact that $u_{\textnormal{bd}}$ and $u_{\textnormal{tr}}$ are concentrated in different parts of the real line. $$\begin{aligned}
\int |u_{\textnormal{tr}}(x,t)|^2e^{2q|x|} dx &\le c\|u_{\textnormal{tr}}(x,t)\|^2_{L^\infty_x} |q|^{-1} \le c |q|^{-1}\end{aligned}$$ Putting back in the factor of $\|u_{\textnormal{bd}}(t_a)\|_{L^2_x}|q|^\frac 12$, we obtain the first term of the desired estimate.
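The norms of the eigenstate invoked here are computed directly: $$\|e^{q|x|}\|^2_{L^2_x} = 2\int_0^\infty e^{-2|q|x}\,dx = \frac{1}{|q|}, \qquad \big\||q|^{\frac 12}e^{q|x|}\big\|_{L^1_x} = |q|^{\frac 12}\cdot\frac{2}{|q|} = 2|q|^{-\frac 12},$$ so the normalized eigenstate $|q|^{\frac 12}e^{q|x|}$ has unit $L^2_x$ norm and $L^1_x$ norm proportional to $|q|^{-\frac 12}$.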
Now we treat the integral term. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\left\|u_{\textnormal{tr}}\int_{t_a}^t e^{-i(t-s)H_q} |u_{{\textnormal{bd}}}(s)|^2 u_{{\textnormal{bd}}}(s) ds \right\|_{L^\infty_{[t_a,t_b]}L^2_x} \\
&\le c\left\|\int_{t_a}^t e^{-i(t-s)H_q} |u_{{\textnormal{bd}}}(s)|^2 u_{{\textnormal{bd}}}(s) ds \right\|_{L^\infty_{[t_a,t_b]}L^2_x},
\intertext{where we have used the explicit formula for $u_{\textnormal{tr}}$ to take its supremum. We now use a Strichartz estimate:}
&\le c \left(\left\||u_{{\textnormal{bd}}}|^3\right\|_{L^{4/3}_{[t_a,t_b]}L^1_x} + \left\|P|u_{{\textnormal{bd}}}|^3\right\|_{L^1_{[t_a,t_b]}L^2_x}\right) \\
&\le c |q|^{\frac 12} \|u_{{\textnormal{bd}}}^3\|_{L^\infty_{[t_a,t_b]}L^1_x} \\
&\le c |q|^{\frac 12} \|u_{{\textnormal{bd}}}\|^2_{L^\infty_{[t_a,t_b]}L^2_x}\|u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^\infty_x}.
\intertext{Here we use $\|f\|^2_{L^\infty} \le \|f\|_{L^2_x}\|{{\partial}}f\|_{L^2_x}$.}
&\le c |q|^{\frac 12} \|u_{{\textnormal{bd}}}\|^{\frac 52}_{L^\infty_{[t_a,t_b]}L^2_x}\|{{\partial}}_x u_{{\textnormal{bd}}}\|^{\frac 12}_{L^\infty_{[t_a,t_b]}L^2_x}\end{aligned}$$ Applying \eqref{E:energy} gives $$\le c |q|^{\frac 12} \|u_{{\textnormal{bd}}}\|^{\frac 52}_{L^\infty_{[t_a,t_b]}L^2_x} \left(\|{{\partial}}_x u_{{\textnormal{bd}}}(x,t_a)\|^{\frac 12}_{L^2_x} + |q|^\frac 12 \|u_{{\textnormal{bd}}}(x,t_a)\|^\frac 12_{L^2_x} + \|u_{{\textnormal{bd}}}(x,t_a)\|^\frac 32_{L^2_x}\right).$$ But the $L^2_x$ norm of $u_{{\textnormal{bd}}}$ is conserved, so that $\|u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} = \|u_{\textnormal{bd}}(t_a)\|_{L^2_x}$. Moreover, because $u_{\textnormal{bd}}(t_a)$ is proportional to $e^{q|x|}$, we have $|{{\partial}}_x u_{\textnormal{bd}}(x,t_a)| = |q| |u_{\textnormal{bd}}(x,t_a)|$, giving $\|{{\partial}}_x u_{\textnormal{bd}}(x,t_a)\|_{L^2_x} = |q|\|u_{\textnormal{bd}}(t_a)\|_{L^2_x}$. This allows us to conclude $$= c |q|^{\frac 12} \|u_{{\textnormal{bd}}}(t_a)\|^{\frac 52}_{L^2_x} \left(|q|^{\frac 12}\|u_{\textnormal{bd}}(t_a)\|^{\frac 12}_{L^2_x} + |q|^\frac 12 \|u_{{\textnormal{bd}}}(t_a)\|^\frac 12_{L^2_x} + \|u_{{\textnormal{bd}}}(t_a)\|^\frac 32_{L^2_x}\right).$$ Combining terms, we obtain $$\le c |q|^\frac 12 {\langle}q{\rangle}^\frac 12 \|u_{{\textnormal{bd}}}(t_a)\|^3_{L^2_x},$$ giving us the second term of the conclusion.
Now we proceed to the proof of Lemma \[L:approx6\].
We begin by computing the differential equation solved by $w$. $$\begin{aligned}
0 &= i {{\partial}}_t u + \frac 1 2 {{\partial}}_x^2u - q\delta_0(x) u + u|u|^2 \\
&= \begin{aligned}[t]
&i {{\partial}}_t w + \frac 1 2 {{\partial}}_x^2w - q\delta_0(x) w - q\delta_0(x) u_{{\textnormal{tr}}} - q\delta_0(x) u_{{\textnormal{ref}}} \\
&+ (u_{{\textnormal{tr}}} + u_{{\textnormal{ref}}} + u_{{\textnormal{bd}}} + w)|u_{{\textnormal{tr}}} + u_{{\textnormal{ref}}} + u_{{\textnormal{bd}}} + w|^2\\
& - |u_{{\textnormal{tr}}}|^2u_{{\textnormal{tr}}} - |u_{{\textnormal{ref}}}|^2u_{{\textnormal{ref}}} - |u_{{\textnormal{bd}}}|^2u_{{\textnormal{bd}}}.
\end{aligned}\end{aligned}$$ This can also be written as the following integral equation: $$\label{E:intw}
\begin{aligned}
w(t) = e^{-i(t-t_2)H_q}w(t_2) - i\int_{t_2}^t &e^{-i(t-s)H_q} \big[ q\delta_0(x) u_{{\textnormal{tr}}}(s) + q\delta_0(x) u_{{\textnormal{ref}}}(s) \\
&- |u_{{\textnormal{tr}}}(s)|^2u_{{\textnormal{ref}}}(s) - \cdots - |w(s)|^2w(s)\big] ds.
\end{aligned}$$
For this estimate we will use $\|w\|_X = \|w\|_{L^\infty_{[t_a,t_b]}L^2_x} + \|w\|_{L^4_{[t_a,t_b]}L^\infty_x}$, and accordingly estimate the $L^p_{[t_a,t_b]}L^r_x$ norms of each of the terms on the right hand side of for $(p,r)$ equal to either $(\infty,2)$ or $(4,\infty)$. We choose these norms in order to be able to apply the Strichartz estimate.
First we treat the term arising from the initial condition. Here we use the fact that, by construction, our initial condition satisfies $Pw(t_a) = 0$. $$\begin{aligned}
\left\| e^{-i(t-t_a)H_q}w(t_a) \right \|_{L^p_{[t_a,t_b]}L^r_x} \le c \|w(t_a)\|_{L^2_x}\end{aligned}$$
Second we treat the delta terms, for which we obtain the following: $$\begin{aligned}
\left\| \int_{t_a}^t e^{-i(t-s)H_q} q\delta_0(x) u_{{\textnormal{tr}}}(s)ds \right\|_{L^p_{[t_a,t_b]}L^r_x} &\le c {\langle}q{\rangle}^2\|u_{{\textnormal{tr}}}(0,t)\|_{L^\infty_{[t_a,t_b]}},\\
\left\| \int_{t_a}^t e^{-i(t-s)H_q} q\delta_0(x) u_{{\textnormal{ref}}}(s)ds \right\|_{L^p_{[t_a,t_b]}L^r_x} &\le c {\langle}q{\rangle}^2\|u_{{\textnormal{ref}}}(0,t)\|_{L^\infty_{[t_a,t_b]}}.\end{aligned}$$ We have used here the bound on $t_b - t_a$ to pass from the $L^{4/3}_{[t_a,t_b]}$ norm to the $L^\infty_{[t_a,t_b]}$ norm, and we have combined all the terms under the one with the least favorable power of ${\langle}q {\rangle}$. These last expressions are estimated using the previously established bounds on $u_{{\textnormal{tr}}}(0,t)$ and $u_{{\textnormal{ref}}}(0,t)$ respectively, to obtain, in both cases, $$\le c {\langle}q{\rangle}^2\frac{c(k)(\log v)^{\sigma(k)}}{v^{k{\varepsilon}}}.$$
Third we treat the term which is cubic in $w$. For this we have $$\begin{aligned}
\left\| \int_{t_a}^t e^{-i(t-s)H_q} |w(s)|^2w(s) ds \right\|_{L^p_{[t_a,t_b]}L^r_x} &\le c \left(\|w^3\|_{L^{6/5}_{[t_a,t_b]}L^{6/5}_x} + \|Pw^3\|_{L^1_{[t_a,t_b]}L^r_x} \right) \\
&\le c \left(\|w\|^3_{L^{18/5}_{[t_a,t_b]}L^{18/5}_x} + \|w\|^3_{L^3_{[t_a,t_b]}L^{3r}_x} \right).
\intertext{At this stage we use $\|w\|_{L^{18/5}_x} \le \|w\|^{5/9}_{L^2_x}\|w\|^{4/9}_{L^\infty_x}$, and an analogous inequality for $\|w\|_{L^{3r}_x}$. We then use the boundedness of $t_b - t_a$ to pass to the $L^\infty_{[t_a,t_b]}L^2_x$ and $L^4_{[t_a,t_b]}L^\infty_x$ norms, giving us}
&\le c \|w\|^3_X.\end{aligned}$$
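The interpolation used in this step is Hölder's inequality between $L^2_x$ and $L^\infty_x$; the exponents are fixed by $$\frac{5}{18} = \frac{5/9}{2} + \frac{4/9}{\infty}, \qquad\text{so}\qquad \|w\|_{L^{18/5}_x} \le \|w\|_{L^2_x}^{5/9}\,\|w\|_{L^\infty_x}^{4/9},$$ after which Hölder in time (using $t_b - t_a \le c_1$) leaves only the norms appearing in $\|w\|_X$.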
Fourth we treat terms of the form $w^2u_{{\textnormal{bd}}}$, as usual ignoring complex conjugates. This time we use $\tilde p = 1$, $\tilde r = 2$ on the right hand side of the Strichartz estimate. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\left\| \int_{t_a}^t e^{-i(t-s)H_q} w(s)^2 u_{{\textnormal{bd}}}(s) ds \right\|_{L^p_{[t_a,t_b]}L^r_x} \\
&\le c \left(\|w^2 u_{{\textnormal{bd}}}\|_{L^1_{[t_a,t_b]}L^2_x} + \|Pw^2u_{{\textnormal{bd}}}\|_{L^1_{[t_a,t_b]}L^r_x} \right) \\
&\le c \left(\|w^2 u_{{\textnormal{bd}}}\|_{L^1_{[t_a,t_b]}L^2_x} + |q|^{\frac 12}\|w^2u_{{\textnormal{bd}}}\|_{L^1_{[t_a,t_b]}L^2_x} \right)
\intertext{We have used $\|Pf\|_{L^{r_1}} \le c|q|^{\frac 1 {r_2} -\frac 1 {r_1}}\|f\|_{L^{r_2}}$ to pass to the $L^2_x$ norm, incurring a penalty no worse than $|q|^{1/2}$. This allows us to put an $L^2_x$ norm on $u_{{\textnormal{bd}}}$, which is our preferred norm for this part of the error.}
&\le c |q|^{\frac 12} \|u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} \|w\|^2_X.
\intertext{Here we use $|q|^{\frac 12} \|u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} \le 1$:}
&\le c \|w\|^2_X.\end{aligned}$$
Fifth we treat terms of the form $w^2u_{{\textnormal{tr}}}$ and $w^2u_{{\textnormal{ref}}}$. These terms obey the same estimate, and we write out the computation in one case only. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\left\| \int_{t_a}^t e^{-i(t-s)H_q} w(s)^2 u_{{\textnormal{tr}}}(s) ds \right\|_{L^p_{[t_a,t_b]}L^r_x} \\
&\le c \left(\|w^2 u_{{\textnormal{tr}}}\|_{L^1_{[t_a,t_b]}L^2_x} + \|Pw^2u_{{\textnormal{tr}}}\|_{L^1_{[t_a,t_b]}L^r_x} \right).
\intertext{The first term is the same as in the $u_{{\textnormal{bd}}}$ case. For the second term we pass to the $L^\infty_x$ norm, so as not to be penalized in $|q|$.}
&\le c \left(\|w\|^2_X \|u_{{\textnormal{tr}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} + \|w^2u_{{\textnormal{tr}}}\|_{L^1_{[t_a,t_b]}L^\infty_x} \right)
\intertext{Here we use the fact that our explicit formula for $u_{{\textnormal{tr}}}$ allows us to control all of its mixed norms.}
&\le c \|w\|^2_X.\end{aligned}$$
Sixth we treat terms of the form $wu_{{\textnormal{tr}}}^2$, $wu_{{\textnormal{ref}}}^2$, and $wu_{{\textnormal{tr}}}u_{{\textnormal{ref}}}$, again writing out the computation in one case only. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\bigg\| \int_{t_a}^t e^{-i(t-s)H_q} w(s) u_{{\textnormal{tr}}}^2(s) ds \bigg\|_{L^p_{[t_a,t_b]}L^r_x} \\
&\le c \left(\|w u_{{\textnormal{tr}}}^2\|_{L^1_{[t_a,t_b]}L^2_x} + \|Pwu_{{\textnormal{tr}}}^2\|_{L^1_{[t_a,t_b]}L^r_x} \right) \\
&\le c \left(\|w\|_{L^1_{[t_a,t_b]}L^2_x}\|u_{{\textnormal{tr}}}\|^2_{L^\infty_{[t_a,t_b]}L^\infty_x} + \|w\|_{L^1_{[t_a,t_b]}L^\infty_x} \|u_{{\textnormal{tr}}}\|^2_{L^\infty_{[t_a,t_b]}L^\infty_x} \right) \\
&\le c (t_b-t_a)^{3/4}\left(\|w\|_{L^\infty_{[t_a,t_b]}L^2_x} + \|w\|_{L^4_{[t_a,t_b]}L^\infty_x}\right) \|u_{{\textnormal{tr}}}\|^2_{L^\infty_{[t_a,t_b]}L^\infty_x}\\
&\le c (t_b - t_a)^{3/4}\|w\|_X.\end{aligned}$$
Seventh we treat terms of the form $wu_{{\textnormal{tr}}}u_{{\textnormal{bd}}}$ and $wu_{{\textnormal{ref}}}u_{{\textnormal{bd}}}$. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\bigg\| \int_{t_a}^t e^{-i(t-s)H_q} w(s) u_{{\textnormal{tr}}}(s)u_{{\textnormal{bd}}}(s) ds \bigg\|_{L^p_{[t_a,t_b]}L^r_x} \\
&\le c \left(\|w u_{{\textnormal{tr}}}u_{{\textnormal{bd}}}\|_{L^1_{[t_a,t_b]}L^2_x} + \|Pwu_{{\textnormal{tr}}}u_{{\textnormal{bd}}}\|_{L^1_{[t_a,t_b]}L^r_x} \right) \\
&\le c (1 + |q|^{\frac 1 2})\|w\|_{L^1_{[t_a,t_b]}L^\infty_x} \|u_{{\textnormal{tr}}}\|_{L^\infty_{[t_a,t_b]}L^\infty_x} \|u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x}\\
&\le c (t_b-t_a)^\frac 34 |q|^{\frac 12} \|u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} \|w\|_X\\
&\le c (t_b-t_a)^\frac 34 \|w\|_X.\end{aligned}$$ where in the last step we again use $|q|^{\frac 12} \|u_{{\textnormal{bd}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} \le 1$.
Eighth we treat terms of the form $wu_{{\textnormal{bd}}}^2$. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\left\| \int_{t_a}^t e^{-i(t-s)H_q} w(s) u_{{\textnormal{bd}}}^2(s) ds \right\|_{L^p_{[t_a,t_b]}L^r_x} \\
&\le c \left(\|w u_{{\textnormal{bd}}}^2\|_{L^{4/3}_{[t_a,t_b]}L^1_x} + \|Pwu_{{\textnormal{bd}}}^2\|_{L^1_{[t_a,t_b]}L^r_x} \right) \\
&\le c |q| \|wu_{{\textnormal{bd}}}^2\|_{L^{4/3}_{[t_a,t_b]}L^1_x} \\
&\le c |q| \|w\|_{L^{4/3}_{[t_a,t_b]}L^\infty_x}\|u_{{\textnormal{bd}}}^2\|_{L^\infty_{[t_a,t_b]}L^1_x} \\
&\le c (t_b-t_a)^\frac 12 \|w\|_X\end{aligned}$$
Ninth we treat terms of the form $u_{{\textnormal{tr}}}^2u_{{\textnormal{ref}}}$ and $u_{{\textnormal{tr}}}u_{{\textnormal{ref}}}^2$, once again writing out the computation in one case only. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\left\| \int_{t_a}^t e^{-i(t-s)H_q} u_{{\textnormal{tr}}}(s)^2u_{{\textnormal{ref}}}(s) ds \right\|_{L^p_{[t_a,t_b]}L^r_x} \\
&\le c \left(\|u_{{\textnormal{tr}}}^2u_{{\textnormal{ref}}}\|_{L^{4/3}_{[t_a,t_b]}L^1_x} + \|Pu_{{\textnormal{tr}}}^2u_{{\textnormal{ref}}}\|_{L^1_{[t_a,t_b]}L^r_x} \right) \\
&\le c |q| \|u_{{\textnormal{tr}}}^2u_{{\textnormal{ref}}}\|_{L^\infty_{[t_a,t_b]}L^1_x} \\
&\le c |q| \|u_{{\textnormal{tr}}}u_{{\textnormal{ref}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} \|u_{{\textnormal{tr}}}\|_{L^\infty_{[t_a,t_b]}L^2_x}.
\intertext{At this point we use the explicit formula for $u_{{\textnormal{tr}}}$ to bound $\|u_{{\textnormal{tr}}}\|_{L^\infty_{[t_a,t_b]}L^2_x}$, and we split up $\|u_{{\textnormal{tr}}}u_{{\textnormal{ref}}}\|_{L^\infty_{[t_a,t_b]}L^2_x}$ into regions along the positive and negative real axis.}
&\le c |q| \bigg(\|u_{{\textnormal{tr}}}\|_{L^\infty_{[t_a,t_b]}L^\infty_x}\|u_{{\textnormal{ref}}}\|_{L^\infty_{[t_a,t_b]}L^2_{x>0}} \\
&\hspace{2cm}+ \|u_{{\textnormal{tr}}}\|_{L^\infty_{[t_a,t_b]}L^2_{x<0}}\|u_{{\textnormal{ref}}}\|_{L^\infty_{[t_a,t_b]}L^\infty_x}\bigg) \\
&\le c(k) |q| \frac {(\log v)^{\sigma(k)}}{v^{k{\varepsilon}}}.\end{aligned}$$
Tenth we treat terms of the form $u_{{\textnormal{bd}}}u_{{\textnormal{tr}}}^2$, $u_{{\textnormal{bd}}}u_{{\textnormal{tr}}}u_{{\textnormal{ref}}}$ and $u_{{\textnormal{bd}}}u_{{\textnormal{ref}}}^2$. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\left\| \int_{t_a}^t e^{-i(t-s)H_q} u_{{\textnormal{bd}}}(s)u_{{\textnormal{tr}}}(s)^2 ds \right\|_{L^p_{[t_a,t_b]}L^r_x} \\
&\le c \left(\|u_{{\textnormal{bd}}}u_{{\textnormal{tr}}}^2\|_{L^1_{[t_a,t_b]}L^2_x} + \|Pu_{{\textnormal{bd}}}u_{{\textnormal{tr}}}^2\|_{L^1_{[t_a,t_b]}L^r_x} \right) \\
&\le c |q|^{\frac 1 2} \|u_{{\textnormal{bd}}}u_{{\textnormal{tr}}}^2\|_{L^\infty_{[t_a,t_b]}L^2_x} \\
&\le c |q|^\frac 12\|u_{{\textnormal{bd}}}u_{{\textnormal{tr}}}\|_{L^\infty_{[t_a,t_b]}L^2_x} \\
&\le c \left(\|u_{\textnormal{bd}}(t_a)\|_{L^2_x} + |q| {\langle}q {\rangle}^\frac 12 \|u_{{\textnormal{bd}}}(t_a)\|^3_{L^2_x} \right).\end{aligned}$$
Eleventh, and last, we treat terms of the form $u_{{\textnormal{bd}}}^2 u_{{\textnormal{tr}}}$ and $u_{{\textnormal{bd}}}^2 u_{{\textnormal{ref}}}$. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\left\| \int_{t_a}^t e^{-i(t-s)H_q} u_{{\textnormal{bd}}}(s)^2u_{{\textnormal{tr}}}(s) ds \right\|_{L^p_{[t_a,t_b]}L^r_x} \\
&\le c \left(\|u_{{\textnormal{bd}}}^2u_{{\textnormal{tr}}}\|_{L^{4/3}_{[t_a,t_b]}L^1_x} + \|Pu_{{\textnormal{bd}}}^2u_{{\textnormal{tr}}}\|_{L^1_{[t_a,t_b]}L^r_x} \right) \\
&\le c |q| \|u_{{\textnormal{bd}}}^2u_{{\textnormal{tr}}}\|_{L^\infty_{[t_a,t_b]}L^1_x} \\
&\le c |q|\|u_{\textnormal{bd}}\|_{L^\infty_{[t_a,t_b]}L^2_x} \|u_{{\textnormal{bd}}}u_{{\textnormal{tr}}}\|_{L^\infty_{[t_a,t_b]}L^2_x}.\end{aligned}$$ But using once again $|q|^\frac 12\|u_{\textnormal{bd}}\|_{L^\infty_{[t_a,t_b]}L^2_x} \le 1$ we see that this obeys the same bound as the previous term.
Combining these eleven estimates, we find that $$\begin{aligned}
\|w\|_X \le c \Big(&\|w(t_a)\|_{L^2_x} + c(k){\langle}q{\rangle}^2\frac{(\log v)^{\sigma(k)}}{v^{k{\varepsilon}}} + \|u_{\textnormal{bd}}(t_a)\|_{L^2_x} + |q| {\langle}q {\rangle}^\frac 12 \|u_{{\textnormal{bd}}}(t_a)\|^3_{L^2_x}\\
&+ (t_b-t_a)^\frac 12\|w\|_X + \|w\|_X^2 + \|w\|_X^3 \Big).\end{aligned}$$ Choosing $c_1$ sufficiently small allows us to absorb the linear term into the left hand side. A continuity argument just like that in Phase 1 now gives us the conclusion.
Phase 2 for $q \ge 0$ {#S:phase2pos}
=====================
In this section, we prove Theorem \[th:3\]. We repeat the three lemmas of Phase 2 in the case $q \ge 0$. These results are used in the course of Phase 2 in the case $q<0$ in order to approximate the flow of $e^{-itH_0}$ by that of ${\textnormal{NLS}_0}$, and they also give an improvement of the asymptotic result in [@HMZ] for $q >0$.
\[linappp\] Let $\phi\in L^2$ and $0<t_b$. If $q \ge 0$ and $t_b^{1/2} \le c_1 \|\phi\|_{L^2}^{-2}$, then $$\|{\textnormal{NLS}_q}(t)\phi - e^{-itH_q}\phi\|_{L^P_{[0,t_b]}L^R_x} \le c_2 t_b^{1/2}\|\phi\|_{L^2}^3,$$ where $P$ and $R$ satisfy $\frac 2 P + \frac 1 R = \frac 1 2$, and $c_1$ and $c_2$ depend only on the constant appearing in the Strichartz estimates.
Let $h(t) = {\textnormal{NLS}_q}(t)\phi$, so that $$i{{\partial}}_t h + \frac 1 2 {{\partial}}_x^2 h - q \delta_0(x)h + |h|^2h = 0.$$ We use again the notation $\|h\|_{X'} = \|h\|_{L^\infty_{[0,t_b]}L^2_x} + \|h\|_{L^6_{[0,t_b]}L^6_x}$. We apply the Strichartz estimate, which in this case reads $$\|h\|_{L^p_{[0,t_b]}L^r_x} \le c(\| \phi \|_{L^2} + \||h|^2h\|_{L^{\tilde p}_{[0,t_b]}L^{\tilde r}_x })$$ once with $(p,r) = (\infty, 2)$ and $(\tilde p, \tilde r) = (6/5,6/5)$, and once with $(p,r) = (6,6)$ and $(\tilde p, \tilde r) = (6/5,6/5)$. As before, the $h$ terms are each estimated using Hölder’s inequality as $$\begin{aligned}
\|h\|^3_{L^{18/5}_{[0,t_b]}L^{18/5}_x} &\le c t_b^{1/2} \|h\|^3_{X'},\end{aligned}$$ yielding $$\|h\|_{X'} \le c(\|\phi\|_{L^2} + t_b^{1/2} \|h\|^3_{X'}).$$ Using the continuity of $\|h\|_{X'(t_b)}$, we conclude that $$\|h\|_{X'} \le 2c\|\phi\|_{L^2},$$ so long as $8c^2t_b^{1/2}\|\phi\|_{L^2}^2 \le 1$.
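The Hölder computation producing the factor $t_b^{1/2}$ can be spelled out as follows: interpolating in space via $\frac{5}{18} = \frac{1/3}{2} + \frac{2/3}{6}$ and then applying Hölder in time, $$\|h\|_{L^{18/5}_{[0,t_b]}L^{18/5}_x} \le t_b^{1/6}\,\|h\|_{L^\infty_{[0,t_b]}L^2_x}^{1/3}\,\|h\|_{L^6_{[0,t_b]}L^6_x}^{2/3},$$ and cubing gives $\|h\|^3_{L^{18/5}_{[0,t_b]}L^{18/5}_x} \le t_b^{1/2}\,\|h\|^3_{X'}$.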
We now apply the Strichartz estimate to $u(t) = h(t) - e^{-itH_q}\phi$, observing that the initial condition is zero and that the effective forcing term is $-|h|^2h$, to get $$\|h(t) - e^{-itH_q}\phi\|_{L^P_{[0,t_b]}L^R_x} \le c \||h|^2h\|_{L^{6/5}_{[0,t_b]}L^{6/5}_x} \le ct_b^{1/2}\|h\|_{X'}^3 \le ct_b^{1/2}\|\phi\|_{L^2}^3.$$
\[L:approx2p\] Under the same hypotheses as the previous lemma, $$\label{E: approx2p}
\|{\textnormal{NLS}_q}(t)\phi - g\|_{L^\infty_{[0,t_b]}L^2_x} \le c \left(t_b^2\|\phi\|_{L^2}^9 + t_b^{3/2}\|\phi\|_{L^2}^7 + t_b \|\phi\|_{L^2}^5\right),$$ where $$g(t) = e^{-itH_q}\phi + \int_0^t e^{-i(t-s)H_q}|e^{-isH_q}\phi|^2e^{-isH_q}\phi ds.$$
A direct calculation shows $$h(t) - g(t) = \int_0^t e^{-i(t-s)H_q}\left(|h(s)|^2h(s) - |e^{-isH_q}\phi|^2e^{-isH_q}\phi\right)ds.$$ The Strichartz estimate gives us in this case $$\|h - g\|_{L^\infty_{[0,t_b]}L^2_x} \le \||h|^2h - |e^{-isH_q}\phi|^2e^{-isH_q}\phi\|_{L^{6/5}_{[0,t_b]}L^{6/5}_x}.$$
We introduce the notation $w(t) = h(t) - e^{-itH_q}\phi$, and use this to rewrite our difference of cubes: $$|h|^2h - |e^{-isH_q}\phi|^2e^{-isH_q}\phi =
\begin{aligned}[t]
&w|w|^2 + 2 e^{-isH_q}\phi |w|^2 + e^{isH_q}\overline{\phi} w^2 \\
&+ 2|e^{-isH_q}\phi|^2w + \left(e^{-isH_q}\phi\right)^2{\overline{w}}.
\end{aligned}$$ We proceed term by term, using Hölder estimates similar to the ones in the previous lemma and in Phase 1. Our goal is to obtain Strichartz norms of $w$, so that we can apply Lemma \[linappp\].
We have, for the cubic term, $$\begin{aligned}
\|w^3\|_{L^{6/5}_{[0,t_b]}L^{6/5}_x} &\le c t_b^{1/2}\|w\|_{L^\infty_{[0,t_b]}L^2_x}\|w\|^2_{L^6_{[0,t_b]}L^6_x} \\
&\le c t_b^2\|\phi\|_{L^2}^9.\end{aligned}$$ For the first inequality we used Hölder, and for the second Lemma \[linappp\]. Next we treat the quadratic terms using the same strategy (observe that as before we ignore complex conjugates): $$\begin{aligned}
\|e^{-isH_q}\phi |w|^2\|_{L^{6/5}_{[0,t_b]}L^{6/5}_x} &\le c t_b^{1/2}\|w\|_{L^\infty_{[0,t_b]}L^2_x}\|w\|_{L^6_{[0,t_b]}L^6_x}\|e^{-isH_q}\phi\|_{L^6_{[0,t_b]}L^6_x} \\
&\le c t_b^{3/2}\|\phi\|_{L^2}^7.\end{aligned}$$ And finally the linear terms: $$\||e^{-isH_q}\phi|^2 w\|_{L^{6/5}_{[0,t_b]}L^{6/5}_x} \le c t_b^{1/2}\|w\|_{L^\infty_{[0,t_b]}L^2_x}\|e^{-isH_q}\phi\|^2_{L^6_{[0,t_b]}L^6_x} \le c t_b \|\phi\|_{L^2}^5.$$
Putting all this together, we see that $$\|h - g\|_{L^\infty_{[0,t_b]}L^2_x} \le c \left(t_b^2\|\phi\|_{L^2}^9 + t_b^{3/2}\|\phi\|_{L^2}^7 + t_b \|\phi\|_{L^2}^5\right).$$
Finally we estimate, at time $t_2$, the error incurred by dropping the integral term:
\[L:dropintp\] For $t_1 < t_2$ and $\phi = u(x,t_1)$, we have $$\begin{aligned}
\label{E:dropintp}
\bigg\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}&|e^{-isH_q}\phi|^2e^{-isH_q}\phi ds \bigg\|_{L^2_x} \le \\
\nonumber&c \left[(t_2-t_1) + (t_2-t_1)^{\frac 12}\left(q e^{-v^{1-\delta}} + q^2e^{-2v^{1-\delta}} + q^3 e^{-3v^{1-\delta}}\right)\right],\end{aligned}$$ where $c$ is independent of the parameters of the problem.
We write $\phi(x) = \phi_1(x) + \phi_2(x)$, where $\phi_1(x) = e^{-it_1v^2/2}e^{it_1/2}e^{ixv}{\textnormal{sech}}(x-x_0-vt_1)$, and estimate individually the eight resulting terms. We know that for large $v$, $\phi_2$ is exponentially small in $L^2$ norm from Lemma 3.1 of [@HMZ]. This makes the term which is cubic in $\phi_1$ the largest, and we treat this one first.
I. We claim $\left\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_1 ds \right\|_{L^2_x} \le c (t_2-t_1)$.
We begin with a direct computation $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\left\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_1 ds \right\|_{L^2_x} \\
&\le (t_2 - t_1) \left\| e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_1 \right\|_{L^\infty_{[t_1,t_2]}L^2_x} \\
&\le c (t_2-t_1) \left\| |e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_1 \right\|_{L^\infty_{[t_1,t_2]}L^2_x}.\end{aligned}$$ It remains to show that this last norm is bounded by a constant. This follows exactly the same argument as that given in part I of Lemma \[L:dropint\], with the difference that terms involving $P$ are omitted.
II\. For the other terms we use simpler Strichartz estimates. The smallness will come more from the smallness of $\phi_2$ than from the brevity of the time interval. $$\begin{aligned}
{\hspace{0.3in}&\hspace{-0.3in}}\bigg\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_2 ds \bigg\|_{L^2_x} \\
&\le c \||e^{-isH_q}\phi_1|^2e^{-isH_q}\phi_2\|_{L^1_{[t_1,t_2]}L^2_x}.
\intertext{We have used the Strichartz estimate with $(\tilde p, \tilde r) = (1,2)$, and use H\"older's inequality so as to put ourselves in a position to reapply the Strichartz estimate.}
&\le c (t_2-t_1)^{\frac 12}\|e^{-isH_q}\phi_1\|^2_{L^4_{[t_1,t_2]}L^\infty_x}\|e^{-isH_q}\phi_2\|_{L^\infty_{[t_1,t_2]}L^2_x} \\
&\le c (t_2-t_1)^{\frac 12} \|\phi_1\|_{L^2_x}^2\|\phi_2\|_{L^2_x} \\
&\le c (t_2-t_1)^{\frac 12}q e^{-v^{1-\delta}}.\end{aligned}$$ Note that here $\delta = 1 -{\varepsilon}$. Similarly we find that $$\begin{aligned}
\bigg\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_2|^2e^{-isH_q}\phi_1 ds \bigg\|_{L^2_x} &\le c (t_2-t_1)^{\frac 12}q^2 e^{-2v^{1-\delta}} \\
\bigg\| \int_{t_1}^{t_2} e^{-i(t_2-s)H_q}|e^{-isH_q}\phi_2|^2e^{-isH_q}\phi_2 ds \bigg\|_{L^2_x} &\le c (t_2-t_1)^{\frac 12}q^3 e^{-3v^{1-\delta}}.\end{aligned}$$
These three lemmas improve the error in the case $q>0$ from $v^{-\frac {1-{\varepsilon}} 2}$ to $v^{-(1-{\varepsilon})}$, or, in the language of [@HMZ], from $v^{-\frac \delta 2}$ to $v^{-\delta}$.
[ABCD]{}
X.D. Cao and B.A. Malomed, [*Soliton-defect collisions in the nonlinear Schrödinger equation,*]{} Physics Letters A [**206**]{}(1995), 177–182.
R.H. Goodman, P.J. Holmes, and M.I. Weinstein, [*Strong NLS soliton-defect interactions,*]{} Physica D [**192**]{}(2004), 215–248.
J. Holmer, J. Marzuola, and M. Zworski, [*Fast soliton scattering by delta impurities,*]{} Comm. Math. Phys. [**274**]{}(2007), 187–216.
J. Holmer, J. Marzuola, and M. Zworski, [*Soliton splitting by external delta potentials,*]{} J. Nonlinear Sci., [**17**]{}(2007), 349–367.
J. Holmer and M. Zworski, [*Slow soliton interaction with delta impurities,*]{} J. Modern Dynamics, [**1**]{}(2007), 689–718.
J. Holmer and M. Zworski, *Soliton interaction with slowly varying potentials*, IMRN, **2008**, Article ID rnn026, 36 pages (2008).
M. Keel and T. Tao, [*Endpoint Strichartz estimates,*]{} Amer. J. Math. [**120**]{}(1998), 955–980.
S.H. Tang and M. Zworski, [*Potential scattering on the real line,*]{} Lecture notes.
V.E. Zakharov and A.B. Shabat, [*Exact theory of two-dimensional self-focusing and one-dimensional self-modulation of waves in nonlinear media,*]{} Soviet Physics JETP [**34**]{} (1972), no. 1, 62–69.
---
abstract: |
In this paper I review a number of recent results in the field of exotic atoms. Recent or ongoing experiments with muonic, pionic and antiprotonic hydrogen, as well as a recent measurement of the pion mass, are described. These experiments provide information about the nucleon-pion and nucleon-antinucleon interactions, as well as information on the proton structure (charge or magnetic moment distribution).
PACS numbers: 06.20.Fn, 32.30.Rj, 36.10.-k, 07.85.Nc
bibliography:
- 'exotic.bib'
---
Paul Indelicato$^{1}$[^1]

$^{1}$ Laboratoire Kastler-Brossel, École Normale Supérieure et Université P. et M. Curie, Case 74, 4 place Jussieu, F-75252 Paris Cedex 05, France
Introduction {#sec:intro}
============
Exotic atoms are atoms that have captured a long-lived, heavy particle. This particle can be a lepton, sensitive only to the electromagnetic and weak interactions, like the electron or the muon, a meson like the pion, or a baryon like the antiproton. Another kind of exotic atom is one in which the *nucleus* has been replaced by a positron (positronium, an e$^+$e$^-$ bound system) or a positively charged muon (muonium, a $\mu^+$e$^-$ bound system). Positronium and muonium are pure quantum electrodynamics (QED) systems, as they are made of elementary, point-like Dirac particles insensitive to the strong interaction. The annihilation of positronium has been a benchmark of bound-state QED (BSQED) for many years. For a long time there was an outstanding discrepancy between the calculated (see, e.g., [@afs2002] for a recent review) and measured lifetime of ortho-positronium in vacuum, which has been resolved recently [@vzg2003]. As positronium is the best QED test system, such a discrepancy was considered very serious. The $1 \,^3\mbox{S}_1 \to 2\, ^3\mbox{S}_1$ transition has also been measured accurately [@fcmc93].
Muonium has also been investigated in detail (see e.g., [@hug92].) Both the ground state hyperfine structure [@lbdd99] and the $1s
\to 2s$ transition [@cmyn88; @dfcs89; @sbbb95; @mbbb2000] have been investigated theoretically [@mky2001; @hill2001; @egs2001] and experimentally. The work on the hyperfine structure provides a very accurate muon mass value as well as a value for the fine structure constant (see the CODATA recommended values of the fundamental constants [@mat2000]).
The capture of a negatively charged, heavy particle $X^-$ by an atom occurs at a principal quantum number $n\approx n_e \sqrt{\frac{m_{X^{-}}}{m_e}}$, where $n_e$ is the principal quantum number of the atom's outer shell and $m_e$, $m_{X^-}$ are respectively the electron and particle masses. This leads to $n=14$, 16 and 41 for muons, pions and antiprotons respectively. The capture process populates $\ell$ sub-states more or less statistically. During the capture of a heavy, negatively charged particle, many or all of the electrons of the initial atom are ejected by the Auger effect. As long as electrons are present, Auger transition rates are very large and photon emission is mostly suppressed except for the low-lying states. For light elements, or particles like the antiproton, the cascade can end with a hydrogenlike ion, with only the exotic particle bound to the nucleus [@sabb94].
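A quick numerical check of the capture formula, taking $n_e = 1$ for hydrogen (the mass ratios below are approximate CODATA values; note that this naive evaluation gives $n \approx 43$ for the antiproton, so the quoted $n = 41$ presumably folds in further details of the capture model):

```python
# Capture quantum number n ~ n_e * sqrt(m_X / m_e), with n_e = 1 for hydrogen.
# Mass ratios m_X / m_e are approximate CODATA values (an assumption here).
mass_ratio = {"muon": 206.77, "pion": 273.1, "antiproton": 1836.15}

for particle, ratio in mass_ratio.items():
    n = ratio ** 0.5
    print(f"{particle}: n ~ {n:.1f}")
```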
The spectroscopy of exotic atoms has been used as a tool for the study of particles and fundamental properties for a long time. Exotic atoms are also interesting objects in themselves, as they make it possible to probe aspects of atomic structure that are quantitatively different from what can be studied in electronic or “normal” atoms. For example, all captured particles are much heavier than the electron, and thus closer to the nucleus, leading to a domination of vacuum polarization effects over self-energy contributions, in contrast to normal atoms. The different relevant scales (the Coulomb and vacuum polarization potentials, together with the pionic and electronic densities in pionic and normal hydrogen) are represented in Fig. \[fig:elpidens\]. One can see that the pion density inside the nucleus, or where the vacuum polarization potential is large, is several orders of magnitude larger than the electronic density in hydrogen. This leads to very large vacuum polarization and finite nuclear size corrections.
Other fundamental changes can be found in exotic atoms: pions are bosons, and thus obey the Klein-Gordon equation, while electrons, muons and antiprotons, as spin-$1/2$ fermions, obey the Dirac equation. Yet antiprotons, which are not elementary particles, have a magnetic moment very different from that of a Dirac particle. This leads to large corrections not present in other types of atoms.
In the present paper, I review a number of systems of interest for the study of the proton structure or of the strong interaction at low energy. In Section \[sec:pi-muh\], I describe an ongoing experiment to measure the 2s Lamb-shift of muonic hydrogen. In Section \[sec:pi-piat\], I present recent experiments involving pionic atoms. Finally, in Section \[sec:pi-pbar\], I review recent work on antiprotonic hydrogen and helium.
Muonic hydrogen and the determination of the proton charge radius {#sec:pi-muh}
=================================================================
In the last decade very important progress has been made in the accuracy of optical measurements in hydrogen. With the help of frequency combs and rubidium fountain atomic clocks, the accuracy of the 1s$\to$2s transition measurement has reached $1.8\times 10^{-14}$, giving 2 466 061 413 187 103(46) Hz [@nhrp2000]. The Rydberg constant (which requires knowledge of, e.g., 2s$\to$nd transitions), needed to extract the Lamb-shift from the 1s$\to$2s transition energy and to get theoretical values, is now known to $7.7\times 10^{-12}$ [@dsaj2000]. From this information one can obtain the 1s and 2s Lamb-shifts to 2.7 ppm accuracy. However, for many years it has been impossible to use those very accurate measurements to test QED in hydrogen, which is at the present moment the only atom in which the experimental uncertainty is much smaller than the size of two-loop BSQED corrections. Because in hydrogen $Z\alpha \ll 1$, calculations of radiative corrections have been done as an expansion in $Z\alpha$, i.e., expanding the electron propagator in the number of interactions with the nucleus. It is only recently that, for the one-loop self-energy, an exact, all-order calculation with a numerical precision small compared to the 46 Hz experimental error bar has been performed. For the two-loop self-energy, the situation is very complex. In the first calculation of the irreducible contribution to the loop-after-loop contribution (Fig. \[fig:sese-diag\] a), it was found that the all-order contribution did not match, even at $Z=1$, the result obtained by summing up all known terms in the $Z\alpha$ expansion [@mas98; @mas98a]. This result was later confirmed [@yer2000]. It should be noted that this piece has no meaning by itself, as it is not a renormalizable, gauge-invariant set of Feynman graphs.
More recently the complete all-order two-loop self-energy has been evaluated, but only for $Z\ge 40$ [@yis2003]. It cannot be said at the moment whether the extrapolation to $Z=1$ agrees with the $Z\alpha$ expansion (Fig. \[fig:sese-res\]).
Yet the issue cannot be resolved with the help of experiment, as the proton charge radius is not well known, and the resulting uncertainty on the theoretical value is larger than any uncertainty on the two-loop corrections. Values of the proton radius measured by electron scattering range from 0.805(12) fm [@hand63] to 0.847(11) fm [@mmd96] and 0.880(15) fm [@ros2000], the two latter values resulting from reanalyses of the same experiment [@simon80]. On the other hand, one obtains 0.908 fm from a comparison between measurements in hydrogen and QED calculations [@mat2000].
Owing to this large dispersion of results, the uncertainty in QED calculations is 4 times larger than the present experimental accuracy. It has thus been proposed to use muonic hydrogen to obtain an independent measurement of the proton radius. The experiment consists in measuring the 2s$\to$2p$_{3/2}$ transition energy, which is strongly dependent on the proton radius. From the value of [@egs2001] for the “light by light” contribution one gets $$\label{eq:muptote}
206.099(20) - 5.2256 r^2 + 0.0363 r^3 \, \textstyle{\textrm{meV}},$$ where the number in parentheses represents the uncertainty (quadratic sum), and where $r$, the proton mean spherical charge radius, must be expressed in fm. If one uses instead Refs. [@pac96; @pac99], then one obtains 206.074(3) meV for the constant term.
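As an illustration (a sketch, evaluating the formula above at the electron-scattering radii quoted earlier), the spread in predicted transition energies is about 0.65 meV, which is what makes this transition so sensitive to the proton radius:

```python
def lamb_shift_mev(r_fm):
    """2s -> 2p_{3/2} splitting in muonic hydrogen (meV), from the formula above."""
    return 206.099 - 5.2256 * r_fm**2 + 0.0363 * r_fm**3

# Proton radii (fm) from the electron-scattering analyses quoted above
for r in (0.805, 0.847, 0.880):
    print(f"r = {r:.3f} fm -> {lamb_shift_mev(r):.3f} meV")
# The spread between r = 0.805 and r = 0.880 is about 0.65 meV.
```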
An experiment aiming at an accuracy of 40 ppm on the 2s$\to$2p$_{3/2}$ energy difference has been started at the Paul Scherrer Institute (PSI). Such an accuracy would provide a proton charge radius to $\approx$ 0.1% accuracy, which would allow theory and experiment to be compared for the 1s and 2s Lamb-shift in hydrogen at the 0.4 ppm level. The experiment uses the fact that the 2s state is metastable in muonic hydrogen. This is because the muon, being 206 times more massive than the electron (and thus 200 times closer to the nucleus), makes vacuum polarization dominate the radiative corrections in exotic atoms, with a sign opposite to that of the self-energy. The 2s state is thus the lowest $n=2$ level in muonic hydrogen, while it is 2p$_{1/2}$ in hydrogen. The experiment thus consists in exciting the 6 $\mu$m $2 \,^3\mbox{S}_{1/2} \to 2\, ^{5}\mbox{P}_{3/2}$ transition with a laser and detecting the 2 keV X-rays resulting from the 2p$_{3/2}\to$1s transition that follows. In order to reduce background events, a coincidence between the 2 keV X-rays and the high-energy electrons resulting from muon decay is required.
The experiment uses slow muons prepared in the *cyclotron trap II* [@sim93], installed on the high-intensity pion beam at PSI. The muons, originating in pion decays, are decelerated to eV energies through interaction with thin foils inside the trap. They are then accelerated to a few keV and transferred to a low-density hydrogen target in a 5 T magnetic field, using a bent magnetic channel to get rid of unwanted particles [@dhhk93; @taq96]. A stack of foils at the entrance of the target is used to trigger an excimer laser within around 1 $\mu$s (the muon half-life is around 2 $\mu$s). This laser is at the top of a laser chain that uses dye lasers, Ti-sapphire lasers and a multipass Raman cell filled with 15 bars of H$_2$ to produce the 6 $\mu$m radiation. The laser system is shown in Fig. \[fig:laser6\]. More details on the population of metastable states and on the experimental set-up can be found in [@tbch99; @pbcd2000; @kabc2001]. A first run of the experiment took place in summer 2002, in which an intensity of 0.3 mJ was obtained at 6 $\mu$m, which is enough to saturate the transition, and $\approx$ 50 muons/h were detected in the target. The counting rate is expected to be around 5 events/h at the transition peak, which makes the experiment extremely difficult, owing to the uncertainty on where to look for the transition and to the complexity of the apparatus.
Pionic atoms {#sec:pi-piat}
============
Pions are mesons, i.e., particles made of a quark-antiquark pair, and are sensitive to the strong interaction. To a large extent, the strong interaction between nucleons in atomic nuclei results from pion exchange. The lifetime of the charged pion is $2.8\times 10^{-8}$ s; it decays into a muon and a muon neutrino. The mass of the pion is 273 times the electron mass. Unlike the electron, the pion has a charge radius of 0.8 fm and is a spin-0 boson.
Measurement of the pion mass {#sec:pi-pimass}
----------------------------
For a long time the spectroscopy of pionic atoms has been the favored way of measuring the pion mass accurately. This mass was measured in 1986 in pionic magnesium, with a crystal X-ray spectrometer, to a 3 ppm accuracy [@jnbc86; @jbde86]. Yet, as the pions were stopped in solid magnesium, in which the pionic atom could recapture electrons before de-excitation, the hypothesis made by Jeckelmann *et al.* on the number of electrons recaptured was found to be incorrect (in the pion cascade leading to the formation of the pionic atom, all the electrons are ejected by the Auger effect). This came to light in experiments designed to measure the muon-neutrino mass from the decay of stopped positively charged pions into a muon and a muon neutrino [@abdf94]. That experiment found a negative value for the square of the neutrino mass when using the 1986 value of the pion mass. A reanalysis of the 1986 experiment, with better modeling of the electron capture, was done, which led to a pion mass value compatible with a positive value for the square of the neutrino mass.
Such a situation was very unsatisfactory, and it was decided to use the cyclotron trap and the high-luminosity X-ray spectrometer, developed initially for work with antiprotons at the Low Energy Antiproton Ring (LEAR) at CERN, to redo the measurement of the pion mass in a low-pressure gas, in which electron recapture is negligibly small. Moreover, the resolution of the spectrometer was such that lines resulting from the decay of an exotic atom with an extra electron would be separated from the main transition in a purely hydrogenlike exotic atom. In a first experiment a value for the pion mass was obtained by a measurement in pionic nitrogen, using copper K$\alpha$ X-rays as a reference [@lbgg98]. This measurement, with an accuracy of 4 ppm and in good agreement with the limits set by [@abdf94], settled the question of the pion mass. However, the 4 ppm accuracy is not good enough for recent projects involving pionic hydrogen, which are discussed in Sec. \[sec:pi-pih\]. The schematic of the experiment is shown in Fig. \[fig:trap\].
The previous experiment's accuracy was limited by the beam intensity, the characteristics of the cyclotron trap, the quality of the X-ray standard (a broad line observed in second order of diffraction, while the pion line was observed in first order), and the CCD size and operation. It was decided to use the 5g$\to$4f transition in *muonic* oxygen as a standard, with an energy close to that of the 5g$\to$4f transition in pionic nitrogen, in place of the Cu energy standard, relying on the fact that the muon mass is well known [@mat2000]. This standard energy can be evaluated with an uncertainty of $\approx 0.3$ ppm. A new trap was designed, optimized for muon production, which was also to be used for the experiment presented in Sec. \[sec:pi-muh\]. Meanwhile the beam intensity of the PSI accelerator had improved. Finally, a new CCD detector was designed, with larger size, higher efficiency and improved operation [@naab2002].
With these improvements, a new experiment was done, which led to a statistical accuracy in the comparison between the pionic and muonic lines compatible with a final uncertainty of around 1 ppm [@naab2002a]. However, at that accuracy, effects due to the fabrication process of the CCD are no longer negligible and require very delicate studies, e.g., to measure the pixel size. Those studies are underway.
A byproduct of the pion mass measurement has been a very accurate measurement of the fine structure of the 5$\to$4 transition in pionic nitrogen. The energy difference between 5g$\to$4f and 5f$\to$4d is found to be $2.3082\pm 0.0097$ eV [@lbgg98], while a calculation based on the Klein-Gordon equation, with all vacuum polarization corrections of order $\alpha$ and $\alpha^2$ and recoil corrections, gives 2.3129 eV [@bpi2002]. This is one of the best tests of QED for a spin-0 boson so far.
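The level of agreement can be quantified directly from the numbers quoted above:

```python
measured, err = 2.3082, 0.0097  # eV, measured 5g->4f minus 5f->4d energy difference
theory = 2.3129                 # eV, Klein-Gordon + vacuum polarization + recoil

n_sigma = abs(theory - measured) / err
print(f"theory - experiment = {theory - measured:.4f} eV ({n_sigma:.1f} sigma)")
# Agreement at the ~0.5 sigma level.
```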
Pionic hydrogen and deuterium {#sec:pi-pih}
-----------------------------
Quantum Chromodynamics (QCD) is the theory of quarks and gluons that describes the strong interaction at a fundamental level in the Standard Model. It has been studied extensively at high energy, in the asymptotic-freedom regime, in which perturbation theory in the coupling constant can be used. At low energy the QCD coupling constant $\alpha_{\mbox{S}}$ is larger than one and a perturbative expansion in $\alpha_{\mbox{S}}$ cannot be done. Weinberg proposed Chiral Perturbation Theory (ChPT) [@wei66] to deal with this problem. More advanced calculations have been performed since then, which require adequate testing. Short of the possibility of studying pionium (a bound pion-antipion system) accurately enough, pionic hydrogen is the best candidate for accurate tests of ChPT. The shift and width of the np$\to$1s transitions in pionic hydrogen due to the strong interaction can be connected respectively to the $\pi^-$p$\to\pi^-$p and $\pi^-$p$\to\pi^0$n cross-sections, which can be evaluated by ChPT using a Deser-type formula [@dgbt54]. After a successful attempt at studying pionic deuterium with the apparatus described in Sec. \[sec:pi-pimass\], which provided in a very short time a sizable improvement over previous experiments [@hksb98], it was decided that such an apparatus could lead to improvements in pionic hydrogen of a factor 3 in the accuracy of the shift and of one order of magnitude in the accuracy of the width. In order to reach such an improvement, systematic studies as a function of target density and of the transition (np$\to$1s, with n=2, 3 and 4) have been done.
The main difficulty in the experiment is to separate the strong-interaction broadening of the pionic hydrogen lines from the other contributions, namely the instrumental response function, Doppler broadening due to non-radiative de-excitation of pionic hydrogen atoms by collisions with the H$_2$ molecules of the gas target, and possible transitions in exotic hydrogen molecules. The instrumental response is being studied using a transition in helium-like ions [@abbd2003] emitted by the plasma of an Electron Cyclotron Resonance Ion Trap (ECRIT) built at PSI [@bsh2000]. High-intensity spectra allow for systematic studies of the instrumental response. Exotic atoms do not provide as good a response-function calibration, as most lines coming from molecules are broadened by Doppler effect due to the Coulomb explosion during the atom formation process [@sabg2000] and, the rate being much lower, the statistics are often insufficient. An example of a highly charged argon spectrum in the energy range of interest is presented in Fig. \[fig:ar16\].
Antiprotonic atoms {#sec:pi-pbar}
==================
The operation of LEAR, a low-energy antiproton storage ring with stochastic and electron cooling, at CERN from 1983 to 1996 caused a real revolution in antiproton physics. Numerous particle physics experiments were conducted there, but also atomic physics experiments. A number of the latter used antiprotonic atoms produced with the cyclotron trap (from $\bar{\mbox{p}}$H to $\bar{\mbox{p}}$Xe). Others used a Penning trap to measure the antiproton/proton mass ratio to test CPT invariance [@gkhh99]. Another experiment was concerned with precision laser spectroscopy of metastable states of the He$^+\bar{\mbox{p}}$ system [@hmtm94], the existence of which had been discovered earlier at KEK [@inss91]. This experiment is now being continued with improved accuracy at LEAR's successor, the AD (Antiproton Decelerator). Compared with recent high-accuracy three-body calculations including relativistic and QED corrections, these experiments provide very good upper bounds on the charge and mass differences between proton and antiproton, again testing CPT invariance [@hehi2001]. More recently the hyperfine structure of the $^3$He$^+\bar{\mbox{p}}$ atom has been investigated [@weis2002] and found in good agreement with theory [@bk98]. Expected accuracy improvements in the new AD experiment ASACUSA should lead to even more interesting results [@ymhw2002].
Antiprotonic hydrogen and deuterium {#sec:pi-pbarh}
-----------------------------------
X-ray spectroscopy of antiprotonic hydrogen and deuterium was performed at LEAR to study the strong interaction between nucleon and antinucleon at low energy. At first, the use of solid-state detectors such as Si(Li) detectors provided some information. The study of line intensities provided estimates of the antiproton annihilation in the 2p state. The 2p$\to$1s transitions were observed. While the transition energy is around 8.7 keV, the 1s broadening due to annihilation is $1054\pm 65$ eV and the strong-interaction shift is $-712.5\pm 25.3$ eV. Measuring such a broad line is very difficult, as many narrow contamination lines will be superimposed on it. Moreover, those rather precise values are spin-averaged quantities that neglect the unknown 1s level splitting. More recently the use of CCD detectors has made it possible to improve the $\bar{\mbox{p}}$H measurement [@aabc99a] and to make the first observation of the $\bar{\mbox{p}}$D line, which is even broader, due to three-body effects [@aabc99].
The broadening of the 2p state, however, is much smaller. The Balmer 3d$\to$2p lines can thus be studied by crystal spectroscopy. The use of the cyclotron trap, which allows antiprotons to be captured in dilute gases with a 90% efficiency, and of an efficient, high-resolution crystal spectrometer was instrumental to the success of such an experiment, owing to the low production rate of antiprotons (a few $10^8$ per hour). Even with a device of ultimate resolution optimized for efficiency, a counting rate of only around 25 counts per hour was observed. The experimental set-up is basically the same as described in Sec. \[sec:pi-piat\]. However, to improve the X-ray collection efficiency, a double spectrometer was built, with two arms symmetrical with respect to the trap axis and three crystals. On one side a large CCD detector allowed two crystals to be vertically superimposed. On the other side a single crystal was mounted. Resolutions of around 290 meV were achieved, which are of the order of the expected line splitting. The final spectrum observed with one of the three crystal/detector combinations is presented in Fig. \[fig:pbar-spec\].
In order to extract strong-interaction parameters from such a spectrum, a detailed description of the QED structure of the multiplet is required. Antiprotons, being composed of three quarks, are not point-like Dirac particles. In particular their gyromagnetic ratio is very different: for the antiproton we have $a_{\bar{p}}=(g-2)/2 = -1.792847386$ instead of $-\alpha/(2\pi)\approx -1.2\times 10^{-3}$ for a positron. The corrections due to the anomalous magnetic moment of the antiproton can be accounted for by introducing the operator (valid for distances larger than the Compton wavelength of the electron, $\hbar/mc$) $$\Delta H= a \frac{\hbar q}{2 m_p} \beta \left( i \frac{\boldsymbol{\alpha}\cdot\boldsymbol{E}}{c}- \boldsymbol{\Sigma} \cdot \boldsymbol{B} \right),
\label{eqn:hfssec}$$ where $m_p$ is the antiproton mass, $\boldsymbol{E}$ and $\boldsymbol{B}$ are the electric and magnetic fields generated by the nucleus, $\boldsymbol{\alpha}$ are Dirac matrices and $$\boldsymbol{\Sigma}=\left(\begin{array}{cc}
\boldsymbol{\sigma}&0\\0& \boldsymbol{\sigma}
\end{array}
\right) .$$
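As a minimal sketch (for illustration only, not the actual code used in the calculation), the $4\times4$ matrices entering this operator can be assembled from the Pauli matrices and their key algebraic properties checked numerically:

```python
import numpy as np

# Pauli matrices
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
Z2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)

beta = np.block([[I2, Z2], [Z2, -I2]])                 # Dirac beta matrix
Sigma = [np.block([[s, Z2], [Z2, s]]) for s in pauli]  # spin matrix defined above
alpha = [np.block([[Z2, s], [s, Z2]]) for s in pauli]  # Dirac alpha matrices

for S, a in zip(Sigma, alpha):
    assert np.allclose(S @ beta, beta @ S)   # [Sigma_i, beta] = 0
    assert np.allclose(a @ beta, -beta @ a)  # {alpha_i, beta} = 0
    assert np.allclose(S @ S, np.eye(4))     # Sigma_i^2 = 1
```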
Vacuum polarization corrections and finite particle size (both for the nucleus and the antiproton) must be included. Because the antiproton is $\approx 2000$ times closer to the nucleus than the electron in hydrogen, the hyperfine structure and the $g-2$ correction are very large, even larger than the fine structure. In such a case perturbation theory is insufficient to account for the effect: the full Hamiltonian matrix over the 2p and 3d Dirac states must be built and diagonalized. The result of such a fully relativistic calculation for antiprotonic deuterium is shown in Fig. \[fig:pbar-deut\], together with the result of an earlier calculation [@bps81]. The large difference between the two results is not understood. However, the results from [@bps81] do not reproduce correctly the observed line shape [@gaab99]. By combining the high-precision measurements carried out with the cyclotron trap and the spherical crystal spectrometer with the theoretical calculations presented above, it has been possible to evaluate strong-interaction shifts for the 2p level and for the different $\bar{\mbox{p}}$H and $\bar{\mbox{p}}$D spin states [@gaab99]. The results confirm calculations based on the Dover-Richard semi-phenomenological potential [@dar80; @ras82] (see [@gaab99] for more details).
Conclusion and perspectives
===========================
In this paper, I have explored different aspects of the physics of light muonic, pionic and antiprotonic atoms. I have left out many aspects of that physics, like the study of the atomic cascade, collisions between exotic atoms and gases, or the atomic and molecular phenomena involved in muon-catalyzed fusion. The formation of antiprotonic highly charged ions, as observed at LEAR, seems to point to very exciting new atomic physics [@rgfis2003]. I have not explored either the studies of interest to nuclear physics, like measurements of nuclear charge distributions for heavier elements with muonic atoms (see, e.g., [@bfgr89]) or of neutron distributions with antiprotonic atoms [@tjlh201]. With the continuous progress in accelerator technology (improvements in the intensity at PSI), the development of very low energy antiproton beams at the AD at CERN, the trapping of antihydrogen [@aabb2002; @gbos2002], and the new antiproton machine at GSI, it is expected that this physics will continue to develop in the years to come and provide more challenges to atomic and fundamental physics.
[**Acknowledgments**]{} Laboratoire Kastler Brossel is Unit[é]{} Mixte de Recherche du CNRS n$^{\circ}$ C8552. I wish to thank all the participants to the antiprotonic hydrogen, pion mass, pionic hydrogen and muonic hydrogen experiments for their relentless effort to make those experiments live and develop. I am in particular indebted to D. Gotta, L. Simons, F. Kottmann and F. Nez. On the theoretical side, participation of S. Boucard, V. Yerokhin, E.O. Le Bigot, V. Shabaev and P.J. Mohr is gratefully acknowledged.
\[!hbp\] ![The three Feynman diagrams contributing to the two-loop self-energy. Double lines represent bound electron propagators and wavy lines photon propagators. Diagram (a) represents the loop-after-loop term. The irreducible part is obtained when the propagator between the two-loop has an energy different from the energy of the bound state being studied [@yis2003]. \[fig:sese-diag\]](se2loop.eps "fig:")
\[!hbp\] ![Comparison with all-order numerical calculation and the function obtained from the first or second order expansion in $Z\alpha$. \[fig:sese-res\]](comp.eps "fig:")
\[!hbp\] ![Principle of X-ray spectroscopy of exotic atoms with the cyclotron trap and a spherically curved crystal spectrometer. The two-dimensional X-ray detector is a 6-chip cooled CCD detector and is located on the Rowland circle of radius $R/2$, where $R$ is the radius of curvature of the crystal ($\approx 3$ m). []{data-label="fig:trap"}](princi.eps "fig:")
\[!hbp\] ![High-resolution spectrum of antiprotonic hydrogen [@gaab99]. The difference between the measured line shape and the solid line is due to the fact that the solid line represents a line-shape model without strong interaction, evaluated following the method described in the text . []{data-label="fig:pbar-spec"}](pbarh-spec.epsi "fig:")
\[!hbp\] ![Theoretical level scheme of antiprotonic deuterium and comparison with earlier work (Pilkuhn) [@bps81]. []{data-label="fig:pbar-deut"}](pbar-deut-lev.epsi "fig:")
[^1]: Email address: [email protected]
---
abstract: 'Observations of the starburst galaxy, M82, have been made with the VLA in its A-configuration at 15 GHz and MERLIN at 5 GHz enabling a spectral analysis of the compact radio structure on a scale of $< 0.1''''$ (1.6 pc). Crucial to these observations was the inclusion of the Pie Town VLBA antenna, which increased the resolution of the VLA observations by a factor of $\sim$2. A number of the weaker sources are shown to have thermal spectra and are identified as H[ii]{} regions with emission measures $\sim$10$^7$ cm$^{-6}$ pc. Some of the sources appear to be optically thick at 5 GHz implying even higher emission measures of $\sim$10$^8$ cm$^{-6}$ pc. The number of compact radio sources in M82 whose origin has been determined is now 46, of which 30 are supernova related and the remaining 16 are H[ii]{} regions. An additional 15 sources are noted, but have yet to be identified, meaning that the total number of compact sources in M82 is at least 61. Also, it is shown that the distribution of H[ii]{} regions is correlated with the large-scale ionised gas distribution, but is different from the distribution of supernova remnants. In addition, the brightest H[ii]{} region at (B1950) 09$^h$ 51$^m$ 42.21$^s$ +69$^{\circ}$ 54$''$ 59.2$''''$ shows a spectral index gradient across its resolved structure which we attribute to the source becoming optically thick towards its centre.'
bibliography:
- 'papers.bib'
date: 'Accepted ?. Received ?'
nocite:
- '[@Burbidge64]'
- '[@Taylor99]'
- '[@Turner00]'
- '[@Kronberg85]'
title: 'A Parsec-Scale Study of the 5/15 GHz Spectral Indices of the Compact Radio Sources in M82'
---
\[firstpage\]
galaxies : starburst – galaxies : individual : M82 – H[ii]{} regions – radio continuum : galaxies
Introduction
============
The radio emission from star-forming galaxies can be of either thermal (free-free) or non-thermal (synchrotron) origin. The non-thermal radiation originates from relativistic electrons accelerated by supernovae resulting from the deaths of massive stars, while the thermal component is free-free emission from gas ionised by hot, young stars. It is therefore of interest to investigate the relative contributions of these processes to the emission observed from nearby star-forming galaxies. A separation of the non-thermal and thermal components of the radio emission from the archetypal starburst galaxy, M82, has already been carried out by @Allen99 on angular scales of 2$''$, and they identified several complexes of ionised gas. However, 2$''$ resolution is not sufficient to separate the most compact radio sources from the more diffuse radio structure. In this paper we compare observations of M82 made with the VLA in its A-configuration at 15 GHz, incorporating the VLBA Pie Town antenna, with MERLIN 5 GHz observations. The maps made from these observations have a resolution $\leq$ 0.1$''$, which extends the analysis of the most compact radio structure into the thermal regime. Thus, we are now able to comment on the relative contributions of thermal and non-thermal processes to the radio emission on scales of order 1 pc. For consistency with previous publications, we assume a distance of 3.2 Mpc to M82 throughout the paper (Burbidge, Burbidge & Rubin 1964).
Spectral information on 26 of the compact radio sources in M82 has already been published [@Wills97; @Allen98]. These analyses were prompted by 408 MHz MERLIN observations, and the radio continuum spectra of the detected sources were mainly consistent with non-thermal synchrotron emission, occasionally with a low-frequency turnover which was attributed to free-free absorption by ionised gas. However, free-free absorption is likely to be responsible for the non-detection of many of the remaining sources at low frequencies. In addition, @Wills97 (hereafter W97) identified two sources which were inconsistent with a free-free absorbed synchrotron spectrum: the AGN candidate 44.01+59.6[^1] and a possible H[ii]{} region at 40.62+56.0. @Allen98 (hereafter AK98) came to similar conclusions regarding the majority of the sources, but additionally identified a flat-spectrum ($\alpha \sim$ -0.1, where $S \propto \nu^{\alpha}$) source at 42.21+59.2, which they identified as an H[ii]{} region. However, over 50 compact radio sources have been identified in the nuclear region of M82, so until now spectral information has been published for only around half of the total number, similar to the situation in another nearby starburst, NGC 2146 [@Tarchi00]. Also, since previous investigations concentrated on the brightest sources at the longer radio wavelengths, there would have been a bias away from the identification of thermal sources such as H[ii]{} regions. This is because their electron temperatures are unlikely to exceed 10$^4$ K, and hence even if the ionised gas is optically thick, the brightness temperature cannot exceed this limit (n.b. 10$^4$ K $\equiv$ 0.48 mJy beam$^{-1}$ with MERLIN at 5 GHz, 50 mas resolution). Therefore, in order to make robust comparisons between the numbers of compact radio sources of different types in starburst galaxies, we must first make more confident identifications of a greater fraction of the sources.
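The quoted equivalence between 10$^4$ K and 0.48 mJy beam$^{-1}$ can be checked with the Rayleigh-Jeans relation. The sketch below is our own illustration (not code from the paper); it assumes a 50 mas circular Gaussian beam at exactly 5 GHz, and small differences in the assumed beam shape or observing frequency account for the difference from the quoted value:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]

def flux_per_beam_mjy(t_b, nu_hz, bmaj_mas, bmin_mas):
    """Rayleigh-Jeans flux density received in one Gaussian beam
    filled with emission at brightness temperature t_b [K]."""
    mas = math.pi / (180.0 * 3600.0e3)                         # one mas in radians
    beam_sr = (math.pi / (4.0 * math.log(2.0))) \
              * (bmaj_mas * mas) * (bmin_mas * mas)            # Gaussian beam solid angle
    s_si = 2.0 * K_B * t_b * nu_hz ** 2 * beam_sr / C ** 2     # W m^-2 Hz^-1
    return s_si / 1.0e-26 * 1.0e3                              # convert to mJy

# 10^4 K in a 50 mas beam at 5 GHz -> roughly 0.5 mJy per beam
print(flux_per_beam_mjy(1.0e4, 5.0e9, 50.0, 50.0))
```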
Furthermore, with the improved sensitivity of radio observations, it is clear that further populations of compact radio sources will be discovered in more distant galaxies; hence the need to fully investigate the nature of these sources in nearby galaxies.
The observations and data analysis {#observations}
==================================
Observations of M82 were performed with the Very Large Array (VLA) of the National Radio Astronomy Observatory, including the Pie Town Very Long Baseline Array (VLBA) telescope, the data from which were correlated in real time via a fibre-optic link to the VLA site, on 2nd December 2000. These observations represent the first stage of a programme to image the 15 GHz radio emission from M82 on a wide range of scales; the combination with B- and C-configuration data will be presented in a later paper, along with a discussion of the structures and sizes of the compact sources. However, the A-configuration data alone represent measurements of the radio emission from M82 on scales comparable to those measured using MERLIN at 5 GHz. Therefore, we also present observations made with the MERLIN interferometer in February 1999. The details of both observations are summarised in Table \[obstable\]. A number of the parameters listed in Table \[obstable\] were chosen to optimise the field of view of the observations, since the compact sources in M82 cover a region roughly 50$''$ $\times$ 10$''$ in size. Hence, the VLA correlator was used in spectral line mode, limiting the usable bandwidth to 43.75 MHz, but producing 7 spectral channels with widths of 6.25 MHz. This mode results in smearing at the 5$\%$ level at a distance of 95$''$ from the phase centre of the observations (Taylor, Carilli & Perley 1999). We deemed this level of smearing to be acceptable for the purposes of our observations, and Figure \[vlamerlin\_compare\_fig\] shows the map of the field of view of the observations.
Array VLA (A-configuration) + Pie Town MERLIN
-------------------------------------------- ---------------------------------- -----------------------------
Date 2nd December 2000 5th / 22nd February 1999
Observing Frequency 14.9649 GHz 4.546 / 4.866 / 5.186 GHz
(Multi-Frequency Synthesis)
Bandwidth 43.75 MHz 15 MHz
Number of Frequency Channels 7 $\times$ 6.25 MHz 15 $\times$ 1 MHz
Integration Time per Visibility 3.33 secs 4 secs
Total Integration Time 12 hours 24 hours
Longest Baseline (k$\lambda$) 3662 3768
Shortest Baseline (k$\lambda$) 20.80 132.1
Naturally Weighted Sensitivity 50 35
($\mu$Jy beam$^{-1}$)
Gaussian [*u-v*]{} taper FWHM (M$\lambda$) 4.0 2.2
[clean]{} beam size (mas) 87$\times$77 @ 58$^{\circ}$ 83$\times$74 @ 42$^{\circ}$
Table \[obstable\] shows that the two sets of observations cover a similar overall range of spatial frequencies, although the VLA has more short baselines and hence greater sensitivity to extended structure. In addition, MERLIN has some slightly longer baselines (in terms of wavelengths) than the VLA and Pie Town combination. Therefore, in order to provide a valid comparison, maps were made by limiting both data-sets to a range of 130 k$\lambda < L <$ 3660 k$\lambda$, where $L = (u^2 +v^2)^{1/2}$ and $u$ and $v$ are the co-ordinates of the visibilities in the [*u-v*]{} plane. The lower limit to this range means that any structures larger than $\sim$1.6$''$ are suppressed. For the purposes of this paper, the word ‘compact’ describes any source which has significant radio emission on the scales measured by our data, i.e. less than $\sim$1.6$''$ in size. In addition, the density of sampling in the [*u-v*]{} plane is substantially different between the VLA and MERLIN, since MERLIN is a more sparsely distributed array; the relative weighting functions used on the two data-sets therefore needed to be different. The final imaging parameters used in the comparison are also presented in Table \[obstable\].
The MERLIN and VLA data were reduced separately and self-calibration was used to remove residual complex gain errors after phase-referencing. The images from each data-set were therefore registered relative to one another by ‘fixing’ the position of the brightest and most compact radio source, 41.95+57.5, at (B1950) 09$^h$ 51$^m$ 41.95$^s$ +69$^{\circ}$ 54$'$ 57.50$''$ [@Pedlar99]. A spectral index map and a spectral index error map were then produced for the entire central region of M82 using the [aips]{} task [comb]{}. Spectral indices were only calculated for pixels with brightnesses above the 3-$\sigma$ level in each map as stated in Table \[obstable\], using the spectral index convention stated previously. The error maps, calculated from the noise levels stated in Table \[obstable\], are not presented here. Sufficient signal was present in the maps at both frequencies for spectral index maps to be derived for 34 sources. The MERLIN and VLA images of these 34 sources are shown in Figure \[spix\_fig\] and the spectral index maps for 5 of these sources are shown in Figure \[examplespixfig\]. For around 75$\%$ of the 34 sources in Figure \[spix\_fig\] there exists a good correspondence between the positions of the compact sources (within $\sim$10 mas) at 5 GHz and 15 GHz. In the remaining $\sim$25$\%$ of cases more complex structure is present which does not agree as well between frequencies. Some possible reasons for these discrepancies are discussed in Section \[ID\_section\]. In addition to the 34 sources detected in both sets of observations, we detect a further 5 compact sources in the VLA map which have no counterparts in the MERLIN map on the same scale. These sources were selected by examining the spatial-frequency-limited VLA 15 GHz map and searching for compact sources with emission above the 5-$\sigma$ level.
The VLA maps of these sources are shown in Figure \[vlanewfig\] with the corresponding limits on their two-point spectral indices. For reference purposes, Figure \[vlamerlin\_compare\_fig\] contains boxes indicating which sources are illustrated in Figure \[spix\_fig\] and circles indicating the sources in Figure \[vlanewfig\]. It should be noted that Figure \[vlamerlin\_compare\_fig\] shows a significant amount of 15 GHz emission from regions outside those imaged in Figures \[spix\_fig\] and \[vlanewfig\]. Most of this emission is suppressed once the limitation on the spatial frequency coverage of the 15 GHz observations is made, although emission is still detected at a $\sim$3-$\sigma$ level in these regions.
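The two-point spectral index and its quoted uncertainty follow from simple error propagation on the two flux densities. The sketch below is our own illustration (the actual maps were made with the [aips]{} task [comb]{}); it assumes the nominal 5 and 15 GHz band frequencies rather than the exact observing frequencies of Table \[obstable\], and that the tabulated flux densities are in mJy:

```python
import numpy as np

def two_point_alpha(s5, s15, sig5, sig15, nu5=5.0, nu15=15.0):
    """Two-point spectral index alpha (S ∝ nu^alpha) between two bands,
    with its 1-sigma error from standard propagation of the flux errors."""
    log_ratio = np.log(nu15 / nu5)
    alpha = np.log(s15 / s5) / log_ratio
    err = np.sqrt((sig5 / s5) ** 2 + (sig15 / s15) ** 2) / log_ratio
    return alpha, err

# e.g. 39.68+55.6 at 85 mas resolution: S_5 = 1.1 +/- 0.1, S_15 = 3.4 +/- 0.2
alpha, err = two_point_alpha(1.1, 3.4, 0.1, 0.2)
```

For these example flux densities the result, $\alpha \approx 1.03 \pm 0.10$, agrees with the value listed for this source in Table \[IDtable\] to within rounding.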
The nature of the compact sources {#nature_section}
=================================
By comparing the maps of the compact radio emission in M82 at 5 GHz and 15 GHz we have detected 34 sources that display compact radio emission at both frequencies, as discussed in Section \[observations\], and identified an additional 5 compact sources at 15 GHz, but not at 5 GHz. Of these 39 sources, 19 have had spectra published by either W97, AK98 or both. We agree with the original identification of 18 of these 19 sources as being of supernova origin and the remaining source as being an H[ii]{} region, although the nature of the non-thermal sources at 41.95+57.5 and 44.01+59.6 is still questionable [e.g. @Wills99; @Mcdonald01]. Therefore, there were 20 sources which remained to be reliably identified. In order to do this we examined the radio continuum spectrum of each source from 1.4 GHz to 15 GHz.
Spectra of the compact sources {#spectra_section}
------------------------------
Since two-point spectral indices only offer a limited amount of information regarding the nature of the source it is obviously of interest to examine the full spectra of as many sources as possible. Therefore, we have used lower resolution ($\sim$0.2$''$ typically) maps at 1.42 GHz (L-band), 4.86 GHz (C-band), 8.4 GHz (X-band) and 15 GHz (U-band) and examined the flux densities at the positions given by the VLA and MERLIN comparison maps. For the L-band measurements we used the combined VLA and MERLIN map of @Pedlar95 and for the C-band measurements we used an existing VLA A and B-configuration combined image (observed on 5th July 1995 and 31st October 1995 in A and B-configurations respectively). For the X-band measurements we used the data in @Huang94 for the sources available and for the U-band map we used the new VLA data and utilised the full spatial frequency range with an appropriate weighting for a comparison on a scale of $\sim$0.2$''$. Since these low-resolution maps contain a substantial amount of the more diffuse background radio emission in M82, an estimate of the background away from the position of the source was made and an appropriate amount deducted from the measured flux density at the position examined. The errors in the corrected flux densities reflect the uncertainty in the background level which has been subtracted. Table \[fluxtable\] shows the measured flux densities of the ‘new’ sources from the high-resolution maps and compares these measurements with the integrated flux densities from the low-resolution maps. Clearly, the flux densities measured from the high-resolution maps are systematically lower than those measured from the low-resolution maps. This discrepancy is partly due to the more diffuse parts of the sources being ‘resolved out’ in the 85 mas maps and to the erroneous inclusion of diffuse background in the measurements from the 200 mas resolution maps.
----------- ----------------- ----------------- ----------------- -----------------
(1) (2) (3) (4) (5)
Source ID S$_5$ S$_{15}$ S$_5$ S$_{15}$
(85 mas) (85 mas) (200 mas) (200 mas)
38.76+53.4   0.15 $\pm$ 0.03   0.27 $\pm$ 0.06   0.26 $\pm$ 0.05   0.24 $\pm$ 0.07
39.29+54.2   0.21 $\pm$ 0.03   1.3 $\pm$ 0.1     1.7 $\pm$ 0.2     1.5 $\pm$ 0.2
39.68+55.6   1.1 $\pm$ 0.1     3.4 $\pm$ 0.2     1.5 $\pm$ 0.2     3.2 $\pm$ 0.2
40.62+56.0   0.49 $\pm$ 0.09   0.22 $\pm$ 0.05   1.2 $\pm$ 0.3     0.52 $\pm$ 0.08
40.95+58.8   0.85 $\pm$ 0.09   1.4 $\pm$ 0.1     1.4 $\pm$ 0.4     2.3 $\pm$ 0.3
40.96+57.9   -                 0.62 $\pm$ 0.07   0.86 $\pm$ 0.22   0.58 $\pm$ 0.08
41.17+56.2   0.72 $\pm$ 0.10   1.9 $\pm$ 0.2     2.0 $\pm$ 0.3     2.8 $\pm$ 0.2
41.64+57.9   0.36 $\pm$ 0.06   1.6 $\pm$ 0.1     0.96 $\pm$ 0.19   1.8 $\pm$ 0.2
42.08+58.4   0.35 $\pm$ 0.06   1.6 $\pm$ 0.1     1.2 $\pm$ 0.4     1.7 $\pm$ 0.2
42.48+58.4   -                 0.43 $\pm$ 0.08   1.8 $\pm$ 0.2     1.0 $\pm$ 0.2
42.56+58.0   0.56 $\pm$ 0.07   1.5 $\pm$ 0.1     2.7 $\pm$ 0.4     1.8 $\pm$ 0.2
42.66+56.4   0.58 $\pm$ 0.07   0.30 $\pm$ 0.06   0.62 $\pm$ 0.25   0.34 $\pm$ 0.07
42.69+58.2   0.54 $\pm$ 0.06   1.7 $\pm$ 0.2     0.97 $\pm$ 0.43   1.8 $\pm$ 0.2
42.82+61.3   0.83 $\pm$ 0.08   0.41 $\pm$ 0.07   1.5 $\pm$ 0.2     0.31 $\pm$ 0.12
44.43+61.8   0.72 $\pm$ 0.09   0.67 $\pm$ 0.10   1.1 $\pm$ 0.3     0.31 $\pm$ 0.09
44.91+61.1   0.71 $\pm$ 0.07   0.43 $\pm$ 0.07   1.2 $\pm$ 0.2     0.39 $\pm$ 0.11
44.93+63.9   -                 0.55 $\pm$ 0.08   0.26 $\pm$ 0.09   0.66 $\pm$ 0.08
45.33+64.6   -                 0.59 $\pm$ 0.09   0.63 $\pm$ 0.17   0.63 $\pm$ 0.11
45.41+63.7   -                 0.43 $\pm$ 0.06   0.21 $\pm$ 0.07   0.48 $\pm$ 0.10
46.17+67.6   0.48 $\pm$ 0.06   1.0 $\pm$ 0.1     1.1 $\pm$ 0.2     1.4 $\pm$ 0.2
----------- ----------------- ----------------- ----------------- -----------------
Figure \[spectra\_fig\] shows the continuum radio spectra of the 20 sources which have not had spectra previously published by either W97 or AK98. The spectra were fitted by three models for the radio emission and 6 examples are shown in Figure \[spectra\_fig\]. The models used can be described by the equations $$S(\nu) = S_0\nu^{\alpha},$$ $$S(\nu) = S_0\nu^{\alpha}e^{-\tau(\nu)},$$ and $$S(\nu) = S_0\nu^2(1-e^{-\tau(\nu)}),$$ where, $$\tau(\nu) = (\nu/\nu_{\tau=1})^{-2.1}.$$
These equations represent a simple power-law spectrum, a power-law spectrum with free-free absorption by a screen of foreground ionised gas, and a self-absorbed bremsstrahlung spectrum respectively; from this point they will be referred to as models 1, 2 and 3. In these expressions $\alpha$ is the optically thin spectral index and $\nu_{\tau=1}$ is the frequency at which the free-free optical depth is unity. Of the 20 sources which were modelled, 7 were found to be best fitted by model 1 with an ‘inverted’ spectral index ($\alpha >$ 0), 1 source was best fitted by model 1 with a non-thermal spectral index ($\alpha <$ 0), 4 sources were best fitted by model 2 and 8 sources by model 3. These results are used in the following section to make the final source identifications. Note that we interpret the inverted spectra with a spectral index $<$ 2 (the value expected for a simple, optically thick H[ii]{} region) as being due to an electron density gradient of the type discussed by @Olnon75 and @Panagia75, since blended ionised gas components with different optical depths produce an inverted spectral index (for the entire optically thick source) which is a function of the power-law index of the electron density gradient.
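Fits of this kind can be reproduced with standard non-linear least squares. The following minimal sketch is our own illustration, not the fitting code used in the paper; it fits model 3 to synthetic flux densities at the four observing bands, so the amplitudes and starting values are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def tau(nu, nu_t):
    """Free-free optical depth, unity at nu_t (equation 4)."""
    return (nu / nu_t) ** -2.1

def model1(nu, s0, alpha):
    """Model 1: simple power law."""
    return s0 * nu ** alpha

def model2(nu, s0, alpha, nu_t):
    """Model 2: power law absorbed by a foreground screen of ionised gas."""
    return s0 * nu ** alpha * np.exp(-tau(nu, nu_t))

def model3(nu, s0, nu_t):
    """Model 3: self-absorbed bremsstrahlung."""
    return s0 * nu ** 2 * (1.0 - np.exp(-tau(nu, nu_t)))

# synthetic data at the four observing bands (GHz), drawn from model 3
nu_obs = np.array([1.42, 4.86, 8.4, 15.0])
s_obs = model3(nu_obs, 0.1, 3.0)
(fit_s0, fit_nu_t), _ = curve_fit(model3, nu_obs, s_obs, p0=[0.05, 2.0])
# fit_nu_t recovers the input turnover frequency of 3 GHz
```

In practice the best-fitting of the three models would be chosen by comparing the residuals (e.g. $\chi^2$) of each fit to the measured flux densities and their errors.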
Radio source identification {#ID_section}
---------------------------
In order to determine which mechanism is responsible for the radio emission from the compact sources in M82, both the morphology of the sources as resolved in the maps shown in Figure \[spix\_fig\] and the spectral information in Figure \[spectra\_fig\] were considered. In addition to the 26 sources which have been identified in previous publications, identifications have thus been made for a further 20 sources. Table \[IDtable\] lists all 46 sources whose spectra have been analysed by W97, AK98 or in this paper; its columns are as follows.
1. Source identifier.
2. Position of image origin.
3. Optically thin spectral index deduced by @Wills97 (W97).
4. Optically thin spectral index deduced by @Allen98 - screen absorption model / no absorption (AK98 (a)).
5. Optically thin spectral index deduced by @Allen98 - mixed absorption model (AK98 (b)).
6. Two-point spectral index (this paper).
7. Final identification of source as either supernova-related (SNR), H[ii]{} region or AGN candidate.
------------ --------------------------------------------------------- ------------------ ------------------ ------------------ ------------------ ---------
(i) (ii) (iii) (iv) (v) (vi) (vii)
Source ID Image Origin $\alpha$ $\alpha$ $\alpha$ $\alpha_5^{15}$ ID
(relative to (B1950) 09$^h$ 51$^m$ +69$^{\circ}$ 54$'$) W97 AK98(a) AK98(b) M02
38.76+53.4 38.759 +53.52 - - - 0.54 $\pm$ 0.27 H[ii]{}
39.10+57.4 39.106 +57.33 -0.6 $\pm$ 0.1 -0.38 $\pm$ 0.01 - -0.53 $\pm$ 0.09 SNR
39.29+54.2 39.289 +54.25 - - - 1.64 $\pm$ 0.16 H[ii]{}
39.40+56.1 39.391 +56.18 -0.50 $\pm$ 0.05 -0.21 $\pm$ 0.01 - -1.04 $\pm$ 0.22 SNR
39.64+53.4 - -0.20 $\pm$ 0.05 -0.71 $\pm$ 0.01 - - SNR
39.68+55.6 39.668 +55.59 - - - 1.03 $\pm$ 0.09 H[ii]{}
39.77+56.9 - -0.50 $\pm$ 0.06 -0.49 $\pm$ 0.01 - - SNR
40.32+55.1 40.317 +55.20 -0.50 $\pm$ 0.06 -0.55 $\pm$ 0.01 - -0.23 $\pm$ 0.21 SNR
40.62+56.0 40.594 +56.10 - - - -0.72 $\pm$ 0.25 SNR
40.66+55.2 40.676 +55.11 -0.70 $\pm$ 0.05 -0.52 $\pm$ 0.01 -0.55 $\pm$ 0.01 -0.54 $\pm$ 0.08 SNR
40.95+58.8 40.938 +58.87 - - - 0.44 $\pm$ 0.13 H[ii]{}
40.96+57.9 40.955 +57.85 - - - $>$ 1.2 H[ii]{}
41.17+56.2 41.176 +56.16 - - - 0.87 $\pm$ 0.14 H[ii]{}
41.29+59.7 41.302 +59.64 - -0.54 $\pm$ 0.01 -0.56 $\pm$ 0.01 -0.47 $\pm$ 0.09 SNR
41.64+57.9 41.637 +57.94 - - - 1.32 $\pm$ 0.18 H[ii]{}
41.95+57.5 41.951 +57.49 -0.80 $\pm$ 0.05 -0.72 $\pm$ 0.01 -0.74 $\pm$ 0.01 - SNR?
42.08+58.4 42.079 +58.39 - - - 1.32 $\pm$ 0.17 H[ii]{}
42.21+59.2 42.210 +59.04 - -0.10 $\pm$ 0.01 - 1.16 $\pm$ 0.13 H[ii]{}
42.48+58.4 42.481 +58.36 - - - $>$ 1.2 H[ii]{}
42.53+61.9 - -0.80 $\pm$ 0.05 -1.84 $\pm$ 0.01 -1.99 $\pm$ 0.01 - SNR
42.56+58.0 42.557 +58.05 - - - 0.88 $\pm$ 0.14 H[ii]{}
42.66+56.4 42.673 +56.40 - - - -0.58 $\pm$ 0.21 SNR
42.67+55.6 42.662 +55.54 -0.30 $\pm$ 0.05 -0.61 $\pm$ 0.01 -0.63 $\pm$ 0.01 -1.3 $\pm$ 0.2 SNR
42.69+58.2 42.694 +58.24 - - - 1.04 $\pm$ 0.13 H[ii]{}
42.82+61.3 42.810 +61.30 - - - -0.63 $\pm$ 0.17 SNR
43.18+58.3 43.186 +58.35 -0.80 $\pm$ 0.05 -0.67 $\pm$ 0.01 -0.72 $\pm$ 0.01 -0.44 $\pm$ 0.08 SNR
43.31+59.2 43.305 +59.20 -0.60 $\pm$ 0.06 -0.64 $\pm$ 0.01 -0.85 $\pm$ 0.01 -0.65 $\pm$ 0.07 SNR
44.01+59.6 44.008 +59.59 0.20 $\pm$ 0.05 -0.51 $\pm$ 0.01 -0.56 $\pm$ 0.01 -0.38 $\pm$ 0.06 SNR
44.29+59.3 44.276 +59.30 - -0.56 $\pm$ 0.01 - -0.72 $\pm$ 0.12 SNR
44.43+61.8 44.419 +61.72 - - - 0.07 $\pm$ 0.17 SNR
44.52+58.1 44.509 +58.21 -0.60 $\pm$ 0.05 -0.61 $\pm$ 0.01 -0.64 $\pm$ 0.01 -0.15 $\pm$ 0.17 SNR
44.91+61.1 44.899 +61.19 - - - -0.45 $\pm$ 0.18 SNR
44.93+63.9 44.934 +63.91 - - - $>$ 1.1 H[ii]{}
45.17+61.2 45.173 +61.25 -0.80 $\pm$ 0.05 -0.68 $\pm$ 0.01 -0.69 $\pm$ 0.01 -0.52 $\pm$ 0.07 SNR
45.26+65.3 45.254 +65.18 - -0.62 $\pm$ 0.01 - -1.02 $\pm$ 0.20 SNR
45.33+64.6 45.330 +64.63 - - - $>$ 1.1 H[ii]{}
45.41+63.7 45.409 +63.73 - - - $>$ 1.0 H[ii]{}
45.44+67.3 45.423 +67.43 -0.60 $\pm$ 0.05 -0.57 $\pm$ 0.01 - -0.83 $\pm$ 0.17 SNR
45.52+64.7 - -0.20 $\pm$ 0.09 -0.15 $\pm$ 0.01 - - SNR
45.79+65.2 45.758 +65.32 -0.60 $\pm$ 0.05 -0.23 $\pm$ 0.01 - -0.55 $\pm$ 0.13 SNR
45.91+63.8 45.898 +63.88 -0.20 $\pm$ 0.05 -0.53 $\pm$ 0.01 -0.54 $\pm$ 0.01 -0.38 $\pm$ 0.11 SNR
46.17+67.6 46.175 +67.72 - - - 0.66 $\pm$ 0.16 H[ii]{}
46.52+63.8 46.521 +63.93 -0.40 $\pm$ 0.05 -0.73 $\pm$ 0.01 -1.50 $\pm$ 0.01 -0.35 $\pm$ 0.11 SNR
46.56+73.8 - -0.60 $\pm$ 0.05 -0.78 $\pm$ 0.01 -1.04 $\pm$ 0.01 - SNR
46.75+67.0 - -0.80 $\pm$ 0.05 -0.57 $\pm$ 0.01 -0.90 $\pm$ 0.01 - SNR
47.37+68.0 - -0.60 $\pm$ 0.07 -0.57 $\pm$ 0.01 - - SNR
------------ --------------------------------------------------------- ------------------ ------------------ ------------------ ------------------ ---------
As can be seen from Figure \[spix\_fig\], some of the regions imaged contain complex structure, and therefore some of the identifications listed in Table \[IDtable\] are more robust than others. For example, the sources at 39.29+54.2, 39.68+55.6, 41.64+57.9, 42.08+58.4, 42.21+59.2 and 42.56+58.0, as well as the 5 ‘new’ sources shown in Figure \[vlanewfig\], all consist of relatively simple structure which is clearly brighter at 15 GHz than at 5 GHz. Hence, these sources are confidently identified as the most optically thick parts of regions of thermal emission. The remaining 5 sources which are listed as H[ii]{} regions in Table \[IDtable\] are located in more morphologically complex regions which are dominated by thermal processes, although a mixture of thermal and non-thermal emission is inevitable at some level. For example, at 5 GHz, 46.17+67.6 has a simple single-component structure which becomes more complex at 15 GHz. A mixture of thermal and non-thermal emission may also be responsible for the structures of sources which have been identified as supernova remnants, examples of which are 44.43+61.8 and 44.52+58.1.
Of the 26 sources identified previously, only one was deemed to be a possible H[ii]{} region. This number has now increased to 16 by the addition of 15 sources examined in this paper. In addition, a further 5 supernova remnants have been identified. Therefore, it is suggested that the previous studies were biased towards the identification of supernova remnants, since only the most luminous sources were investigated. The measurements of the source spectra were also biased towards longer wavelengths, away from the regime where thermal sources are brightest, especially if they are optically thick. The observations made with the VLA and Pie Town combination have provided the highest resolution images of the central regions of M82 at 15 GHz and clearly allowed an extension of previous work into the thermal regime. The peak brightness temperature of a radio source can also be used as a constraint when identifying H[ii]{} regions. The brightness temperature, $T_b$, is related to the electron temperature, $T_e$, by $$\label{Tb_equation}
T_b = T_e(1-e^{-\tau}),$$ where $\tau$ is the optical depth. Hence in the optically-thick regime ($\tau \gg$ 1) $T_b \approx T_e$. Since ionised gas becomes opaque towards lower frequencies it was decided to measure the brightness temperatures from the lowest frequency map available, which was the L-band (1420 MHz) map of @Pedlar95 and Figure \[bias\_Tb\_fig\] shows the measured brightness temperatures plotted against the source spectral indices. It can be seen that the brightness temperatures of the H[ii]{} regions all lie below 10$^4$ K, as indicated by the horizontal line. It should be noted that the measured brightness temperature of a thermal source is a lower limit for the electron temperature because as an H[ii]{} region becomes optically thin the brightness temperature becomes less than the electron temperature. It can also be seen that the brightness temperatures derived for the supernova remnants are mostly in excess of 10$^4$ K, although since the previous studies of the spectra by W97 and AK98 concentrated only on the brightest sources this can be mainly attributed to a selection effect.
The radio source distribution in M82
------------------------------------
Since the number of identified sources in M82 is now 46, a rudimentary statistical analysis of their distributions may be attempted. Comparing the distribution of H[ii]{} regions with that of the identified supernova remnants, the two distributions differ at the 80$\%$ confidence level. This confidence level was derived using the two-dimensional Kolmogorov-Smirnov test of @Fasano87 and @Peacock83. Figure \[histogramfig\] shows the one-dimensional distributions of the right ascension coordinates of the compact radio sources in M82. The top panel shows the distribution of H[ii]{} regions and the bottom panel shows the distribution of supernova remnants, including an additional 14 unidentified sources. These sources were noted by @Willsthesis but were undetected at 408 MHz and were not included in the spectral analysis in W97. However, these sources are almost certainly supernova remnants, since they are generally present on low-frequency ($\sim$1.6 GHz) maps, but not at higher frequencies, implying a non-thermal spectrum. Due to their low brightnesses a spectral analysis has yet to be made for them. In addition, taking into account the transient source at 41.50+59.7, the total number of compact sources detected in M82 is at least 61.
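The two-dimensional Kolmogorov-Smirnov statistic of @Peacock83, in the simplified form of @Fasano87, has no standard library implementation. The sketch below is our own illustration of the statistic only; significance levels such as those quoted above would then be obtained by permutation resampling of the two position lists:

```python
import numpy as np

def quadrant_fractions(pts, x0, y0):
    """Fraction of points in each of the four quadrants about (x0, y0)."""
    x, y = pts[:, 0], pts[:, 1]
    return np.array([
        np.mean((x > x0) & (y > y0)),
        np.mean((x <= x0) & (y > y0)),
        np.mean((x <= x0) & (y <= y0)),
        np.mean((x > x0) & (y <= y0)),
    ])

def ks2d(a, b):
    """Two-sample 2-D KS statistic: the maximum quadrant-fraction
    difference over quadrants centred on every data point of both samples."""
    d = 0.0
    for x0, y0 in np.vstack([a, b]):
        diff = np.abs(quadrant_fractions(a, x0, y0) - quadrant_fractions(b, x0, y0))
        d = max(d, float(np.max(diff)))
    return d
```

Two identical position lists give a statistic of zero, while two well-separated lists give a statistic near unity; the confidence with which the source distributions differ follows from comparing the observed statistic with its distribution over many random relabellings of the sources.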
The propagation of star-forming regions within M82 has been discussed by @Satyapal97 and @DeGrijs00. On the basis of a changing CO index across M82 and other indicators of an old stellar population in the nucleus, @Satyapal97 suggested that the starburst is propagating outwards from the centre. However, @DeGrijs00 reported the discovery of ‘fossil’ supernova remnants away from the nucleus ($\sim$500 pc away) and suggested that the star-formation is propagating inwards. In order to test these hypotheses with the radio data, the sizes of the H[ii]{} region and SNR distributions shown in Figure \[histogramfig\] were examined, since the H[ii]{} regions trace the hot, young stars and are more indicative of recent star-formation than the supernova remnants, which result from star-formation $\sim$10$^7$ years earlier. Figure \[ionisedgasfig\] shows how the positions of the 16 H[ii]{} regions correlate with the large-scale distribution of ionised gas in M82, as traced by the 12.8 $\mu$m \[NeII\] map of @Achtermann95. The \[NeII\] distribution has a width of $\sim$20$''$ at half-intensity, compared with a width of $\sim$35$''$ for the supernova remnant distribution in Figure \[histogramfig\], which may seem to favour the inwardly propagating star-formation hypothesis of @DeGrijs00. However, small-number statistics in the compact source distribution do not allow a robust distinction to be drawn between the two possibilities.
Again referring to Figure \[ionisedgasfig\], a number of the thermal sources appear to be associated with the two peaks in the \[NeII\] emission in the west, although several of the sources lie in regions of low \[NeII\] intensity. However, a further two-dimensional Kolmogorov-Smirnov test comparing the compact H[ii]{} region distribution with the \[NeII\] line distribution showed the two distributions to be consistent at the 90$\%$ significance level.
\[nonthermal\]Compact radio sources of non-thermal origin
---------------------------------------------------------
For the 30 non-thermal sources, it can be seen from Table \[IDtable\] that the measured two-point spectral indices are often significantly different from those derived for the models of W97 and AK98. As shown in Figure \[spix\_fig\], these sources are often highly resolved and more morphologically complex than the compact H[ii]{} regions identified previously. The two-point spectral indices appear to be more consistent with the models for the brightest, most compact sources and less consistent for the weaker, complex sources. This discrepancy may be partly due to the ‘resolving out’ of larger scale emission. In any case, deciding at what size scale some of the sources ‘start’ is a very subjective matter.
The compact radio sources of thermal origin
-------------------------------------------
In the previous studies of the spectra of the most compact radio sources in M82, only two sources have been identified as being of possible thermal origin, 40.62+56.0 (W97) and 42.21+59.2 (AK98). It should be noted at this stage that we identify 40.62+56.0 as a supernova remnant due to the extension of the spectrum of this source to higher frequencies than those presented in W97. The spectrum of this source is shown in Figure \[spectra\_fig\] and if the 15 GHz measurement is excluded then the spectrum agrees with that in W97 and appears to be flat. The higher frequency measurement, however, implies a non-thermal origin for the radio emission from this source. In Section \[ID\_section\], 16 sources (including 42.21+59.2) were identified for which positive or flat spectral indices have been derived which are attributable to free-free emission from ionised gas. Table \[HIItable\] summarises the inferred properties of these H[ii]{} regions. For the resolved sources, the sizes were measured by taking a slice through the emission and determining the largest angular size. For the more compact sources, a two-dimensional Gaussian was fitted to the source using the [aips]{} task [jmfit]{}. The emission measures were derived from the fits to the spectra of the sources as described in Section \[spectra\_section\]. For the cases where the sources did not display a turnover in their spectra at the transition between the optically thick and optically thin regimes, a lower limit for the emission measures was taken to be 9 $\times$ 10$^7$ cm$^{-6}$ pc. This limit corresponds to a turnover frequency somewhere above 5 GHz. By calculating the Lyman continuum flux required to ionise the H[ii]{} regions, given their sizes and emission measures stated in Table \[HIItable\], it is possible to derive an equivalent number of O5 stars, assuming that the Lyman continuum output of an O5 star is $\sim$5.1 $\times$ 10$^{49}$ photons s$^{-1}$ [@Spitzer78]. 
The number of O5 stars required to ionise these regions varies from a few to over 500. These regions may therefore be similar to those found in other galaxies such as NGC 2146 [@Tarchi00], Henize 2-10 [@Kobulnicky99] and NGC 5253 (Turner, Beck & Ho 2000), which were deemed to be the precursors of ‘super-star clusters’ or ‘proto-globular clusters’. No optical counterparts were found for the radio sources in these galaxies; however, since these galaxies are not face-on, the optical emission is likely to be obscured by intervening dust and/or the parent molecular cloud.
------------ ------ -------------------------------------------- ------------
Source Size Emission Measure Equivalent
ID (pc) (cm$^{-6}$ pc) number of
($\times 10^7 \times (T_e/10^4 K)^{1.35}$) O5 stars
38.76+53.4 1.9 1.2 3
39.29+54.2 1.8 2.5 6
39.68+55.6 5.6 21 520
40.95+58.8 7.2 13 530
40.96+57.9 2.3 2.0 8
41.17+56.2 7.9 10 490
41.64+57.9 5.6 $>$9.0 221
42.08+58.4 4.3 5.9 86
42.21+59.2 4.9 5.0 94
42.48+58.4 0.8 3.4 2
42.56+58.0 4.3 2.3 33
42.69+58.2 1.9 $>$9.0 25
44.93+63.9 3.7 $>$9.0 97
45.33+64.6 2.0 1.3 4
45.41+63.7 1.3 $>$9.0 12
46.17+67.6 3.6 6.9 70
------------ ------ -------------------------------------------- ------------
: \[HIItable\]Properties of the H[ii]{} regions in M82.
With the relatively high resolution provided by the VLA and MERLIN (85 mas = 1.3 pc) it is now possible to resolve the compact thermal sources. Variations in the two-point spectral indices can be seen across several of the sources, and Figure \[4221fig\] shows the spectral index variation across the brightest H[ii]{} region at 42.21+59.2. Three models for the electron density distribution were fitted to these data: a power law, a Gaussian distribution and a constant electron density sphere. As can be seen from Figure \[4221fig\], it is currently not possible to choose between these models. However, the increased spectral index towards the centre of the region may be attributed to increasing optical thickness due to an enhanced emission measure.
Conclusions {#discuss}
===========
The VLA in its A-configuration connected with the Pie Town VLBA antenna via a real-time fibre optic link has been used to image M82 at 15 GHz. These observations have provided the highest resolution images yet made at this frequency and allowed a direct comparison with 5 GHz MERLIN images at a similar resolution. The conclusions may be summarised as follows.
1. In addition to the 26 compact radio sources which have been previously identified in the nucleus of M82, a further 20 sources have been identified. The final identifications are summarised in Table \[final\_ID\_table\] (n.b. the controversial sources at 41.95+57.5 and 44.01+59.5 are included under ‘supernova remnants’). By comparison with the 12.8 $\mu$m \[NeII\] map of @Achtermann95, it was noted that the locations of the compact H[ii]{} regions broadly coincided with the peaks in the large-scale ionised gas distribution.
-------------------- ----------- ----------- -------
Before This work TOTAL
this work
Supernova Remnants 25 5 30
H[ii]{} Regions 1 15 16
-------------------- ----------- ----------- -------
: \[final\_ID\_table\]The number of identified compact H[ii]{} regions and young supernova remnants in M82.
2. Of the 15 new H[ii]{} regions identified, all have steep ‘inverted’ spectra at the highest resolution ($\alpha_5^{15} >$ +0.4). For the simple model of an H[ii]{} region in which S $\propto \nu^2$ in the optically-thick regime and S $\propto \nu^{-0.1}$ in the optically-thin regime this would imply that these sources are becoming optically thick between 5 and 15 GHz. However, @Olnon75 and @Panagia75 showed that simple models of ionised nebulae with significant electron density gradients could produce inverted spectra, the slope of which depends on the structure of the region. This may be the case for a number of the sources, but further high-resolution imaging at different frequencies is required to confirm or deny this possibility.
3. The low-resolution spectra of the regions in which the compact sources have been imaged show inverted spectra for about half of the sources and flattening spectra at the highest frequencies for the remainder. It is inferred that at the highest resolution only the most compact and optically-thick components of the ionised gas are being detected.
4. Of the 46 identified sources in M82, $\sim$ 35$\%$ are H[ii]{} regions and the remaining $\sim$ 65$\%$ probably have supernova origins. Therefore, it would appear that M82 is not as lacking in compact H[ii]{} regions, in comparison with other galaxies such as NGC 2146 [@Tarchi00], as was previously thought, although it still appears that M82 is in a more advanced starburst stage than NGC 2146, given its greater number of supernova remnants.
5. Five additional sources were identified as supernova remnants since they have steep, negative spectral indices at the higher frequencies. At the lower frequencies their spectra often show a turnover by 1.4 GHz, requiring emission measures for the foreground ionised gas of $\sim$ 10$^7$ cm$^{-6}$ pc. These supernova remnants are obscured at the lower radio frequencies, where previous studies have concentrated, and may represent those supernova remnants embedded deeper within M82 and hence lying behind a greater amount of ionised gas.
6. An additional 14 unidentified sources were added to the analysis and after taking into account the transient source noted by Kronberg, Biermann & Schwab (1985), the total number of compact sources now stands at 61. The 14 unidentified sources are most likely the older supernova remnants in the sample and hence are not bright enough to have been included in the essentially flux-limited analyses. However, assuming that they are all SNR, a comparison between the H[ii]{} region distribution and SNR distribution shows significant differences, although any difference in the overall extent of the distributions is not significant. Hence, the radio data cannot be used to either confirm or deny the propagating star-formation hypotheses of @Satyapal97 and @DeGrijs00.
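The optical-depth arguments used in points 2 and 5 can be sketched with the standard free-free opacity approximation $\tau_\nu \approx 3.28 \times 10^{-7}\,(T_e/10^4\,{\rm K})^{-1.35}\,(\nu/{\rm GHz})^{-2.1}\,({\rm EM}/{\rm cm^{-6}\,pc})$; the numerical coefficient is a standard textbook approximation assumed here, not quoted above. Setting $\tau_\nu = 1$ gives the turnover frequency:

```python
def tau_ff(nu_ghz, em_cm6pc, te=1.0e4):
    """Free-free optical depth in the standard power-law approximation."""
    return 3.28e-7 * (te / 1.0e4) ** -1.35 * nu_ghz ** -2.1 * em_cm6pc

def em_turnover(nu_ghz, te=1.0e4):
    """Emission measure for which tau_ff = 1 at the given frequency."""
    return nu_ghz ** 2.1 * (te / 1.0e4) ** 1.35 / 3.28e-7
```

A turnover at 5 GHz corresponds to EM $\approx$ 9 $\times$ 10$^7$ cm$^{-6}$ pc (the lower limit adopted for the sources without an observed turnover), while a turnover by 1.4 GHz requires a foreground EM of order 10$^7$ cm$^{-6}$ pc, as in point 5.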
Acknowledgements {#acknowledgements .unnumbered}
----------------
MERLIN is a national facility operated by the University of Manchester on behalf of PPARC. The VLA is operated by the National Radio Astronomy Observatory, which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. ARM acknowledges the receipt of a PPARC postgraduate research grant.
\[lastpage\]
[^1]: All positions within M82 are quoted relative to (B1950) 09$^h$ 51$^m$ +69$^{\circ}$ 54$'$
---
abstract: 'A brief review of the latest developments in the spectroscopy of heavy quarks is presented. The current status of the recently ‘discovered’ pentaquarks is also discussed.'
address: |
Department of Physics and Astronomy, Northwestern University,\
Evanston, IL 60208, USA\
[email protected]
author:
- 'Kamal K. Seth'
title: 'QUARKONIA & PENTAQUARKS'
---
I am good at counting one, two. Three is difficult for me. So, I generally talk about mesons, and stay away from baryons. When the organizers asked me to talk also about pentaquarks, that became a real challenge. Since Frank Wilczek has told you all about the theoretical ideas behind pentaquarks, my job has become easier. I just have to tell you about experimental facts.
Heavy Quarkonia
===============
Light quark ($n\equiv u,d,s$) spectroscopy is very rich, and very tough. The quarks are highly relativistic in the hadrons they make, the strong coupling constant $\alpha_s$ is very large ($\sim0.6$), and the $u,d,s$ quarks have such similar masses that nearly all $|n\bar{n}>$ mesons are mixtures of all three flavours. This results in a very high density of overlapping states, difficult to disentangle, and even more difficult to understand. In contrast, the charm ($c$) and beauty ($b$) quarks are heavy enough so that relativistic problems are not too serious ($\left<v^2/c^2\right>\approx 0.1-0.2$), $\alpha_s$ is not too large ($\alpha_s\approx0.2-0.3$), and charmonium $|c\bar{c}>$ and bottomonium $|b\bar{b}>$ states are few and well resolved (see Fig. 1). This makes the spectroscopy of $|c\bar{c}>$ and $|b\bar{b}>$ particularly useful for the study of Quantum Chromodynamics.
Charmonium
----------
This is where it all began, with the 1974 discovery of $J/\psi$. From 1974 to 1985 a great deal of discovery physics in charmonium was done at SLAC, ORSAY and DESY, but precision was often lacking, except for the vector states, which could be directly produced in $e^+e^-$ annihilation. The widths of the triplet P-wave states could not be determined, and except for the ground state, no singlet S- and P-wave states could be successfully identified. The region above the $D\bar{D}$ threshold remained essentially unexplored.
{width="4.8in"}
During 1990–2000, the Fermilab experiments E760 and E835 exploited the ability of $p\bar{p}$ annihilations to make precision measurements of the masses and widths of $^3S_1$ ($J/\psi$, $\psi'$), and $1^3P$ ($\chi_0$, $\chi_1$, $\chi_2$) states, but were not so successful in making equally precise measurements of the singlet state $1^1S_0$ ($\eta_c$), and failed in identifying $2^1S_0$ ($\eta_c'$) and $1^1P_1$ ($h_c$). The region above $D\bar{D}$ threshold remained *terra incognita*. For a review see Ref. 1.
During the 1990s, the BES detector at the Beijing Electron-Positron Collider (BEPC) made important contributions to charmonium spectroscopy, primarily by investing much greater luminosity ($\sim\times10$) than SLAC+ORSAY+DESY. They also made some important excursions into the region above the $D\bar{D}$ threshold.
More recently, new players have emerged in the field. The CLEO detector at the CESR accelerator at Cornell, the Belle detector at KEK, and the BaBar detector at PEP II at Stanford are all beginning to produce extremely interesting results. I am going to describe some of these below. In the somewhat more distant future (2008 –) we expect new accelerators, BEPC II and FAIR at GSI, to come online and provide further insight into the physics of this mass region.
### The Spin Singlet States and the Hyperfine Interaction
The spin-independent $q\bar{q}$ interaction is well understood in terms of one-gluon exchange, and is very successfully modeled by a Coulombic 1/$r$ potential. The spin dependence which follows from this is also accepted. What is not understood is the nature of the confinement part of the interaction, which is generally modeled by a scalar potential proportional to $r$. A crucial test of the Lorentz nature of the confinement potential is provided by the measurement of hyperfine or spin-singlet/spin-triplet splittings. A scalar potential does not contribute to the spin-spin or hyperfine interaction, whereas for a Coulombic potential the spin-spin term is a contact interaction. As a consequence, hyperfine splitting is predicted to be finite only for S-wave states, and to be zero for P-wave and higher-L states.
### Hyperfine Splitting in S-wave Quarkonia
No singlet states have so far been identified in bottomonium. In charmonium, however, it has been established for a long time that $\Delta M(1S)_{hf}\equiv M(J/\psi, 1^3S_1)-M(\eta_c, 1^1S_0)=117\pm2$ MeV. It is interesting to determine the size of the hyperfine splitting of the $2S$ states, which sample the confinement region more deeply. Long ago Crystal Ball[@cball] claimed the identification of $\eta_c'$ with $M(\eta_c')=3594\pm5$ MeV, leading to $\Delta M(2S)_{hf}=92\pm5$ MeV, which kind of made sense with $\Delta M(1S)_{hf}=117\pm2$ MeV. Most potential model calculations tried to accommodate this ‘experimental’ result, although it was not confirmed by any subsequent measurement, and was actually dropped by the PDG meson summary.
The search for $\eta_c'$ has finally ended. Belle[@bellea] announced it first in two different decays of large samples of B-mesons. CLEO[@cleoa] and BaBar[@babara] have both identified it in the two-photon fusion reaction, $e^+e^-\to(e^+e^-)\gamma\gamma,\gamma\gamma\to\eta_c'\to K_SK\pi$. The CLEO measurement is shown in Fig. 2. The exciting part of these measurements is that $M(\eta_c')_{avg}=3637.4\pm4.4$ MeV, which is almost 50 MeV larger than the old Crystal Ball claim, and it leads to a surprisingly small hyperfine splitting, $\Delta M(2S)_{hf}=48.6\pm4.4$ MeV. It is too early to say whether this can be explained in terms of channel mixing[@elq], or an unexpected contribution from the confinement potential.
{width="2.25in"}
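The splitting above follows directly from the measured masses; a minimal sketch, combining the $\psi'$ mass (3686.0 $\pm$ 0.1 MeV, a PDG-era value assumed here, not quoted in the text) with the new $\eta_c'$ average in quadrature:

```python
import math

def hf_splitting(m_triplet, err_t, m_singlet, err_s):
    """Hyperfine splitting M(n 3S1) - M(n 1S0), with errors added in quadrature."""
    return m_triplet - m_singlet, math.sqrt(err_t ** 2 + err_s ** 2)

# eta_c' average mass from the two-photon measurements: 3637.4 +- 4.4 MeV
dm2s, err2s = hf_splitting(3686.0, 0.1, 3637.4, 4.4)   # ~48.6 +- 4.4 MeV
# the old Crystal Ball value, 3594 +- 5 MeV, would instead give ~92 MeV
```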
### Hyperfine Splitting in P-wave Quarkonia
As mentioned already, hyperfine splitting is expected to be zero in all except S-wave states if the confinement potential is scalar, as is generally assumed. Thus it is expected that $\Delta M(1P)_{hf} \equiv \left<M(1^3P_J)\right>-M(1^1P_1)=0$, except for higher order contributions of no more than an MeV or two. Unfortunately, while $\left<M(1^3P_J)\right>=3525.31\pm0.07$ MeV[@cester], the $h_c(1^1P_1)$ has not been firmly identified. Let me, however, give you a preview with the statement that both Fermilab E835 and CLEO are working on the search for $h_c$. The E835 experiment is analyzing the reactions $p\bar{p}\to h_c\to\pi^0 J/\psi$ and $p\bar{p}\to h_c\to\gamma\eta_c$, and the preliminary results are that while the first reaction does not show a signal for $h_c$ formation[@dave], the second may. The CLEO team is analyzing $e^+e^-\to\psi'\to\pi^0 h_c,h_c\to\gamma\eta_c$ but has not presented any results so far \[Note: Since the conference, CLEO has announced its preliminary results with $M(h_c)=3524.8\pm0.7$ MeV[@amiran], with the consequent $\Delta M(1P)_{hf}=0.6\pm0.6$ MeV. It appears that there is no significant departure from the simple expectation, $\Delta M(1P)_{hf}$=0\].
### The $\rho-\pi$ Problem
Since the widths for leptonic decays, as well as 3-gluon decays to light hadrons, of both $J/\psi$ and $\psi'$ depend on the wave functions at the origin, pQCD predicts the equality of the ratios of branching ratios $$\frac{B(\psi'\to l^+l^-)}{B(J/\psi\to l^+l^-)} = (13\pm2)\% = \frac{B(\psi'\to LH)}{B(J/\psi\to LH)}.$$ This expectation has been extended to ratios of individual hadronic decays, and has led to many measurements by BES and CLEO to test it. The result is that while the sums of all hadronic decays do seem to follow this expectation, with $\sum_i B(\psi'\to LH)_i/\sum_i B(J/\psi\to LH)_i = (17\pm3)\%$, individual decays show large departures from it, the ratio being as small as 0.2% for the $\rho\pi$ decay. While many exotic theoretical suggestions have been made to explain these deviations, it appears that what we are witnessing is the failure of attempts to stretch pQCD beyond its limits of validity.
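The 13% benchmark is simply the ratio of the measured leptonic branching fractions; a sketch with illustrative PDG-era values (the specific branching fractions in the comments are assumptions for illustration, not quoted above):

```python
def ratio_percent(b_psi_prime, b_jpsi):
    """Ratio of a psi' branching fraction to the J/psi one, in percent."""
    return 100.0 * b_psi_prime / b_jpsi

# leptonic: B(psi' -> e+e-) ~ 0.75e-2 over B(J/psi -> e+e-) ~ 5.9e-2 gives ~13%,
# to be compared with the ~0.2% observed for the rho-pi channel
```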
### Higher Vector States
For a long time the parameters listed in the PDG compilation for the three vector states above the $D\bar{D}$ threshold have been based on the R-parameter measurement by the DASP group[@dasp], even though none of the other measurements of R agreed with it. Recent measurements by the BES group[@besa] have finally allowed us[@sethb] to make a reliable determination of these parameters. The result is that the total and leptonic widths of these states have changed by large amounts, e.g., $\Gamma(4039)=88\pm5$ MeV, instead of $52\pm10$ MeV. Other similar new results are $\Gamma(4153)=107\pm8$ MeV and $\Gamma(4426)=119\pm15$ MeV.
Bottomonium
-----------
Despite the fact that the bottomonium $b\bar{b}$ system is certainly more amenable to pQCD, we know far less about bottomonium than we know about charmonium. The $\eta_b$, the ground state of bottomonium, has not been identified so far. The vector states $\Upsilon(1S, 2S, 3S,$ and $4S)$ are known, but only one hadronic transition from these, $\Upsilon(nS)\to\Upsilon(n'S)\pi^+\pi^-\; (n'<n)$, has ever been observed. Radiative transitions $\Upsilon(nS)\to\gamma\chi_b(n'^3P)$ have been observed. No D-states, which are expected to be bound (see Fig. 1), have been observed. No hadronic transition from any $\chi_b$ state has ever been observed. Recently, CLEO has made small gains on both of the above problems. The $1^3D_2$ state has been successfully observed in the 4-photon cascade $\Upsilon(3S)\to\gamma_1(2P)\to\gamma_1\gamma_2(1D)\to\gamma_1\gamma_2\gamma_3(1P)\to\gamma_1\gamma_2\gamma_3\gamma_4\Upsilon(1S),\Upsilon(1S)\to l^+l^-$. The mass is $M(1^3D_2)=10,161.1\pm0.6\pm1.6$ MeV[@cleob]. In another measurement, $\Upsilon(3S)\to\gamma\chi_b(2P),\chi_b(1,2)\to\omega\Upsilon(1S)$ has also been observed[@cleoc].
Exotics
=======
I was going to talk about the classical exotics of QCD, the glueballs and hybrids, but because of the request of the conference organizers to talk about pentaquarks, I will simply refer you to my last review of glueballs and hybrids[@sethc]. To summarize, no consensus candidates for scalar or tensor glueballs have emerged so far. Candidates for $|q\bar{q}g>$ hybrids with exotic $J^{PC}=1^{-+}$ have indeed been claimed amid plenty of controversy.
Having been released from glueballs and hybrids, I can indulge in another class of exotics. These are the unexpected, uninvited, and therefore the exotic hadrons which have recently shown up in several hadron spectroscopy experiments.
The first of these was the discovery of narrow ($\Gamma<7$ MeV) $D_{sJ}$ resonances with $M(D^{*+}_s,J^{P}=0^+)=2317$ MeV, and $M(D^{*+}_s,J^{P}=1^+)=2462$ MeV by BaBar[@babarb] and CLEO[@cleod]. These states were expected at higher masses and therefore with large widths. Instead, they show up temptingly just below $DK$ and $D^*K$ thresholds, giving encouragement to molecular enthusiasts.
The second exotic is the discovery by Belle[@belleb] of a narrow resonance in $B$ decays, which was quickly confirmed by CDF. This resonance, dubbed X(3872), has a mass of $3872\pm1$ MeV, a width $<2.5$ MeV, and decays (it appears almost exclusively) to $\pi^+\pi^- J/\psi$. Again, since its mass, width, and decay make it difficult to fit it into the charmonium spectrum, and since $M(D^0)+M(\bar{D}^{*0})=3872$ MeV, it has provided more fodder to $|D^0\bar{D}^{*0}>$ molecule enthusiasts. At CLEO we have searched for this state in untagged two photon fusion (therefore $J^{PC}=0^{\pm,+},2^{\pm,+},...$) and in ISR (initial state radiation) mediated production, and have established quite stringent upper limits on its population in either production mode[@cleoe].
I finally come to the hotter than hot topic of...
Pentaquarks
===========
The pentaquark story starts with the announcement by LEPS at Osaka[@nakano] that there was a significant enhancement in the missing mass spectrum for $\gamma K^-$ in the reaction $\gamma n \to K^+K^- n$, with photons incident on a plastic scintillator (CH) target. The missing mass spectrum was interpreted to indicate a $|K^+n>$ state, dubbed $\Theta^+$, with mass $M(\Theta^+)=1.54\pm0.01$ GeV and width $\Gamma(\Theta^+)<25$ MeV, with a statistical significance of $4.6\sigma$. If true, the state had strangeness $S=+1$, and had to have at least five quarks. The pentaquark was born! In quick succession, CLAS claimed confirmation with photons incident on deuterium[@clasa] and hydrogen[@clasb], SAPHIR in $\gamma+p$[@saphir], ZEUS[@zeus] in $e^+p$ and $e^-p$ inelastic scattering, HERMES[@hermes] in $e^+d$ inelastic scattering, YEREVAN in $p$+propane[@yerevan], DIANA[@diana] in $K^+$+Xenon, SVD in $p$+Si[@svd], and COSY in $p+p$[@cosy]. Neutrinos were also not left behind, and ITEP[@itep] claimed $\Theta^+$ in $\nu+$H$_2$,D$_2$,Ne data from CERN and FNAL. In my memory, never before has such a stampede been witnessed!
The theoretical model for $\Theta^+$ which was in vogue is the antidecuplet model of Jaffe and Wilczek[@jw]. According to this model, there should be $(S=0)\;N^*$, $(S=-1)\;\Sigma$, and $(S=-2)\Xi$ cascade pentaquarks also. Sure enough, NA49[@na49] announced the observation of $\Xi(1862)$ as the $S=-2$ pentaquark in the reaction $p+p\to X+(\Xi^-\pi^-)$. Going one step further, H1[@h1] claimed the observation of a charmed pentaquark $\Theta_c(3099)$ in the reaction $e+p\to (D^*p)$.
One would think that with so many positive observations there is no doubt that pentaquarks exist. Unfortunately, this is not so. There are two reasons. The first is illustrated in my Table I. The fact is that despite the claimed significance levels up to $7.8\sigma$, a conservative uniform determination of significance, $\sigma=S/\sqrt{S+2B}$, where $S$ and $B$ are signal and background counts respectively, leads to the fact that none of the significance levels rises to the level of $5\sigma$, the criterion used by Physical Review Letters for a claim of observation.
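The "uniform significance" column of Table I uses the quoted estimator; a minimal sketch:

```python
import math

def uniform_significance(s, b):
    """Conservative significance estimate S / sqrt(S + 2B)."""
    return s / math.sqrt(s + 2.0 * b)
```

For example, for the LEPS signal ($S = 19$ counts), a background of $B \approx 15$ counts under the peak (an illustrative value back-solved from the table, not quoted by the experiment) brings the claimed $4.6\sigma$ down to $\approx 2.7\sigma$.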
[l|l|l|l|l|l|l]{}\
Mass & Width & $N$ & Signif. & Uniform & Reaction & Experiment\
(MeV) & (MeV) & & Claimed & signif. & $A + B \rightarrow X + \Theta$ &\
$\Theta^{+}(1540)$ & & & & & &\
$1540 \pm 10 \pm 5$ & $< 25$ & 19 & 4.6 $\sigma$ & $\sim 2.7\sigma$ & $\gamma +$ C $\rightarrow X + (n K^{+})$ & LEPS\
$1542 \pm 2 \pm 5$ & $< 21$ & 43 & 5.2 $\sigma$ & $\sim 3.5\sigma$ & $\gamma + d$ $\rightarrow X + (nK^{+})$ & CLAS\
$1540 \pm 4 \pm 3$ & $< 25$ & 63 & 4.8 $\sigma$ & $\sim 4.3\sigma$ & $\gamma + p$ $\rightarrow X + (nK^{+})$ & SAPHIR\
$1555 \pm 1 \pm 10$ & $< 26$ & 41 & 7.8 $\sigma$ & $\sim 4.0\sigma$ & $\gamma + p$ $\rightarrow X + (nK^{+})$ & CLAS\
& & & & & &\
$1539 \pm 2 \pm 2$ & $< 9$ & 29 & 4.4 $\sigma$ & $\sim 3.0\sigma$ & $K^{+} +$ Xe $\rightarrow X + (pK^0_s)$ & DIANA\
$1533 \pm 5 \pm 3$ & $< 20$ & 27 & 6.7 $\sigma$ & $\sim 4.0\sigma$ & $\nu +$ Ne $\rightarrow X + (pK^0_s)$ & ITEP\
$1528 \pm 4$ & $< 19$ & 60 & 5.8 $\sigma$ & $\sim 4.0\sigma$ & $\gamma^{*} + d$ $\rightarrow X + (pK^0_s)$ & HERMES\
$1526 \pm 3 \pm 3$ & $< 24$ & 50 & 5.6 $\sigma$ & $\sim 3.5\sigma$ & $p +$ Si $\rightarrow X + (pK^0_s)$ & SVD-2\
$1530 \pm 5$ & $< 18$ & 50 & 3.7 $\sigma$ & $\sim 3.7\sigma$ & $p + p$ $\rightarrow X + (pK_s^0)$ & COSY\
$1545 \pm 12$ & $< 40$ & 100 & 5.5 $\sigma$ & $\sim 4.0\sigma$ & $p +$ prop $\rightarrow X + (pK^0_s)$ & YEREVAN\
$1522 \pm 2 \pm 3$ & $< 6$ & 221 & 4.6 $\sigma$ & $\sim 3.6\sigma$ & $\gamma^{*} + p$ $\rightarrow X + (p^{\pm}K_s)$ & ZEUS\
$\Xi(1862)$ & & & & & &\
$(S = -2)$ & & & & & &\
$1862 \pm 2$ & $< 21$ & 65 & 5.8 $\sigma$ & $\sim 4.7\sigma$ & $p + p$ $\rightarrow X + (\Xi^{-}\pi^{-})$ & NA49\
$\Theta(3099)$ & & & & & &\
$(C = -1)$ & & & & & &\
$3099 \pm 3 \pm 5$ & $< 35$ & 51 & 5.4 $\sigma$ & $\sim 4.2\sigma$ & $e + p$ $\rightarrow X + (D^{*}p)$ & HERA\
The second reason is much more important. Since pentaquarks are so novel and exciting, many more experiments have attempted to find them, but failed. Those which have tried, but failed to find $\Theta^+$ include BES[@besb], FNAL E690[@e690], FNAL E871 (HyperCP)[@e871], CDF[@cdf], BaBar[@babarc], ALEPH[@aleph], DELPHI[@delphi], PHENIX $(\bar{\Theta})$[@phenix], and HERA-B[@herab]. Similarly, CDF[@cdf], BaBar[@babarc], ALEPH[@aleph], HERA-B[@herab], and ZEUS[@zeusb] find no evidence for $\Xi(1860)$. CDF[@cdf], ALEPH[@aleph], ZEUS[@zeus], and FNAL E831 FOCUS[@focus] also find no evidence for $\Theta_c$. It is not looking very good for the survival of any of the pentaquarks!
Finally, I have to confess to my personal skepticism about pentaquarks. Nearly two decades ago, there was a similar stampede for the existence of dibaryons, and for several years I made valiant searches for them. While claims for nearly 40 dibaryons with masses between 1900 and 2250 MeV were made, not a single one survived high resolution, high statistics measurements[@sethd].
This research was supported by the U. S. Department of Energy.
Note: The references listed below include several which were not published at the time of the conference, but have become available since then.
[99]{}
K. K. Seth, *Prog. Part. Nucl. Phys.* **50**(2004)341-352.
C. Edwards et al., *Phys. Rev. Lett.* **48**(1982)70.
Belle Collaboration, S. K. Choi et al., *Phys. Rev. Lett.* **89**(2002)102001; K. Abe et al., *Phys. Rev. Lett.* 89(2002)142001.
CLEO Collaboration, D. M. Asner et al., *Phys. Rev. Lett.* **92**(2004)142001.
BaBar Collaboration, B. Aubert et al., *Phys. Rev. Lett.* **92**(2004)142002.
E. J. Eichten, K. Lane, and C. Quigg, *Phys. Rev.* **D** **69**(2004)094019.
R. Cester, *Proc. Frontier Science 2002*, Frascati Physics Series (2003) 41.
D. Joffe, Ph D. dissertation, Northwestern University, 2004, unpublished.
A. Tomaradze, Proc. APS meeting of GHP, Fermilab (Oct. 2004), to be published by IOP.
DASP Collaboration, R. Brandelik et al., *Z. Phys.* **C1**(1979)233.
BES Collaboration, J. Z. Bai et al., *Phys. Rev. Lett.* **88**(2002)101802.
K. K. Seth, `hep-ex/0405007`.
CLEO Collaboration, G. Bonvicini et al., *Phys. Rev.* **D** **70**(2004)032001.
CLEO Collaboration, D. Cronin-Hennessy, *Phys. Rev. Lett.* **92**(2004)22202.
K. K. Seth, *Modern Phys. Lett.* **A 18**(2003)330-339.
BaBar Collaboration, B. Aubert et al., *Phys. Rev. Lett.* **90**(2003)242001.
CLEO Collaboration, D. Besson et al., *Phys. Rev.* **D** **68**(2003)032002.
Belle Collaboration, S. K. Choi et al., *Phys. Rev. Lett.* **91**(2003)262001.
CLEO Collaboration, S. Dobbs, et al., `hep-ex/0410038`, submitted to *Phys. Rev. Lett.*
T. Nakano et al., *Phys. Rev. Lett.* **91**(2003)012002.
CLAS Collaboration, S. Stepanyan et al., *Phys. Rev. Lett.* **91**(2003)252001.
CLAS Collaboration, V. Kubarovsky et al., *Phys. Rev. Lett.* **92**(2004)032001.
SAPHIR Collaboration, J. Barth et al., *Phys. Lett.* **B 572**(2003)127.
ZEUS Collaboration, S. Chekanov et al., *Phys. Lett.***B 591**(2004)7.
HERMES Collaboration, A. Airapetian et al., *Phys. Lett.***B 585**(2004)213.
YEREVAN, P. Zh. Aslanyan, V. N. Emelyanenko and G. G. Rikhkvitzkaya, `hep-ex/0403044`.
DIANA Collaboration, V. V. Barmin et al., *Phys. Atom. Nucl.* **66**(2003)1715.
SVD Collaboration, A. Aleev et al., `hep-ex/0401024`, submitted to *Yad. Fiz.*.
COSY-TOF Collaboration, M. Abdel-Bary et al., *Phys. Lett.* **B 595**(2004)127.
ITEP, A. E. Asratyan et al., *Phys. Atom. Nucl.* **67**(2004)684.
R. Jaffe and F. Wilczek, *Phys. Rev. Lett.* **91**(2003)232003.
NA49 Collaboration, C. Alt et al., *Phys. Rev. Lett.* **92**(2004)042003.
H1 Collaboration, A. Aktas et al., *Phys. Lett.* **B 588**(2004)17.
BES Collaboration, J. Z. Bai et al., *Phys. Rev.* **D** **70**(2004)012004.
FNAL(E690), D. Christian, QNP Conference, Bloomington, 2004; `http://www.qnp2004.org/`.
FNAL(E871), HyperCP Collaboration, M. J. Longo et al., `hep-ex/0410027`.
CDF Collaboration, D. Litvintsev, `hep-ex/0410024`.
BaBar Collaboration, B. Aubert et al., `hep-ex/0408064`.
ALEPH Collaboration, S. Schael et al., *Phys. Lett.* **B 599**(2004)1.
DELPHI Collaboration, S. Raducci, P. Abreu and A. De Angelis, see S. R. Armstrong in `hep-ex/0410080`.
PHENIX Collaboration, C. Pinkenburg, *J. Phys. G.* **30**(2004)S1201.
HERA-B Collaboration, K. T. Knöpfle et al. *J. Phys. G.* **30**(2004)S1363.
ZEUS Collaboration, U. Karshon `hep-ex/0410029`.
FOCUS Collaboration, `http://www-focus.fnal.gov/penta/penta_charm.html`.
K. K. Seth, “Dibaryons in Theory and Practice”,\
`http://bartok.phys.northwestern.edu/papers.html`.
---
abstract: 'In this letter we study a class of symmetries of the new translational extended shape invariant potentials. It is proved that a generalization of a compatibility condition introduced in a previous article is equivalent to the usual shape invariance condition. We focus on the recent examples of Odake and Sasaki (infinitely many polynomial, continuous $l$ and multi-index rational extensions). As a byproduct, we obtain new relations, to the best of our knowledge, for Laguerre, Jacobi polynomials and (confluent) hypergeometric functions.'
address: |
Departamento de Análisis Económico, Universidad de Zaragoza,\
Gran Vía 2, E-50005 Zaragoza, Spain
author:
- Arturo Ramos
title: Symmetries and the compatibility condition for the new translational shape invariant potentials
---
\#1\#2
(
[\#1]{} \#2
)
ł Ł \#1\#2
shape invariance ,compatibility condition
81Q05 ,81Q60
Introduction
============
The list of shape invariant potentials had remained essentially unchanged until 2008. Then, key contributions of Gómez-Ullate et al. led to a quick and strong development of the subject in recent years. The first step was the possibility of rationally extending shape-invariant potentials (to obtain non shape invariant ones) [@GomKamMil04; @GomKamMil04b]. Then, the introduction of the so-called $X_{l}$ exceptional Laguerre and Jacobi polynomials [@GomKamMil10; @GomKamMil09] fostered all subsequent works. On the one hand, Quesne (and coworkers) [@Que08; @Que09; @BagQueRoy09] introduced the first examples of rationally extended shape invariant potentials. This idea has been greatly developed by Odake and Sasaki [@OdaSas09; @OdaSas10; @OdaSas10b; @OdaSas11; @OdaSas11b] to infinitely many families of rationally extended shape-invariant potentials, even with functions depending on a continuous index ${l}$ and multi-indexed polynomials. They have also extended these ideas to the context of discrete quantum mechanics (see, e.g., [@OdaSas10c] and references therein). Other works by Grandati [@Gra11; @Gra12; @Gra12b] are closely related to the ones cited.
On the other hand, the works [@Que08; @BouGanMal10; @BouGanMal11] inspired our recent article [@Ram11], where a compatibility condition was found that is satisfied by the new examples. We have even shown that such a condition forces the shape invariance of the examples treated there. It is worth mentioning that [@BouGanMal10; @BouGanMal11] are preceded by [@GanMal08]. This letter continues along the same line of study and shows that the examples of [@OdaSas09; @OdaSas10; @OdaSas10b; @OdaSas11] fit perfectly into our framework, satisfying the mentioned compatibility condition.
The letter is organized as follows. In the second section we recall the equations which satisfy the new translational shape invariant potentials of [@Que08; @Que09; @BagQueRoy09; @BouGanMal11; @Ram11]. We prove the equivalence between a generalization of the cited compatibility condition and the usual shape invariance condition. Afterwards, we comment on the isospectrality properties of the potentials involved. In the third section we describe how the examples of [@OdaSas09; @OdaSas10; @OdaSas10b; @OdaSas11] fit into our framework. We obtain as a byproduct new relations, to the best of our knowledge, for Laguerre, Jacobi polynomials and (confluent) hypergeometric functions. In the fourth and last section we offer some conclusions.
Symmetries and the relation of the compatibility condition with the shape invariance condition\[eccsic\]
========================================================================================================
For a brief account of shape invariance, see, e.g., [@Ram11] and references therein. In the examples of [@Que08; @Que09; @BagQueRoy09; @OdaSas09; @OdaSas10; @OdaSas10b; @OdaSas11; @BouGanMal11; @Ram11] the superpotential function takes the form of W(x,a)=W\_0(x,a)+W\_[1+]{}(x,a)-W\_[1-]{}(x,a), \[Wgen\] where $a$ denotes the set of parameters under transformation. $W_0(x,a)$ is the superpotential of a pair of shape invariant partner potentials of the classical type. $W_{1+}(x,a)$, $W_{1-}(x,a)$ are logarithmic derivatives which moreover satisfy $$W_{1-}(x,a)=W_{1+}(x,f(a))\,, \label{sic2}$$ where $f(a)$ in those cases is a translation of $a$.
The corresponding partner potentials for (\[Wgen\]) are V(x,a)&=&W\_0\^2(x,a)-W\_0\^(x,a)\
& &+W\_[1+]{}\^2(x,a)+W\_[1+]{}\^(x,a) +W\_[1-]{}\^2(x,a)+W\_[1-]{}\^(x,a)\
& &-2W\_0(x,a)W\_[1-]{}(x,a)+2W\_0(x,a)W\_[1+]{}(x,a)\
& &-2W\_[1-]{}(x,a)W\_[1+]{}(x,a)-2W\_[1+]{}\^(x,a) \[Vcom\]\
V(x,a)&=&W\_0\^2(x,a)+W\_0\^(x,a)\
& &+W\_[1+]{}\^2(x,a)+W\_[1+]{}\^(x,a) +W\_[1-]{}\^2(x,a)+W\_[1-]{}\^(x,a)\
& &-2W\_0(x,a)W\_[1-]{}(x,a)+2W\_0(x,a)W\_[1+]{}(x,a)\
& &-2W\_[1-]{}(x,a)W\_[1+]{}(x,a)-2W\_[1-]{}\^(x,a) \[Vtilcom\] However, for the examples of [@Que08; @Que09; @BagQueRoy09; @BouGanMal11] such partner potentials reduce to V(x,a)&=&V\_0(x,a)-2W\_[1+]{}\^(x,a), \[Vg\]\
V(x,a)&=&V\_0(x,a)-2W\_[1-]{}\^(x,a), \[tilVg\] where $V_0(x,a)$, $\widetilde V_0(x,a)$ conform the pair of shape invariant partner potentials associated to $W_0(x,a)$. Thus, it is in principle necessary that the following *compatibility condition* holds: $$W_{1+}^2+W_{1+}^{\prime}+W_{1-}^2+W_{1-}^{\prime}
-2W_0W_{1-}+2W_0W_{1+}-2W_{1-}W_{1+}=0 \label{cc1}$$ (the dependence on the arguments has been omitted for brevity). Such compatibility condition is the main object of our interest here. First we will discuss a kind of symmetries of the problems of type (\[Wgen\]), (\[sic2\]), (\[Vcom\]), (\[Vtilcom\]). Afterwards we establish the relation between a generalized compatibility condition and the ordinary shape invariance condition.
Symmetries of the new translational shape invariant potentials\[symt\]
----------------------------------------------------------------------
There exists a class of symmetries of superpotentials of type (\[Wgen\]) which satisfy the condition (\[sic2\]), given by the transformations W\_[1+]{}(x,a)&=&U\_[1+]{}(x,a)-g(x) \[trW1p\]\
W\_[1-]{}(x,a)&=&U\_[1-]{}(x,a)-g(x) \[trW1m\] where $g(x)$ is a function *depending only on $x$*. The function $g(x)$ must be differentiable in the domain of interest but otherwise arbitrary. For example, $g(x)$ could be any polynomial, ${\rm e}^x$, etc. Thus we have $$W(x,a)=W_0(x,a)+W_{1+}(x,a)-W_{1-}(x,a)
=W_0(x,a)+U_{1+}(x,a)-U_{1-}(x,a)$$ The corresponding partner potentials (\[Vcom\]), (\[Vtilcom\]) are likewise invariant under (\[trW1p\]) and (\[trW1m\]). However, their different terms do vary, in such a way that their variations cancel out. Firstly, we have & &W\_[1+]{}\^2+W\_[1+]{}\^+W\_[1-]{}\^2+W\_[1-]{}\^ -2W\_0W\_[1-]{}+2W\_0W\_[1+]{}-2W\_[1-]{}W\_[1+]{}\
& &=U\_[1+]{}\^2+U\_[1+]{}\^+U\_[1-]{}\^2+U\_[1-]{}\^ -2W\_0U\_[1-]{}+2W\_0U\_[1+]{}-2U\_[1-]{}U\_[1+]{}\
& &-2g\^(x) and moreover -2W\_[1+]{}\^(x,a)&=&-2U\_[1+]{}\^(x,a)+2g\^(x)\
-2W\_[1-]{}\^(x,a)&=&-2U\_[1-]{}\^(x,a)+2g\^(x) Therefore, if (\[cc1\]) holds, we have & &U\_[1+]{}\^2(x,a)+U\_[1+]{}\^(x,a) +U\_[1-]{}\^2(x,a)+U\_[1-]{}\^(x,a)\
& &-2W\_0(x,a)U\_[1-]{}(x,a)+2W\_0(x,a)U\_[1+]{}(x,a)-2U\_[1-]{}(x,a)U\_[1+]{}(x,a) =2g\^(x) This means that by virtue of a symmetry of the problem, the compatibility condition (\[cc1\]) should be generalized in such a way that its right hand side could be a function of $x$ not necessarily equal to zero. This observation leads to our main result in the following subsection.
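This cancellation is simple to verify symbolically. The following sketch (our own illustration, assuming sympy is available; the names `V_minus`, `U1p`, `U1m` are ours) builds the partner potential (\[Vcom\]) for arbitrary functions and checks that the shift $W_{1\pm}\to U_{1\pm}-g(x)$ leaves it invariant:

```python
import sympy as sp

x = sp.symbols('x')
W0, U1p, U1m, g = (sp.Function(n)(x) for n in ('W0', 'U1p', 'U1m', 'g'))
d = lambda f: sp.diff(f, x)

def V_minus(W1p, W1m):
    # partner potential V(x,a) of eq. (Vcom)
    return (W0**2 - d(W0) + W1p**2 + d(W1p) + W1m**2 + d(W1m)
            - 2*W0*W1m + 2*W0*W1p - 2*W1m*W1p - 2*d(W1p))

# the shift W_{1+-} = U_{1+-} - g(x) leaves V unchanged
delta = sp.expand(V_minus(U1p - g, U1m - g) - V_minus(U1p, U1m))
print(sp.simplify(delta))  # -> 0
```

The $g$-dependent pieces of the quadratic and cross terms cancel against the $\pm 2g^{\prime}(x)$ coming from the derivative terms, exactly as in the computation above.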
Compatibility and shape invariance conditions
---------------------------------------------
For the class of problems described in this letter, there is an equivalence between the mentioned generalized compatibility condition and the usual shape invariance condition, as described in the next Theorem.
Assume we have a superpotential of the type $$W(x,a)=W_0(x,a)+W_{1+}(x,a)-W_{1-}(x,a)\,,$$ where $$W_{1-}(x,a)=W_{1+}(x,f(a))\,,$$ $f(a)$ being the transformation on the parameters $a$, and $W_0(x,a)$ satisfies the shape invariance condition $$W_0^2(x,a)-W_0^2(x,f(a))+W_0^\prime(x,f(a))+W_0^\prime(x,a)=R(f(a)). \label{siW0}$$ Then, the shape invariance condition for $W(x,a)$, $$W^2(x,a)-W^2(x,f(a))+W^\prime(x,f(a))+W^\prime(x,a)=R(f(a))\,, \label{siWt}$$ holds if and only if $$\begin{aligned}
& &W_{1+}^2(x,a)+W_{1+}^{\prime}(x,a)+W_{1-}^2(x,a)+W_{1-}^{\prime}(x,a)\nonumber\\
& &\quad-2W_0(x,a)W_{1-}(x,a)+2W_0(x,a)W_{1+}(x,a)-2W_{1-}(x,a)W_{1+}(x,a)\nonumber\\
& &\quad=\epsilon(x) \label{cidelta}\end{aligned}$$ for some non-singular function $\epsilon(x)$ of $x$ only.
[*Proof*]{}
The condition of shape invariance (\[siWt\]) reads in this case $$\begin{aligned}
& &W^2(x,a)-W^2(x,f(a))+W'(x,f(a))+W'(x,a)-R(f(a))=\nonumber\\
& &\quad W_0^2(x,a)-W_0^2(x,f(a))+W_0'(x,f(a))+W_0'(x,a)-R(f(a))\nonumber\\
& &\quad+W_{1+}^2(x,a)+W_{1+}^{\prime}(x,a)
+W_{1-}^2(x,a)+W_{1-}^{\prime}(x,a) \nonumber\\
& &\quad-2W_0(x,a)W_{1-}(x,a)+2W_0(x,a)W_{1+}(x,a)
-2W_{1-}(x,a)W_{1+}(x,a)\nonumber\\
& &\quad -[W_{1+}^2(x,f(a))+W_{1+}^{\prime}(x,f(a))
+W_{1-}^2(x,f(a))+W_{1-}^{\prime}(x,f(a)) \nonumber\\
& &\quad -2W_0(x,f(a))W_{1-}(x,f(a))+2W_0(x,f(a))W_{1+}(x,f(a))\nonumber\\
& &\quad -2W_{1-}(x,f(a))W_{1+}(x,f(a))]-2W_{1-}^{\prime}(x,a)
+2W_{1+}^{\prime}(x,f(a))=0\nonumber\\
& & \label{gor}\end{aligned}$$ With the hypothesis that $W_0(x,a)$ satisfies (\[siW0\]), that $W_{1-}(x,a)=W_{1+}(x,f(a))$ and that $$\begin{aligned}
& &W_{1+}^2(x,a)+W_{1+}^{\prime}(x,a)+W_{1-}^2(x,a)+W_{1-}^{\prime}(x,a)\nonumber\\
& &\quad-2W_0(x,a)W_{1-}(x,a)+2W_0(x,a)W_{1+}(x,a)-2W_{1-}(x,a)W_{1+}(x,a)\nonumber\\
& &\quad=\epsilon(x)\nonumber\\
& &W_{1+}^2(x,f(a))+W_{1+}^{\prime}(x,f(a))+W_{1-}^2(x,f(a))+W_{1-}^{\prime}(x,f(a))\nonumber\\
& &\quad-2W_0(x,f(a))W_{1-}(x,f(a))+2W_0(x,f(a))W_{1+}(x,f(a))\nonumber\\
& &\quad-2W_{1-}(x,f(a))W_{1+}(x,f(a))=\epsilon(x)\nonumber\end{aligned}$$ the shape invariance condition is readily satisfied.
Conversely, with the above hypothesis we assume that the shape invariance condition (\[siWt\]) is satisfied, therefore (\[gor\]) is also satisfied. Taking into account (\[siW0\]) and $W_{1-}(x,a)=W_{1+}(x,f(a))$ and rearranging, (\[gor\]) becomes $$\begin{aligned}
& &\quad W_{1+}^2(x,a)+W_{1+}^{\prime}(x,a)
+W_{1-}^2(x,a)+W_{1-}^{\prime}(x,a) \nonumber\\
& &\quad-2W_0(x,a)W_{1-}(x,a)+2W_0(x,a)W_{1+}(x,a)\nonumber\\
& &\quad -2W_{1-}(x,a)W_{1+}(x,a)=\nonumber\\
& &\quad W_{1+}^2(x,f(a))+W_{1+}^{\prime}(x,f(a))
+W_{1-}^2(x,f(a))+W_{1-}^{\prime}(x,f(a)) \nonumber\\
& &\quad -2W_0(x,f(a))W_{1-}(x,f(a))+2W_0(x,f(a))W_{1+}(x,f(a))\nonumber\\
& &\quad -2W_{1-}(x,f(a))W_{1+}(x,f(a))\nonumber\end{aligned}$$ that is, the expression evaluated at $(x,a)$ equals the expression itself evaluated at $(x,f(a))$, thus both expressions must be equal to a function of $x$ only, namely, $\epsilon(x)$. This ends the proof of the Theorem.
[*Remarks*]{}
1\. In actual examples it is observed that (\[cidelta\]) is satisfied with $\epsilon(x)=0$, which is a slightly stronger condition that, in particular, implies shape invariance.
2\. Note that Ho proposes in [@Ho11; @Ho11b] a superpotential of a form similar to (\[Wgen\]), but that approach is different: the relations satisfied there differ from (\[cc1\]) and (\[cidelta\]). As an example, in [@Ram11] it is shown that the harmonic oscillator and the Morse potential admit no non-trivial extensions by our means, whereas with the technique of Ho they do. See also [@CarPerRanSan08; @FelSmi09; @Gra11b; @Que12].
3\. We observe that the potentials in (\[Vg\]) and (\[tilVg\]) are related by construction through a first order intertwining relation, as described in Section 2 of [@Ram11], with superpotential (\[Wgen\]). The fulfillment of condition (\[cc1\]) provides a cancellation of some of their terms (it is the same condition for $V(x,a)$ in (\[Vg\]) and $\widetilde V(x,a)$ in (\[tilVg\])). Thus the isospectrality (maybe up to the ground state of one of them) of the potentials (\[Vg\]) and (\[tilVg\]) is ensured. See [@CarFerRam01; @CarRam08] for a group theoretical explanation of the intertwining technique.
4\. Another question is the isospectrality of the mentioned potentials with the ordinary shape invariant potentials $V_0(x,a)$ and $\widetilde V_0(x,a)$. This is also easy to justify: with the conditions (\[sic2\]) and (\[cidelta\]) the shape invariance relation (\[siWt\]) for the potentials of (\[Vg\]) and (\[tilVg\]) becomes identical to that of the partner potentials $V_0(x,a)$ and $\widetilde V_0(x,a)$. In particular, the quantity $R(f(a))$, from which the spectrum of the potentials is calculated, is identical in both cases, showing the mentioned isospectrality (maybe up to the ground state of one of them). See also [@BagQueRoy09; @OdaSas11] for an approach to such an isospectrality based on the intertwining technique.
5\. We have established that the generalized compatibility condition (\[cidelta\]) is equivalent to the ordinary shape invariance condition (\[siWt\]) in the mentioned circumstances. However, the former condition is simpler to work with than the latter for the cases studied in [@Que08; @Que09; @BagQueRoy09; @BouGanMal11; @Ram11] and in this letter.
6\. In the examples of [@Ram11] it has been shown that, for $W$'s satisfying the Bernoulli equation $W^\prime+W^2-k_1(x)W=0$ (where $k_1(x)=c\coth (c x)$, etc.), the condition (\[cc1\]) implies (\[sic2\]) in particular. This means that these conditions are not really independent in specific examples.
The compatibility condition (\[cidelta\]) admits another, even simpler, form. Denoting $$W_{1+}(x,a)=\frac{\psi^\prime_{1+}(x,a)}{\psi_{1+}(x,a)}\,,\quad\quad\quad
W_{1-}(x,a)=\frac{\psi^\prime_{1-}(x,a)}{\psi_{1-}(x,a)}$$ it becomes $$\frac{1}{\psi_{1+}}(\psi^{\prime\prime}_{1+}+2W_0 \psi^{\prime}_{1+})
+\frac{1}{\psi_{1-}}(\psi^{\prime\prime}_{1-}-2W_0\psi^{\prime}_{1-})
-2\frac{\psi^{\prime}_{1+}\psi^{\prime}_{1-}}{\psi_{1+}\psi_{1-}}=\epsilon(x)
\label{cc12d}$$ where the dependence on the arguments $(x,a)$ has been dropped for simplicity. For the case of $\epsilon(x)=0$ it follows $$(\psi^{\prime\prime}_{1+}+2W_0 \psi^{\prime}_{1+})\psi_{1-}
+(\psi^{\prime\prime}_{1-}-2W_0\psi^{\prime}_{1-})\psi_{1+}
-2\psi^{\prime}_{1+}\psi^{\prime}_{1-}=0
\label{cc12}$$
In terms of the functions $\psi_{1+}(x,a), \psi_{1-}(x,a)$, the symmetries of Subsection \[symt\] are expressed in the following way. The functions change as $$\begin{aligned}
\psi_{1+}(x,a)&=&\exp\left(-\int g(x)\,dx\right)\chi_{1+}(x,a)\nonumber\\
\psi_{1-}(x,a)&=&\exp\left(-\int g(x)\,dx\right)\chi_{1-}(x,a)\nonumber\end{aligned}$$ where ${\displaystyle U_ {1+}(x,a)=\frac{\chi_{1+}^\prime(x,a)}{\chi_{1+}(x,a)}}$ and ${\displaystyle U_ {1-}(x,a)=\frac{\chi_{1-}^\prime(x,a)}{\chi_{1-}(x,a)}}$.
Examples \[examples\]
=====================
In this section we study the fulfillment of the compatibility condition (\[cc12\]) for the examples of [@OdaSas09; @OdaSas10; @OdaSas10b; @OdaSas11]. These cases are especially well suited for our purposes, since they take the form of Section \[eccsic\] and are known to be shape invariant. By the symmetry property of these problems, it suffices to study the compatibility condition (\[cc12\]), which will be obtained directly in all cases. As a byproduct we will obtain relations for Laguerre and Jacobi polynomials and for (confluent) hypergeometric functions that are, to the best of our knowledge, new.
Polynomial shape invariant extensions of the radial oscillator and Darboux–Pöschl–Teller potentials
---------------------------------------------------------------------------------------------------
### Radial oscillator
According to [@OdaSas09; @OdaSas10; @OdaSas10b], the extended partner potentials of the radial oscillator have a superpotential of the form $$W_l(x,g)=W_0(x,g+l)+\frac{\xi^\prime_l(x^2,g+1)}{\xi_l(x^2,g+1)}
-\frac{\xi^\prime_l(x^2,g)}{\xi_l(x^2,g)}$$ where $x>0$, $$\begin{aligned}
W_0(x,g)&=&-x+\frac{g}{x} \nonumber\\
\xi_l(x,g)&=&L_l^{(g+l-\frac 32)}(-x)\nonumber\end{aligned}$$ and $L_n^{(a)}(x)$ are Laguerre polynomials.
We will check (\[cc12\]) directly by choosing (with a slight abuse of notation) $$\begin{aligned}
W_0(x,a)&=&W_0(x,g+l) \nonumber\\
\psi_{1+}(x,a)&=&\xi_l(x^2,g+1)\nonumber\\
\psi_{1-}(x,a)&=&\xi_l(x^2,g)\nonumber\end{aligned}$$ and by rewriting it using relation (2.41) of [@OdaSas10b], namely (dependence on arguments dropped) $$\begin{aligned}
\psi^{\prime\prime}_{1+}&=&
4l\psi_{1+}-2\left(\frac{g+l}{x}+x\right)\psi^{\prime}_{1+}\label{pp_RO}\\
\psi^{\prime\prime}_{1-}&=&
4l\psi_{1-}-2\left(\frac{g+l-1}{x}+x\right)\psi^{\prime}_{1-}\label{pm_RO}\end{aligned}$$ Thus the relation (\[cc12\]) becomes $$8l \psi_{1+}\psi_{1-}+\frac{2(1-2g-2l)}{x}\psi_{1+}\psi^\prime_{1-}
-4x\psi_{1-}\psi^\prime_{1+}-2\psi^\prime_{1-}\psi^\prime_{1+}=0 \label{cc12RO}$$ This last relation can be proved using equations (3.5) and (3.6) of [@OdaSas10b]. It implies, in particular, the fulfillment of (\[siWt\]) for this case; in [@OdaSas10b], (\[siWt\]) was proved directly.
The relations (\[cc12\]) and (\[cc12RO\]) are new, and equivalent to each other, for Laguerre polynomials.
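As a consistency check, relation (\[cc12RO\]) can be verified symbolically for the first few integer $l$. The sketch below (our own illustration, assuming sympy; `check_cc12RO` is our name) uses the correspondence $\psi_{1+}=L_l^{(g+l-1/2)}(-x^2)$, $\psi_{1-}=L_l^{(g+l-3/2)}(-x^2)$:

```python
import sympy as sp

x, g = sp.symbols('x g', positive=True)

def check_cc12RO(l):
    # psi_{1+} = xi_l(x^2, g+1), psi_{1-} = xi_l(x^2, g), with
    # xi_l(y, g) = L_l^{(g+l-3/2)}(-y)
    psi_p = sp.assoc_laguerre(l, g + l - sp.Rational(1, 2), -x**2)
    psi_m = sp.assoc_laguerre(l, g + l - sp.Rational(3, 2), -x**2)
    dp, dm = sp.diff(psi_p, x), sp.diff(psi_m, x)
    lhs = (8*l*psi_p*psi_m + 2*(1 - 2*g - 2*l)/x*psi_p*dm
           - 4*x*psi_m*dp - 2*dm*dp)
    return sp.simplify(sp.expand(lhs)) == 0

print(all(check_cc12RO(l) for l in (1, 2, 3)))  # -> True
```

Every term of (\[cc12RO\]) is bilinear in $\psi_{1+}, \psi_{1-}$, so the Gamma-function normalization of $\xi_l$ is irrelevant for the check.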
### Trigonometric Darboux-Pöschl-Teller potential
According to [@OdaSas09; @OdaSas10; @OdaSas10b], the extended partner potentials of the trigonometric Darboux-Pöschl-Teller potential have a superpotential of the form $$W_l(x,g,h)=W_0(x,g+l,h+l)
+\frac{\xi^\prime_l(\cos(2x),g+1,h+1)}{\xi_l(\cos(2x),g+1,h+1)}
-\frac{\xi^\prime_l(\cos(2x),g,h)}{\xi_l(\cos(2x),g,h)}$$ where $x\in\left(0,\frac{\pi}{2}\right)$, $$\begin{aligned}
W_0(x,g,h)&=&g \cot(x)-h \tan(x) \nonumber\\
\xi_l(x,g,h)&=&P_l^{(-g-l-\frac 12,h+l-\frac 32)}(x)\nonumber\end{aligned}$$ and $P_n^{(a,b)}(x)$ are Jacobi polynomials.
We will check (\[cc12\]) by choosing (with a slight abuse of notation) $$\begin{aligned}
W_0(x,a)&=&W_0(x,g+l,h+l) \nonumber\\
\psi_{1+}(x,a)&=&\xi_l(\cos(2x),g+1,h+1)\nonumber\\
\psi_{1-}(x,a)&=&\xi_l(\cos(2x),g,h)\nonumber\end{aligned}$$ Moreover, we transform (\[cc12\]) by using the relation (2.41) of [@OdaSas10b], namely $$\begin{aligned}
\psi^{\prime\prime}_{1+}&=&
4l(g-h-l+1)\psi_{1+}
+2\left((g+l+1)\cot x+(h+l)\tan x\right)\psi^{\prime}_{1+}\nonumber\\
\psi^{\prime\prime}_{1-}&=&
4l(g-h-l+1)\psi_{1-}
+2\left((g+l)\cot x+(h+l-1)\tan x\right)\psi^{\prime}_{1-}\nonumber\end{aligned}$$ Thus, the relation (\[cc12\]) becomes $$\begin{aligned}
& &-8l(h-g+l-1)\psi_{1+}\psi_{1-}
+2(2h+2l-1)\tan x\,\psi_{1+}\psi^\prime_{1-} \nonumber\\
& &\quad+2(2g+2l+1)\cot x\,\psi_{1-}\psi^\prime_{1+}
-2\psi^\prime_{1-}\psi^\prime_{1+}=0 \label{cc12TPT}\end{aligned}$$ This relation can be proved directly using (3.12) and (3.13) of [@OdaSas10b]. In that paper, the shape invariance condition (\[siWt\]) was proved directly; we have proved it by checking that the stronger (and simpler) condition (\[cc12\]) or (\[cc12TPT\]) holds.
For this case, (\[cc12\]) and (\[cc12TPT\]) are new relations, equivalent to each other, for Jacobi polynomials.
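Relation (\[cc12TPT\]) can also be verified symbolically. Writing it in the variable $z=\cos(2x)$ and using $\tan x\,\sin 2x = 1-z$, $\cot x\,\sin 2x = 1+z$, it becomes a polynomial identity in $z$, which the sketch below checks (assuming sympy; `check_cc12TPT` is our name):

```python
import sympy as sp

z, g, h = sp.symbols('z g h')

def check_cc12TPT(l):
    # psi_{1+} = xi_l(z, g+1, h+1), psi_{1-} = xi_l(z, g, h), with
    # xi_l(z, g, h) = P_l^{(-g-l-1/2, h+l-3/2)}(z); after the
    # substitution z = cos(2x) the relation is polynomial in z.
    Pp = sp.jacobi(l, -g - l - sp.Rational(3, 2), h + l - sp.Rational(1, 2), z)
    Pm = sp.jacobi(l, -g - l - sp.Rational(1, 2), h + l - sp.Rational(3, 2), z)
    dPp, dPm = sp.diff(Pp, z), sp.diff(Pm, z)
    lhs = (-8*l*(h - g + l - 1)*Pp*Pm
           - 4*(2*h + 2*l - 1)*(1 - z)*Pp*dPm
           - 4*(2*g + 2*l + 1)*(1 + z)*Pm*dPp
           - 8*(1 - z**2)*dPm*dPp)
    return sp.expand(lhs) == 0

print(all(check_cc12TPT(l) for l in (1, 2, 3)))  # -> True
```

The chain rule factor $dz/dx=-2\sin 2x$ in $\psi^\prime_{1\pm}$ is what converts the $\tan x$ and $\cot x$ weights into the polynomial factors $1\mp z$ above.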
### Hyperbolic Darboux-Pöschl-Teller potential
According to [@OdaSas09; @OdaSas10; @OdaSas10b], the extended partner potentials of the hyperbolic Darboux-Pöschl-Teller potential have a superpotential of the form $$W_l(x,g,h)=W_0(x,g+l,h-l)
+\frac{\xi^\prime_l(\cosh(2x),g+1,h-1)}{\xi_l(\cosh(2x),g+1,h-1)}
-\frac{\xi^\prime_l(\cosh(2x),g,h)}{\xi_l(\cosh(2x),g,h)}$$ where $x>0$, $$\begin{aligned}
W_0(x,g,h)&=&g \coth(x)-h \tanh(x) \nonumber\\
\xi_l(x,g,h)&=&P_l^{(-g-l-\frac 12,-h+l-\frac 32)}(x)\nonumber\end{aligned}$$ and $P_n^{(a,b)}(x)$ are again Jacobi polynomials.
We will try to check (\[cc12\]) by choosing $$\begin{aligned}
W_0(x,a)&=&W_0(x,g+l,h-l) \nonumber\\
\psi_{1+}(x,a)&=&\xi_l(\cosh(2x),g+1,h-1)\nonumber\\
\psi_{1-}(x,a)&=&\xi_l(\cosh(2x),g,h)\nonumber\end{aligned}$$ and transforming the cited condition by using the relation (2.41) of [@OdaSas10b], namely $$\begin{aligned}
\psi^{\prime\prime}_{1+}&=&
4l(l-g-h-1)\psi_{1+}
+2\left((g+l+1)\coth x+(h-l)\tanh x\right)\psi^{\prime}_{1+}\nonumber\\
\psi^{\prime\prime}_{1-}&=&
4l(l-g-h-1)\psi_{1-}
+2\left((g+l)\coth x+(h-l+1)\tanh x\right)\psi^{\prime}_{1-}\nonumber\end{aligned}$$ Thus the relation (\[cc12\]) becomes $$\begin{aligned}
& &-8l(h+g-l+1)\psi_{1+}\psi_{1-}
+2(1+2h-2l)\tanh x\,\psi_{1+}\psi^\prime_{1-} \nonumber\\
& &\quad+2(1+2g+2l)\coth x\,\psi_{1-}\psi^\prime_{1+}
-2\psi^\prime_{1-}\psi^\prime_{1+}=0 \label{cc12HPT}\end{aligned}$$ This last relation can be proved by using equations (3.12) and (3.13) of [@OdaSas10b], as in the previous case; therefore, shape invariance holds for this case in particular. In [@OdaSas10b], relation (\[siWt\]) was proved directly.
For this case, (\[cc12\]) and (\[cc12HPT\]) are new relations, equivalent to each other, for Jacobi polynomials.
Continuous $l$ shape invariant extensions of the radial oscillator and trigonometric Darboux–Pöschl–Teller potentials
---------------------------------------------------------------------------------------------------------------------
### Radial oscillator
According to [@OdaSas11], the extended partner potentials of the radial oscillator with continuous $l>0$ have a superpotential of the form $$W_l(x,g)=W_0(x,g+l)+\frac{\xi^\prime_l(x^2,g+1)}{\xi_l(x^2,g+1)}
-\frac{\xi^\prime_l(x^2,g)}{\xi_l(x^2,g)}$$ where $x>0$, $$\begin{aligned}
W_0(x,g)&=&-x+\frac{g}{x} \nonumber\\
\xi_l(x,g)&=&\frac{\Gamma(g+2 l-\frac12)}{\Gamma(l+1)\Gamma(g+l-\frac12)}\,
{}_1F_1\left(\begin{array}{c}-l\\g+l-\frac12\end{array}
\Bigm|-x\right)\nonumber\end{aligned}$$ and ${}_1F_1\left(\begin{array}{c}a\\b\end{array}\Bigm|x\right)$, $\Gamma(x)$ are the confluent hypergeometric and Gamma functions, respectively.
We choose (with a slight abuse of notation) $$\begin{aligned}
W_0(x,a)&=&W_0(x,g+l) \nonumber\\
\psi_{1+}(x,a)&=&\xi_l(x^2,g+1)\nonumber\\
\psi_{1-}(x,a)&=&\xi_l(x^2,g)\nonumber\end{aligned}$$ in order to check whether (\[cc12\]) is satisfied. We first transform it by using relation (3.9) of [@OdaSas11], namely (\[pp\_RO\]) and (\[pm\_RO\]). Therefore, (\[cc12\]) is again transformed into (\[cc12RO\]). This relation can again be proved for the current $\psi_{1+}, \psi_{1-}$ by using properties (3.10) and (3.11) of [@OdaSas11]. Thus the compatibility condition holds and, as a result, so does the shape invariance condition. The latter result was obtained directly in [@OdaSas11].
For this case, (\[cc12\]) and (\[cc12RO\]) are new relations, equivalent to each other, for confluent hypergeometric functions.
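For non-integer $l$ the same relation can be checked numerically. The sketch below (our own illustration, assuming mpmath) omits the Gamma-function prefactors of $\xi_l$, which is legitimate because (\[cc12RO\]) is bilinear in $\psi_{1+}, \psi_{1-}$:

```python
import mpmath as mp

def cc12RO_residual(l, g, x):
    # psi_{1+} ~ 1F1(-l; g+l+1/2; -x^2), psi_{1-} ~ 1F1(-l; g+l-1/2; -x^2),
    # up to constant factors that drop out of the bilinear relation
    f_p = lambda t: mp.hyp1f1(-l, g + l + mp.mpf('0.5'), -t**2)
    f_m = lambda t: mp.hyp1f1(-l, g + l - mp.mpf('0.5'), -t**2)
    pp, pm = f_p(x), f_m(x)
    dp, dm = mp.diff(f_p, x), mp.diff(f_m, x)
    return (8*l*pp*pm + 2*(1 - 2*g - 2*l)/x*pp*dm
            - 4*x*pm*dp - 2*dm*dp)

# residual vanishes to numerical precision for continuous l > 0
for l in ('0.3', '0.7', '1.5'):
    assert abs(cc12RO_residual(mp.mpf(l), mp.mpf(2), mp.mpf('0.8'))) < 1e-8
```

For integer $l$ the confluent hypergeometric functions reduce to the Laguerre polynomials of the previous subsection, so this numeric check is consistent with the symbolic one.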
### Trigonometric Darboux-Pöschl-Teller potential
According to [@OdaSas11], the extended partner potentials of the trigonometric Darboux-Pöschl-Teller potential with continuous $l>0$ have a superpotential of the form $$W_l(x,g,h)=W_0(x,g+l,h+l)
+\frac{\xi^\prime_l(\cos(2x),g+1,h+1)}{\xi_l(\cos(2x),g+1,h+1)}
-\frac{\xi^\prime_l(\cos(2x),g,h)}{\xi_l(\cos(2x),g,h)}$$ where $x\in\left(0,\frac{\pi}{2}\right)$, $$\begin{aligned}
W_0(x,g,h)&=&g \cot(x)-h \tan(x) \nonumber\\
\xi_l(x,g,h)&=&
\frac{\Gamma(g+2 l-\frac12)}{\Gamma(l+1)\Gamma(g+l-\frac12)}\,
{}_2F_1\left(\begin{array}{c}-l, g-h+l-1\\g+l-\frac12\end{array}
\Bigm|\frac{1-x}{2}\right)\nonumber\end{aligned}$$ and ${}_2F_1\left(\begin{array}{c}a,b\\c\end{array}\Bigm|x\right)$ is the hypergeometric function.
We denote again $$\begin{aligned}
W_0(x,a)&=&W_0(x,g+l,h+l) \nonumber\\
\psi_{1+}(x,a)&=&\xi_l(\cos(2x),g+1,h+1)\nonumber\\
\psi_{1-}(x,a)&=&\xi_l(\cos(2x),g,h)\nonumber\end{aligned}$$ in order to check that (\[cc12\]) holds. We transform it first using the result (3.9) of [@OdaSas11], namely $$\begin{aligned}
\psi^{\prime\prime}_{1+}&=&
-4l(g-h+l-1)\psi_{1+}\nonumber\\
& &-2\left((g+h+2l+1)\csc (2x)+(g-h-1)\cot(2x)\right)\psi^{\prime}_{1+}\nonumber\\
\psi^{\prime\prime}_{1-}&=&
-4l(g-h+l-1)\psi_{1-}\nonumber\\
& &-2\left((g+h+2l-1)\csc (2x)+(g-h-1)\cot(2x)\right)\psi^{\prime}_{1-}\nonumber\end{aligned}$$ Thus the relation (\[cc12\]) becomes $$\begin{aligned}
& & -8l(g-h+l-1)\psi_{1+}\psi_{1-}
-2(2g+2l-1)\cot x\,\psi_{1+}\psi^\prime_{1-} \nonumber\\
& &\quad-2(2h+2l+1)\tan x\,\psi_{1-}\psi^\prime_{1+}
-2\psi^\prime_{1-}\psi^\prime_{1+}=0 \label{cc12T2F1PT}\end{aligned}$$ This last equation can be proved directly using the equations (3.10) and (3.11) of [@OdaSas11], thus fulfilling the compatibility condition. As a consequence, (\[siWt\]) holds (something which has been checked directly in [@OdaSas11]).
For this case, (\[cc12\]) and (\[cc12T2F1PT\]) are new, mutually equivalent relations among hypergeometric functions.
Conclusions and outlook
=======================
We have studied the fulfillment of the compatibility condition introduced in [@Ram11] in the cases of the extended shape invariant potentials of [@OdaSas09; @OdaSas10; @OdaSas10b; @OdaSas11]. Firstly, we have proved that for the form of the superpotential (\[Wgen\]), where $W_0(x,a)$ generates a pair of shape invariant potentials of the classical type and the extra terms satisfy (\[sic2\]), the compatibility condition (\[cidelta\]) is equivalent to the ordinary shape invariance condition for the full superpotential (\[siWt\]). Then, the cited examples are exactly of the form described in Section \[eccsic\]. We check directly whether the compatibility condition (\[cc12\]) holds and indeed we prove it in all cases, using previous results of [@OdaSas10b; @OdaSas11]. Thus, for the cases studied we provide an alternative and simpler way of proving shape invariance.
The multi-index polynomial extensions of the radial oscillator and trigonometric Darboux-Pöschl-Teller potentials introduced in [@OdaSas11b] are shown to be shape invariant and are of the form described in Section \[eccsic\]; thus the compatibility condition (\[cidelta\]) must hold in those cases as well.
It would be interesting to see whether there exist non-trivial rational extensions of other shape invariant potentials of the Infeld and Hull classification [@InfHul51; @CarRam00; @CooKhaSuk01; @GanMalRas11] (with superpotential of the type $k_0(x)+m k_1(x)$) with infinitely many polynomial and continuous-$l$ functions analogous to those of [@OdaSas09; @OdaSas10; @OdaSas10b; @OdaSas11]. If such examples do exist, the relation (\[cidelta\]) must hold again.
Acknowledgements {#acknowledgements .unnumbered}
================
We acknowledge correspondence with R. Sasaki, who informed us in advance of the fulfillment of relation (\[cc1\]) for some of their cases and made helpful remarks on a previous version of this paper. This work is supported by the Spanish Ministry of Economy and Competitiveness, project ECO2009-09332, and by the Aragon Government, ADETRE Consolidated Group.
[AAAAA]{}
Gómez-Ullate D, Kamran N and Milson R 2004 [*J. Phys. A: Math. Theor.*]{} [**37**]{} 1780–804
Gómez-Ullate D, Kamran N and Milson R 2004 [*J. Phys. A: Math. Theor.*]{} [**37**]{} 10065–78
Gómez-Ullate D, Kamran N and Milson R 2010 [*J. Approx. Theory*]{} [**162**]{} 987–1006
Gómez-Ullate D, Kamran N and Milson R 2009 [*J. Math. Anal. Appl.*]{} [**359**]{} 352–67
Quesne C 2008 [*J. Phys. A: Math. Theor.*]{} [**41**]{} 392001
Quesne C 2009 [*SIGMA*]{} [**5**]{} 084
Bagchi B, Quesne C and Roychoudhury R 2009 [*Pramana J. Phys.*]{} [**73**]{} 337–47
Odake S and Sasaki R 2009 [*Phys. Lett. B*]{} [**679**]{} 414–7
Odake S and Sasaki R 2010 [*Phys. Lett. B*]{} [**684**]{} 173–6
Odake S and Sasaki R 2010 [*J. Math. Phys.*]{} [**51**]{} 053513
Odake S and Sasaki R 2011 [*J. Phys. A: Math. Theor.*]{} [**44**]{} 195203
Odake S and Sasaki R 2011 [*Phys. Lett. B*]{} [**702**]{} 164-70
Odake S and Sasaki R 2010 [*J. Phys. A: Math. Theor.*]{} [**43**]{} 335201
Grandati Y 2011 [*Ann. Phys. NY*]{} [**326**]{} 2074-90
Grandati Y 2012 [*J. Phys.: Conf. Ser.*]{} [**343**]{} 012041
Grandati Y 2012 [*Disconjugacy of the Schrödinger equation for the trigonometric Darboux–Pöschl–Teller potential and exceptional Jacobi polynomials*]{}, [*J. Eng. Math.*]{} In press
Bougie J, Gangopadhyaya A and Mallow J V 2010 [*Phys. Rev. Lett.*]{} [**105**]{} 210402
Bougie J, Gangopadhyaya A and Mallow J V 2011 [*J. Phys. A: Math. Theor.*]{} [**44**]{} 275307
Ramos A 2011 [*J. Phys. A: Math. Theor.*]{} [**44**]{} 342001
Gangopadhyaya A and Mallow J V 2008 [*Int. J. Mod. Phys. A*]{} [**23**]{} 4959
Ho C-L 2011 [*Prog. Theor. Phys.*]{} [**126**]{} 185–201
Ho C-L 2011 [*J. Math. Phys.*]{} [**52**]{} 122107
Cariñena J F, Perelomov A M, Rañada M F and Santander M 2008 [*J. Phys. A: Math. Theor.*]{} [**41**]{} 085301
Fellows J M and Smith R A 2009 [*J. Phys. A: Math. Theor.*]{} [**42**]{} 335303
Grandati Y 2011 [*J. Math. Phys.*]{} [**52**]{} 103505
Quesne C 2012 [*Int. J. Mod. Phys. A*]{} [**27**]{} 1250073
Cariñena J F, Fernández D J and Ramos A 2001 [*Ann. Phys. NY*]{} [**292**]{} 42–66
Cariñena J F and Ramos A 2008 [*Int. J. Geom. Meth. Mod. Phys.*]{} [**5**]{} 605–40
Infeld L and Hull T E 1951 [*Rev. Mod. Phys.*]{} [**23**]{} 21–68
Cariñena J F and Ramos A 2000 [*Rev. Math. Phys.*]{} [**12**]{} 1279–304
Cooper F, Khare A and Sukhatme U 2001 [*Supersymmetry in Quantum Mechanics*]{} (Singapore: World Scientific)
Gangopadhyaya A, Mallow J V and Rasinaru C 2011 [*Supersymmetric Quantum Mechanics: an introduction*]{} (Singapore: World Scientific)
[THERMODYNAMIC VARIABLES FROM SPECTATOR DECAY[^1] ]{}
W. Trautmann$^1$ for the ALADIN collaboration$^2$
$^1$Gesellschaft für Schwerionenforschung mbH\
D-64291 Darmstadt\
Germany\
$^2$Catania, Darmstadt, East Lansing, Frankfurt,\
Milano, Moscow, Rossendorf, Warsaw
[**INTRODUCTION**]{}
The Van-der-Waals-type range dependence of the nuclear forces has provided a good part of the motivation for fragmentation studies. The predictions of a liquid-gas phase transition in nuclear matter [@jaqa83; @scott] have raised the hope that signals of it may be identified in reactions of finite nuclei. The observation of multifragmentation [@jakob; @fried; @moretto] represented a major breakthrough in this direction, as it indicated that the created nuclear systems may actually pass through states of high temperature and low density during the later stages of the reaction. Multifragmentation has been predicted [@gross90; @bond95] to be the dominant decay mode under such conditions that are similar to those expected for the coexistence region in the nuclear-matter phase diagram.
Many features of multifragmentation are well reproduced by the statistical multifragmentation models [@gross90; @bond95; @botv95]. They predominantly include the fragment distributions and correlations describing the populated partition space. But also kinetic observables such as the energies or velocity correlations of the produced fragments have been found to agree with the statistical predictions in certain cases [@oesch; @kwiat]. The essential assumption on which these models are based is that of a single equilibrated breakup state at the end of the dynamical evolution and of a statistical population of the corresponding phase space. To the extent that it is realized in nature, this scenario offers the possibility to map out the nuclear phase diagram, even though for finite nuclei, by sampling the thermodynamic equilibrium conditions associated with the multi-fragment breakup channels.
Considerable progress has been made with this program during recent years. Multifragmentation has been studied over a wide range of different classes of high-energy reactions and breakup temperatures have been deduced from the measured data. The correlation of the temperature with the excitation energy, often referred to as the caloric curve of nuclei and first reported for spectator decays following $^{197}$Au + $^{197}$Au reactions at 600 MeV per nucleon [@poch95], has also been derived for other cases [@kwiat; @haug96; @ma97]. Critical evaluations of the obtained results and of the applied methods have followed. More work is clearly needed in order to complete the picture [@poch97].
In this contribution, the focus will be on the decay of excited spectator nuclei [@dronten]. New results will be reported that were obtained in two recent experiments with the ALADIN spectrometer at SIS in which reactions of $^{197}$Au on $^{197}$Au in the regime of relativistic energies up to 1 GeV per nucleon were studied. In the first experiment, the ALADIN spectrometer was used to detect and identify the products of the projectile-spectator decay [@schuetti]. The Large-Area Neutron Detector (LAND) was used to measure coincident free neutrons emitted by the projectile source. In the second experiment, three multi-detector hodoscopes, consisting of a total of 216 Si-CsI(Tl) telescopes, and three high-resolution telescopes were positioned at backward angles to measure the yields and correlations of isotopically resolved light fragments of the target-spectator decay [@hongfei]. From these data excitation energies and masses, temperatures, and densities were deduced. Before proceeding to the discussion of these new data, some of the indications for equilibration during the spectator decay will be briefly recalled.
[**EVIDENCE FOR EQUILIBRATION**]{}
The universal features of the spectator decay, as apparent in the observed $Z_{bound}$ scaling of the measured charge correlations, were the first and perhaps most striking indications for equilibrium [@schuetti]. The quantity $Z_{bound}$ is defined as the sum of the atomic numbers $Z_i$ of all projectile fragments with $Z_i \geq$ 2. It represents the charge of the original spectator system reduced by the number of hydrogen isotopes emitted during its decay.
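In code, $Z_{bound}$ is a one-line reduction over the detected fragment charges. A minimal sketch (the event below is invented for illustration):

```python
def z_bound(fragment_charges):
    """Sum of the atomic numbers Z_i of all fragments with Z_i >= 2,
    i.e. the spectator charge minus the emitted hydrogen isotopes."""
    return sum(z for z in fragment_charges if z >= 2)

# a hypothetical event: one C, one Li, two He fragments and three protons
print(z_bound([6, 3, 2, 2, 1, 1, 1]))  # -> 13
```

The hydrogen charges are excluded by construction, so event-by-event fluctuations in light-particle detection do not enter the sorting variable.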
The invariance of the fragmentation patterns, when plotted as a function of $Z_{bound}$, suggests that the memory of the entrance channel and of the dynamics governing the primary interaction of the colliding nuclei is lost. This feature extends to other observables. The transverse-momentum widths of the fragments, as shown in Fig. 1, do not change with the bombarding energy, indicating that collective contributions to the transverse motion are small. The equilibration of the three kinetic degrees of freedom in the moving frame of the projectile spectator was confirmed by the analysis of the measured velocity spectra [@schuetti]. The square-root dependence on the atomic number $Z$ implies kinetic energies nearly independent of $Z$ and hence of the mass.
The success of the statistical multifragmentation model in describing the observed population of the partition space may be seen as a further argument for equilibration. Here the main task consists of finding an appropriate ensemble of excited nuclei to be subjected to the multi-fragment decay according to the model prescription. Starting from the entrance channel may not necessarily provide sufficiently realistic ensembles, even though a good description of the fragment correlations was obtained with the quantum-molecular-dynamics model coupled to the statistical multifragmentation model [@konop93]. An alternative method consists of using empirically derived ensembles. Near perfect descriptions of the measured correlations, including their dispersions around the mean behaviour, can be achieved [@botv95; @hongfei]. The mathematical procedure of backtracing allows for studying the uniqueness of the obtained solutions and their sensitivities to the observables that were used to generate it [@deses96].
The ensemble derived empirically for the reaction $^{197}$Au on $^{197}$Au at 1000 MeV per nucleon is shown in Fig. 2. Its capability of reproducing the measured mean multiplicity of intermediate-mass fragments and the mean charge asymmetry of the two heaviest fragments is illustrated in Fig. 3 where the dashed and dotted lines show the model results for $E_x/A$ chosen 15% above and below the adopted values. In the region $Z_{bound} >$ 30, the mean excitation energy of the ensemble of spectator nuclei was found to be well constrained by the mean fragment multiplicity alone. At $Z_{bound} \approx$ 30 and below, the charge asymmetry was a necessary second constraint while, at the lowest values of $Z_{bound}$, neither the multiplicity nor the asymmetry provided rigid constraints on the excitation energy.
The spectator source, well localized in rapidity [@schuetti] and, apparently, exhibiting so many signs of equilibration, seems an excellent candidate for studying the nuclear phase diagram. Dynamical studies also support this conclusion [@fuchs97; @goss97]. There are limitations, however, which are mainly seen in the emission of nucleons and very light particles. Here the components from reaction stages that lead to the formation of the spectator system and from its subsequent breakup overlap, thereby creating difficulties for the extraction of the equilibrium properties. This is particularly apparent in the case of the excitation energy which will be discussed in the next section.
[**EXCITATION ENERGY**]{}
A method to determine the excitation energy from the experimental data was first presented by Campi [*et al.*]{} [@campi94] and applied to the earlier $^{197}$Au + Cu data [@kreutz]. It is based on the idea of calorimetry which requires a complete knowledge of all decay products, including their atomic numbers, masses, and kinetic energies. In this work, the measured abundances for $Z \ge$ 2 were used and, e.g., the yields of hydrogen isotopes were deduced by extrapolating to $Z$ = 1. In the same type of analysis with the more recent data for $^{197}$Au + $^{197}$Au at 600 MeV per nucleon, the data on neutron production measured with LAND were taken into account [@poch95]. Since the hydrogen isotopes were not detected assumptions concerning the overall $N/Z$ ratio of the spectator, the intensity ratio of protons, deuterons, and tritons, and the kinetic energies of hydrogen isotopes had to be made.
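The calorimetric idea can be summarized in a short sketch (our own simplified illustration, not the analysis code of [@campi94]): the excitation energy is the sum of the kinetic energies of all decay products in the source frame plus the mass difference (Q value) between the products and the reconstructed spectator.

```python
def excitation_energy(kinetic_energies, product_mass_excesses,
                      source_mass_excess):
    """Calorimetric excitation energy in MeV.

    kinetic_energies      -- product kinetic energies in the source frame
    product_mass_excesses -- mass excesses of all decay products
    source_mass_excess    -- mass excess of the reconstructed spectator
    """
    q_value = sum(product_mass_excesses) - source_mass_excess
    return sum(kinetic_energies) + q_value

# toy numbers only: 50 MeV of kinetic energy plus a 30 MeV mass difference
print(excitation_energy([10, 15, 25], [5, 10, 20], 5))  # -> 80
```

The difficulty discussed in the text enters through the inputs: unmeasured hydrogen and neutron contributions must be supplied by extrapolation or by model assumptions.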
The latest evaluation of the excitation energy included the measured neutron data for three bombarding energies and the data for hydrogen emission from the target spectator at 1000 MeV per nucleon [@gross]. For the case of 600 MeV per nucleon, the difference from the published energy values [@poch95] amounted to about 10%. More importantly, however, the deduced spectator energy was found to depend considerably on the bombarding energy (Fig. 4). It increases by about 30% over the range 600 to 1000 MeV per nucleon, in contrast to the universality observed for other observables (previous section). The origin of this rise lies solely in the behavior of the mean kinetic energies of neutrons in the spectator frame (Fig. 5). They have a large effect on the deduced total excitation energy, first, because the neutron multiplicity is large and, second, because the hydrogen isotopes, measured only at 1000 MeV per nucleon, were assumed to scale in the same way as the neutrons do.
It is obvious that this uncertainty represents a considerable problem: a memory of the entrance channel is inconsistent with the idea of measuring thermodynamic properties of an equilibrated breakup state. It is reasonable to assume that part of the experimentally determined energy may be due to pre-equilibrium or pre-breakup emission, even though the analysis of nucleon emission is restricted to the data at forward (backward) angles in the projectile (target) frame (Fig. 6). The experimental excitation energies are larger than the range of energies potentially consistent with the statistical multifragmentation model (Fig. 4), and the spectra of hydrogen isotopes, measured with the high-resolution telescopes, exhibit yields and slopes much larger than predicted by the model [@hongfei]. The process of spectator formation involves secondary scatterings of fireball nucleons on spectator matter which may generate a pre-breakup source centered close to the spectator rapidity. Experimentally, the next step should consist of complementing the neutron data with equivalent data for proton and light-charged-particle emission at several bombarding energies.
[**TEMPERATURE**]{}
The shape of the caloric curve [@poch95], reminiscent of first-order phase transitions in ordinary liquids, and its similarity to predictions of microscopic statistical models [@gross90; @bond95; @hongfei], has initiated a widespread discussion of whether nuclear temperatures of this magnitude can be measured reliably (see Refs. [@xi96; @gulm97] and references given in these recent papers) and whether this observation may indeed be linked to a transition towards the vapor phase [@natowitz; @more96].
Here we restrict ourselves to new results obtained from the study of the target spectator at backward angles in the laboratory in $^{197}$Au + $^{197}$Au collisions at 1000 MeV per nucleon. From the isotopically resolved yields of hydrogen, helium, and lithium isotopes breakup temperatures $T_{{\rm HeLi}}$, $T_{{\rm Hepd}}$, and $T_{{\rm Hedt}}$ were derived [@albergo]. The corrections for sequential feeding of the ground-state yields, based on calculations with the quantum statistical model [@konop94], resulted in good qualitative agreement for the three temperature observables [@hongfei].
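The isotope temperatures quoted here follow the double-ratio prescription of Albergo [*et al.*]{} [@albergo]. A minimal sketch for the He-Li pair is given below; the yields are illustrative, while 13.33 MeV (the double binding-energy difference) and 2.18 (the statistical prefactor) are the standard constants for this isotope combination, and the feeding correction is applied as a constant factor near the value 1.2 suggested by the quantum statistical model:

```python
import math

def t_heli(y6li, y7li, y3he, y4he):
    """Raw He-Li isotope temperature (MeV) from the Albergo double ratio
    R = [Y(6Li)/Y(7Li)] / [Y(3He)/Y(4He)]:  T = 13.33 / ln(2.18 * R)."""
    ratio = (y6li / y7li) / (y3he / y4he)
    return 13.33 / math.log(2.18 * ratio)

# illustrative yields: Y(6Li) = Y(7Li), Y(4He)/Y(3He) = 10
t0 = t_heli(1.0, 1.0, 1.0, 10.0)   # uncorrected, roughly 4.3 MeV
t_corr = 1.2 * t0                  # with a constant feeding-correction factor
```

Note that the double ratio enters only logarithmically, so moderate yield uncertainties translate into comparatively small temperature uncertainties; the dominant systematic effects are the sequential-feeding corrections discussed in the text.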
The corrected temperature $T_{{\rm HeLi}}$ is shown in Fig. 7. With decreasing $Z_{bound}$, it increases from $T$ = 4 MeV for peripheral collisions to about 10 MeV for the most central collisions. Within the errors, these values are in good agreement with those measured for projectile spectators in the same reaction at 600 MeV per nucleon. In both cases, the displayed data symbols represent the mean values of the range of systematic uncertainties associated with the two different experiments while the errors include both statistical and systematic contributions. The projectile temperatures are the result of a new analysis of the original data and are somewhat higher, between 10% and 20%, than those reported previously [@poch95]. Their larger errors follow from a reassessment of the potential $^4$He contamination of the $^6$Li yield caused by $Z$ misidentification. The invariance of the breakup temperature with the bombarding energy is consistent with the observed universality of the spectator decay.
Calculations with the statistical multifragmentation model were performed for the ensemble of excited spectator nuclei shown in Fig. 2. Results have already been shown in Fig. 3. The solid line in Fig. 7 represents the thermodynamic temperature $T$ obtained in these calculations. With decreasing $Z_{bound}$, it increases monotonically from about 5 to 9 MeV. Over a wide range of $Z_{bound}$ it remains close to $T$ = 6 MeV which reflects the plateau predicted by the statistical multifragmentation model for the range of excitation energies 3 MeV $\le E_x/A \le$ 10 MeV [@bond95]. In model calculations performed for a fixed spectator mass, the plateau is associated with a strong and monotonic rise of the fragment multiplicities. Experimentally, due to the decrease of the spectator mass with increasing excitation energy (Fig. 2), the production of intermediate-mass fragments passes through a maximum in the corresponding range of $Z_{bound}$ from about 20 to 60 (Fig. 3).
The dashed line gives the temperature $T_{{\rm HeLi}}$ obtained from the calculated isotope yields. Because of sequential feeding, it differs from the thermodynamic temperature, the uncorrected temperature $T_{{\rm HeLi},0}$ being somewhat lower. Here, in order to permit the direct comparison with the experimental data in one figure, we display $T_{{\rm HeLi}}$ which has been corrected in the same way with the factor 1.2 suggested by the quantum statistical model. The calculated $T_{{\rm HeLi}}$ exhibits a more continuous rise with decreasing $Z_{bound}$ than the thermodynamic temperature and is in very good agreement with the measured values. We thus find that, with the parameters needed to reproduce the observed charge partitions, this temperature-sensitive observable is well reproduced. A necessary requirement for a consistent statistical description of the spectator fragmentation is thus fulfilled.
In the same experiment excited-state temperatures [@poch87; @kunde91] were determined from the populations of particle-unstable resonances measured with the Si-CsI hodoscopes. The peak structures were identified by using the technique of correlation functions, and background corrections were based on results obtained for resonance-free pairs of fragments with $Z \le$ 3, such as p-d, d-d, up to $^3$He-$^7$Li. Correlated yields of p-t, p-$^4$He, d-$^3$He, $^4$He-$^4$He, and p-$^7$Li coincidences and $^4$He singles yields were used to deduce temperatures from the populations of states in $^4$He (g.s.; group of three states at 20.21 MeV and higher), $^5$Li (g.s.; 16.66 MeV), and $^8$Be (3.04 MeV; group of five states at 17.64 MeV and higher). The probabilities for the coincident detection of the decay products of these resonances were calculated with a Monte-Carlo model [@kunde91; @serf97]. The uncertainty of the background subtraction is the main contribution to the errors of the deduced temperatures.
The values for the three excited-state temperatures are given in Fig. 8 as a function of the experimental excitation energy $\langle E_0 \rangle/\langle A_0 \rangle$. They are mutually consistent and appear to be virtually independent of the excitation energy, centering around a mean value of $\approx$ 5 MeV. This is in striking contrast to the monotonically rising isotope temperature $T_{{\rm HeLi}}$ which is shown in comparison.
A saturation of excited-state temperatures and a similar difference to the behavior of isotope temperatures has also been observed in central $^{197}$Au + $^{197}$Au collisions at incident energies $E/A$ = 50 MeV to 200 MeV [@serf98]. The interpretation given there starts from the fact that the excited states used for the temperature evaluation are very specific quantum states which may not exist in the nuclear medium in identical forms [@dani92; @roepke; @alm95]. The observed asymptotic states can develop and survive only at very low densities that may not be reached before the cluster is emitted into vacuum. Accordingly, the excited-state populations should reflect the temperature and its fluctuations at this final stage of fragment emission. The obtained mean value near 5 MeV is not inconsistent with results of dynamical calculations based on the BUU model [@fuchs97].
[**DENSITY**]{}
Expansion is a basic conceptual feature of both the statistical multifragmentation and the liquid-gas phase transition. A volume of about six to eight times that occupied at saturation density is assumed in the statistical multifragmentation models, while the critical volume in the case of infinite nuclear matter is about three times the saturation volume. The experimental confirmation of expansion, or of a low breakup density, is therefore of the highest significance.
In central collisions of heavy nuclei, expansion is evident from the observation of radial collective flow [@reis97a]. Significant radial flow is not observed in spectator decays, and evidence for expansion has been obtained, indirectly, from model comparisons. Models that assume sequential emission from the surfaces of nuclear systems at saturation density underpredict the fragment multiplicities while those assuming expanded breakup volumes yield satisfactory descriptions of the populated partition space [@hubel92]. The disappearance of the Coulomb peaks in the kinetic-energy spectra of emitted light particles and fragments, associated with increasing fragment production, provides additional evidence consistent with volume emission or emission from expanded systems [@milkau91].
Interferometric methods permit experimental determinations of the breakup volume or, more precisely, of the space-time extension of where the emitted products had suffered their last collision [@ardouin]. In the present case of spectator decay at relativistic energies the time scales should be rather short and the data, in good approximation, may be directly related to the breakup volume that is of interest here.
Proton-proton correlation functions measured at angles of $\Theta_{lab} \approx$ 135$^{\circ}$ are shown in Fig. 9. They are characterized by a depression at small relative momenta, caused by Coulomb repulsion, and by a peak near $q$ = 20 MeV/c that is caused by the S-wave nuclear interaction. Its comparatively small amplitude signals a large spatial extension of the proton source. The quantitative analysis of these data was performed with the Koonin-Pratt formalism [@pratt]. A uniform sphere was assumed for the proton source. The deduced radii are of the order of 8 to 9 fm which is distinctly larger than the radius of 6.5 to 7 fm of a gold nucleus at normal density. The structure of the correlation functions is somewhat obscured at larger $Z_{bound}$ but there is no indication that the radii and thus the volume should significantly change with impact parameter. The derived density, however, decreases considerably with increasing centrality, caused by the changing number of spectator constituents $A_0$ (Fig. 10). These spectator masses result from the calorimetric analysis described above and are found to be in good agreement with the prediction of the geometric participant-spectator model [@gosset]. The mean relative density decreases to values below $\rho /\rho_0$ = 0.2 for the most central bin, i.e. smallest $Z_{bound}$. The deduced values as well as the variation with centrality compare well with the densities entering the statistical multifragmentation model in the version that uses a fixed cracking distance for the placement of fragments inside the breakup volume [@bond95]. We recall here that the proton multiplicities and kinetic-energy spectra indicate a predominantly pre-breakup emission. This does not necessarily exclude the possibility that their interaction with the forming spectator matter causes the interferometric picture to reflect the extension of the latter.
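The conversion from an interferometric source radius and a spectator mass to a mean relative density is elementary. The sketch below assumes a uniform sphere and a saturation density of 0.16 nucleons/fm$^3$; the radius and mass values are representative of the numbers quoted above, not fitted results:

```python
import math

RHO_0 = 0.16  # saturation density in nucleons per fm^3

def relative_density(a_spectator, radius_fm):
    """Mean density of a uniform sphere of a_spectator nucleons,
    relative to normal nuclear density."""
    volume = 4.0 / 3.0 * math.pi * radius_fm**3
    return (a_spectator / volume) / RHO_0

rho_au = relative_density(197, 6.9)   # gold nucleus at its normal radius: ~0.9
rho_spec = relative_density(80, 8.5)  # light spectator in an 8.5 fm source: ~0.19
```

The strong decrease of the deduced density with centrality thus follows directly from the shrinking spectator mass $A_0$ at essentially constant source radius.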
Besides the proton-proton correlations, correlations of other light charged particles were also used to determine breakup radii. Pronounced resonances are exhibited by the p-$\alpha$ ($^5$Li, g.s.), d-$\alpha$ ($^6$Li, 2.19 MeV), and t-$\alpha$ ($^7$Li, 4.63 MeV) correlation functions. Their peak heights were analyzed using the numerical results of Boal and Shillcock [@boal]. The deduced values, within errors, are in qualitative agreement with the proton values. Their much smaller error bars demonstrate that higher accuracies may be reached with these pronounced resonances of particle-unbound states in light nuclei. The further development of the formalism needed for their quantitative interpretation seems therefore highly desirable.
[**SUMMARY AND PERSPECTIVES**]{}
New results for the mass, excitation energy, temperature, and density of excited spectator systems at breakup have been presented. The discussion of these data was meant to demonstrate that methods exist to determine these thermodynamic variables from the experiment. It was also intended to show that they are not without problems and that, perhaps, serious conceptual difficulties exist which may make it very difficult to arrive at unambiguous results. The problem caused for the excitation energy by the dynamical process of spectator formation is an example.
It should be rewarding to further pursue this program, not only because of the hope to better identify signals of the liquid-gas phase transition but also because unexpected results may appear, as demonstrated for the temperatures. The saturation of the excited-state temperatures, according to the interpretation given here, is an interesting in-medium effect in itself. It also has the consequence that it may provide us with a means to determine rather reliably the internal temperatures of fragments at their final separation from the system.
The interferometry with light charged particles confirms the low density of the breakup configuration. For more precise evaluations the formalisms needed to deduce radii and densities from interferometric measurements will need continuing development. Measurements of radii for different fragment species, possibly emitted at different stages of the reaction, may allow us to test and refine the otherwise so successful picture of the single breakup state in the spectator decay.
[99]{}
H. Jaqaman [*et al.*]{}, [*Phys. Rev.*]{} C 27:2782 (1983).
D.K. Scott, [*Proceedings of the 6th High Energy Heavy Ion Study*]{}, Berkeley (1983), ed. H.G. Pugh [*et al.*]{}, report LBL-16281, p. 263.
B. Jakobsson [*et al.*]{}, [*Z. Phys.*]{} A 307:293 (1982).
E.M. Friedlander [*et al.*]{}, [*Phys. Rev.*]{} C 27:2436 (1983).
For a recent review see L.G. Moretto and G.J. Wozniak, [*Ann. Rev. Nucl. Part. Science*]{} 43:379 (1993).
D.H.E. Gross, [*Rep. Prog. Phys.*]{} 53:605 (1990).
J.P. Bondorf [*et al.*]{}, [*Phys. Rep.*]{} 257:133 (1995).
A.S. Botvina [*et al.*]{}, [*Nucl. Phys.*]{} A 584:737 (1995).
Bao-An Li [*et al.*]{}, [*Phys. Lett.*]{} B 335:1 (1994).
K. Kwiatkowski [*et al.*]{}, [*Proceedings of the XXXV International Winter Meeting on Nuclear Physics*]{}, Bormio (1997), ed. I. Iori (Ricerca Scientifica ed Educazione Permanente, Milano, 1997), p. 432.
J. Pochodzalla [*et al.*]{}, [*Phys. Rev. Lett.*]{} 75:1040 (1995).
J.A. Hauger [*et al.*]{}, [*Phys. Rev. Lett.*]{} 77:235 (1996).
Y.-G. Ma [*et al.*]{}, [*Phys. Lett.*]{} B 390:41 (1997).
For a more comprehensive status report see J. Pochodzalla, [*Prog. Part. Nucl. Phys.*]{} 39:443 (1997).
W. Trautmann, Multifragmentation in relativistic heavy-ion reactions, [*in*]{}: “Correlations and Clustering Phenomena in Subatomic Physics,” M.N. Harakeh, J.H. Koch, and O. Scholten, ed., Plenum Press, New York (1997).
A. Schüttauf [*et al.*]{}, [*Nucl. Phys.*]{} A 607:457 (1996).
Hongfei Xi [*et al.*]{}, [*Z. Phys.*]{} A 359:397 (1997).
J. Konopka [*et al.*]{}, [*Prog. Part. Nucl. Phys.*]{} 30:301 (1993).
P. Désesquelles [*et al.*]{}, [*Nucl. Phys.*]{} A 604:183 (1996).
C. Fuchs [*et al.*]{}, [*Nucl. Phys.*]{} A 626:987 (1997).
P.-B. Gossiaux and J. Aichelin, [*Phys. Rev.*]{} C 56:2109 (1997).
X. Campi [*et al.*]{}, [*Phys. Rev.*]{} C 50:R2680 (1994).
P. Kreutz [*et al.*]{}, [*Nucl. Phys.*]{} A 556:672 (1993).
C. Groß, PhD thesis, Universität Frankfurt (1998).
Hongfei Xi [*et al.*]{}, [*Phys. Rev.*]{} C 54:R2163 (1996).
F. Gulminelli and D. Durand, [*Nucl. Phys.*]{} A 615:117 (1997).
J.B. Natowitz [*et al.*]{}, [*Phys. Rev.*]{} C 52:R2322 (1995).
L.G. Moretto [*et al.*]{}, [*Phys. Rev. Lett.*]{} 76:2822 (1996).
S. Albergo [*et al.*]{}, [*Il Nuovo Cimento*]{} 89 A:1 (1985).
J. Konopka [*et al.*]{}, [*Phys. Rev.*]{} C 50:2085 (1994).
J. Pochodzalla [*et al.*]{}, [*Phys. Rev.*]{} C 35:1695 (1987).
G.J. Kunde [*et al.*]{}, [*Phys. Lett.*]{} B 272:202 (1991).
V. Serfling, PhD thesis, Universität Frankfurt (1997).
V. Serfling [*et al.*]{}, [*Phys. Rev. Lett.*]{} (1998), in press.
P. Danielewicz and Q. Pan, [*Phys. Rev.*]{} C 46:2002 (1992).
M. Schmidt [*et al.*]{}, [*Ann. Phys. (N.Y.)*]{} 202:57 (1990).
T. Alm [*et al.*]{}, [*Phys. Lett.*]{} B 346:233 (1995).
W. Reisdorf and H.G. Ritter, [*Ann. Rev. Nucl. Part. Science*]{} 47:663 (1997).
J. Hubele [*et al.*]{}, [*Phys. Rev.*]{} C 46:R1577 (1992).
U. Milkau [*et al.*]{}, [*Phys. Rev.*]{} C 44:R1242 (1991).
For a recent review see D. Ardouin, [*Int. J. Mod. Phys.*]{} E6:391 (1997).
S. Fritz, PhD thesis, Universität Frankfurt (1997).
S. Pratt and M.B. Tsang, [*Phys. Rev.*]{} C 36:2390 (1987).
J. Gosset [*et al.*]{}, [*Phys. Rev.*]{} C 16:629 (1977).
D.H. Boal and J.C. Shillcock, [*Phys. Rev.*]{} C 33:549 (1986).
[^1]: presented at 14th Winter Workshop on Nuclear Dynamics, Snowbird, Utah, 31 Jan - 7 Feb 1998
---
bibliography:
- 'auto\_generated.bib'
title: 'Search for disappearing tracks in proton-proton collisions at $\sqrt{s} = 8\TeV$'
---
Introduction {#sec:intro}
============
We present a search for long-lived charged particles that decay within the tracker volume and produce the signature of a *disappearing track*. A disappearing track can be produced in beyond the standard model (BSM) scenarios by a charged particle whose decay products are undetected. This occurs because the decay products are either too low in momentum to be reconstructed or neutral and weakly interacting, such that they neither interact with the tracker material nor deposit significant energy in the calorimeters.
There are many BSM scenarios that produce particles that manifest themselves as disappearing tracks [@PhysRevD.85.095011; @SpreadSUSY; @MiniSplitSUSY; @UnnaturalSUSY; @Ellis2012]. One example is anomaly-mediated supersymmetry breaking (AMSB) [@Giudice1998; @Randall1999], which predicts a particle mass spectrum that has a small mass splitting between the lightest chargino ([$\chipm_1$]{}) and the lightest neutralino ([$\chiz_1$]{}). The chargino can then decay to a neutralino and a pion, ${\ensuremath{\chipm_1}\xspace}\to {\ensuremath{\chiz_1}\xspace}\Pgp^\pm$. The phase space for this decay is limited by the small chargino-neutralino mass splitting. As a consequence, the chargino has a significant lifetime, and the daughter pion has momentum of ${\approx}100\MeV$, typically too low for its track to be reconstructed. For charginos that decay inside the tracker volume, this results in a disappearing track. We benchmark our search in terms of its sensitivity to the chargino mass and chargino-neutralino mass splitting (or equivalently, the chargino mean proper lifetime, $\tau$) in AMSB. Constraints are also placed on the chargino mass and mean proper lifetime for direct electroweak chargino-chargino and chargino-neutralino production.
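The statement that the daughter pion carries only ${\approx}100\MeV$ follows from two-body decay kinematics with a small mass splitting. The sketch below uses the standard Källén-function expression; the chargino mass and the 160 MeV splitting are illustrative values, not the AMSB prediction at any particular parameter point:

```python
import math

def two_body_momentum(m_parent, m1, m2):
    """Momentum of either daughter in the parent rest frame (GeV),
    p = sqrt(lambda(M^2, m1^2, m2^2)) / (2 M)."""
    s = m_parent**2
    lam = (s - (m1 + m2)**2) * (s - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * m_parent)

M_PI = 0.13957            # charged-pion mass in GeV
m_neutralino = 100.0      # illustrative neutralino mass
m_chargino = 100.160      # illustrative 160 MeV chargino-neutralino splitting

p_pi = two_body_momentum(m_chargino, m_neutralino, M_PI)
# of order 0.1 GeV: too soft for the pion track to be reconstructed
```

In the limit of heavy parent and daughter masses this reduces to $p_\pi \approx \sqrt{\Delta m^2 - m_\pi^2}$, which makes explicit why the pion momentum is set by the mass splitting alone.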
Previous CMS analyses have searched for long-lived charged particles based on the signature of anomalous ionization energy loss [@EXO-11-074; @HSCP2011; @HSCP2012], but none has targeted a disappearing track signature. A search for disappearing tracks conducted by the ATLAS Collaboration excludes at 95% confidence level (CL) a chargino in AMSB scenarios with mass less than 270\GeV and mean proper lifetime of approximately 0.2 ns [@ATLASDisapp2].
Detector description and event reconstruction {#sec:detector}
==============================================
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter. Within the superconducting solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Extensive forward calorimetry complements the coverage provided by the barrel and endcap detectors. The ECAL consists of 75848 crystals that provide coverage in pseudorapidity $\abs{ \eta }< 1.479$ in the barrel region and $1.479 <\abs{ \eta } < 3.0$ in the two endcap regions. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid, in the pseudorapidity range $\abs{\eta}< 2.4$, with detection planes made using three technologies: drift tubes, cathode strip chambers, and resistive plate chambers. Muons are identified as a track in the central tracker consistent with either a track or several hits in the muon system.
The silicon tracker measures ionization energy deposits (“hits”) from charged particles within the pseudorapidity range $\abs{\eta}< 2.5$. It consists of 1440 silicon pixel and 15148 silicon strip detector modules and is located in the 3.8 T field of the superconducting solenoid. The pixel detector has three barrel layers and two endcap disks, and the strip tracker has ten barrel layers and three small plus nine large endcap disks. Isolated particles with transverse momentum $\pt = 100\GeV$ emitted in the range $\abs{\eta} < 1.4$ have track resolutions of 2.8% in \pt and 10 (30) $\mu$m in the transverse (longitudinal) impact parameter [@TRK-11-001].
The particle-flow (PF) event reconstruction consists of reconstructing and identifying each individual particle with an optimized combination of all subdetector information [@CMS-PAS-PFT-10-001; @CMS-PAS-PFT-09-001]. The energy of photons is obtained directly from the ECAL measurement, corrected for zero-suppression effects. The energy of electrons is determined from a combination of the track momentum at the main interaction vertex, the corresponding ECAL cluster energy, and the energy sum of all bremsstrahlung photons attached to the track. The energy of muons is taken from the corresponding track momentum. The energy of charged hadrons is determined from a combination of the track momentum and the corresponding ECAL and HCAL energies, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.
Particles are clustered into jets using the anti-algorithm [@Cacciari:2008gp] with a distance parameter of 0.5. Jet momentum is determined from the vectorial sum of all particle momenta in the jet, and is found from simulation to be within 5% to 10% of the true momentum over the whole spectrum and detector acceptance. An offset correction is applied to take into account the extra energy clustered in jets due to additional proton-proton (pp) interactions within the same bunch crossing. Jet energy corrections are derived from the simulation, and are confirmed using in situ measurements of the energy balance of dijet and photon+jet events.
The missing transverse energy is defined as the magnitude of the negative vector sum of the transverse momenta of all PF candidates reconstructed in the event. A more detailed description of the CMS apparatus and event reconstruction, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [@Chatrchyan:2008zzk].
Data samples and simulation {#sec:dataset}
===========================
The search is performed with $\sqrt{s}= 8\TeV$ pp collision data recorded in 2012 with the CMS detector at the CERN LHC. The data correspond to an integrated luminosity of 19.5\fbinv. A BSM particle that produces a disappearing track would not be identified as a jet or a particle by the PF algorithm because the track is not matched to any activity in the calorimeter or muon systems. To record such particles with the available triggers, we require one or more initial-state-radiation (ISR) jets, against which the BSM particles recoil. As a result, the \ETslash is approximately equal to the \pt of the BSM particles, and likewise to the \pt of the ISR jets. To maximize efficiency for the BSM signal, events used for the search are collected with the union of two triggers that had the lowest thresholds available during the data taking period. The first requires $\ETslash > 120\GeV$, where the \ETslash is calculated using the calorimeter information only. The second trigger requires \ETslash larger than either 95 or 105\GeV, depending on the run period, where the \ETslash is reconstructed with the PF algorithm and excludes muons from the calculation. Additionally, the second trigger requires at least one jet with $\pt > 80\GeV$ within $\abs{\eta} < 2.6$. The use of alternative \ETslash calculations in these triggers is incidental; the thresholds set for these formulations simply happen to be such that these triggers yield the highest BSM signal efficiency.
Events collected with these triggers are required to pass a set of basic selection criteria. These requirements reduce backgrounds from QCD multijet events and instrumental sources of \ETslash, which are not well modeled by the simulation. We require $\ETslash>100\GeV$, near the trigger threshold, to maximize the signal acceptance, and at least one jet reconstructed with the PF algorithm with $\pt > 110\GeV$. The jet must have $\abs{\eta}<2.4$ and meet several criteria aimed at reducing instrumental noise: less than 70% of its energy assigned to neutral hadrons or photons, less than 50% of its energy associated with electrons, and more than 20% of its energy carried by charged hadrons. Additional jets in the event with $\pt>30\GeV$ and $\abs{\eta}<4.5$ are allowed provided they meet two additional criteria. To reduce the contribution of QCD multijet events, the difference in azimuthal angle, $\Delta \phi$, between any two jets in the event must be less than 2.5 radians, and the minimum $\Delta \phi$ between the \ETslash vector and either of the two highest-\pt jets is required to be greater than 0.5 radians.
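The basic selection can be summarized as a simple event filter. The sketch below is our own illustration of the cuts listed above, not the analysis code; the dictionary layout and variable names are assumptions:

```python
import math

def delta_phi(a, b):
    """Signed azimuthal difference wrapped into (-pi, pi]."""
    d = (a - b) % (2.0 * math.pi)
    return d - 2.0 * math.pi if d > math.pi else d

def passes_basic_selection(met, jets):
    """met: dict with 'pt' (GeV) and 'phi'.  jets: list of dicts with
    pt (GeV), eta, phi, and PF energy fractions."""
    if met['pt'] <= 100.0:
        return False
    # leading jet: hard, central, and not dominated by noise-like deposits
    lead = max(jets, key=lambda j: j['pt'], default=None)
    if lead is None or lead['pt'] <= 110.0 or abs(lead['eta']) >= 2.4:
        return False
    if (lead['neutral_frac'] >= 0.7 or lead['electron_frac'] >= 0.5
            or lead['charged_frac'] <= 0.2):
        return False
    # angular requirements on all jets with pt > 30 GeV, |eta| < 4.5
    sel = [j for j in jets if j['pt'] > 30.0 and abs(j['eta']) < 4.5]
    for i in range(len(sel)):
        for k in range(i + 1, len(sel)):
            if abs(delta_phi(sel[i]['phi'], sel[k]['phi'])) >= 2.5:
                return False
    two_leading = sorted(sel, key=lambda j: -j['pt'])[:2]
    if min(abs(delta_phi(j['phi'], met['phi'])) for j in two_leading) <= 0.5:
        return False
    return True
```

The two $\Delta\phi$ requirements play complementary roles: the jet-jet cut suppresses back-to-back QCD dijet topologies, while the jet-\ETslash cut removes events whose \ETslash arises from a mismeasured jet.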
Signal samples are simulated with PYTHIA 6 [@Sjostrand:2006za] for the processes ${\ensuremath{\Pq\Paq'}\xspace}\to {\ensuremath{\chipm_1}\xspace}{\ensuremath{\chiz_1}\xspace}$ and ${\ensuremath{\Pq\Paq}\xspace}\to {\ensuremath{\chipm_1}\xspace}\PSGc^{\mp}_1$ in the AMSB framework. The SUSY mass spectrum in AMSB is determined by four parameters: the gravitino mass $m_{3/2}$, the universal scalar mass $m_0$, the ratio of the vacuum expectation values of the Higgs fields at the electroweak scale $\tan\beta$, and the sign of the higgsino mass term $\sgn(\mu)$. Of these, only $m_{3/2}$ significantly affects the chargino mass. We produce samples with variations of the $m_{3/2}$ parameter that correspond to chargino masses between 100 and 600\GeV. Supersymmetric particle mass spectra are calculated according to the SUSY Les Houches accord [@SLHA] with ISAJET 7.80 [@Isajet]. The branching fraction of the ${\ensuremath{\chipm_1}\xspace}\to {\ensuremath{\chiz_1}\xspace}\Pgp^\pm$ decay is set to 100%. While the chargino mean proper lifetime is uniquely determined by the four parameters above, the simulation is performed with a variety of mean proper lifetime values ranging from 0.3 to 300 ns to expand the search beyond the AMSB scenario.
To study the backgrounds, we use simulated samples of the following standard model (SM) processes: $\PW + $jets, $\cPqt\cPaqt$, $\cPZ\to \ell\ell$ ($\ell = \Pe,\mu,\tau$), $\cPZ\to \cPgn \cPgn$; $\PW \PW$, $\cPZ \cPZ$, $\PW \cPZ$, $\PW \cPgg$, and $\cPZ \cPgg$ boson pair production; and QCD multijet and single-top-quark production. The $\PW + $jets and $\cPqt\cPaqt$ samples are generated using MadGraph 5 [@MadGraph] with PYTHIA 6 for parton showering and hadronization, while single-top-quark production is modeled using POWHEG [@Powheg; @Powheg1; @Powheg2; @Powheg3] and PYTHIA 6. The $\cPZ\to\ell\ell$, boson pair production, and QCD multijet events are simulated using PYTHIA 6.
All samples are simulated with CTEQ6L1 parton distribution functions (PDF). The full detector simulation with GEANT4 [@GEANT4] is used to trace particles through the detector and to model the detector response. Additional pp interactions within a single bunch crossing (pileup) are modeled in the simulation, and the mean number per event is reweighted to match the number observed in data.
Background characterization {#sec:bkgdStudy}
===========================
In the following sections we examine the sources of both physics and instrumental backgrounds to this search. We consider how a disappearing track signature may be produced, that is, a high-momentum ($\pt > 50\GeV$), isolated track without hits in the outer layers of the tracker and with little associated energy (${<}10\GeV$) deposited in the calorimeters. Various mechanisms that lead to tracks with missing outer hits are described, and the reconstruction limitations that impact each background category are investigated.
Sources of missing outer hits {#sec:srcMissOutHits}
-----------------------------
A disappearing track is distinguished by missing outer hits in the tracker, [$N_\text{outer}$]{}, those expected but not recorded after the last (farthest from the interaction point) hit on a track. They are calculated based on the tracker modules traversed by the track trajectory, and they do not include modules known to be inactive. Standard model particles can produce tracks with missing outer hits as the result of interactions with the tracker material. An electron that transfers a large fraction of its energy to a bremsstrahlung photon can change its trajectory sufficiently that subsequent hits are not associated with the original track. A charged hadron that interacts with a nucleus in the detector material can undergo charge exchange, for example via $\Pgpp + \text{n} \to\Pgpz + \Pp$, or can experience a large momentum transfer. In such cases, the track from the charged hadron may have no associated hits after the nuclear interaction.
There are also several sources of missing outer hits that arise from choices made by the default CMS tracking algorithms, which are employed in this analysis. These allow for the possibility of missing outer hits on the tracks of particles that traverse all of the layers of the tracker, mimicking the signal. In a sample of simulated single-muon events we find that 11% of muons produce tracks that have at least one missing outer hit. This effect occurs not only with muons, but with any type of particle, and thus produces a contribution to each of the SM backgrounds.
The CMS track reconstruction algorithm identifies many possible trajectory candidates, each constructed with different combinations of hits. In the case of multiple overlapping trajectories, a single trajectory is selected based on the number of recorded hits, the number of expected hits not recorded, and the fit $\chi^2$. We find that for most of the selected trajectories with missing outer hits, there exists another candidate trajectory without missing outer hits.
We have identified how a trajectory with missing outer hits is chosen as the reconstructed track over a trajectory with no missing outer hits. The predominant mechanism is that the particle passes through a glue joint of a double sensor module, a region of inactive material that does not record one of the hits in between the first and last hit on the track. Such a trajectory has no missing outer hits, but it does have one expected hit that is not recorded. The penalty for missing hits before the last recorded hit is greater than for those missing after the last hit. As a result, the reconstructed track is instead identified as a trajectory that stops before the layer with the glue joint and has multiple missing outer hits. In a smaller percentage of events, a trajectory with missing outer hits is chosen because its $\chi^2$ is much smaller than that of a trajectory with no missing outer hits.
Electrons {#sec:bkgdStudyElec}
---------
We reject any tracks matched to an identified electron, but an electron may fail to be identified if its energy is not fully recorded by the [ECAL]{}. We study unidentified electrons with a [$\cPZ\to\EE$]{} tag-and-probe [@WZInclusiveCrossSections] data sample in which the tag is a well-identified electron, the probe is an isolated track, and the invariant mass of the tag electron and probe track is consistent with that of a $\cPZ$ boson. From the $\eta,\phi$ distribution of probe tracks that fail to be identified as electrons we characterize several ways that an electron’s energy can be lost. An electron is more likely to be unidentified if it is directed toward the overlap region between the barrel and endcap of the [ECAL]{} or toward the thin gaps between cylindrical sections of the barrel [ECAL]{}. We therefore reject tracks pointing into these regions. An electron may also fail the identification if it is directed toward an [ECAL]{} channel that is inoperative or noisy, so we remove tracks that are near any such known channels. After these vetoes, concentrations of unidentified electrons in a few regions survive. Thus we also veto tracks in these additional specific regions.
Muons {#sec:bkgdStudyMu}
-----
To reduce the background from muons, we veto tracks that are matched to a muon meeting loose identification criteria.
We study muons that fail this identification with a [$\cPZ\to\MM$]{}tag-and-probe data sample. The probe tracks are more likely to fail the muon identification in the region of the gap between the first two “wheels” of the barrel muon detector, $0.15<\abs{\eta}<0.35$; the region of gaps between the inner and outer “rings” of the endcap muon disks, $1.55<\abs{\eta}<1.85$; and in regions near a problematic muon chamber. Tracks in these regions are therefore excluded.
With a sample of simulated single-muon events we investigate the signatures of muons outside these fiducial regions that fail to be identified. In this sample the muon reconstruction inefficiency is $6.8 \ten{-5}$. We identify three signatures of unreconstructed muons. One signature is a large [ECAL]{}deposit or a large [HCAL]{}deposit. In a second signature, there are reconstructed muon segments in the muon detectors that fail to be matched to the corresponding tracker track. The final signature has no recorded muon detector segments or calorimeter deposits. These signatures are consistent with a $\mu\to e\nu\cPagn$ decay in flight or a secondary electromagnetic shower. Lost muons that produce large calorimeter deposits are rejected, while the contribution from those without calorimeter deposits is estimated from control samples in data.
Hadrons {#sec:bkgdStudyTau}
-------
Charged hadrons can produce tracks with missing outer hits as a result of a nuclear interaction. However, tracks produced by charged hadrons in quark or gluon jets typically fail the requirements that the track be isolated and have little associated calorimeter energy. According to simulation, the contribution from hadrons in jets in the search sample is ten times smaller than that of the hadrons from a single-prong hadronic tau (${\ensuremath{\Pgt_\mathrm{h}}\xspace}$) decay. The track from a ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ lepton decay can satisfy the criteria of little associated calorimeter energy but large missing transverse momentum if the $\pt$ of the hadron is mismeasured, i.e., measured to be significantly larger than the true value. This class of background is studied using a sample of simulated single-pion events.
In these events, the pion tracks typically have ${\approx}17$ hits. From this original sample we produce three new samples in which all hits associated with the track after the 5th, 6th, or 7th innermost hit have been removed. After repeating the reconstruction, the associated calorimeter energy does not change with the removal of hits on the track. However, the momentum resolution improves with the number of hits on the track, as additional hits provide a greater lever arm to measure the track curvature. Thus the background from ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ decays is largest for tracks with small numbers of hits, which motivates a requirement on the minimum number of hits.
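The dependence of the curvature measurement on the number of hits and on the lever arm can be sketched with the Gluckstern approximation for a uniform magnetic field. This is only an illustrative estimate, not the CMS tracking model: the hit resolution, field value, and lever arms below are assumed numbers.

```python
import math

def curvature_resolution(n_hits, lever_arm_m, b_field_t=3.8, sigma_hit_m=30e-6):
    """Gluckstern approximation: sigma(pT)/pT ~ sigma_x * pT / (0.3 B L^2)
    * sqrt(720 / (N + 4)). Returns the coefficient multiplying pT [1/GeV]."""
    return (sigma_hit_m / (0.3 * b_field_t * lever_arm_m**2)
            * math.sqrt(720.0 / (n_hits + 4)))

# A track truncated after 5 hits has both fewer measurements and a shorter
# lever arm than a full ~17-hit track, so its curvature (and hence its pt)
# is measured much less precisely.
res_truncated = curvature_resolution(n_hits=5, lever_arm_m=0.2)
res_full = curvature_resolution(n_hits=17, lever_arm_m=1.0)
```

Under these assumptions the truncated track's relative momentum resolution is more than an order of magnitude worse, which is why mismeasured short tracks dominate the ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ background.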
Fake tracks {#sec:fakeTrkBkgd}
-----------
Fake tracks are formed from combinations of hits that are not produced by a single particle. We obtain a sample of such tracks from simulated events that contain a track that is not matched to any generated particle. Most of these tracks have only three or four hits; the probability to find a combination of hits to form a fake track decreases rapidly with the number of hits on the track. However, fake tracks typically are missing many outer hits and have little associated calorimeter energy, so they closely resemble signal tracks.
Candidate track selection
=========================
In this section, we define the [*candidate track*]{} criteria that are designed to suppress the backgrounds described in the previous section and to identify well-reconstructed, prompt tracks with large $\pt$. The [[candidate track]{}]{}sample is composed of the events that pass the basic selection defined in Section \[sec:dataset\] and contain a track that meets the following criteria.
A candidate track is required to have $\pt > 50\GeV$ and $\abs{\eta} < 2.1$, as signal tracks would typically have large and are produced centrally. The primary vertex is chosen as the one with the largest sum $\pt^2$ of the tracks associated to it. The track is required to have $\abs{d_0}< 0.02\cm$ and $\abs{d_z}< 0.5\cm$, where $d_0$ and $d_z$ are the transverse and longitudinal impact parameters with respect to the primary vertex. The track must be reconstructed from at least 7 hits in the tracker. This reduces the backgrounds associated with poorly reconstructed tracks.
The number of missing middle hits, [$N_\text{mid}$]{}, is the number of hits expected but not found between the first and last hits associated to a track. The number of missing inner hits, [$N_\text{inner}$]{}, corresponds to lost hits in layers of the tracker closer to the interaction point, before the first hit on the track. We require [$N_\text{mid}$]{}= 0 and [$N_\text{inner}$]{}= 0 to ensure that the track is not missing any hits in the pixel or strip layers before the last hit on the track. Similarly to the calculation of missing outer hits, the determination of missing inner and middle hits accounts for tracker modules known to be inactive. The relative track isolation, $(\Sigma \pt^{\Delta R<0.3} - \pt)/\pt$, must be less than 0.05, where $\Sigma \pt^{\Delta R<0.3}$ is the scalar sum of the $\pt$ of all tracks within an angular distance $\Delta R = \sqrt{\smash[b]{(\Delta \eta)^2 + (\Delta \phi)^2}} < 0.3$ of the candidate track. Additionally, we require that there be no jet with $\pt > 30\GeV$ within $\Delta R<0.5$ of the track. The above criteria select high-$\pt$, isolated tracks. In events with large missing transverse momentum, the dominant SM source of high-$\pt$, isolated tracks is leptons.
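The isolation criteria can be sketched as follows; the helper names and tuple layout are illustrative, and the cone sum here runs over the other tracks directly, which is equivalent to summing all tracks in the cone and subtracting the candidate $\pt$.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, with delta-phi wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def passes_isolation(track, other_tracks, jets):
    """track and other_tracks are (pt, eta, phi) tuples; jets likewise.
    Applies the relative track isolation < 0.05 and the jet veto."""
    pt, eta, phi = track
    # scalar pt sum of the other tracks inside the dR < 0.3 cone
    cone_sum = sum(t[0] for t in other_tracks
                   if delta_r(eta, phi, t[1], t[2]) < 0.3)
    if cone_sum / pt >= 0.05:
        return False
    # no jet with pt > 30 GeV within dR < 0.5 of the track
    return not any(j[0] > 30.0 and delta_r(eta, phi, j[1], j[2]) < 0.5
                   for j in jets)
```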
We veto any tracks within $\Delta R<0.15$ of a reconstructed electron with $\pt>10\GeV$; the electron must pass a loose identification requirement. To further reduce the background from electrons, we veto tracks in the regions of larger electron inefficiency described in Section \[sec:bkgdStudyElec\]. These regions are the gap between the barrel and endcap of the [ECAL]{}, $1.42<\abs{\eta}<1.65$; the intermodule gaps of the [ECAL]{}; and all cones with aperture $\Delta R=0.05$ around inoperational or noisy [ECAL]{}channels or clusters of unidentified electrons in the [$\cPZ\to\EE$]{}sample.
We veto any tracks within $\Delta R<0.15$ of a muon with $\pt>10\GeV$ that passes a loose identification requirement. We additionally reject tracks in regions of larger muon inefficiency identified in Section \[sec:bkgdStudyMu\]. These regions are $0.15<\abs{\eta}<0.35$, $1.55<\abs{\eta}<1.85$, and within $\Delta R<0.25$ of any problematic muon detector.
After vetoing tracks that correspond to reconstructed electrons and muons, we face a background from single-prong ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ decays. We veto any track within $\Delta R<0.15$ of a reconstructed hadronic tau candidate. The reconstructed tau must have $\pt > 30\GeV$, $\abs{\eta} < 2.3$, and satisfy a set of loose isolation criteria.
The background contributions in the [[candidate track]{}]{}sample, as estimated from Monte Carlo simulations, are summarized in Table \[tab:trackGenMatchBkgd\].
\[tab:trackGenMatchBkgd\]
Source Contribution
------------- -------------- -- --
Electrons 15%
Muons 20%
Hadrons 60%
Fake tracks 5%
Disappearing track selection {#sec:selDisTrk}
============================
We define a *[[disappearing track]{}]{}* as a [[candidate track]{}]{}that has the signature of missing outer hits and little associated calorimeter energy. A [[disappearing track]{}]{}is first required to have ${\ensuremath{N_\text{outer}}\xspace}\ge 3$. Tracks from the potential signal are generally missing several outer hits, provided their lifetime is such that they decay within the tracker volume. To remove SM sources of tracks with missing outer hits, we additionally require that the associated calorimeter energy [$E_\text{calo}$]{}of a [[disappearing track]{}]{}be less than 10\GeV, much smaller than the minimum track $\pt$ of 50\GeV. Since the decay products of the chargino are either too low in momentum to be reconstructed or are weakly interacting, they would not deposit significant energy in the calorimeters. We compute [$E_\text{calo}$]{}as the sum of the energies of the ECAL and HCAL clusters within $\Delta R<0.5$ of the direction of the track.
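A minimal sketch of the [$E_\text{calo}$]{}cone sum and the disappearing-track decision, assuming the calorimeter clusters are available as (energy, eta, phi) tuples; the function names are illustrative, not the analysis code.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def e_calo(track_eta, track_phi, clusters):
    """Sum the energies of the ECAL and HCAL clusters, given as
    (energy, eta, phi) tuples, within dR < 0.5 of the track direction."""
    return sum(e for e, eta, phi in clusters
               if delta_r(track_eta, track_phi, eta, phi) < 0.5)

def is_disappearing(n_outer_missing, track_eta, track_phi, clusters):
    """N_outer >= 3 and E_calo < 10 GeV, as in the selection above."""
    return (n_outer_missing >= 3
            and e_calo(track_eta, track_phi, clusters) < 10.0)
```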
![ The number of missing outer hits ([$N_\text{outer}$]{}) and the associated calorimeter energy ([$E_\text{calo}$]{}) of tracks in the search sample, before applying the requirement on the plotted quantity. The signal and the background sum distributions have both been normalized to unit area, and overflow entries are included in the last bin. []{data-label="fig:calototNmissout"}](figures/trackNHitsMissingOuter_FullSelectionNoMissHit.pdf "fig:"){width="48.00000%"} ![ The number of missing outer hits ([$N_\text{outer}$]{}) and the associated calorimeter energy ([$E_\text{calo}$]{}) of tracks in the search sample, before applying the requirement on the plotted quantity. The signal and the background sum distributions have both been normalized to unit area, and overflow entries are included in the last bin. []{data-label="fig:calototNmissout"}](figures/trackCaloTot_RhoCorr_FullSelectionNoCalo.pdf "fig:"){width="48.00000%"}
The requirements placed on [$E_\text{calo}$]{}and [$N_\text{outer}$]{}effectively isolate signal from background, as shown in Fig. \[fig:calototNmissout\]. Tracks produced by SM particles generally are missing no outer hits and have large [$E_\text{calo}$]{}, while signal tracks typically have many missing outer hits and very little [$E_\text{calo}$]{}. The search sample is the subset of events in the [[candidate track]{}]{}sample that contain at least one [[disappearing track]{}]{}. The efficiencies to pass various stages of the selection, derived from simulation, are given for signal events in Table \[tab:cutFlowEffSig\].
\[tab:cutFlowEffSig\]
------------------------ ------- ------ ------- -------- ------ -------
Chargino mass \[GeV\] 300 300 300 500 500 500
Chargino $c\tau$\[cm\] 10 100 1000 10 100 1000
Trigger 10% 10% 7.4% 13% 13% 10%
Basic selection 7.0% 6.7% 4.2% 8.9% 9.0% 6.3%
High-$\pt$, isolated track 0.24% 3.6% 3.1% 0.14% 4.4% 4.9%
Candidate track 0.15% 2.3% 1.3% 0.10% 2.9% 2.2%
Disappearing track 0.13% 1.0% 0.27% 0.095% 1.4% 0.47%
------------------------ ------- ------ ------- -------- ------ -------
Background estimates and associated systematic uncertainties
============================================================
For each of the background sources described in Sections \[sec:bkgdStudyElec\]–\[sec:fakeTrkBkgd\], the contribution in the search sample is estimated. The SM backgrounds are estimated with a method that is based on data and only relies on simulation to determine the identification inefficiency. The estimate of the fake-track background is obtained from data.
Standard model backgrounds
--------------------------
We estimate the SM background contributions to the search sample as $N^{i} = N^{i}_\text{ctrl}P^{i}$, where $N^{i}_\text{ctrl}$ is the number of events in data control samples enriched in the given background source and $P^{i}$ is the simulated identification inefficiency, for $i=\Pe,\Pgm,\Pgt$. The electron-enriched control sample is selected by requiring all the search sample criteria except for the electron veto and the [$E_\text{calo}$]{}requirement. The muon-enriched control sample is selected by requiring all the search sample criteria except for the muon veto. The ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$-enriched control sample is selected by requiring all the search sample criteria except for the ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ veto and the [$E_\text{calo}$]{}requirement. The [$E_\text{calo}$]{}requirement is removed for the electron and ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ control samples because it is strongly correlated with both the electron and ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ vetoes. The hadron background is estimated as the contribution from ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ decays, which is its dominant component.
The identification inefficiencies $P^i$ correspond to the probability to survive the corresponding veto criteria, i.e., the electron veto and [$E_\text{calo}$]{}requirement for electrons, the muon veto for muons, and the ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ veto and [$E_\text{calo}$]{}requirement for $\tau$ leptons. We determine $P^i$, defined to be the ratio of the number of events of the given background source in the search sample to the number in the corresponding control sample, from the simulated [$\PW\to\ell\nu$+jets]{}process. The [$\PW\to\ell\nu$+jets]{}process is the dominant contribution to the control samples: it represents 84% of the electron-enriched control sample, 85% of the muon-enriched control sample, and 75% of the ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$-enriched control sample. Of the more than 26 million simulated [$\PW\to\ell\nu$+jets]{}events, only one passes the search sample criteria; in that event the disappearing track is produced by a muon in a $\PW\to\mu\nu$ decay. For the other simulated physics processes, no events are found in the search sample. Since no simulated electron or tau events survive in the search sample, we quote upper limits at 68% CL on the electron and ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ inefficiencies. The control sample sizes, identification inefficiencies, and background estimates are given in Table \[tab:bkgdCalc\].
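The arithmetic of the estimate $N^{i} = N^{i}_\text{ctrl}P^{i}$ can be sketched as below. The limits quoted in the table additionally fold in simulation event weights, so the Poisson helper only illustrates the unweighted zero-observed case.

```python
import math

def poisson_upper_limit(n_obs, cl=0.68):
    """One-sided Poisson upper limit on a mean given n_obs counts, solved
    by bisection; for n_obs = 0 at 68% CL this is -ln(0.32) ~ 1.14."""
    def cdf(mu):
        return sum(math.exp(-mu) * mu**k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        # cdf decreases with mu: if it is still above 1-CL, mu is too small
        lo, hi = (mid, hi) if cdf(mid) > 1.0 - cl else (lo, mid)
    return 0.5 * (lo + hi)

# Central value of the muon background, N = N_ctrl * P, using the numbers
# from the table: ~0.66 events, consistent with the quoted 0.64 given
# the rounding of P.
n_mu = 4138 * 1.6e-4
```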
In addition to the uncertainties that result from the finite size of the simulation samples (labeled “statistical”), we also assess systematic uncertainties in the simulation of $P^i$ using tag-and-probe methods. In [$\cPZ\to\EE$]{}, [$\cPZ\to\MM$]{}, and [$\cPZ\to\TT$]{}samples, $P^i$ is measured as the probability of a probe track of the given background type to pass the corresponding veto criteria. The difference between data and simulation is taken as the systematic uncertainty.
The probe tracks are required to pass all of the [[disappearing-track]{}]{}criteria, with a looser requirement of $\pt > 30\GeV$, and without the corresponding veto criteria, i.e., the electron veto and [$E_\text{calo}$]{}requirement for electrons, the muon veto for muons, and the ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ veto and [$E_\text{calo}$]{}requirement for taus. Additionally, to obtain an adequate sample size, the [$\cPZ\to\TT$]{}probe tracks are not required to pass the [$N_\text{outer}$]{}requirement or the isolation requirement of no jet within $\Delta R<0.5$ of the track. The [$\cPZ\to\EE$]{}and [$\cPZ\to\MM$]{}tag-and-probe samples are collected with single-lepton triggers and require a tag lepton ($\Pe$ or $\Pgm$) that is well-reconstructed and isolated. The tag lepton and probe track are required to be opposite in charge and to have an invariant mass between 80 and 100\GeV, consistent with a [$\cPZ\to\ell\ell$]{}decay. We measure $P^{\Pe}$ as the fraction of [$\cPZ\to\EE$]{}probe tracks that survive the electron veto and [$E_\text{calo}$]{}requirement and $P^{\Pgm}$ as the fraction of [$\cPZ\to\MM$]{}probe tracks that survive the muon veto. The [$\cPZ\to\TT$]{}tag-and-probe sample is designed to identify a tag $\tau$ lepton that decays as $\Pgt\to\mu\nu\cPagn$. This sample is collected with a single-muon trigger and requires a well-reconstructed, isolated tag muon for which the transverse mass of the muon and the missing transverse momentum is less than 40\GeV. The tag muon and probe track are required to be opposite in charge and to have an invariant mass between 40 and 75\GeV, consistent with a [$\cPZ\to\TT$]{}decay. We measure $P^\Pgt$ as the fraction of probe tracks that survive the ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ veto.
No probe tracks in the [$\cPZ\to\TT$]{}data survive both the ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ veto and the [$E_\text{calo}$]{}requirement, so the [$E_\text{calo}$]{}requirement is not included in the determination of $P^\Pgt$ for the systematic uncertainty.
For each of the tag-and-probe samples, the contamination from sources other than the target [$\cPZ\to\ell\ell$]{}process is estimated from the simulation and is subtracted from both the data and simulation samples before calculating $P^i$. The systematic uncertainties in $P^i$ are summarized in Table \[tab:bkgdCalc\]. The systematic uncertainties in the electron and ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ estimates are incorporated into the 68% CL upper limit on their background contributions according to Ref. [@CousinsHighland].
\[tab:bkgdCalc\]
Electrons Muons Taus
------------------------------------ ----------------------------------------------- ------------------------------------------ -----------------------------------------------
Criteria removed to $\Pe$ veto $\Pgm$ veto ${\ensuremath{\Pgt_\mathrm{h}}\xspace}$ veto
select control sample ${\ensuremath{E_\text{calo}}\xspace}< 10\GeV$ ${\ensuremath{E_\text{calo}}\xspace}< 10\GeV$
$N^{i}_\text{ctrl}$ from data $7785$ $4138$ $29$
$P^{i}$ from simulation $<6.3\times 10^{-5}$ $1.6 ^{+3.6}_{-1.3} \times 10^{-4} $ ${<}0.019$
$N^{i} = N^{i}_\text{ctrl} P^{i} $ ${<}0.49\stat$ $0.64 ^{+1.47}_{-0.53}\stat$ ${<}0.55\stat$
$P^{i}$ systematic uncertainty 31% 50% 36%
$N^{i}$ ${<}0.50\,\text{(stat+syst)}$ $0.64 ^{+1.47}_{-0.53}\stat\pm0.32\syst$ $<0.57\,\text{ (stat+syst)}$
Fake tracks {#fake-tracks}
-----------
The fake-track background is estimated as $N^\text{fake} = N^\text{basic} P^\text{fake}$, where $N^\text{basic} = 1.77 \ten{6}$ is the number of events in data that pass the basic selection criteria, and $P^\text{fake}$ is the fake-track rate determined in a [$\cPZ\to\ell\ell$]{}($ \ell = \Pe$ or $\mu$) data control sample, a large sample consisting of well-understood SM processes. In the simulation, the probability of an event to contain a fake track that has large transverse momentum and is isolated does not depend on the underlying physics process of the event. The [$\cPZ\to\ell\ell$]{}sample is collected with single-lepton triggers and is selected by requiring two well-reconstructed, isolated leptons of the same flavor that are opposite in charge and have an invariant mass between 80 and 100\GeV, consistent with a [$\cPZ\to\ell\ell$]{}decay. We measure $P^\text{fake}$ as the probability of an event in the combined [$\cPZ\to\ell\ell$]{}control sample to contain a track that passes the [[disappearing-track]{}]{}selection. There are two [$\cPZ\to\ell\ell$]{}data events with an additional track that passes the [[disappearing-track]{}]{}selection; thus $P^\text{fake}$ is determined to be $(2.0^{+2.7}_{-1.3}) \times 10^{-7}$. The rate of fake tracks with between 3 and 6 hits is consistent between the sample after the basic selection and the [$\cPZ\to\ell\ell$]{}control sample, as shown in Fig. \[fig:fakeTrkRatios\]. Fake tracks with 5 hits provide a background-enriched sample that is independent of the search sample, in which tracks are required to have 7 or more hits. We use the ratio of the rates of fake tracks with 5 hits between these two samples (including the statistical uncertainty) to assign a systematic uncertainty of 35%. The fake-track background estimate is $N_\text{fake} = 0.36^{+0.47}_{-0.23}\stat\pm0.13\syst$ events.
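The central value and statistical errors of this estimate follow directly from the numbers in the text:

```python
# N_fake = N_basic * P_fake, with the asymmetric statistical interval on
# P_fake propagated by simple scaling. All inputs are taken from the text.
n_basic = 1.77e6
p_fake_central, p_fake_up, p_fake_down = 2.0e-7, 2.7e-7, 1.3e-7

n_fake = n_basic * p_fake_central        # central value, ~0.35 events
n_fake_err_up = n_basic * p_fake_up      # upper edge of the stat. interval
n_fake_err_down = n_basic * p_fake_down  # lower edge of the stat. interval
```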
![ The ratio of the fake-track rates, $P^\text{fake}$, in the sample after the basic selection and in the [$\cPZ\to\ell\ell$]{}control sample, observed in data, as a function of the number of hits on the candidate track. []{data-label="fig:fakeTrkRatios"}](figures/fakeTrkRatios.pdf){width="48.00000%"}
Background estimate validation
------------------------------
The methods used to estimate the backgrounds in the search sample are tested in three control samples: the [[candidate track]{}]{}sample and [$E_\text{calo}$]{}and [$N_\text{outer}$]{}sideband samples. The sideband samples are depleted in signal by applying inverted signal isolation criteria, and the size of the samples is increased by relaxing the track $\pt$ requirement to $\pt>30\GeV$. In the [$N_\text{outer}$]{}sideband sample, events must pass all criteria of the [[candidate track]{}]{}sample, and the candidate track must have 2 or fewer missing outer hits. In the [$E_\text{calo}$]{}sideband sample, events must pass all criteria of the [[candidate track]{}]{}sample, and the candidate track must have more than 10\GeV of associated calorimeter energy. The backgrounds in each of these control samples are estimated using the methods used to estimate the backgrounds in the search region, with the appropriate selection criteria modified to match each sample. The data yields and estimates in each of these samples are consistent within the uncertainties, as shown in Table \[tab:bkgEstValidate\]. The methods of background estimation were validated in these control samples before examining the data in the search sample.
\[tab:bkgEstValidate\]
Sample Data Estimate Data/Estimate
------------------------------ ------ ---------------- ------------------
Candidate tracks 59 $49.0 \pm 5.7$ $1.20 \pm 0.21$
[$E_\text{calo}$]{}sideband 197 $195 \pm 13$ $1.01 \pm 0.10$
[$N_\text{outer}$]{}sideband 112 $103 \pm 9$ $1.09 \pm 0.14$
Additional systematic uncertainties {#sec:syst}
===================================
In addition to the systematic uncertainties in the background estimates described previously, there are systematic uncertainties associated with the integrated luminosity and the signal efficiency.
The integrated luminosity of the 8\TeV [$\Pp\Pp$]{}collision data is measured with a pixel cluster counting method, for which the uncertainty is 2.6% [@CMS-PAS-LUM-13-001].
The uncertainty associated with the simulation of jet radiation is assessed by comparing the recoil of muon pairs from ISR jets in data with a sample of simulated ${\ensuremath{\cPZ\to\MM}\xspace}$+jets events. The ratio of the dimuon $\pt$ spectra in data and simulation is used to weight the signal events, and the corresponding selection efficiency is compared to the nominal efficiency. The uncertainty is 3–11%.
We assess uncertainties due to the jet energy scale and resolution from the effect of varying up and down by one standard deviation the jet energy corrections and jet energy resolution smearing parameters [@JESPaper]. The selection efficiency changes by 0–7% from the variations in the jet energy corrections and jet energy resolution.
We assess the PDF uncertainty by evaluating the envelope of uncertainties of the CTEQ6.6, MSTW08, and NNPDF2.0 PDF sets, according to the PDF4LHC recommendation [@pdf2; @Alekhin:2011sk]. The resultant acceptance uncertainties are 1–10%.
The uncertainty associated with the trigger efficiency is assessed with a sample of $\PW\to \mu \nu$ events. We compare the trigger efficiency in data and simulation as a function of the missing transverse momentum reconstructed after excluding muons, as the trigger efficiency is similar for $\PW\to \mu \nu$ and signal events. We select $\PW\to \mu \nu$ events by applying the basic selection criteria excluding the missing transverse momentum requirement. We also apply the candidate track criteria excluding the muon veto. The ratio of the trigger efficiency in data and simulation is used to weight the signal events. The resultant change in the selection efficiency is 1–8%.
The uncertainty associated with the modeling of [$N_\text{outer}$]{}is assessed by varying the [$N_\text{outer}$]{}distribution of the simulated signal samples by the disagreement between data and simulation in the [$N_\text{outer}$]{}distribution in a control sample of muon tracks. Since muons are predominantly affected by the algorithmic sources of missing outer hits described in Section \[sec:srcMissOutHits\], they illustrate how well the [$N_\text{outer}$]{}distribution is modeled in simulation. The consequent change in signal efficiencies is found to be 0–7%.
The uncertainties associated with missing inner and middle hits are assessed as the difference between data and simulation in the efficiency of the requirements of zero missing inner or middle hits in a control sample of muons. A sample of muons is used because they produce tracks that rarely have missing inner or middle hits, as would be the case for signal. These uncertainties are 3% for missing inner hits and 2% for missing middle hits.
The systematic uncertainty associated with the simulation of [$E_\text{calo}$]{}is assessed as the difference between data and simulation in the efficiency of the [$E_\text{calo}<10\GeV$]{}requirement, in a control sample of fake tracks with exactly 4 hits. This sample is used because such tracks have very little associated calorimeter energy, as would be the case for signal tracks. The uncertainty is 6%.
The uncertainty associated with the modeling of the number of pileup interactions per bunch crossing is assessed by weighting the signal events to match target pileup distributions in which the numbers of inelastic interactions are shifted up and down by the uncertainty. The consequent variation in the signal efficiency is 0–2%.
The uncertainty in the track reconstruction efficiency is assessed with a tag-and-probe study [@CMS-PAS-TRK-10-002]. The track reconstruction efficiency is measured for probe muons, which are reconstructed using information from the muon system only. We take the uncertainty to be the largest difference between data and simulation among several pseudorapidity ranges, observed to be 2%.
The systematic uncertainties in the signal efficiency for samples of charginos with $c\tau$ in the range of maximum sensitivity, 10–1000\cm, and all simulated masses, are summarized in Table \[tab:sigSyst\].
\[tab:sigSyst\]
-------------------------------------------------- -------
Jet radiation (ISR) 3–11%
Jet energy scale / resolution 0–7%
PDF 1–10%
Trigger efficiency 1–8%
[$N_\text{outer}$]{}modeling 0–7%
[$N_\text{inner}$]{}, [$N_\text{mid}$]{}modeling 2–3%
[$E_\text{calo}$]{}modeling 6%
Pileup 0–2%
Track reconstruction efficiency 2%
Total 9–22%
-------------------------------------------------- -------
Results {#sec:results}
=======
Two data events are observed in the search sample, which is consistent with the expected background. The numbers of expected events from background sources compared with data in the search sample are shown in Table \[tab:bkgEstSumm\]. From these results, upper limits at 95% CL on the total production cross section of direct electroweak chargino-chargino and chargino-neutralino production are calculated for various chargino masses and mean proper lifetimes. The next-to-leading-order cross sections for these processes, and their uncertainties, are taken from Refs. [@CharginoProduction1999; @SusyProduction]. The limits are calculated with the CL$_\mathrm{S}$ technique [@Read:2002hq; @Junk:1999kv], using the LHC-type CL$_\mathrm{S}$ method [@CMS_ATLAS_HiggsCombination]. This method uses a test statistic based on a profile likelihood ratio [@asymptoticCLs] and treats nuisance parameters in a frequentist context. Nuisance parameters for the systematic uncertainties in the integrated luminosity and in the signal efficiency are constrained with log-normal distributions. There are two types of nuisance parameters for the uncertainties in the background estimates, and they are specified separately for each of the four background contributions. Those that result from the limited size of a sample are constrained with gamma distributions, while those that are associated with the relative disagreement between data and simulation in a control region have log-normal constraints. The mean and standard deviation of the distribution of pseudo-data generated under the background-only hypothesis provide an estimate of the total background contribution to the search sample of $1.4 \pm1.2$ events.
\[tab:bkgEstSumm\]
  Event source         Estimate (stat)               Estimate (stat+syst)
  -------------- ----------------------------- ------------------------------------------
  Electrons       ${<}0.49\stat$                ${<}0.50\,\text{(stat+syst)}$
  Muons           $0.64^{+1.47}_{-0.53}\stat$   $0.64^{+1.47}_{-0.53}\stat\pm0.32\syst$
  Taus            ${<}0.55\stat$                ${<}0.57\,\text{(stat+syst)}$
  Fake tracks     $0.36^{+0.47}_{-0.23}\stat$   $0.36^{+0.47}_{-0.23}\stat\pm0.13\syst$
  Data            2
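For intuition, a single-bin counting-experiment version of the CL$_\mathrm{S}$ limit can be sketched with the background fixed to its estimate of 1.4 events and no nuisance parameters. The analysis itself uses the LHC-type procedure with a profile-likelihood test statistic, so this is only a rough approximation of the quoted limits.

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def cls(s, b, n_obs):
    """CLs = CLsb / CLb for a single-bin counting experiment,
    with no nuisance parameters."""
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)

def s95(b, n_obs):
    """Signal yield excluded at 95% CL (CLs = 0.05), found by bisection."""
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cls(mid, b, n_obs) > 0.05 else (lo, mid)
    return 0.5 * (lo + hi)
```

With $b = 1.4$ and $n_\text{obs} = 2$ this toy excludes signal yields of roughly five events, illustrating the scale of the cross section limits before systematic uncertainties are included.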
The distributions of the $\pt$, number of hits, [$E_\text{calo}$]{}, and [$N_\text{outer}$]{}of the disappearing tracks in the search region are shown for the observed events and the estimated backgrounds in Fig. \[fig:distDataBkgd\_FullSel\]. The shapes of the electron, muon, and tau background distributions are obtained from the data control samples enriched in the given background. The fake-track distribution shapes are taken from the [$\cPZ\to\ell\ell$]{}control sample, using fake tracks with 5 hits, except for the plot of the number of hits, for which fake tracks with 7 or more hits are used. The background normalizations have the relative contributions of Table \[tab:bkgEstSumm\] and a total equal to 1.4 events, the mean of the background-only pseudo-data. No significant discrepancy between the data and the estimated background is found.
In contrast to a slowly moving chargino, which is expected to have a large average ionization energy loss, the energy loss of the two disappearing tracks in the search sample is compatible with that of minimum-ionizing SM particles, ${\approx}3\MeV/$cm.
{width="48.00000%"} {width="48.00000%"} {width="48.00000%"} {width="48.00000%"}
The expected and observed constraints on the allowed chargino mean proper lifetime and mass are presented in Fig. \[fig:limits\]. The maximum sensitivity is for charginos with a mean proper lifetime of 7 ns, for which masses less than 505\GeV are excluded at 95% CL.
In Fig. \[fig:massSplit\], we show the expected and observed constraints on the mass of the chargino and the mass difference between the chargino and neutralino, $\Delta m_{\PSGc_1} = m_{\PSGc^\pm_1} - m_{\PSGc^0_1}$, in the minimal AMSB model. The limits on $\tau_{\PSGc^\pm_1}$ are converted into limits on $\Delta m_{\PSGc_1}$ according to Refs. [@equation; @equation2]. The two-loop level calculation of $\Delta m_{\PSGc_1}$ for wino-like lightest chargino and neutralino states [@massSplittingCurve] is also indicated. In the AMSB model, we exclude charginos with mass less than 260\GeV, corresponding to a chargino mean proper lifetime of 0.2 ns and $\Delta m_{\PSGc_1} = 160\MeV$.
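The lifetime-to-$\Delta m_{\PSGc_1}$ conversion is driven by the width of the dominant $\PSGc^\pm_1\to\PSGc^0_1\pi^\pm$ decay. A rough sketch using the tree-level width formula is given below; the constants are rounded, and the paper relies on the full calculations of the cited references rather than this approximation.

```python
import math

# Constants in GeV-based natural units (rounded, illustrative values)
G_F = 1.1664e-5      # Fermi constant [GeV^-2]
F_PI = 0.1302        # charged-pion decay constant [GeV]
M_PI = 0.13957       # charged-pion mass [GeV]
COS2_CABIBBO = 0.95  # cos^2 of the Cabibbo angle
HBARC = 1.9733e-16   # hbar * c [GeV m]

def ctau_m(delta_m_gev):
    """c*tau [m] from the tree-level chargino -> neutralino + pion width:
    Gamma = (2 G_F^2 / pi) cos^2(theta_c) f_pi^2 dm^3 sqrt(1 - m_pi^2/dm^2)."""
    width = (2.0 / math.pi) * G_F**2 * COS2_CABIBBO * F_PI**2 \
            * delta_m_gev**3 * math.sqrt(1.0 - (M_PI / delta_m_gev)**2)
    return HBARC / width
```

For $\Delta m_{\PSGc_1} = 160\MeV$ this gives $c\tau$ of a few centimeters, i.e. a lifetime of order 0.1 ns, the regime probed by this search; larger splittings shorten the lifetime rapidly through the $\Delta m^3$ dependence.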
In Fig. \[fig:3DUpperLimit\], we show the observed upper limit on the total cross section of the ${\ensuremath{\Pq\Paq'}\xspace}\to {\ensuremath{\chipm_1}\xspace}{\ensuremath{\chiz_1}\xspace}$ plus ${\ensuremath{\Pq\Paq}\xspace}\to {\ensuremath{\chipm_1}\xspace}\PSGc^{\mp}_1$ processes in terms of chargino mass and mean proper lifetime. A model-independent interpretation of the results is provided in Appendix \[sec:appendix\].
![The expected and observed constraints on the chargino mean proper lifetime and mass. The region to the left of the curve is excluded at 95% CL. []{data-label="fig:limits"}](figures/lifetimeNs_vs_mass.pdf){width="\cmsFigWidth"}
![The expected and observed constraints on the chargino mass and the mass splitting between the chargino and neutralino, $\Delta m_{\PSGc_1}$, in the AMSB model. The prediction for $\Delta m_{\PSGc_1}$ from Ref. [@massSplittingCurve] is also indicated. []{data-label="fig:massSplit"}](figures/massSplit_vs_mass.pdf){width="48.00000%"}
![The observed upper limit (in pb) on the total cross section of ${\ensuremath{\Pq\Paq'}\xspace}\to {\ensuremath{\chipm_1}\xspace}{\ensuremath{\chiz_1}\xspace}$ and ${\ensuremath{\Pq\Paq}\xspace}\to {\ensuremath{\chipm_1}\xspace}\PSGc^{\mp}_1$ processes as a function of chargino mass and mean proper lifetime. The simulated chargino mass used to obtain the limits corresponds to the center of each bin. []{data-label="fig:3DUpperLimit"}](figures/lifetimeNs_vs_mass_color.pdf){width="\cmsFigWidth"}
Summary {#sec:conclusion}
=======
A search has been presented for long-lived charged particles that decay within the CMS detector and produce the signature of a disappearing track. In a sample of proton-proton data recorded at a collision energy of $\sqrt{s}=8\TeV$ and corresponding to an integrated luminosity of 19.5\fbinv, two events are observed in the search sample. Thus, no significant excess above the estimated background of $1.4 \pm 1.2$ events is observed, and constraints are placed on the chargino mass, mean proper lifetime, and mass splitting. Direct electroweak production of charginos with a mean proper lifetime of 7 ns and a mass less than 505\GeV is excluded at 95% confidence level. In the AMSB model, charginos with masses less than 260\GeV, corresponding to a mean proper lifetime of 0.2 ns and a chargino-neutralino mass splitting of $160\MeV$, are excluded at 95% confidence level. These constraints corroborate those set by the ATLAS Collaboration [@ATLASDisapp2].
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); MoER, ERC IUT, and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NIH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); NRF and WCU (Republic of Korea); LAS (Lithuania); MOE and UM (Malaysia); CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); MBIE (New Zealand); PAEC (Pakistan); MSHE, and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS and RFBR (Russia); MESTD (Serbia); SEIDI and CPAN (Spain); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA). Individuals have received support from the Marie-Curie program and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. 
Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation à la Recherche dans l’Industrie et dans l’Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS program of Foundation for Polish Science, cofinanced from European Union, Regional Development Fund; the Compagnia di San Paolo (Torino); the Consorzio per la Fisica (Trieste); MIUR project 20108T4XTM (Italy); the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; and the National Priorities Research Program by Qatar National Research Fund.
Model-independent interpretation {#sec:appendix}
================================
To allow the interpretation of the results of this search in the context of other new physics models, the signal efficiency is parameterized in terms of the four-momenta and decay positions of the generated BSM particles. This allows the signal efficiency to be approximated without performing a full simulation of the CMS detector. In this approximation, the signal efficiency is factorized as $\epsilon = \epsilon_\mathrm{b}\epsilon_\mathrm{t}$, where $\epsilon_\mathrm{b}$ is the probability of an event to pass the basic selection and $\epsilon_\mathrm{t}$ is the probability for that event to contain at least one disappearing track. The efficiency to pass the basic selection, $\epsilon_\mathrm{b}$, depends mostly on the $\pt$ of the BSM system, which is approximately equal to the missing transverse momentum. The efficiency of the basic selection as a function of the $\pt$ of the chargino-chargino or chargino-neutralino system ($\PSGc\PSGc$) is shown in Table \[tab:modelIndepBasic\]. To calculate the probability $\epsilon_\mathrm{t}$ that an event contains a disappearing track, it is necessary to first identify charged particles that pass the following track preselection criteria: $\pt > 50$ GeV, $\abs{\eta} < 2.2$, and a decay position within the tracker volume, with a longitudinal distance to the interaction point of less than 280 cm and a transverse decay distance in the laboratory frame $L_{xy}$ of less than 110 cm. For long-lived charged particles that meet the track preselection criteria, the efficiency to pass the [[disappearing-track]{}]{}selection depends mostly on $L_{xy}$, as given in Table \[tab:modelIndepDisTrk\]. Each of the long-lived BSM particles that pass the preselection should be considered, weighted by its [[disappearing-track]{}]{}efficiency from Table \[tab:modelIndepDisTrk\], to determine whether the event contains at least one disappearing track.
This parameterization of the efficiency is valid under the assumptions that the long-lived BSM particles are isolated and that their decay products deposit little or no energy in the calorimeters. For the benchmark signal samples used in this analysis, the efficiency approximation agrees with the full simulation efficiencies given in Table \[tab:cutFlowEffSig\] within 10% for charginos with [$c \tau$]{} between 10 and 1000 cm. The expected number of signal events $N$ for a new physics process is the product of the signal efficiency $\epsilon$, the cross section $\sigma$, and the integrated luminosity $L$, $N = \epsilon \sigma L$. By comparing such a prediction with the estimated background of $1.4 \pm 1.2$ events and the observation of two events in this search, constraints on other models can be set.
\[tab:modelIndepBasic\]
$\pt(\tilde{\chi}\tilde{\chi})$\[\] Basic selection efficiency (%)
------------------------------------- --------------------------------
${<}100$ 0.0 $\pm$ 0.0
100–125 13.1 $\pm$ 0.3
125–150 44.1 $\pm$ 0.8
150–175 65.3 $\pm$ 1.2
175–200 75.7 $\pm$ 1.5
200–225 79.5 $\pm$ 1.9
${>}225$ 85.5 $\pm$ 1.1
\[tab:modelIndepDisTrk\]
$L_{xy}$\[cm\] Disappearing track efficiency (%)
---------------- -----------------------------------
${<}30$ 0.0 $\pm$ 0.2
30–40 26.0 $\pm$ 1.0
40–50 44.2 $\pm$ 1.6
50–70 50.8 $\pm$ 1.4
70–80 45.5 $\pm$ 2.1
80–90 25.5 $\pm$ 1.6
90–110 3.1 $\pm$ 0.4
${>}110$ 0.0 $\pm$ 0.0
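The tabulated parameterization can be sketched in a few lines of Python. The bin edges and central efficiency values below are transcribed from Tables \[tab:modelIndepBasic\] and \[tab:modelIndepDisTrk\]; the bin-edge convention at boundaries, the function names, and the example inputs are illustrative assumptions, not part of the published prescription.

```python
import bisect

# Basic-selection efficiency vs. pt of the chi-chi system (GeV), Table [tab:modelIndepBasic]
PT_EDGES = [100, 125, 150, 175, 200, 225]
EPS_BASIC = [0.000, 0.131, 0.441, 0.653, 0.757, 0.795, 0.855]

# Disappearing-track efficiency vs. transverse decay length L_xy (cm), Table [tab:modelIndepDisTrk]
LXY_EDGES = [30, 40, 50, 70, 80, 90, 110]
EPS_TRACK = [0.000, 0.260, 0.442, 0.508, 0.455, 0.255, 0.031, 0.000]

def eps_basic(pt_system):
    # look up the basic-selection efficiency for the pt of the BSM system
    return EPS_BASIC[bisect.bisect_right(PT_EDGES, pt_system)]

def eps_track(lxy):
    # look up the disappearing-track efficiency for one preselected particle
    return EPS_TRACK[bisect.bisect_right(LXY_EDGES, lxy)]

def event_efficiency(pt_system, lxy_list):
    """eps = eps_b * eps_t, where eps_t is the probability that at least one
    preselected long-lived particle in the event yields a disappearing track."""
    p_none = 1.0
    for lxy in lxy_list:
        p_none *= 1.0 - eps_track(lxy)
    return eps_basic(pt_system) * (1.0 - p_none)

def expected_events(efficiencies, sigma_pb, lumi_invpb):
    # N = eps * sigma * L, with eps the mean per-event efficiency over a sample
    eps = sum(efficiencies) / len(efficiencies)
    return eps * sigma_pb * lumi_invpb
```

Comparing the resulting $N$ with the background estimate of $1.4 \pm 1.2$ events and the two observed events then yields a constraint on the model under test.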
The CMS Collaboration \[app:collab\]
====================================
---
author:
- 'Chaoyou Fu, Yibo Hu, Xiang Wu, Guoli Wang, Qian Zhang, and Ran He\*, [^1]'
bibliography:
- 'mybibfile.bib'
title: |
High Fidelity Face Manipulation\
with Extreme Pose and Expression
---
Photo-realistic face manipulation with pose and expression is a meaningful task in a wide range of fields, such as the movie industry, entertainment, and photography technologies. With the flourishing of Generative Adversarial Networks (GANs) [@goodfellow2014generative; @arjovsky2017wasserstein], face manipulation has achieved significant advances in recent years [@choi2018stargan; @pumarola2018ganimation; @huang2017beyond; @hu2018pose; @shen2018faceid]. However, existing face manipulation methods mainly focus on only one facial variation, i.e., changing only pose [@hu2018pose] or expression [@pumarola2018ganimation]. Moreover, methods that handle large pose or expression changes are still limited to low resolution (128 $\times$ 128). In particular, joint pose and expression modeling is challenging [@zhang2018joint], especially for high-resolution facial images with extreme pose and expression.
For face manipulation, a straightforward approach is to apply image-to-image translation [@isola2017image; @wang2018high]. However, in the case of high resolution with extreme pose and expression, it is difficult to preserve the local facial structures in this way. As shown in Fig. \[fig-nd-compare\] (a), the local facial structures, such as the eyes, nose, and mouth, are unclear. Recent observations [@wang2018high; @ma2017pose] show that boundary information is crucial for high fidelity image synthesis. Hence, we argue that the lack of structure guidance makes it difficult to synthesize extreme high-resolution face images. Several structure-guided methods have been proposed for face manipulation [@hu2018pose; @jo2019sc; @song2018geometry]. For example, CAPG-GAN [@hu2018pose] utilizes facial landmarks to control face rotation. SC-FEGAN [@jo2019sc] realizes local facial editing with sketch guidance. Most structure-guided methods directly concatenate a face image and its structure guidance in the image space. However, such direct concatenation makes it difficult to maintain the facial structure and texture. As shown in Fig. \[fig-nd-compare\] (b), the structure of the synthesized face is ambiguous and the textures are confused. This phenomenon may be caused by the lack of disentanglement between structure and texture, which is important for interpretable image manipulation [@shu2018deforming]. [@bao2018towards] proposes a simple disentanglement scheme: it introduces a face recognition network to learn structure-invariant features, and then concatenates the structure-invariant features with structure features to synthesize faces. As shown in Fig. \[fig-nd-compare\] (c), this disentanglement scheme does make the structure of the synthesized face clearer, but the textures of the synthesized high-resolution face are somewhat lost. We argue that this is because the features of the face recognition network are too compact, leading to severe texture loss in such a high-resolution case.
Based on the above observations, we propose a novel framework for high-resolution face manipulation with extreme pose and expression, as shown in Fig. \[fig-framework\]. Our framework divides this challenging task into two correlated stages: a boundary prediction stage and a disentangled face synthesis stage. The first stage utilizes a boundary image for joint pose and expression modeling. It employs an encoder-decoder network to predict the boundary image of the target face in a semi-supervised way. Pose and expression estimators are introduced to improve the prediction accuracy. The second stage encodes the predicted boundary image and the original face into the structure and texture latent spaces by two encoder networks, respectively. Since it is hard to directly disentangle structure and texture [@shu2018deforming], we introduce a face recognition network as a proxy to facilitate disentanglement. Different from [@bao2018towards], which directly utilizes the compact features of the proxy network, we propose a simple yet effective feature threshold loss to control the compactness between our learned face features and the compact features. The result of our method in Fig. \[fig-nd-compare\] (d) exhibits not only clear structures but also intact textures.
![Visualization comparisons (512 $\times$ 512) of different methods. (a) Image-to-image translation [@wang2018high]. The local structures, e.g., the mouth, are unclear; (b) Concatenating the original input face and the boundary of the target face [@hu2018pose]. The local structures are ambiguous and textures are confused; (c) Utilizing a face recognition network to disentangle structure and texture [@bao2018towards]. The local structures are clear, but the textures are somewhat lost; (d) Our method. The structures and textures are well maintained. []{data-label="fig-nd-compare"}](Fig-nd-compare.pdf){width="49.00000%"}
Most currently publicly available face manipulation databases are limited to low resolution; e.g., the resolution of the original images in the MultiPIE database [@gross2010multi] is only $640 \times 480$. Although the recently released high-resolution databases CelebA-HQ [@karras2017progressive] and FFHQ [@karras2018style] reach $1024 \times 1024$ resolution, the pose variations in these databases are inadequate (most images are nearly frontal). Therefore, in order to verify the effectiveness of our method in face manipulation, we collect a new high quality Multi-View Face (MVF-HQ) database. Comparisons of our MVF-HQ database with other existing public high-resolution face manipulation databases are tabulated in Table \[table-high-resolution\]. MVF-HQ has the following three advantages: (1) **Large-Scale**. MVF-HQ consists of $120,283$ images, far more than other high-resolution databases ($70,000$ at most [@karras2018style]). (2) **High-Resolution**. The resolution of the original images in MVF-HQ is up to $6000 \times 4000$, while other databases only reach $1024 \times 1024$ resolution [@karras2017progressive; @karras2018style]. (3) **Abundant Variations**. MVF-HQ contains $13$ views from $-90^\circ$ to $+90^\circ$ (at $15^\circ$ intervals) as well as diverse expression and illumination variations. MVF-HQ will be released soon, along with its $5$ precise facial landmarks.
In summary, the main contributions are as follows:
- The high-resolution face manipulation problem with extreme pose and expression is formulated as a stage-wise learning problem that contains two correlated stages: a boundary prediction stage and a disentangled face synthesis stage.
- Joint pose and expression modeling is implemented via boundary image translation in the first stage. In addition, a proxy network and a feature threshold loss are introduced in the second stage to disentangle structure and texture, making better use of the boundary image.
- We collect a high quality MVF-HQ database that contains $120,283$ images at $6000 \times 4000$ resolution from $479$ identities, with abundant variations in views, expressions, and illuminations.
- Experiments on the MultiPIE [@gross2010multi], RaFD [@langner2010presentation], CelebA-HQ [@karras2017progressive] and our MVF-HQ databases show that our method dramatically improves the visualization of face manipulation.
Related Work
============
Face Manipulation
-----------------
Face manipulation has attracted great attention in computer vision and graphics [@blanz2003reanimating; @wang2009face; @yang2011expression; @cao2014facewarehouse; @kemelmacher2014illumination; @thies2016face2face; @li2018global1; @chen2019semantic; @wang2018video]. Pose rotation [@tran2017disentangled; @hu2018pose] and expression editing [@pumarola2018ganimation; @tulyakov2018mocogan] are two of the main research directions. TP-GAN [@huang2017beyond] adopts a two-pathway generative network architecture, achieving photo-realistic face frontalization from a single image. FaceID-GAN [@shen2018faceid] introduces an identity classifier as a competitor to better preserve identity when changing pose and expression. UV-GAN [@deng2018uv] completes the facial UV map to improve the performance of pose-invariant face recognition. By controlling the magnitude of Action Units (AUs), GANimation [@pumarola2018ganimation] renders expressions in a continuum. MoCoGAN [@tulyakov2018mocogan] separates the hidden features into a content subspace and a motion subspace, which makes it possible to synthesize different expressions.
Other face manipulation tasks have also seen considerable development, such as facial makeup [@chang2018pairedcyclegan], age synthesis [@zhao2019look; @li2018global1], face inpainting [@yu2018generative; @nazeri2019edgeconnect], and cross-spectral synthesis [@duan2019pose]. PairedCycleGAN [@chang2018pairedcyclegan] introduces a new cycle generative network that transfers and removes makeup styles in an asymmetric manner. In this way, it can apply makeup to a target face in the style of a reference face. StarGAN [@choi2018stargan] realizes multi-domain face attribute transfer with a single generator. WaveletGLCA-GAN [@li2018global1] employs four generative networks to learn both global topology structure and local texture details, achieving vivid age synthesis. AIM [@li2018global1] proposes a unified framework for both cross-age synthesis and age-invariant face recognition. [@yu2018generative] utilizes surrounding background patches to facilitate image inpainting. [@duan2019pose] introduces a pose alignment module and a texture prior generator to tackle the unpaired cross-spectral synthesis problem.
However, extreme face manipulation methods [@huang2017beyond; @shen2018faceid; @hu2018pose] are still limited to low resolution (128$\times$128). High-resolution face manipulation with extreme pose and expression remains unexplored.
Image Synthesis
---------------
As one of the primary means of face manipulation, image synthesis has made great progress in recent years [@goodfellow2014generative; @kingma2013auto; @van2016conditional; @li2015generative; @dinh2016density]. It includes unconditional approaches [@karras2017progressive; @fu2019dual; @miyato2018spectral] and conditional approaches [@pix2pix2017; @park2019semantic]. In unconditional synthesis, images are generated from noise without any condition. Generative Adversarial Networks (GANs) [@goodfellow2014generative; @radford2015unsupervised; @arjovsky2017wasserstein] are representative unconditional synthesis models that consist of a generator and a discriminator playing a min-max game. The generator synthesizes data from a prior to confuse the discriminator, while the discriminator tries to distinguish the generated data from the real data. PG-GAN [@karras2017progressive] significantly improves the synthesis quality by progressively growing the generator and discriminator. Variational AutoEncoders (VAEs) [@kingma2013auto] are the other representative unconditional synthesis models, which optimize the evidence lower bound objective (ELBO) to learn the data distribution. IntroVAE [@huang2018introvae] employs an introspective variational generation model to synthesize high-resolution images without a discriminator. VQ-VAE [@van2017neural] learns discrete representations with an autoregressive prior to generate high quality data. VQ-VAE-2 [@razavi2019generating] further improves the generative quality by modifying the autoregressive prior.
In conditional image synthesis, the synthesized images need to meet given conditions. pix2pix [@pix2pix2017] introduces a conditional generative adversarial loss for paired image-to-image translation. pix2pixHD [@wang2018high] further improves pix2pix with a coarse-to-fine generator and a multi-scale discriminator, realizing higher fidelity image translation. CycleGAN [@CycleGAN2017] proposes a cycle-consistent approach for unpaired image-to-image translation. BigGAN [@brock2018large] first achieves high-resolution ($512 \times 512$) conditional image synthesis on ImageNet [@deng2009imagenet]. StyleGAN [@karras2018style] employs an alternative generator to automatically learn attributes. Benefiting from the proposed spatially-adaptive normalization, [@park2019semantic] synthesizes photo-realistic landscapes from semantic layouts. However, as stated in Section \[introduction\], it is hard for conditional image synthesis methods to realize high-resolution face manipulation with extreme pose and expression.
Face manipulation databases
---------------------------
Existing face manipulation databases can be divided into two categories: low-resolution databases and high-resolution databases. Representative low-resolution databases include Celeb-A [@liu2015deep], CAS-PEAL-R1 [@gao2007cas], and MultiPIE [@gross2010multi]. The resolution of the original images in the Celeb-A, CAS-PEAL-R1, and MultiPIE databases is $505 \times 606$, $640 \times 480$, and $640 \times 480$ [^2], respectively. Celeb-A is an in-the-wild face database that contains $202,599$ images with $40$ attribute annotations. It is widely used in face attribute editing, face inpainting, and other face manipulation tasks. Both the CAS-PEAL-R1 and MultiPIE databases were captured in controlled environments with diverse pose, illumination, and expression variations, and contain $30,863$ and $750,000$+ images, respectively. They are mainly adopted for pose-invariant face recognition.
Due to the high cost of data acquisition, there are a limited number of high-resolution face manipulation databases. RaFD [@langner2010presentation] is mainly used for facial expression analysis; however, its image count ($8,040$ images of $73$ identities) and resolution ($681 \times 1024$) are limited. The recently released databases CelebA-HQ [@karras2017progressive] and FFHQ [@karras2018style] reach $1024 \times 1024$ resolution. The former contains $30,000$ images that are mainly selected from the Celeb-A database; researchers utilize image processing techniques, such as super-resolution, to convert low-resolution images into higher resolution ones. The latter consists of $70,000$ images crawled from Flickr. However, most images in the CelebA-HQ and FFHQ databases are nearly frontal faces, making it hard to edit large poses on these databases. Different from the existing face manipulation databases, our newly built MVF-HQ database has $120,283$ images of higher resolution with diverse variations. The comparisons are tabulated in Table \[table-high-resolution\].
![The framework of our method, which consists of a boundary prediction stage and a disentangled face synthesis stage. The first stage predicts the boundary image of the target face in a semi-supervised way. A pose estimator and an expression estimator are employed to improve the prediction accuracy. The second stage utilizes the predicted boundary image to synthesize refined face. A proxy network and a feature threshold loss are introduced to disentangle the structure and texture in the latent space. []{data-label="fig-framework"}](framework.pdf){width="49.00000%"}
Method
======
Given an original face $I^a$, the goal of our method is to synthesize the target face $I^b$, according to a given pose vector $p^b$ and an expression vector $e^b$. In addition, we denote the boundary image of the original face and the target face as $B^a$ and $B^b$, respectively. The face manipulation task is explicitly divided into two stages: a boundary prediction stage and a disentangled face synthesis stage, as shown in Fig. \[fig-framework\]. In the rest of this section, we will present the above two stages in detail.
{width="99.00000%"}
Boundary Prediction
-------------------
The boundary prediction stage predicts the target boundary image according to the given conditional vectors, i.e., a pose vector and an expression vector. As shown in Fig. \[fig-framework\], we utilize an encoder network $Enc$ and a decoder network $Dec$ to realize this conditional boundary prediction. Specifically, through $Enc$, we first map the original input boundary image $B^a$ into a latent space, $z^a = Enc(B^a)$. Then, the target pose vector $p^b$ and expression vector $e^b$ are concatenated with the hidden variable $z^a$ to provide conditional information. Finally, the target boundary image is generated by the decoder network, $\hat{B}^b = Dec(z^a, p^b, e^b)$.
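The data flow of this stage can be illustrated with a minimal NumPy sketch. The real $Enc$ and $Dec$ are convolutional networks; the random linear maps, latent dimension, pose/expression vector sizes, and image size below are placeholder assumptions chosen only to make the shapes concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def enc(boundary_img):
    # stand-in encoder: flatten the boundary image and project it to a
    # latent code z (the 128-dim latent size here is an arbitrary choice)
    flat = boundary_img.reshape(-1)
    W = rng.standard_normal((128, flat.size)) * 0.01
    return W @ flat

def dec(z, p, e, out_shape=(64, 64)):
    # concatenate the latent code with the target pose/expression vectors,
    # then project back to image shape (tanh keeps outputs in [-1, 1])
    h = np.concatenate([z, p, e])
    W = rng.standard_normal((np.prod(out_shape), h.size)) * 0.01
    return np.tanh(W @ h).reshape(out_shape)

B_a = rng.standard_normal((64, 64))   # original boundary image (placeholder)
p_b = np.array([1.0, 0.0, 0.0])       # target pose vector (3 poses assumed)
e_b = np.array([0.0, 1.0])            # target expression vector (2 expressions assumed)
B_b_hat = dec(enc(B_a), p_b, e_b)     # predicted target boundary image
```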
The pose and expression are discrete in the database; e.g., the MultiPIE database [@gross2010multi] has only 15 discrete poses and 6 discrete expressions. However, we expect this stage to generate boundary images with unseen poses and expressions. Hence, we introduce a semi-supervised training scheme. For poses and expressions in the database, we utilize the corresponding ground truth to constrain the generated boundary image. For poses and expressions that do not exist in the database, we utilize two pre-trained estimators, a pose estimator $F_p$ and an expression estimator $F_e$, to constrain the generated boundary image via conditional regression.
The loss functions involved in this stage are described below, including a pixel-wise loss and a conditional regression loss.
**Pixel-Wise Loss.** For the poses and expressions that belong to the database, a pixel-wise $L_1$ loss is utilized to constrain the predicted boundary image $\hat{B}^b = Dec(Enc(B^a), p^b, e^b)$:
$$\label{eq:pix-boundary}
\mathcal{L}_{\text{pix-bound}} = \left | Dec(Enc(B^a), p^b, e^b) - B^b \right |,$$
where $B^b$ is the ground truth target boundary image.
**Conditional Regression Loss.** \[regression\] For the poses and the expressions that do not exist in the database, we first randomly produce $p^r$ and $e^r$ to generate boundary image $B^r = Dec(z^a, p^r, e^r)$. Then, we utilize a pose estimator $F_p$ and an expression estimator $F_e$ to estimate pose $\hat{p}^r = F_p(B^r)$ and expression $\hat{e}^r = F_e(B^r)$, respectively. The estimated $\hat{p}^r$ and $\hat{e}^r$ are used to constrain the generated boundary image. The intuition is that the estimated $\hat{p}^r$ and $\hat{e}^r$ of $B^r$ should be equal to the conditional vectors $p^r$ and $e^r$, respectively. Hence, a conditional regression loss, including a pose regression term and an expression regression term, is formulated as:
$$\label{eq:regression}
\begin{aligned}
\mathcal{L}_{\text{reg}} &= || F_p(Dec(z^a, p^r, e^r)) - p^r||_2^2 \\
&+ || F_e(Dec(z^a, p^r, e^r)) - e^r ||_2^2.
\end{aligned}$$
The parameters of the pre-trained $F_p$ and $F_e$ are fixed during the training procedure.
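The two boundary-stage losses above can be sketched directly. This is a minimal NumPy version, with the estimator outputs $F_p(B^r)$ and $F_e(B^r)$ passed in as plain arrays; the function names are illustrative, not from the paper.

```python
import numpy as np

def pixel_loss(B_pred, B_gt):
    # L1 pixel-wise loss between predicted and ground truth boundary images
    return np.abs(B_pred - B_gt).mean()

def conditional_regression_loss(p_est, p_r, e_est, e_r):
    # squared-error regression of estimated pose/expression against the
    # randomly sampled condition vectors p_r, e_r
    return np.sum((p_est - p_r) ** 2) + np.sum((e_est - e_r) ** 2)
```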
Disentangled Face Synthesis
---------------------------
This stage utilizes the predicted boundary image to perform refined face synthesis. As shown in Fig. \[fig-framework\], we first utilize two encoders $G_{enc}^{B}$ and $G_{enc}^{I}$ to map the predicted boundary image $\hat{B}^b$ and the original input face $I^a$ to $f_{B^b} = G_{enc}^{B}(\hat{B}^b)$ and $f_{I^a} = G_{enc}^{I}(I^a)$, respectively. Then, we disentangle the structure and texture in the latent space via a proxy network $Proxy$ and a feature threshold loss. After disentanglement, the boundary features $f_{B^b}$ and the face features $f_{I^a}$ are concatenated and fed into the decoder $G_{dec}^{I}$, synthesizing the final target face $\hat{I}^b = G_{dec}^{I}(f_{B^b}, f_{I^a})$.
The loss functions in this stage are presented below, including a feature threshold loss, a multi-scale pixel-wise loss, a multi-scale conditional adversarial loss and an identity preserving loss.
{width="99.00000%"}
**Feature Threshold Loss.** \[feature-threshold-Loss\] The feature threshold loss is designed to assist in disentangling the structure and texture in the latent space. Considering that directly disentangling structure and texture is difficult, we utilize a pre-trained face recognition network as a proxy network $Proxy$, whose features $f_{P^a} = Proxy(I^a)$ are thought to be structure invariant. In addition, instead of directly utilizing the compact features $f_{P^a}$ that will result in texture loss, as shown in Fig. \[fig-nd-compare\] (c), we introduce a feature threshold loss to better disentangle the structure and texture. Specifically, it controls the feature distance between the face features $f_{I^a} = G_{enc}^{I}(I^a)$ and the compact features $f_{P^a} = Proxy(I^a)$ with a threshold margin $m$:
$$\label{eq:threshold}
\mathcal{L}_{\text{thr}} = \left[|| G_{enc}^{I}(I^a) - Proxy(I^a) ||_2^2 - m \right]^{+},$$
where $[\cdot]^{+} = \max(0,\cdot)$. As the loss $\mathcal{L}_{\text{thr}}$ decreases, the face features $f_{I^a}$ move closer to the compact features $f_{P^a}$, which means the structure and the texture are better disentangled. Meanwhile, the threshold margin $m$ controls the degree of compactness of the face features $f_{I^a}$, which is employed to maintain the texture. The parameter analysis of $m$ is presented in Section \[parameter-analysis\].
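The feature threshold loss is just a hinge on the squared feature distance: it is zero once the distance falls below the margin $m$. A minimal NumPy sketch, with the two feature vectors passed in as plain arrays:

```python
import numpy as np

def feature_threshold_loss(f_face, f_proxy, m):
    # hinge on the squared distance between the learned face features and the
    # proxy (face recognition) features; zero once ||f_face - f_proxy||^2 <= m
    d2 = np.sum((f_face - f_proxy) ** 2)
    return max(0.0, d2 - m)
```

A larger margin $m$ leaves the face features less compact, which is what preserves texture in the synthesized output.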
**Multi-Scale Pixel-Wise Loss.** We introduce a multi-scale pixel-wise loss to constrain the synthesized face at different scales. Specifically, by downsampling by factors of 2 and 4, we first obtain 3-scale image pyramids of the synthesized and the ground truth faces, respectively. Then, we calculate the pixel-wise loss at these 3 scales: $$\label{eq:pix-2}
\mathcal{L}_{\text{pix-mul}} = \sum_{s=1,2,3} \left |G_{dec}^{I}(f_{B^b}, f_{I^a})_s - I^b_s \right |,$$ where $s$ denotes the scale. The pixel-wise loss at the top of the image pyramid pays more attention to global information, because it has a larger receptive field. On the contrary, the pixel-wise loss at the bottom of the image pyramid is more concerned with the recovery of details.
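For intuition, the pyramid construction and the loss can be sketched as follows. Block average pooling is used here as the downsampling operator, which is an assumption (the paper does not specify the interpolation method):

```python
import numpy as np

def downsample(img, f):
    # block average pooling by factor f (crops any remainder rows/columns)
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def multiscale_pixel_loss(pred, gt, factors=(1, 2, 4)):
    # L1 loss summed over the image pyramid; factors 1, 2, 4 give the 3 scales
    return sum(np.abs(downsample(pred, f) - downsample(gt, f)).mean() for f in factors)
```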
**Multi-Scale Conditional Adversarial Loss.** To improve the sharpness of the synthesized face images, we also introduce a conditional adversarial loss. The discriminator tries to distinguish the fake image pair $\{\hat{I}^b, B^b \}$ from the real image pair $\{I^b, B^b \}$, and the generator tries to fool the discriminator: $$\label{eq:adv}
\begin{aligned}
\mathcal{L}_{\text{adv}} &= \mathbb{E}_{I^b \sim P(I^b)} \left [ \log D(I^b, B^b) \right ] \\
& + \mathbb{E}_{\hat{I}^b \sim P(\hat{I}^b)} \left [ \log(1 - D(\hat{I}^b, B^b)) \right ].
\end{aligned}$$ In order to improve the ability of the discriminator, we adopt the multi-scale discriminant strategy [@wang2018high]. It utilizes three discriminators to discriminate the synthesized images at three different scales.
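A minimal numerical sketch of this conditional adversarial objective, with scalar discriminator scores in $(0,1)$ standing in for $D$ evaluated on (image, boundary) pairs, and one score per scale standing in for the three multi-scale discriminators (both simplifying assumptions):

```python
import numpy as np

def conditional_adv_loss(d_real, d_fake):
    # value of the conditional GAN objective for one discriminator;
    # d_real = D(I^b, B^b), d_fake = D(I^b_hat, B^b), both in (0, 1)
    eps = 1e-12  # numerical guard against log(0)
    return np.log(d_real + eps) + np.log(1.0 - d_fake + eps)

def multiscale_adv_loss(real_scores, fake_scores):
    # sum the conditional adversarial objective over the three scales
    return sum(conditional_adv_loss(r, f) for r, f in zip(real_scores, fake_scores))
```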
{width=".99\textwidth"}
**Identity Preserving Loss.** In order to further preserve the identity information of the synthesized faces, we adopt an identity preserving loss as in [@hu2018pose]. Specifically, a pre-trained Light CNN [@wu2018light] is introduced as a feature extractor $D_{ip}$. It forces the identity features of the synthesized face $\hat{I}^b$ to be as close as possible to the identity features of the real face $I^b$. The identity preserving loss is formulated as: $$\label{eq:ip}
\mathcal{L}_{\text{ip}} = || D_{ip}^{p}(\hat{I}^b) - D_{ip}^{p}(I^b) ||_2^2 + || D_{ip}^{fc}(\hat{I}^b) - D_{ip}^{fc}(I^b) ||_2^2,$$ where $D_{ip}^{p}$ and $D_{ip}^{fc}$ denote the outputs of the last pooling layer and the fully connected layer, respectively.
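A minimal sketch of this loss, with the Light CNN pooling-layer and fc-layer features passed in as plain arrays (the function name is illustrative):

```python
import numpy as np

def identity_preserving_loss(pool_fake, fc_fake, pool_real, fc_real):
    # squared distances between pooling-layer and fc-layer identity features
    # of the synthesized and real faces
    return (np.sum((pool_fake - pool_real) ** 2)
            + np.sum((fc_fake - fc_real) ** 2))
```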
Overall Loss {#overal-loss}
------------
The boundary prediction stage and the disentangled face synthesis stage are trained separately. We first train the boundary prediction stage, and then utilize the predicted boundary to train the face synthesis stage. For the boundary prediction stage, the overall loss is: $$\label{eq:bp}
\mathcal{L}_{\text{bp}} = \lambda_1 \mathcal{L}_{\text{pix-bound}} + \lambda_2 \mathcal{L}_{\text{reg}}.$$ For the face synthesis stage, the overall loss is: $$\label{eq:fs}
\mathcal{L}_{\text{fs}} = \alpha_1 \mathcal{L}_{\text{thr}} + \alpha_2 \mathcal{L}_{\text{pix-mul}} + \alpha_3 \mathcal{L}_{\text{adv}}+ \alpha_4 \mathcal{L}_{\text{ip}},$$ where $\lambda_1$, $\lambda_2$, $\alpha_1$, $\alpha_2$, $\alpha_3$, and $\alpha_4$ are trade-off parameters.
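The two stage-wise objectives are plain weighted sums, which can be sketched as below. The default weights of 1.0 are placeholders; the paper's actual trade-off values are not stated here.

```python
def boundary_stage_loss(l_pix_bound, l_reg, lambdas=(1.0, 1.0)):
    # L_bp = lambda_1 * L_pix-bound + lambda_2 * L_reg
    return lambdas[0] * l_pix_bound + lambdas[1] * l_reg

def synthesis_stage_loss(l_thr, l_pix_mul, l_adv, l_ip, alphas=(1.0, 1.0, 1.0, 1.0)):
    # L_fs = alpha_1 * L_thr + alpha_2 * L_pix-mul + alpha_3 * L_adv + alpha_4 * L_ip
    a1, a2, a3, a4 = alphas
    return a1 * l_thr + a2 * l_pix_mul + a3 * l_adv + a4 * l_ip
```

Since the two stages are trained separately, each of these sums is minimized on its own: first the boundary stage, then the synthesis stage on the predicted boundaries.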
Multi-View Face (MVF-HQ) Database {#mvf-hq}
=================================
In order to verify the effectiveness of our method in extreme high-resolution face manipulation, we collect the MVF-HQ database. This section introduces the details of the MVF-HQ database in terms of the technical setup, data acquisition, data processing and comparisons.
Technical Setup
---------------
Thirteen Canon EOS digital SLR cameras (EOS $1300$D/$1500$D with $55$mm prime lenses) are used to take photos. The resolution of these photos is up to $24$ megapixels ($6000 \times 4000$). In order to ensure accurate angles for the collected photos, we elaborately design and build a horizontal semicircular bracket with a $1.5$m radius, located at head height. All cameras are placed on the bracket at $15^\circ$ intervals and point to the center of the semicircular bracket, as shown in Fig. \[fig-setup\]. Meanwhile, all cameras are connected to one computer through USB interfaces. We design software that can control all cameras to take photos simultaneously. The photos are automatically stored on a hard drive.
We also use $7$ flashes for illumination. These flashes are placed above, in front, front-above, front-below, behind, to the left and to the right of the participant, respectively. By turning on one flash and turning off the others, we can simulate different weak lighting conditions. Besides, a chair is placed in the center of the semicircular bracket to fix the pose of participants. Furthermore, we set a uniform white background for the data acquisition environment.
![Technical setup. []{data-label="fig-setup"}](fig-setup.pdf){width=".3\textwidth"}
{width="99.00000%"}
Data Acquisition
----------------
We invite a total of $500$ participants, all of whom have signed data acquisition licenses before taking photos. Each participant is asked to sit down in the chair and fine-tune the chair height to make sure the head is at the same height as the cameras. During the data acquisition process, the participant is asked to look directly into the direction of the camera at $0^o$ (see Fig. \[fig-setup\]) and display three facial expressions: neutral, smile and surprise. Each expression is photographed under all illuminations. The flashes are switched automatically and quickly to guarantee pose and expression consistency under different illuminations. Examples of different views, expressions and illuminations are shown in Fig. \[fig-angles\], Fig. \[fig-expressions\] and Fig. \[fig-illuminations\], respectively.
Data Processing
---------------
After data acquisition, we carefully check each original image to clean the database. Images of participants with non-standard poses and blurred images are removed from the database. Ultimately, we select $120,283$ images from $479$ identities. Considering that it is hard for landmark detection algorithms to accurately detect facial landmarks under large poses, we manually mark the five facial landmarks for images with extreme poses ($\pm60^o$, $\pm75^o$ and $\pm90^o$). The landmarks of the other angles are automatically detected by the algorithm and then checked manually. All facial landmarks will be released along with the MVF-HQ database.
![Examples of the expressions. []{data-label="fig-expressions"}](fig-expressions.pdf){width=".48\textwidth"}
![Examples of the illuminations. []{data-label="fig-illuminations"}](fig-illuminations.pdf){width="48.00000%"}
Comparisons
-----------
Table \[table-high-resolution\] presents the comparisons of our MVF-HQ database with currently publicly available high-resolution databases. We observe that MVF-HQ has the following advantages: (1) Large-Scale. MVF-HQ consists of $120,283$ images, far more than other databases ($70,000$ at most [@karras2018style]). (2) High-Resolution. High-performance digital SLR cameras enable the captured images to reach $6000 \times 4000$ resolution, while other high-resolution databases only reach $1024 \times 1024$ resolution [@karras2017progressive; @karras2018style]. (3) Abundant Variants. MVF-HQ contains abundant variants, including poses, expressions and illuminations.
Experiments
===========
In this section, we evaluate our method on the MultiPIE [@gross2010multi], the RaFD [@langner2010presentation], the CelebA-HQ [@karras2017progressive] and our newly built MVF-HQ databases. The details of these databases and experimental settings are first introduced in Section \[database-and-settings\]. Then, massive qualitative and quantitative results are presented in Sections \[qualitative-experiments\] and \[quantitative-experiments\], respectively. Finally, experimental analyses are described in Section \[experimental-analysis\].
{width="98.00000%"}
Databases and Settings {#database-and-settings}
----------------------
**The MultiPIE database** is a multi-view database collected in a controlled environment for face recognition and synthesis. There are four sessions, including $337$ identities with $15$ view points, $20$ illumination levels and $6$ expressions. Due to the limitations of the data acquisition equipment, the resolution of the original face images in the MultiPIE database only reaches $640 \times 480$. In our paper, we adopt two different settings for the quantitative and qualitative experiments, respectively. For the quantitative recognition experiments, following the Setting $2$ protocol of [@yim2015rotating; @hu2018pose], we only use the face images with neutral expression under all $20$ illumination levels and $13$ poses ranging from $-90^o$ to $+90^o$. A total of $337$ subjects are chosen in the experiments. The first $200$ subjects are used for training and the remaining $137$ subjects are used for testing. For the testing set, the first face image of each subject is used as the gallery and the other face images are used as probes. The protocol of our qualitative experiments is also based on the Setting $2$ protocol of [@yim2015rotating; @hu2018pose]. The difference is that, in addition to the neutral expression, we also use the other $5$ expressions for expression editing. In our experiments, all images of the MultiPIE database are aligned and cropped to $128 \times 128$ resolution.
**The RaFD database** is a high-resolution ($681 \times 1024$) face database with abundant attributes, mainly used for expression recognition. It consists of $8,040$ images from $73$ identities with eight kinds of standard emotional expressions. Furthermore, each identity also contains three different gaze directions (left, frontal and right) and five camera angles ($\pm90^o$, $\pm45^o$ and $0^o$). We randomly select $10$ identities as the testing set and use the remaining identities as the training set. Each image in the RaFD database is aligned and cropped to $512 \times 512$ resolution.
![Visualization comparisons with CAPG-GAN [@hu2018pose] on the MultiPIE Setting 2. Our method achieves better results in textures, e.g., the freckles in the first set of images. Zoom in for details. []{data-label="fig-multipie-compare"}](Fig-multipie-compare.pdf){width="48.00000%"}
**The CelebA-HQ database** is a high-quality version of the CelebA database [@liu2015deep]. The CelebA database is an in-the-wild face database that contains $202,599$ face images from $10,177$ celebrities. It has large diversity but low image quality. Considering the important role of high-quality images in face synthesis tasks, researchers [@karras2017progressive] create the CelebA-HQ database by a series of image processing techniques. The CelebA-HQ database has $30,000$ images in $1024 \times 1024$ resolution. Note that most of the face images in the CelebA-HQ database are nearly frontal. In order to enrich the facial angles of the images in the CelebA-HQ database, we utilize a $3$D model [@zhu2016face] to rotate the frontal images to profiles. We randomly choose $3,000$ images as the testing set and use the remaining images as the training set. Each image is resized to $512 \times 512$ resolution. In addition, considering the abundant pose variants of the CelebA database, we also verify the effectiveness of our method on it. $2,000$ images are selected as the testing set. Each image is aligned and cropped to $128 \times 128$ resolution.
**The MVF-HQ database** is a high-resolution multi-view face database, the details of which are described in Section \[mvf-hq\]. For face manipulation tasks with extreme pose and expression, all the above traditional face manipulation databases have their limitations. The MultiPIE database is limited by its low resolution, while the high-resolution RaFD database and the CelebA-HQ database are mainly limited in the number of images ($8,040$ images) and the pose diversity (nearly frontal), respectively. As presented in Table \[table-high-resolution\], the MVF-HQ database has significant advantages in the number of images, the resolution and the diverse attributes. In our experiments, we randomly select $336$ identities as the training set and the remaining $143$ identities as the testing set. There are no identity overlaps between training and testing. In addition, due to limited GPU memory, we only conduct experiments at $512 \times 512$ and $1024 \times 1024$ resolutions.
![Synthesis results on the MultiPIE database. The boundary images are generated by our boundary prediction stage. []{data-label="fig-multipie"}](Fig-multipie.pdf){width="48.00000%"}
{width="98.00000%"}
**Experimental Settings.** \[experimental-details\] The facial boundary image is obtained by the facial landmarks. Thanks to the advances in facial landmark detection [@bulat2017far], we first detect $68$ facial landmarks, and then connect the adjacent landmarks to obtain a boundary image, as shown in Fig. \[fig-nd-compare\]. It mainly consists of five facial components, including eyebrows, eyes, nose, mouth and jaw. These components can clearly present facial pose, expression and shape. The pose vectors are directly calculated according to the detected facial landmarks. Meanwhile, we utilize the Action Units (AUs) [@friesen1978facial] as our expression vectors, which are collected by the open source toolkit [@baltrusaitis2018openface]. The pose estimator $F_p$ and expression estimator $F_e$ in Section \[regression\] are pre-trained on the above four databases as well as the large-scale in-the-wild database CelebA [@liu2015deep]. The parameters $\lambda_1$, $\lambda_2$, $\alpha_1$, $\alpha_2$, $\alpha_3$ and $\alpha_4$ in Section \[overal-loss\] are set to $1$, $0.1$, $0.01$, $50$, $0.5$ and $0.02$, respectively. The parameter $m$ in Eq. (\[eq:threshold\]) is set to $7$. Adam [@kingma2014adam] ($\beta_1$ = 0.9, $\beta_2$ = 0.999) is adopted as the optimizer with a fixed learning rate $0.0002$. Our method is implemented by Pytorch. The high-resolution experiments on the MVF-HQ database are conducted on $8$ NVIDIA Titan X GPUs with $12$GB memory. Training takes about $12$ days for $1024 \times 1024$ resolution and about $7$ days for $512 \times 512$ resolution.
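The landmark-to-boundary step described above (detect $68$ landmarks, then connect adjacent landmarks within each facial component) can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the component index ranges assume the standard 68-point iBUG landmark layout, and the line rasterizer is a simple dense-sampling approximation.

```python
import numpy as np

# Component index ranges of the 68-point iBUG layout (an assumption):
COMPONENTS = [range(0, 17),   # jaw
              range(17, 22),  # left eyebrow
              range(22, 27),  # right eyebrow
              range(27, 36),  # nose
              range(36, 42),  # left eye
              range(42, 48),  # right eye
              range(48, 68)]  # mouth

def _draw_line(img, p, q):
    """Mark the pixels on the segment from p to q by dense sampling."""
    n = int(max(abs(q[0] - p[0]), abs(q[1] - p[1]))) + 1
    xs = np.linspace(p[0], q[0], n).round().astype(int)
    ys = np.linspace(p[1], q[1], n).round().astype(int)
    img[ys.clip(0, img.shape[0] - 1), xs.clip(0, img.shape[1] - 1)] = 255

def draw_boundary(landmarks, size=128):
    """Rasterize a boundary image by connecting adjacent landmarks
    within each facial component."""
    img = np.zeros((size, size), dtype=np.uint8)
    for comp in COMPONENTS:
        pts = landmarks[list(comp)]
        for p, q in zip(pts[:-1], pts[1:]):
            _draw_line(img, p, q)
    return img

# Toy usage: 68 points on a circle standing in for detected landmarks.
theta = np.linspace(0, 2 * np.pi, 68, endpoint=False)
lms = np.stack([64 + 40 * np.cos(theta), 64 + 40 * np.sin(theta)], axis=1)
boundary = draw_boundary(lms)
```

In practice the boundary image would be drawn at the target resolution ($128$, $512$ or $1024$) with anti-aliased strokes.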
![Synthesis results on the CelebA database. (a) Diverse face manipulation results. (b) Different pose and expression changes of one input image. []{data-label="fig-celeba"}](Fig-celeba.pdf){width="48.00000%"}
Qualitative Experiments
-----------------------
**Experimental Results on the MultiPIE database.** According to the given conditional vectors, our method can render an input face to the corresponding pose and expression. Another state-of-the-art work realizing a similar task is CAPG-GAN [@hu2018pose], which rotates a face to the target pose by controlling $5$ facial landmarks. The comparison results between our method and CAPG-GAN are shown in Fig. \[fig-multipie-compare\]. CAPG-GAN concatenates the original face images and the target facial landmarks as input, and then directly feeds them into the generator. As mentioned in Section \[introduction\], such a concatenation manner cannot hold the textures well, which is also reflected in Fig. \[fig-multipie-compare\]. It is obvious that the synthesized images of CAPG-GAN are too smooth, leading to the loss of many facial texture details, such as the freckles in the first set of images. On the contrary, the synthesized images of our method are closer to the ground truth in terms of global structure and local textures. We attribute the superiority of our method over CAPG-GAN to the disentanglement of structure and texture in the latent space.
{width="98.00000%"}
The other advantage of our method over CAPG-GAN is that, except for rendering poses, we can also edit the facial expressions. Fig. \[fig-multipie\] presents the synthesis results of manipulating pose and expression simultaneously. The boundary images in Fig. \[fig-multipie\] are generated in our boundary prediction stage, according to the given pose and expression vectors. The structure of our synthesized face images is consistent with the boundary images. At the same time, compared with the ground truth, the textures of the synthesized images are preserved well, even under extreme pose and expression.
![Visualization comparisons (512 $\times$ 512) with pix2pixHD [@wang2018high] on the RaFD database (the first row) and the MVF-HQ database (the second row). Please zoom in for details. []{data-label="fig-nd-rafd-compare"}](Fig-nd-rafd-compare.pdf){width="48.00000%"}
We further demonstrate the ability of our method to synthesize unseen poses and expressions in Fig. \[fig-continuous\]. By controlling the pose vector, the pose of the first person gradually rotates from $18.75^o$ to $56.25^o$. The angle interval of the synthesized images is $3.75^o$, while the angle interval in the MultiPIE database is $15^o$. All the poses of the synthesized images, except the two images with red boxes, are unseen in the training stage. In addition, the expression of the second person gradually changes from neutral to scream. Although all the expressions in the MultiPIE database are discrete, our method is able to synthesize unseen continuous expressions between neutral and scream. As shown in Fig. \[fig-continuous\], the second person with a neutral expression begins by gradually opening his mouth. Then, his eyebrows are raised and the mouth is opened widely. After that, his eyes begin to close and the mouth continues to open. Ultimately, the second person makes a scream expression. The continuous variants of pose and expression in Fig. \[fig-continuous\] demonstrate the generalization ability of our method.
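A minimal sketch of how such unseen intermediate conditions could be generated: sweep the pose angle in $3.75^o$ steps and linearly interpolate between the two expression (AU) vectors. The $17$-dimensional AU vector and the target intensities are illustrative assumptions (OpenFace reports $17$ AU intensities), not values from the paper.

```python
import numpy as np

# Pose sweep from 18.75 deg to 56.25 deg in 3.75-deg steps
# (the training data only contains poses in 15-deg steps).
angles = np.arange(18.75, 56.25 + 1e-6, 3.75)

# Linear interpolation between a neutral and a "scream" AU-intensity vector.
au_neutral = np.zeros(17)
au_scream = np.full(17, 5.0)  # illustrative target intensities
steps = np.linspace(0.0, 1.0, num=len(angles))[:, None]
au_sequence = (1 - steps) * au_neutral + steps * au_scream
```

Each `(angle, AU vector)` pair would then be fed to the boundary prediction stage as the conditional input.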
**Experimental Results on the RaFD database.** We compare our method with pix2pixHD [@wang2018high], which is a state-of-the-art high-resolution conditional image-to-image translation method. The first row of Fig. \[fig-nd-rafd-compare\] plots the expression manipulation results. We can see that, although this task only requires slight facial changes, pix2pixHD fails to hold the structure and texture well. The reason may be that pix2pixHD does not disentangle the structure and texture. In contrast, our method produces much better synthesis results. Moreover, in Fig. \[fig-rafd\], we also show the synthesis results of the angry, contemptuous, disgusted, fearful, happy, sad and surprised expressions, respectively. Each synthesized expression is vivid and matches the target expression label. Besides, in the last column of Fig. \[fig-rafd\], we also present high-quality synthesis results of joint pose and expression variation. The small number of images in the RaFD database brings huge challenges to the above face manipulation tasks. The high quality of the synthesized images indicates the high performance of our method with limited training images.
**Experimental Results on the MVF-HQ database.** The second row of Fig. \[fig-nd-rafd-compare\] plots the visualization comparisons between our method and pix2pixHD. For the synthesized face image of pix2pixHD, the local structures, such as the eye outline, are ambiguous and the whole facial textures are confused. Conversely, our synthesized image has clear structures and refined textures. In addition, we discover that the result of our method on the MVF-HQ database is better than that on the RaFD database. It may benefit from the large-scale training images of the MVF-HQ database.
In Fig. \[fig-nd-512-pose-v3\], we display the synthesis results of different poses and expressions on the MVF-HQ database. We observe that our method successfully synthesizes photo-realistic details, even under the extreme $90^o$ case. Moreover, we also explore a more challenging scenario, i.e., manipulating face images under $1024 \times 1024$ resolution. As presented in Fig. \[fig-nd-1024\], our method achieves excellent results, which not only preserves the global facial structure, but also synthesizes refined unseen textures, such as recovering the unseen ears of the third face image in the first row. More details, such as the double eyelids of the fourth face image in the first row and the thin eyebrows of the first face image in the second row, demonstrate the superiority of our method.
{width="98.00000%"}
**Experimental Results on the CelebA-HQ database.** All the above face manipulation databases are collected in controlled environments. In order to explore the applicability of our method in the in-the-wild situation, we extend our experiments to the CelebA-HQ database. As stated in Section \[database-and-settings\], most images in the CelebA-HQ database are nearly frontal. The profiles are created by a $3$D model and have many artifacts. In this case, we only conduct face frontalization experiments, i.e., rotating the created profiles to frontal ones. Due to the effects of uncontrolled variants, such as diverse illuminations and backgrounds, high-resolution face frontalization under the in-the-wild setting is challenging. Fig. \[fig-celeba-hq\] shows the frontal faces synthesized from the profiles. Although the input profiles with massive artifacts lose many facial textures, our method successfully eliminates the artifacts and completes the lost textures.
We further perform experiments on the original version of the CelebA-HQ database, i.e., the CelebA database [@liu2015deep], which has abundant poses but lower image quality. Fig. \[fig-celeba\] (a) plots the manipulation results with diverse pose variants, from which we observe that our method is able to rotate extreme poses, such as the first set of images. Fig. \[fig-celeba\] (b) presents different pose and expression changes of one input face image. The above visualization results demonstrate the superior performance of our method in the uncontrolled environment.
Quantitative Experiments
------------------------
In this subsection, we quantitatively evaluate the identity preserving property and the synthesis quality of our method. As shown in Fig. \[fig-multipie\], our method effectively recovers the structure and texture from the profiles, which can be used to improve face recognition performance under large poses [@hu2018pose]. Hence, we compare the face recognition accuracy of our method with the state-of-the-art face frontalization methods, including 3D-PIM [@zhao20183d], CAPG-GAN [@hu2018pose], PIM [@zhao2018towards], TP-GAN [@huang2017beyond], FF-GAN [@yin2017towards] and DR-GAN [@tran2017disentangled], on the MultiPIE Setting $2$ protocol. The probe set consists of profiles with various views and the gallery set only contains one frontal face image per subject. The profiles in the probe set are frontalized by our method, and the pre-trained LightCNN [@wu2018light] is used to extract features. Cosine distances are calculated as the similarities to obtain the Rank-1 accuracies, and the comparisons are tabulated in Table \[table-1\]. ‘LightCNN’ means evaluation on the original profiles via the pre-trained LightCNN model. ‘Ours’ means calculating the Rank-1 accuracy on the synthesized frontal face images with the same LightCNN model. We observe that as the pose angle increases, the accuracies of all the methods drop gradually. The degradation is caused by the loss of facial appearance in profiles. The Rank-1 accuracies of all the methods are comparable under small pose angles ($15^o$, $30^o$ and $45^o$). However, at larger pose angles, the superiority of our method is obvious. In particular, our method significantly improves the accuracy under the challenging $\pm 90^o$ poses, and obtains the best performance compared with other state-of-the-art methods.
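The evaluation pipeline (feature extraction, cosine similarity against a one-image-per-subject gallery, Rank-1 counting) can be sketched generically as follows, with toy one-hot features standing in for LightCNN embeddings:

```python
import numpy as np

def rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids):
    """Rank-1 identification: match each probe to the gallery feature with
    the highest cosine similarity and count correct identity matches."""
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = p @ g.T                            # (num_probes, num_gallery)
    pred = gallery_ids[np.argmax(sims, axis=1)]
    return float(np.mean(pred == probe_ids))

# Toy check: three one-hot gallery identities, two perturbed probes each.
gallery = np.eye(3)
gallery_ids = np.array([0, 1, 2])
probes = np.repeat(np.eye(3), 2, axis=0) * 0.5 + 0.01
probe_ids = np.repeat(gallery_ids, 2)
acc = rank1_accuracy(probes, probe_ids, gallery, gallery_ids)
```

In the actual protocol, `probe_feats` would be LightCNN features of the frontalized probes and `gallery_feats` the features of the single frontal gallery image per subject.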
![The cross database experimental results (training on MultiPIE and testing on MVF-HQ). []{data-label="fig-cross"}](fig-cross.pdf){width="48.00000%"}
The probe and gallery settings of our MVF-HQ database are analogous to the MultiPIE database. Table \[table-2\] shows the comparison results of our method against other state-of-the-art methods, including pix2pixHD and CAPG-GAN. It is obvious that our method outperforms its competitors by a large margin under the extreme poses ($75^o$ and $90^o$). The quantitative recognition results are consistent with the qualitative visualization results in Fig. \[fig-multipie-compare\] and Fig. \[fig-nd-rafd-compare\]. The face recognition results on the MultiPIE database and the MVF-HQ database demonstrate that our method can effectively improve the recognition performance under large poses.
Besides, in order to evaluate the quality of the synthesized images, we compare the Fr$\acute{\text{e}}$chet Inception Distance (FID) [@heusel2017gans] of our method with those of CAPG-GAN and pix2pixHD; FID is calculated between the real faces and the synthesized faces. The results in Table \[table-2\] quantitatively confirm the high synthesis quality of our method.
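FID compares the Gaussian statistics (mean and covariance) of real and synthesized feature sets. A compact sketch of the standard formula of [@heusel2017gans] follows; the Inception feature extraction step is omitted and replaced by toy features:

```python
import numpy as np
from scipy import linalg

def fid(feats_a, feats_b):
    """Frechet Inception Distance between two feature sets, modeled as
    Gaussians: ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2})."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    s_a = np.cov(feats_a, rowvar=False)
    s_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(s_a @ s_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny numerical imaginary parts
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(s_a + s_b - 2.0 * covmean))

# Toy features standing in for Inception activations.
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 8))
```

`fid(x, x)` is (numerically) zero, and a pure mean shift of the feature cloud raises FID by the squared shift norm.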
Experimental Analysis
---------------------
**Ablation Study.** In this subsection, we investigate the roles of the five loss functions in our method, including the conditional regression loss $\mathcal{L}_{\text{reg}}$ in Eq. (\[eq:regression\]), the feature threshold loss $\mathcal{L}_{\text{thr}}$ in Eq. (\[eq:threshold\]), the multi-scale pixel-wise $\mathcal{L}_{\text{pix-mul}}$ loss in Eq. (\[eq:pix-2\]), the conditional adversarial loss $\mathcal{L}_{\text{adv}}$ in Eq. (\[eq:adv\]) and the identity preserving loss $\mathcal{L}_{\text{ip}}$ in Eq. (\[eq:ip\]). Both qualitative and quantitative experimental results are reported for better comparisons.
Fig. \[fig-ablation\] shows the qualitative visualization results of our method and its five variants. We discover that without $\mathcal{L}_{\text{reg}}$, the generated boundary image, whose pose does not belong to the database, is unsatisfactory. The outlines of many facial components, such as the nose and jaw, are unclear, resulting in an incomplete synthesized face image. It demonstrates that the conditional regression loss $\mathcal{L}_{\text{reg}}$ indeed facilitates the prediction of unseen poses and expressions. Without the feature threshold loss $\mathcal{L}_{\text{thr}}$, the local structures, e.g., the eyes and nose, are ambiguous and the textures are confused, indicating the effect of disentanglement. The parameter $m$ in $\mathcal{L}_{\text{thr}}$ has nonnegligible impacts on the synthesis results, which will be discussed in the following part. Without the multi-scale pixel-wise loss $\mathcal{L}_{\text{pix-mul}}$ (only utilizing a one-scale pixel-wise loss), the global structure is clear but the local textures, e.g., the teeth, are blurred. Hence, the multi-scale pixel-wise loss $\mathcal{L}_{\text{pix-mul}}$ contributes to recovering texture details. Without $\mathcal{L}_{\text{adv}}$, there are many artifacts in the synthesized image, revealing the validity of the conditional adversarial loss. Without the identity preserving loss $\mathcal{L}_{\text{ip}}$, the local textures, such as the beard, are somewhat faint. The identity preserving loss may benefit the enhancement of local texture details.
Table \[table-3\] further tabulates the FID and Rank-1 accuracy results of different variants of our method. Since the conditional regression loss is introduced for unseen poses and expressions, ‘w/o $\mathcal{L}_{\text{reg}}$’ is not listed in Table \[table-3\]. We observe that the FID increases and the Rank-1 accuracy decreases if any one loss is not adopted, which is consistent with the qualitative visualization results in Fig. \[fig-ablation\]. These qualitative and quantitative results verify that each component in our method is essential for extreme high-resolution face manipulation.
**Cross Database Experiments.** Fig. \[fig-cross\] plots the results of the cross database experiments, i.e., training on the MultiPIE database and testing on the MVF-HQ database. There is a large domain gap between the two databases, because of the differences in the acquisition equipment, participants, backgrounds, etc. Although the synthesized images on the MVF-HQ database inevitably bring in some domain information of the MultiPIE database, such as the backgrounds, our method successfully manipulates the input faces. The cross database experimental results further demonstrate the generalization ability of our method.
**Parameter Analysis.** \[parameter-analysis\] As mentioned in Section \[feature-threshold-Loss\], the value of the parameter $m$ in the feature threshold loss $\mathcal{L}_{\text{thr}}$ (Eq. (\[eq:threshold\])) has nonnegligible effects on the disentanglement. We plot the visualization results with different values of $m$ in Fig. \[fig-parameter\]. It can be observed that when the value of $m$ is too large, the synthesized faces are blurred due to the weak disentanglement. On the contrary, when the value of $m$ is too small, the textures of the synthesized faces are partly lost because the texture features are too compact. The best result is obtained when $m=7$. In addition, the quantitative FID and Rank-1 accuracy results are listed in Table \[table-5\]. These quantitative indicators are consistent with the visualization results in Fig. \[fig-parameter\]: when $m$ equals $7$, we obtain the minimum FID and the maximum accuracy.
Conclusion
==========
This paper has developed a stage-wise framework for high fidelity face manipulation with extreme pose and expression. It simplifies the face manipulation into two correlated stages: a boundary prediction stage and a disentangled face synthesis stage. The first stage predicts the boundary image of the target face in a semi-supervised way, modeling pose and expression jointly. The second stage utilizes the predicted boundary to perform refined face synthesis. A proxy network and a novel feature threshold loss are introduced to disentangle the structure and texture in the latent space. Further, a new high-resolution MVF-HQ database has been created, which consists of 120,283 images in 6000 $\times$ 4000 resolution from 479 identities. It is much larger in scale and much higher in resolution than the existing public high-resolution face manipulation databases. Extensive experiments show that our method significantly pushes forward the advance of extreme face manipulation.
[^1]: Manuscript received October xx, 2019; revised December xx, 2019.
[^2]: The MultiPIE database contains a small number of frontal images in $3072 \times 2048$ resolution, but most of images are $640 \times 480$ resolution.
---
abstract: 'We propose and analyze an efficient high-dimensional quantum state transfer scheme through an $XXZ$-Heisenberg spin chain in an inhomogeneous magnetic field. By the use of a combination of coherent quantum coupling and free spin-wave approximation, pure unitary evolution results in a perfect high-dimensional swap operation between two remote quantum registers mediated by a uniform quantum data bus, and the feasibility is confirmed by numerical simulations. Also, we observe that either the strong $z$-directional coupling or high quantum spin number can partly suppress the thermal excitations and protect quantum information from the thermal noises when the quantum data bus is in the thermal equilibrium state.'
address:
- '$^{1}$ State Key Laboratory of Low-Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing 100084, China\'
- '$^{1}$ State Key Laboratory of Low-Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing 100084, China\'
- |
$^2$ School of Physics, Beijing Institute of Technology, Beijing 100081, China\
[email protected]
author:
- Zhe Yang
- Ming Gao
- Wei Qin
title: 'Transfer of high-dimensional quantum state through an $XXZ$-Heisenberg quantum spin chain'
---
Introduction
============
The transfer of quantum state between two distant quantum registers is an essential task of quantum information processing (QIP)[@Chuang]. While long-range quantum communication can be realized by the use of photons[@photon1; @photon2; @photon3], coupled solid-state systems can act as quantum data buses to connect two separated registers for short-range communication, e.g., within a computer. Such data buses have been explored in the context of various quantum systems ranging from trapped ions[@ion1; @ion2] and superconducting flux qubits[@superconductor1; @superconductor2; @superconductor3] to cavity arrays[@cavity1; @cavity2; @cavity3] and nanoelectromechanical oscillators[@oscillators1]. Due to their ability to provide either an alternative to direct register interactions or an interface between stationary and flying qubits, quantum spin chains have attracted much attention in recent years [@qin61; @Entang2; @Entang3; @IJMPB1; @IJMPB2; @QST1; @QST2; @QST3; @QST4]. In the original scheme[@Bose], S. Bose studied a uniform spin chain of Heisenberg coupling, and quantum information can be efficiently transferred between two ends of the spin channel via natural evolution. Moreover, many strategies aiming to achieve perfect quantum state transfer (QST) over arbitrary distance have emerged, such as engineering the coupling strength in a way dependent on the chain length[@perf11; @perf12], implementing local measurements of individual spins[@perf21] and designing some special configurations of spin chains[@perf31; @perf32; @perf33]. Alternatively, coherent quantum coupling has been widely used to achieve high-fidelity QST by tuning the registers to interact weakly with the channel[@perf41; @perf42; @perf43; @perf51; @perf52].
Compared to two-dimensional systems working as qubits, high-dimensional systems working as qudits also deserve exploration because they can carry larger capacity and lead to further insight into our understanding of quantum physics. Until now, many proposals for quantum computation[@CaoY] and quantum communication, e.g., quantum cloning[@clon1; @clon2], quantum teleportation[@tele1; @tele2; @tele3], quantum key distribution[@QKDd] and quantum correlation[@corr], have been extended to high-dimensional versions. Indeed, with some notable exceptions[@high1; @high2; @high3], where perfect high-dimensional state transfer over long distance has been implemented by utilizing a repeated measurement procedure or a free spin-wave approximation, prior work on perfect QST in coupled-spin systems has primarily focused upon qubits[@2d1; @2d2; @2d3; @2d4; @2d5; @2d6].
In this paper, we devote our attention to the perfect transfer of a high-dimensional quantum state through an $XXZ$-Heisenberg coupling spin chain of arbitrary length in an inhomogeneous magnetic field. On employing the Holstein-Primakoff transformation and the free spin-wave approximation, the Hamiltonian takes the form of free bosons and can be diagonalized through an orthogonal transformation. Tuning the register-bus coupling in the $xy$ plane to be much smaller than the coupling within the data bus enables a special collective eigenmode of the data bus to resonate with the two end registers. As a consequence, unitary evolution results in a perfect swap operation between the two registers in the optimal time, and numerical simulations are performed to confirm it. Moreover, we observe that either a strong $z$-directional coupling or a high quantum spin number is capable of partly suppressing the thermal excitations and protecting quantum information from the thermal noises.
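As a quick numerical illustration of such a diagonalization (not part of the paper's derivation), the single-excitation sector of a uniform open hopping chain is diagonalized by an orthogonal sine transformation, and its spectrum can be checked against the analytic result $E_k = -2J\cos[k\pi/(N+1)]$. The effective hopping $J$ stands in for the bosonized coupling (of order $2S\Omega_0$) as an assumption:

```python
import numpy as np

# Uniform open hopping chain in the single-excitation sector:
# H_{nm} = -J (delta_{n,m+1} + delta_{n,m-1}).
N, J = 10, 1.0
H = -J * (np.eye(N, k=1) + np.eye(N, k=-1))
numeric = np.linalg.eigvalsh(H)               # ascending order

# Analytic spectrum of the sine eigenmodes: E_k = -2J cos(k*pi/(N+1)).
k = np.arange(1, N + 1)
analytic = np.sort(-2.0 * J * np.cos(k * np.pi / (N + 1)))
```

The numerically obtained eigenvalues coincide with the analytic sine-mode spectrum, which is the collective-mode structure the resonance argument relies on.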
The structure of the paper is as follows. In section 2, we introduce the model and give the Hamiltonian. In section 3, we demonstrate high-fidelity QST and analyze the thermal effects. Finally, we summarize the whole mechanism and draw our conclusions in section 4.
Model and Analysis
==================
![(Color online) (a) Shown is a quantum data bus mediating two quantum registers, with an $XXZ$-Heisenberg coupling. We demonstrate a perfect high-dimensional swap operation between the registers via purely unitary evolution over arbitrary distance by applying an inhomogeneous field. (b) We employ a $d$-dimensional space spanned by the low-lying level states ranging from $|0\rangle$ to $|d-1\rangle$ to encode quantum information as a qudit. The condition $ 2S>>d $ guarantees that the spin-wave interaction can be neglected, yielding a tight-binding Hamiltonian, which can be diagonalized through an orthogonal transformation. (c) On maintaining $ {\omega _0}/{\Omega _0} << 1$, there is a special data bus collective mode resonantly coupled to the two registers, and the off-resonant couplings can be neglected. Therefore, we achieve a high-dimensional quantum state transfer protocol through this eigenmode-mediated quantum channel.[]{data-label="f1"}](figure1.eps){width="10cm"}
As shown in Fig. $1(a)$, an $XXZ$-Heisenberg model governs an $(N+2)$-site spin-$S$ chain in an inhomogeneous magnetic field. Only nearest-neighbor interactions are considered, and the system is described by $$\label{H1}
H = {H_{B}} +{H_I}+ {H_M}.$$ The Hamiltonian of the quantum data bus is $$\label{HB}
{H_B} = - {\Omega _0}\sum\limits_{i = 1}^{N - 1}{(S_i^ + S_{i + 1}^ - + S_i^ - S_{i + 1}^ + )} -{\Omega _z}\sum\limits_{i = 1}^{N - 1} {S_i^z} S_{i + 1}^z,$$ where ${\Omega _0}>0$ is the coupling strength in the $xy$-plane and ${\Omega _z}>0$ is that along the $z$-direction. $S_i^\nu$ is the $\nu$ $ (\nu {\rm{ = x,y,z}})$ component of the spin operator $ {{\mathbf{S}}_i} $ at the $i$-th site with $ S_i^ \pm = S_i^x \pm iS_i^y $. $H_I$ describes the interaction between the two end registers and the intermediate quantum data bus, $$\label{HI}
{H_I} = - {\omega _0}(S_s^ + S_1^ - + S_r^ + S_N^ - + H.c.) - {\omega _z}(S_s^zS_1^z + S_r^zS_N^z),$$ where ${\omega _0}>0$ is the interaction between the sender (receiver) and the quantum data bus in the $xy$-plane and ${\omega _z}>0 $ is that along the $z$-direction. The Zeeman term reads $$\label{HM}
{H_M} = - ({B_s}S_s^z + {B_r}S_r^z+\sum\limits_{i = 1}^N {{B_i}S_i^z}),$$ with $B_i$ being the local magnetic field on the $i$-th site in the $z$-direction. By implementing the Holstein-Primakoff (HP) transformation $S^{+}_{i}=\sqrt{2S-a_{i}^{\dag}a_{i}}a_{i}$ and $S^{z}_{i}=S-a^{\dag}_{i}a_{i}$, the Hamiltonian can be rewritten in terms of boson operators, and the state of each spin is described by a Fock state instead. In general, the low-lying $d$-dimensional space of the sender is harnessed to encode quantum information, and the input state is $ \left| {{\varphi_s}} \right\rangle = \sum\nolimits_{u = 0}^{d - 1} {{\alpha _u}{{\left| u \right\rangle }_s}} $, while the spins of the data bus and the receiver are aligned in parallel, i.e., in a ferromagnetic order [@perf11], as sketched in Fig. 1(b).
For a spin chain of $N+2$ spins-$S$, the Hilbert space $\mathcal{H}$ is of dimension ${(2S + 1)^{N + 2}}$. The Hamiltonian $H$ preserves the total boson number $ N = a_s^\dag {a_s} + a_r^\dag {a_r} + \sum\nolimits_{i = 1}^N {a_i^\dag {a_i}} $, since $[H,N] = 0$. Therefore, $\mathcal{H}$ contains an invariant subspace $\mathcal{S_G}$ spanned by $|n_{s},n_{1},\cdots,n_{N},n_{r}\rangle$ with $n_{s},n_{i},n_{r}=0,\cdots,d-1$, and the dynamics of the system is completely restricted to the ${d^{(N + 2)}}$-dimensional subspace $\mathcal{S_G}$. Suppose that the dimension of the transferred state is much smaller than the quantum spin number, i.e., $ d \ll 2S$; then the average boson number of each site is much smaller than $2S$, $\left\langle {a_i^\dag {a_i}} \right\rangle \ll 2S$. Consequently, the spin-wave interaction is negligible, so that the HP transformation simplifies to $S_i^ + = \sqrt {2S} {a_i}$ [@HP; @HP1], leading to a bosonized tight-binding Hamiltonian $$\label{HF}
\begin{array}{l}
{H_B} = - 2{\Omega _0}S\;\sum\nolimits_{i = 1}^{N - 1} {({a_i}^{\dag}a_{i + 1} + H.c.)}
- {\Omega _z}\sum\nolimits_{i = 1}^{N - 1} {[{S^2} - S(a_i^\dag {a_i} + a_{i + 1}^\dag {a_{i + 1}})]} ,\\
\\
{H_I} = - 2{\omega _0}S(a_s^\dag {a_1} + a_r^\dag {a_N} + H.c.) - {\omega _z}[2{S^2} - S(a_s^\dag {a_s} + a_1^\dag {a_1} + a_r^\dag {a_r} + a_N^\dag {a_N})],\\
\\
{H_M} = - \left[B_{s}\left(S-a_{s}^{\dag}a_{s}\right)+B_{r}\left(S-a_{r}^{\dag}a_{r}\right)+\sum_{i=1}^{N} {{B_i}(S - a_i^\dag {a_i})}\right].
\end{array}$$ In order to achieve an efficient high-dimensional state transfer, we choose $$\label{B}
\begin{array}{l}
{B_s} = {B_r} = 2{\Omega _z}S, \\
{B_1} = {B_N} = {\Omega _z}S, \\
{B_2} =\cdots={B_{N - 1}} = {\omega _z}S,
\end{array}$$ and apply the following orthogonal transformation[@Chuang; @perf51; @XXd] $$\label{tran}
a_i^\dag = \sqrt {\frac{2}{{N + 1}}} \sum\limits_{k = 1}^N {\sin } \frac{{ik\pi }}{{N + 1}}c_k^\dag, \quad i=1,...,N,$$ under which the Hamiltonian $H$ is transformed to $$\label{HHH}
\begin{array}{l}
H{\rm{ = }}\sum\limits_{k = 1}^N {[({\varepsilon _k} + \Gamma )c_k^\dag {c_k}]} {\rm{ + }}\Gamma (a_s^\dag {a_s} + a_r^\dag {a_r})
{\rm{ + }}\sum\limits_{k = 1}^N {{t_k}[a_s^\dag {c_k} + {{\left( { - 1} \right)}^{k - 1}}a_r^\dag {c_k} + H.c]}.
\end{array}$$ where ${\varepsilon _k} = - 4{\Omega _0}S\cos (\frac{{k\pi }}{{N + 1}})$, $ {t_k} =- 2\omega_{0}S\sqrt{\frac{2}{N+1}}\sin (\frac{{k\pi }}{{N + 1}})$ and $\Gamma = (2{\Omega _z} + {\omega _z})S$. Note that this choice of the nonuniform field applies only for $N\geq3$; in the special case of $N=1$, the field can be chosen as $B_{s}=B_{r}=\omega_{z}S+h$ and $B_{1}=h$ with $h\geq0$.
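As a quick numerical cross-check (not part of the original derivation), one can verify in a few lines that the sine transformation of Eq. (\[tran\]) indeed diagonalizes the hopping part of the bosonized bus Hamiltonian with the spectrum ${\varepsilon _k}$ quoted above; the values of $N$, $\Omega_0$ and $S$ below are illustrative.

```python
import math

# Check that v_k(i) = sqrt(2/(N+1)) * sin(i*k*pi/(N+1)) is an eigenvector
# of the hopping matrix H_hop(i,j) = -2*Omega0*S*(delta_{j,i-1}+delta_{j,i+1})
# with eigenvalue eps_k = -4*Omega0*S*cos(k*pi/(N+1)).
# N, Omega0 and S are illustrative values.
N, Omega0, S = 5, 1.0, 3.0
hop = -2.0 * Omega0 * S

def v(k, i):
    # sine-mode amplitude; vanishes automatically at i = 0 and i = N + 1
    return math.sqrt(2.0 / (N + 1)) * math.sin(i * k * math.pi / (N + 1))

for k in range(1, N + 1):
    eps_k = -4.0 * Omega0 * S * math.cos(k * math.pi / (N + 1))
    for i in range(1, N + 1):
        hv = hop * (v(k, i - 1) + v(k, i + 1))   # (H_hop v_k)_i
        assert abs(hv - eps_k * v(k, i)) < 1e-12
print("all", N, "sine modes diagonalize the bus hopping term")
```

The check also confirms that the modes are orthonormal, as required of an orthogonal transformation.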
Quantum state transfer and the thermal effects
==============================================
Restricting our discussion to chains with odd $N$, there exists a zero-energy data-bus collective mode, corresponding to $\kappa {\rm{ = (N + 1)/2}}$, which is resonantly coupled to the two end registers with strength ${t_\kappa } = - 2{\omega _0}S/A $, where $A = \sqrt{(N+1)/2}$. Under the assumption that $ {\omega _0}/{\Omega _0} \ll 1/\sqrt N $, the off-resonant couplings can be neglected because ${t_\kappa } \ll |{\varepsilon _\kappa } - {\varepsilon _{\kappa \pm 1}}|$, so that the dynamics reduces to an effective model in which only the two end registers and the $\kappa$-th collective mode are involved, as illustrated schematically in Fig. 1(c). In this case, the effective Hamiltonian $$\label{Heff}
\begin{array}{l}
{H_{\text{eff}}}{\rm{ = }}\Gamma (c_\kappa ^\dag {c_\kappa } + a_s^\dag {a_s} + a_r^\dag {a_r})+{t_\kappa}[a_s^\dag {c_\kappa} + {\left( { - 1} \right)^{\kappa - 1}}a_r^\dag {c_\kappa} + H.c]
\end{array}$$ governes the evolution of the system. In the Heisenberg picture, the operators should evolve in the full space associated with $H_{\text{eff}}$. Thus by choosing evolution time $\tau \equiv \pi /\sqrt 2 {t_\kappa }$, it yields $$\label{as}
\begin{array}{l}
a_s^\dag ({\tau}) = {( - 1)^\kappa }{e^{ - i\Gamma \tau }}a_r^\dag ,\;\;\;a_r^\dag ({\tau}) = {( - 1)^\kappa }{e^{ - i\Gamma \tau }}a_s^\dag,
\end{array}$$ which reveals that the quantum state of the sender is perfectly transferred to the receiver in the optimal time $\tau$. A swap gate has thus been established between the sender and the receiver, up to an additional phase independent of the sent state. Without decoherence, our scheme can, in principle, achieve perfect QST in spin chains of arbitrary length. When decoherence is present, however, the optimal time must be much shorter than the coherence time, which limits the chain length.
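The perfect swap at $\tau$ can also be checked directly in the one-excitation sector of the effective Hamiltonian of Eq. (\[Heff\]), where it reduces to a $3\times3$ matrix over $\{|1\rangle_s, |1\rangle_\kappa, |1\rangle_r\}$. The following sketch (with illustrative values of $\Gamma$ and $t_\kappa$) propagates this matrix by a plain Taylor series:

```python
import cmath

# One-excitation sector of the effective Hamiltonian: basis
# {|1>_s, |1>_kappa, |1>_r}, H_eff = Gamma*I + t_k*M with
# M = [[0,1,0],[1,0,s],[0,s,0]], s = (-1)^(kappa-1).
# Gamma and t_k below are illustrative values.

def mat_vec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def evolve(H, t, x, terms=80):
    # exp(-i*H*t) applied to x via a plain Taylor series (fine for 3x3 H)
    out, term = list(x), list(x)
    for n in range(1, terms):
        term = [(-1j * t / n) * c for c in mat_vec(H, term)]
        out = [o + c for o, c in zip(out, term)]
    return out

Gamma, t_k, sgn = 0.7, -0.3, 1                  # odd kappa -> sgn = +1
H = [[Gamma, t_k, 0], [t_k, Gamma, sgn * t_k], [0, sgn * t_k, Gamma]]
tau = cmath.pi / (2 ** 0.5 * abs(t_k))          # the optimal time
psi = evolve(H, tau, [1.0, 0.0, 0.0])           # start in |1>_s
print("receiver population at tau:", abs(psi[2]) ** 2)
```

Since $M$ has eigenvalues $0,\pm\sqrt2$, the receiver population returns exactly to one at $\tau$, up to the overall phase noted above.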
To confirm the efficiency of our method, numerical simulations are performed. Initially, the whole system, including the two end registers and the intermediate data bus, is in a product state $$\label{State}
|\psi\left(0\right)\rangle =|\varphi\rangle_{s}|0\rangle_{\text{bus}}^{\otimes N}|0\rangle_{r},$$ and $|0\rangle_{\text{bus}}^{\otimes N}=|0\rangle_{1}\otimes\cdots\otimes |0\rangle_{N}$. In general, the state of the receiver at time $ t $ is a mixed state ${\rho _r}(t )$, which can be obtained by tracing off the other sites ${\rho _r}(\tau ) = \text{Tr}{_{\hat r}}({e^{ - iHt}}\left| {\psi (0)} \right\rangle \langle \psi (0)|{e^{iHt }})$.
![(Color online) The average fidelity as a function of the quantum spin number $S$ for $N=3$, $d=3$ and ${\omega _0}/{\Omega _0} = 0.1$, for three values of the $z$-directional coupling. Here, the evolution time is the optimal time $\tau$.[]{data-label="f2"}](figure2.eps){width="60.00000%"}
The set of pure states of a $d$-dimensional Hilbert space forms a complex projective space $\mathbb{C}{P^{d - 1}}$. According to the Hurwitz parametrization, a pure $d$-dimensional state is described by $2(d-1)$ parameters, namely $d-1$ azimuthal angles ${\theta _i}$ and $d-1$ polar angles ${\varphi _i}$: $$\label{qudit}
\left| \Psi \right\rangle = (\cos {\theta _{d - 1}},\sin {\theta _{d - 1}}\cos {\theta _{d - 2}}{e^{i{\varphi _{d - 1}}}}, \sin {\theta _{d - 1}}\sin {\theta _{d - 2}}\cos {\theta _{d - 3}}{e^{i{\varphi _{d - 2}}}},...,\prod\limits_{i = 1}^{d - 1} {\sin {\theta _i}} {e^{i{\varphi _1}}})$$ with ${\theta _i} \in [0,\frac{\pi }{2}]$ and $ {\varphi _i} \in [0,2\pi )$. The fidelity between the sent state of the sender and the received state of the receiver at time $\tau$ is given by $F(\tau ) = {}_{s}\langle \varphi |{\rho _r}(\tau ){\left| \varphi \right\rangle _s}$. Correspondingly, the average fidelity over all possible input pure states is $$\label{Faver}
\left\langle {F(\tau )} \right\rangle = \frac{1}{V}\int_{} {F(\tau )} dV.$$ Here, $V=\pi^{d-1}/\left(d-1\right)!$ is the total volume of the manifold of pure states, and $ dV =\prod_{p=1}^{d-1}\cos\theta_{p}\left(\sin\theta_{p}\right)^{2p-1}d\theta_{p}d\varphi_{p}$ is the volume element [@highst]. In fact, the average fidelity is a generalization of the usual Bose formula [@Bose]: in the case of $d=2$, Eq. (\[Faver\]) takes the same form as the average fidelity for a qubit. In Fig. 2 the average fidelity is plotted as a function of the quantum spin number $S$ for $N=3$ and $d=3$ with ${\omega _0}/{\Omega _0} = 0.1$. The numerical results are based on the Hamiltonian of Eq. (1), and three different $z$-directional coupling strengths are chosen to demonstrate the feasibility of the method. We observe that the average fidelity increases with $S$, and when ${\omega _0}/{\Omega _0} \ll 1$ and $d \ll 2S$ it approaches unity; e.g., for $S=10$, $\left\langle {F(\tau )} \right\rangle$ is $0.9974$ (black line), $0.9984$ (red line), and $0.9986$ (blue line). The leakage of quantum information results from either the off-resonant coupling or the spin-wave interaction. In the following, the thermal effects are investigated numerically when the quantum data bus is in a thermal equilibrium state described by $$\label{pb}
{\rho _B} = \frac{1}{Z}e^{-H_{B}/T},$$ where $Z = \textnormal{tr}({e^{ - {H_B}/T}})$ characterizes a partition function and $T$ represents the temperature.
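As an aside, the state average of Eq. (\[Faver\]) can also be estimated by Monte Carlo: sampling the Hurwitz measure is equivalent to drawing uniform (Fubini–Study) pure states, which is conveniently done by normalizing complex Gaussian amplitudes. A minimal sketch (the values of $d$ and the sample size are illustrative), checking that each $|\alpha_u|^2$ then averages to $1/d$:

```python
import random

# Monte Carlo average over uniform (Fubini-Study) pure qudit states,
# sampled by normalizing complex Gaussian amplitudes; this reproduces
# the Hurwitz-measure average used in the average-fidelity integral.
random.seed(1)
d, samples = 3, 20000
acc = 0.0
for _ in range(samples):
    amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]
    norm = sum(abs(a) ** 2 for a in amps)
    acc += abs(amps[0]) ** 2 / norm        # |alpha_0|^2 of a uniform state
mean = acc / samples
print("mean |alpha_0|^2 =", round(mean, 4), " expected 1/d =", round(1 / d, 4))
```

In an actual fidelity average one would accumulate $F(\tau)$ for each sampled input state instead of $|\alpha_0|^2$.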
![(Color online) The average fidelity as a function of temperature for $N=1$ and $d=3$, for (a) three $z$-directional coupling strengths with $S=3$, and (b) three quantum spin numbers with ${\omega _0} = {\omega _z}$. Here, $h=\omega_{z}S$ and the evolution time is the optimal time $\tau$.[]{data-label="f4"}](figure3.eps){width="55.00000%"}
The density matrix of the whole system is in a product state, $$\label{p0}
\rho (0) = \sum\limits_{\mu ',\mu = 0}^{d - 1} {{\alpha _\mu }\alpha _{\mu '}^*{{\left| \mu \right\rangle }_s}{{\left| 0 \right\rangle }_r}\langle \mu '{|_s}\langle 0{|_r}} \otimes {\rho _B}.$$
In Fig. \[f4\] we plot the average fidelity as a function of temperature for a bus of length $N=1$ that is initially in its thermal equilibrium state: $\left\langle F(\tau )\right\rangle$ decreases with $T/\omega_{0}$, because the free-spin-wave approximation is valid only in the low-boson-excitation regime; however, increasing either $\omega_{z}$ or $S$ suppresses the thermal noise and prevents quantum information from leaking. In Fig. \[f4\](a), it should be noted that the $z$-directional coupling contains the spin-wave interaction arising from the nonlinear terms of the HP transformation, and such coupling can lead to leakage of quantum information, especially in the very-low-temperature range. With increasing temperature, however, the $z$-directional coupling effectively copes with the thermal effects and instead protects the quantum information. Moreover, in the model described by $H$ of Eq. (\[HHH\]), both the $z$-directional coupling and the magnetic field contribute to $\Gamma$, which is capable of protecting quantum information, similar to the magnetic field applied to an $XX$-coupling spin chain.
Summary
=======
In this paper a quantum state transfer protocol through an $XXZ$-coupled spin chain in the presence of an inhomogeneous magnetic field has been studied. By harnessing coherent quantum coupling and the free-spin-wave approximation, off-resonant couplings and spin-wave interactions can be ignored; consequently, an arbitrary unknown high-dimensional quantum state can be transferred between two remote registers with high fidelity via purely dynamical evolution. The effects of temperature on the state transfer protocol have also been studied numerically for a quantum data bus in a thermal equilibrium state. In contrast to previous work on $XX$-coupled spin chains, the additional $z$-directional coupling can suppress thermal excitations and partly counteract the thermal effects, ensuring the feasibility of the present method. With its scalability and robustness, this protocol may be applicable in high-dimensional solid-state devices for quantum information processing.
Acknowledgments {#acknowledgments .unnumbered}
===============
We gratefully thank Chao Lian, Shuzhe Shi and Hui Li for helpful discussions.
References {#references .unnumbered}
==========
[99]{}
M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, Cambridge, UK, 2000). K. Mattle, H. Weinfurter, P. G. Kwiat, and A. Zeilinger, Phys. Rev. Lett. [**76**]{}, 4656 (1996). J. I. Cirac, P. Zoller, H. J. Kimble, and H. Mabuchi, Phys. Rev. Lett. [**78**]{}, 3221 (1997). T. Jennewein, C. Simon, G. Weihs, H. Weinfurter and A. Zeilinger, Phys. Rev. Lett. [**84**]{}, 4729 (2000). D. Kielpinski, C. Monroe, and D. J. Wineland, Nature (London) [**417**]{}, 709 (2002). F. Schmidt-Kaler, H. Haffner, M. Riebe, S. Gulde, G. P. T. Lancaster, T. Deuschle, C. Becher, C. F. Roos, J. Eschner, and R. Blatt, Nature (London) [**422**]{}, 408 (2003). M. A. Sillanpää, J. I. Park, and R. W. Simmonds, Nature (London) [**449**]{}, 438 (2007). J. Q. You and F. Nori, Nature (London) [**474**]{}, 589 (2011). J. Majer, J. M. Chow, J. M. Gambetta, Jens Koch, B. R. Johnson, J. A. Schreier, L. Frunzio, D. I. Schuster, A. A. Houck, A. Wallraff, A. Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Nature (London) [**449**]{}, 443 (2007). C. D. Ogden, E. K. Irish, and M. S. Kim, Phys. Rev. A [**78**]{}, 063805 (2008). G. D. de Moraes Neto, M. A. de Ponte, and M. H. Y. Moussa, Phys. Rev. A [**84**]{}, 032339 (2011). Y. Liu and D. L. Zhou, New J. Phys. [**17**]{}, 013032 (2015). J. Eisert, M. B. Plenio, S. Bose, and J. Hartley, Phys. Rev. Lett. [**93**]{}, 190402 (2004). S. Bose, Phys. Rev. Lett. [**91**]{}, 207901 (2003). M. Christandl, N. Datta, A. Ekert, and A. J. Landahl, Phys. Rev. Lett. [**92**]{}, 187902 (2004). M. Christandl, N. Datta, T. C. Dorlas, A. Ekert, A. Kay, and A. J. Landahl, Phys. Rev. A [**71**]{}, 032312 (2005). F. Verstraete, M. A. Martín-Delgado, and J. I. Cirac, Phys. Rev. Lett. [**92**]{}, 087201 (2004). D. Burgarth and S. Bose, Phys. Rev. A [**71**]{}, 052315 (2005). Y. Li, T. Shi, B. Chen, Z. Song, and C. P. Sun, Phys. Rev. A [**71**]{}, 022301 (2005). T. J. Osborne and N. Linden, Phys. Rev. A [**69**]{}, 052315 (2004). A. Wójcik, T. [Ł]{}uczak, P. Kurzyński, A. Grudka, T. Gdala, and M. Bednarska, Phys. Rev. A [**72**]{}, 034303 (2005). A. Wójcik, T. [Ł]{}uczak, P. Kurzyński, A. Grudka, T. Gdala, and M. Bednarska, Phys. Rev. A [**75**]{}, 022330 (2007). L. Campos Venuti, C. Degli Esposti Boschi, and M. Roncaglia, Phys. Rev. Lett. [**99**]{}, 060401 (2007). L. Campos Venuti, S. M. Giampaolo, F. Illuminati, and P. Zanardi, Phys. Rev. A [**76**]{}, 052328 (2007). N. Y. Yao, L. Jiang, A. V. Gorshkov, Z.-X. Gong, A. Zhai, L.-M. Duan, and M. D. Lukin, Phys. Rev. Lett. [**106**]{}, 040505 (2011). N. Y. Yao, Z.-X. Gong, C. R. Laumann, S. D. Bennett, L.-M. Duan, M. D. Lukin, L. Jiang, and A. V. Gorshkov, Phys. Rev. A [**87**]{}, 022306 (2013). W. Qin, C. Wang, Y. Cao, and G. L. Long, Phys. Rev. A [**89**]{}, 062314 (2014). R. H. Crooks and D. V. Khveshchenko, Phys. Rev. A [**77**]{}, 062305 (2008). T. J. G. Apollaro, S. Lorenzo, and F. Plastina, Int. J. Mod. Phys. B [**27**]{}, 1345035 (2013). J. Liu, G. F. Zhang, and Z. Y. Chen, Int. J. Mod. Phys. B [**24**]{}, 1279 (2010). S. Paganelli, F. De Pasquale and G. L. Giorgi, Phys. Rev. A [**74**]{}, 012316 (2006). S. Lorenzo, T. J. G. Apollaro, A. Sindona and F. Plastina, Phys. Rev. A [**87**]{}, 042313 (2013). S. Lorenzo, T. J. G. Apollaro, S. Paganelli, G. M. Palma and F. Plastina, Phys. Rev. A [**91**]{}, 042321 (2015). S. J. Large, M. S. Underwood and D. L. Feder, Phys. Rev. A [**91**]{}, 032319 (2015). Y. Cao, S. G. Peng, C. Zheng, and G. L. Long, Commun. Theor. Phys. [**55**]{}, 790 (2011). R. F. Werner, Phys. Rev. A [**58**]{}, 1827 (1998). M. Keyl and R. F. Werner, J. Math. Phys. [**40**]{}, 3283 (1999). A. Acin, N. Gisin and V. Scarani, Quantum Inf. Comput. [**3**]{}, 563 (2003). G. Rigolin, Phys. Rev. A [**71**]{}, 032303 (2005). X. Ge and Y. Shen, Phys. Lett. B [**606**]{}, 184 (2005). M. Jiang, X. Huang, L. L. Zhou, Y. M. Zhou, and J. Zeng, Chin. Sci. Bull. [**57**]{}, 2247 (2012). V. Karimipour, A. Bahraminasab, and S. Bagherinezhad, Phys. Rev. A [**65**]{}, 052331 (2002). H. Li, Y. S. Li, S. H. Wang, and G. L. Long, Commun. Theor. Phys. [**61**]{}, 273 (2014). A. Bayat, Phys. Rev. A [**89**]{}, 062302 (2014). W. Qin, C. Wang, and G. L. Long, Phys. Rev. A [**87**]{}, 012339 (2013). W. Qin, J. L. Li, and G. L. Long, Chin. Phys. B [**24**]{}, 040305 (2015). T. J. G. Apollaro, L. Banchi, A. Cuccoli, R. Vaia and P. Verrucchi, Phys. Rev. A [**85**]{}, 052319 (2012). L. Banchi, T. J. G. Apollaro, A. Cuccoli, R. Vaia, and P. Verrucchi, New J. Phys. [**13**]{}, 123006 (2011). K. Korzekwa, P. Machnikowski and P. Horodecki, Phys. Rev. A [**89**]{}, 062301 (2014). Z. C. Shi, X. L. Zhao, and X. X. Yi, Phys. Rev. A [**91**]{}, 032301 (2015). S. Paganelli, S. Lorenzo, T. J. Apollaro, F. Plastina and G. L. Giorgi, Phys. Rev. A [**87**]{}, 062309 (2013). W. Qin, C. Wang and X. Zhang, Phys. Rev. A [**91**]{}, 042303 (2015). T. Holstein and H. Primakoff, Phys. Rev. [**58**]{}, 1098 (1940). J. M. Ziman, [*Principles of the Theory of Solids*]{}, 2nd ed. (Cambridge University Press, Cambridge, UK, 1972). E. Lieb, T. Schultz, and D. Mattis, Ann. Phys. (NY) [**16**]{}, 407 (1961). K. Życzkowski and H. Sommers, J. Phys. A [**34**]{}, 7111 (2001).
---
abstract: 'The necessity of introducing discrete physical objects into the conception of physics is analysed, taking into consideration an optimum stage for postulating such objects in the microworld as well as in the macroworld. This includes the new “physical graph” as a discrete microobject, whose analogy is carried out with the “Kirchhoff’s laws graph” for an electric network as a prototype of a discrete macroobject; both correspond to discrete sets of trees: root trees (for microobjects) or skeleton trees (for macro networks). Transitions are found connecting the usual $S$-matrix theory, with its Feynman integrals and Feynman diagrams, to the new physical-graph kinematics formalism, which uses the natural root-trees basis to treat the structure of an arbitrarily complicated physical microobject with a specific “graph microgeometry”, beyond space-time consideration. In accordance with the QCD results, the proton (nucleon) mass is determined in terms of the root-trees number $T_{v=11}$=1842, which corresponds to $v$=11 physical-graph vertices. It is supposed that the masses of various series of other microobjects could be estimated by means of a double- and a triple-splitting of the root-trees numbers.'
address: |
All - Russian Institute for Scientific and Technical Information, VINITI, Moscow 125315, Russia\
(e-mail:[email protected])
author:
- 'V. E. Asribekov'
title: |
Graph kinematics of discrete physical objects:\
beyond space - time. I. General
---
PACS: 11.90, 12.90, 02.10
1 Introduction {#introduction .unnumbered}
==============
Nowadays the fashion for continuous physical objects, based especially on their field-theoretical description, is very strong. In this connection it is really hard for any other point of view to gain a hearing.
Of course, it is natural that in the case of macroobjects there is no obvious need to introduce alternatives to this standard approach within a customary “external” geometry. Nevertheless, beginning from the most elementary viewpoint for describing atomic events, we could not picture how the jump from one electronic orbit to another takes place, and we just had to accept it as a kind of discontinuity.
The subsequent evolution of the picture of nature towards nuclear and subnuclear microobjects leads us to more open discontinuities and, at last, to evidently discrete physical objects, perhaps with a proper “internal” geometry corresponding to their inner, probably discrete, structure. It is notable that the well-known field-theoretical problem of divergence difficulties at small distances is eliminated automatically for such structural physical microobjects, which could be considered an essential result in the area of discrete-object physics.
In parts I–III of this paper we describe a possible transition to a representation of discrete physical objects using the graph-theory formalism.
It is important to keep in view that, in general, discrete mathematics, as well as physical theories and models with discrete structural objects, are not derived from or reduced to continuous mathematics and the physical theories and models with corresponding continuous objects. Therefore, the real appearance in physics of various discrete physical objects can be made solely by the introduction of an adequate mathematical postulate, without additional justifications, but using any typical “discrete-like” results from an initial quasi-continuous physical theory; nevertheless, the continuous theories altogether are not excluded from the following consideration. For this purpose, however, it is necessary to find a definite stage in the development of continuous physics that establishes the insufficiency of its continuous theory. Inasmuch as the above-mentioned difficulties arise in phenomena involving very small distances (or very high energies), we may choose as such a stage the transition to the microworld. In this connection, part I includes the Feynman diagram technique within [*S*]{}-matrix theory for microobjects. Taking into account that the following singularity-analysed (post-Feynman) diagram technique, based on stable microobjects only, is not equivalent to perturbation theory, we postulate a new derivative physical-graph formalism as a first step towards the discrete microobject. Owing to the analogy of such a physical graph with the Kirchhoff’s-laws graph for an electric network, there exist various discrete physical objects which may be presented through discrete sets of graphs, skeleton or root trees, beyond usual space-time. In part II the proposed graph formalism is applied to the calculation of some qualitative and numerical characteristics of different microobjects without using the formalism of the continuous theories (QED, QCD, etc.) itself, but only their results. Part III is devoted to a possible realization of the Heisenberg–Dyson two-layer physics.
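For later reference, the root-tree numbers invoked in the abstract, with $T_{v=11}=1842$, coincide with the rooted-tree counts $1, 1, 2, 4, 9, 20, 48, 115, 286, 719, 1842,\ldots$ (OEIS A000081); assuming this identification, they can be generated by the standard recurrence $(v-1)\,T_v = \sum_{k=1}^{v-1}\bigl(\sum_{d\mid k} d\,T_d\bigr)T_{v-k}$, sketched here:

```python
def rooted_tree_counts(vmax):
    # T[v] = number of rooted trees on v vertices (OEIS A000081)
    T = [0, 1]                                   # T[1] = 1: the single vertex
    for v in range(2, vmax + 1):
        s = 0
        for k in range(1, v):
            dsum = sum(d * T[d] for d in range(1, k + 1) if k % d == 0)
            s += dsum * T[v - k]
        T.append(s // (v - 1))                   # the division is always exact
    return T

T = rooted_tree_counts(11)
print("T_1..T_11 =", T[1:])                      # ends with T_11 = 1842
```

The value $T_{11}=1842$ is precisely the number entering the proton-mass discussion of part II.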
1.1 Heisenberg’s $S$-matrix\
of 1943 and its state vectors basis\
in the Hilbert space {#heisenbergs-s-matrix-of-1943-and-its-state-vectors-basis-in-the-hilbert-space .unnumbered}
------------------------------------
It is known that the $S$-matrix, proposed by Heisenberg in 1943, contains only physically measurable quantities and supposes the existence of a corresponding Hilbert space with positive metric and the possibility to construct a complete Hilbert-space basis. The $S$-matrix must be unitary and have so much analyticity that it represents what is observed as causality; it also must be invariant under the Lorentz group and under [*TCP*]{}, approximately invariant under the isospin group, and so on.
Since 1948–49 the Feynman version of QED became the prototype of what is now called $S$-matrix theory, which gave directly the rules for calculating $S$-matrix elements by means of Feynman integrals and the corresponding Feynman diagrams. It is important that the Feynman theory is a pure physical-object theory, and the Feynman diagram describes any elementary process naively as a propagation of a physical object from one vertex to another along a connected line.
1.2 “Singularities matrix” for Feynman integrals\
and networks of the new physical graphs\
in momentum space {#singularities-matrix-for-feynman-integrals-and-networks-of-the-new-physical-graphs-in-momentum-space .unnumbered}
-------------------------------------------------
A consistent analysis of all possible singularities of the Feynman integral, a key quantity of the $S$-matrix formalism in Hilbert space (see, for example, Ref. \[1\])
$$\displaystyle
\int\prod\limits^{l}_{r}d^{4}k_{r}\prod\limits^{n}_{s}f^{-n}\delta\!\left(\sum
\alpha -1\right)d\alpha_{s}, \eqno(1)$$ $$\displaystyle
f=\sum\limits^{n}_{i}\alpha_{i}\,(m_{i}^{2}-q_{i}^{2}), \eqno(2)$$ determining the contribution of an arbitrary Feynman diagram with $N$ external 4-momenta $\displaystyle p_{j}$ $(j=1,\,2,...,\,N)$, $n$ internal 4-momenta $\displaystyle q_{i}$ $(i=1,\,2,...,\,n)$ and $l$ loop momenta $\displaystyle k_{r}$ ($r=1,\,2,...,\,l)$, can in principle be performed by solving the set of $v$ corresponding laws of conservation of 4-momenta at the $N$ external (algebraic sum over \[$j$\]) and $v-N$ internal (algebraic sum over ($i$)) vertices, together with $l$ Landau extremal independent loop equations (algebraic sum over $<r>$), in fact already for a new derivative (post-Feynman) physical graph with the same characteristics
$$\ \displaystyle\sum\limits_{[j]}\epsilon q=p_{j}; \
\ \ j=1,\,2,...,\,N,
\eqno (3)$$ $$\rule{14dd}{0dd}\displaystyle\sum\limits_{(i)}\epsilon q=0; \ \
\ i=1,\,2,...,\,v-N, \eqno (4)$$ $$\displaystyle\sum\limits_{<r>}\alpha q=0; \ \ \ r=1,\,2,...,\,l.
\eqno (5)$$
Actually the set from (3) and (4) contains only $v-1$ independent equations, since one of the equations corresponds to the law of conservation for the external 4-momenta $\displaystyle
p_{j}$ $$\displaystyle\sum\limits^{N}_{j}\epsilon p=0 \eqno(6)$$
(everywhere $\displaystyle\epsilon =0,\, \pm 1).$
Therefore the total number of independent equations (3), (4) and (5) is equal precisely to the number of the unknown internal 4-momenta $q$ $$\displaystyle [N+(v-N)]-1+l=v-1+l=n.$$
The rank of the obtained square matrix of coefficients from equations (3), (4) and (5) — so-called “singularities matrix” for the Feynman integral (1) $$\displaystyle \mathop{\bf M}_{(n\times n)}=\left\{
\begin{array}{c}
\displaystyle\mathop{{\bf I}\,(\epsilon )}_{(v-1\times n)} \\
\displaystyle\mathop{{\bf A}\,(\alpha)}_{(l\times n)} \\
\end{array}\right\}\eqno (7)$$
as can be shown is also equal to $n$.
This composite matrix $\displaystyle\mathop{\bf M}_{(n\times n)}$ contains the incidence matrix $\displaystyle\mathop{\bf I}_{(v-1\times
n)}\!\!\!\!(\epsilon_{ij})$ and the independent loop matrix $$\displaystyle \mathop{\bf A}_{(l\times
n)}\!\!(\alpha)=\mathop{\bf C}_{(l\times n)}\!\!(\epsilon_{ij})
\mathop{{\bf D}_{n}}_{(n\times n)}\!\!(\alpha)$$ (where $\displaystyle\epsilon_{ij}=0,\, \pm 1)$; here $\displaystyle\mathop{\bf C}_{(l\times n)}\!(\epsilon_{ij})$ is the usual cyclic (circuit) matrix and $\displaystyle
\mathop{{\bf D}_{n}}_{(n\times n)}\!\!(\alpha)$ — diagonal matrix
$$\displaystyle
{\bf D}_{n}\,(\alpha)=
\displaystyle \left(
{\arraycolsep=1.5dd
\begin{array}{cccc}
\alpha_1 & & & \\
&\alpha_2 & & \\
& & \ddots{} & \\
& & & \alpha_n \\
\end{array}}\right).$$
It is known (see Ref. \[2\]) that $\displaystyle \mathop{\bf C}_{(l\times n)}\times\mathop{{\bf
I}^{\bf T}}_{(n\times v-1)}\equiv 0.$
The rest of Landau extremal equations for internal 4-momenta
$$\displaystyle q^{2}_{i}=m_{i}^{2}; \ \ \ i=1,\,2,...,\,n
\eqno(8)$$
together with the initial conditions for external 4-momenta
$$\displaystyle p_{j}^{2}=M_{j}^{2}; \ \ \ j=1,\,2,...,\ N,
\eqno(9)$$
where $\displaystyle m_{i}$ and $\displaystyle M_{j}$ are the masses of internal and external lines in a new derivative physical graph, set up that all 4-momenta — an internal $\displaystyle
q_{i}$ as well as an external $\displaystyle p_{j}$ — can be located according to Landau (see Ref. \[3\]) factually on the mass shell.
Thus the physical graph includes in fact only the real stable physical objects and the matrix equation
$$\displaystyle \mathop{\bf M}_{(n\times
n)}\mathop{(Q_{n})}_{(n\times 1)}=\mathop{(P_{n})}_{(n\times
1)}\eqno (10)$$
where by means of $\displaystyle\mathop{(Q_{n})}_{(n\times 1)}$ and $\displaystyle\mathop{(P_{n})}_{(n\times 1)}$ are denoted the column vectors $$\displaystyle \mathop{(Q_{n})}_{(n\times
1)}=\left(\begin{array}{c}
q_{1} \\
q_{2} \\
q_{3} \\
q_{4} \\
\vdots{} \\
q_{n-1} \\
q_{n}
\end{array}\right),\ \mathop{(P_{n})}_{(n\times
1)}=\left(\begin{array}{c}
p_{1} \\
p_{2} \\
\vdots{} \\
p_{N-1} \\
0 \\
\vdots{} \\
0 \\
\end{array}\right) \eqno(11)$$
may serve as a starting point for the subsequent consistent consideration of networks of these physical graphs in momentum space. It should be noted, however, that the direct solution of the full set of algebraic equations (3)–(6), (8), (9), for a separate as well as for a complicated network of such physical graphs, is rather complex and labor-intensive, but quite feasible. The point is that the results may be classified on the basis of the forms of construction of the special $l,\,v$-sequences of physical graphs, namely the ladder and the parquet $l,\,v$-sequences.
2 Kirchhoff’s laws matrix for\
an electric network analogy of 1847 {#kirchhoffs-laws-matrix-for-an-electric-network-analogy-of-1847 .unnumbered}
===================================
In agreement with Kirchhoff's classical work of 1847 (see Ref. \[4\]), which initiated the development of graph theory (especially as regards the particular graphs, the trees), the investigation of an arbitrary electric network with $n$ wires is carried out by means of two laws:
$$\!\!\displaystyle \sum\limits^{}_{[j]}\epsilon J=J^{*}_{j}; \ \
\ \ j=1,2,\ldots{} ,N,\eqno(12a)$$
$$\displaystyle \sum\limits^{}_{(i)}\epsilon J=0;\ \ i=1,2,\ldots{}
,v-N,\eqno(12b)$$
$$\displaystyle \sum\limits^{}_{<r>}J\omega=\sum\limits^{}_{<r>}{\cal E};\ \ r=1,2,\ldots{} ,l.\eqno(13)$$
The first, current law for $N$ external (algebraic sum over \[$j$\] in (12a)) and for $v-N$ internal (algebraic sum over ($i$) in (12b)) vertices of the electric network states that the algebraic sum of the currents $J$ flowing through all the network wires that meet at a vertex is the external current $J^{*}$ or zero. The second, voltage law for $l$ circuits (loops) states that the algebraic sum of the electromotive forces ${\cal E}$ within any closed circuit is equal to the algebraic sum of the products of the currents $J$ and the resistances $\omega$ in the various portions of the circuit.
The Kirchhoff's laws (12a), (12b), (13) for an electric network, which involve only the real measurable quantities $J,{\cal E}$ (and also $\omega$), are formally the same as the linear equations (3)–(5) for a topologically identical physical graph with the real object quantities $q,\ p$. The electric-circuit analogy, still with the suitable Feynman diagram, was carried out earlier, in 1959–64, by Bjorken, T. T. Wu and Boiling (see Ref. \[1\]). It is easy to show (see, for example, Ref. \[5\]), however, that the number of independent equations among (12a)–(12b), as in the case of the set (3) and (4), is equal to $v-1$, and therefore the rank of the resulting square matrix of coefficients
$$\displaystyle \mathop{\bf M}_{(n\times
n)}^{}=\left\{\begin{array}{c}
\displaystyle\mathop{{\bf I}(\epsilon)}_{(v-1\times n)} \\
\displaystyle\mathop{{\bf C}(\epsilon)}_{(l\times
n)}^{}\displaystyle\mathop{{\bf D}_{n}(\omega)}_{(n\times n)}^{}
\end{array}\right\}\eqno(14)$$
corresponds exactly to the number of unknown components of the column vector $(\displaystyle J_{n})$, i.e. to $n$.
In this way the solvable matrix equation for the typical Kirchhoff matrix (14) may be written in the explicit form
$$\displaystyle \left\{\begin{array}{c}
\displaystyle\mathop{{\bf I}(\epsilon)}_{(v-1\times n)} \\
\displaystyle\mathop{{\bf C}(\epsilon)}_{(l\times
n)}^{}\displaystyle\mathop{{\bf D}_{n}(\omega)}_{(n\times n)}^{}
\end{array}\right\}\displaystyle
\mathop{(J_{n})}_{(n\times
1)}^{}=\left(\begin{array}{c}
\displaystyle\mathop{(J^{*}_{v-1})}_{(v-1\times 1)}^{} \\
\displaystyle\mathop{{\bf C}\,(\epsilon)}_{(l\times
n)}^{}\displaystyle\mathop{({\cal E}_{n})}_{(n\times 1)}^{}
\end{array}\right)\eqno(15)$$
where $\displaystyle \mathop{(J_{n})}_{(n\times 1)}^{}$, $\displaystyle \mathop{(J^{*}_{v-1})}_{(v-1\times 1)}^{}$ and $\displaystyle \mathop{({\cal E}_{n})}_{(n\times 1)}^{}$ denote the column vectors
$$\displaystyle \mathop{(J_{n})}_{(n\times
1)}^{}=\left(\begin{array}{c}
J_{1} \\
J_{2} \\
J_{3} \\
\vdots \\
J_{n-1} \\
J_{n}
\end{array}\right), \
\mathop{(J^{*}_{v-1})}_{(v-1\times
1)}^{}=\left(\begin{array}{c}
J^{*}_{1} \\
\vdots{} \\
J^{*}_{N-1} \\
0 \\
\vdots{} \\
0
\end{array}\right),$$
$$\displaystyle
\mathop{({\cal E}_{n})}_{(n\times 1)}^{}=\left(\begin{array}{c}
{\cal E}_{1} \\
{\cal E}_{2} \\
{\cal E}_{3} \\
\vdots{} \\
{\cal E}_{n-1} \\
{\cal E}_{n}
\end{array}\right).\eqno(16)$$
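As a concrete illustration of how the two-layer system (15) is assembled and solved, here is a minimal numerical sketch. It is our own addition: the routine, the tiny two-wire example network and its numbers are illustrative assumptions rather than part of the original formalism. The upper rows of the matrix implement the current law (12a)–(12b), the lower rows the voltage law (13).

```python
from fractions import Fraction as F

def solve_kirchhoff(incidence, loops, resistances, j_ext, emf):
    """Stack the current-law rows I(eps) ("upper layer") over the
    voltage-law rows C(eps) D(omega) ("under layer"), as in the
    Kirchhoff matrix (14), and solve M (J_n) = rhs, Eq. (15)."""
    n = len(resistances)
    # upper layer: current law at the v-1 independent vertices
    rows = [[F(x) for x in r] + [F(j)] for r, j in zip(incidence, j_ext)]
    # under layer: voltage law around the l independent loops;
    # the right-hand side is C(eps) (E_n)
    for c in loops:
        lhs = [F(c[k]) * F(resistances[k]) for k in range(n)]
        rhs = sum(F(c[k]) * F(emf[k]) for k in range(n))
        rows.append(lhs + [rhs])
    # Gaussian elimination over exact rationals
    for col in range(n):
        piv = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(n):
            if r != col and rows[r][col] != 0:
                factor = rows[r][col] / rows[col][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[col])]
    return [rows[k][n] / rows[k][k] for k in range(n)]

# Two wires of resistance 1 and 3 joining the same pair of vertices,
# with an external current J* = 4 fed into the first vertex:
#   current law:  J1 + J2 = 4;   voltage law:  1*J1 - 3*J2 = 0
currents = solve_kirchhoff(incidence=[[1, 1]], loops=[[1, -1]],
                           resistances=[1, 3], j_ext=[4], emf=[0, 0])
```

The exact-rational result is $J_{1}=3$, $J_{2}=1$: the current divides in inverse proportion to the resistances, as expected.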
2.1 Skeleton trees basis\
for electric network {#skeleton-trees-basis-for-electric-network .unnumbered}
-------------------------
In accordance with Kirchhoff’s theorem (see Refs. \[2\], \[5\]), every electric network can be replaced by a corresponding graph with the same number of vertices $v$. The solution of the equations (12) and (13) for this equivalent graph may be expressed through the set of skeleton $v$-trees. The maximal set of independent skeleton $v$-trees of the corresponding complete $K_{v}$-graph, with $\displaystyle \left(v\atop 2\right)$ lines, forms the skeleton-tree basis of the electric network. Using this basis, all possible solutions for various concrete networks can be constructed (many examples of such electric networks are described in Ref. \[5\]).
2.2 Kirchhoff–Maxwell topological analysis\
of electric networks {#kirchhoffmaxwell-topological-analysis-of-electric-networks .unnumbered}
-------------------------------------------
The two-layer structure of the typical Kirchhoff matrix (14) (or of the analogous matrix (7)) in the general matrix equation (15) (or in the corresponding matrix equation (10)) shows a separation of the functions of the matrix operator (14) (or (7)): it acts, first, upon the “internal geometry” of the electric network by means of the incidence matrix $\displaystyle\mathop{{\bf I}(\epsilon)}_{(v-1\times n)}$ (the “upper layer”) and, second, upon the topologically different equilibrium conditions along the circuits (loops) of the electric network by means of the loop matrix $\displaystyle\mathop{{\bf C}(\epsilon)}_{(l\times n)}$ $\displaystyle\mathop{{\bf D}_{n}(\omega)}_{(n\times n)}$ (the “under layer”), all within the framework of the same skeleton-tree basis. In this connection it is important to note that the “internal geometry” of the real electric network, including the natural skeleton-tree basis, is embedded in the structure of the incidence matrix $\displaystyle\mathop{{\bf I}(\epsilon)}_{(v-1\times n)}$; in effect this creates a specific “graph geometry” beyond any space-time considerations.
By introducing the Helmholtz–Maxwell “circuit currents” instead of the Kirchhoff “branch (wire) currents”, formally analogous concrete calculations can be performed on the basis of the corresponding topologically equivalent Maxwell rules (see Ref. \[5\]) within the same “graph geometry”.
Obviously, this last formalism of the Helmholtz–Maxwell “circuit currents” is similar to the formalism of “($p,\,k$)-diagrams” (see Ref. \[6\]), which may be used as an adequate tool for the analysis of the physical graphs from section 1.2. This problem, however, is beyond the scope of the present paper.
3 “Microgeometry” inside\
of the physical objects\
considered within the framework\
of the input–output scheme {#microgeometry-inside-of-the-physical-objects-considering-within-the-framework-of-the-inputoutput-scheme .unnumbered}
=================================
Heisenberg’s theory of the $S$-matrix connects the input and output of a scattering experiment without seeking to give a localized description of the intervening events, including the inner structure of a propagating physical object.
Introducing a full set of external 4-momenta $p_{j}$ ($j$=1, 2, …, $N$), simultaneously for the $N_{1}$ ingoing and the $N_{2}$ outgoing physical objects $(N_{1}+N_{2}=N)$, we can obtain the solution of the equations (3)–(6), together with the mass-shell conditions (8)–(9), in terms of the independent kinematic invariants $s_{i}=p_{i}^{2}$, $s_{ik}=(p_{i}+p_{k})^{2}$, $s_{ikl}=(p_{i}+p_{k}+p_{l})^{2}$, $s_{iklm}=(p_{i}+p_{k}+p_{l}+p_{m})^{2}$, etc., which are connected with each other by kinematic and geometrical conditions in 4-dimensional momentum space (see Refs. \[7–10\]). We thus obtain the whole physical picture of the real scattering process, in which the inner structure of the participating physical objects is in fact omitted; the latter can be introduced only by inserting a separate complete graph $K_{v}$ with $v$ vertices as a complicated vertex-fragment for the concrete physical object.
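The passage above can be made concrete by computing the invariants $s_{i}$, $s_{ik}$, … directly from sample 4-momenta. This sketch is our own addition; the $(+,-,-,-)$ signature convention and the toy unit-mass momenta are our assumptions.

```python
def mass_sq(p):
    # invariant p^2 = E^2 - |p|^2 for a 4-momentum p = (E, px, py, pz)
    E, px, py, pz = p
    return E**2 - px**2 - py**2 - pz**2

def s_inv(*momenta):
    # kinematic invariant s_{ik...} = (p_i + p_k + ...)^2
    total = [sum(components) for components in zip(*momenta)]
    return mass_sq(total)

# two unit-mass objects colliding head-on in the centre-of-mass frame
p1 = (2.0, 0.0, 0.0, 3**0.5)
p2 = (2.0, 0.0, 0.0, -3**0.5)
s12 = s_inv(p1, p2)  # total invariant energy squared
```

Here $s_{12}=(p_{1}+p_{2})^{2}=16$ while each $p_{i}^{2}=1$, illustrating that the invariants, rather than the frame-dependent components, carry the physical picture.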
3.1 Root trees basis for\
physical graph of microobject {#root-trees-basis-for-physical-graph-of-microobject .unnumbered}
-----------------------------
Within the framework of the input–output scheme for the full physical graph with an inserted complicated $K_{v}$-vertex, serving as a physical microobject vertex-fragment in the general scheme, one has to single out a special separate vertex, the root of the tree, which coincides with the input vertex; the remaining part of the physical graph is then arranged in hierarchical order, creating a path to the output vertex (vertices).
Thus the solution of the corresponding linear equations (3)–(5) for this new hierarchical physical graph may be expressed through the set of root $v$-trees, in contrast to the skeleton $v$-trees used for the electric network in section 2.1.
If we consider the full physical graph as a unique $K_{v}$-vertex in the input–output scheme, then the problem of the discrete physical object structure may be formulated. The maximal set of independent root $v$-trees of the corresponding complete $K_{v}$-graph, with the separate root-vertex and $\displaystyle \left(v\atop 2\right)$ lines, forms the natural root-tree basis for the hierarchical physical graph, i.e. a set of paths between the root-vertex, as the initial point of input, and the final points of output.
Returning to the two-layer structure of the matrix (7) in the general matrix equation (10), now for the discrete physical microobjects in the $S$-matrix input–output scheme, we note again that the incidence matrix $\displaystyle\mathop{{\bf I}(\epsilon)}_{(v-1\times n)}$ (the “upper layer”) acts upon the “internal microgeometry” of the physical microobjects within the framework of the natural fixed root-tree basis. Therefore the incidence matrix $\displaystyle\mathop{{\bf I}(\epsilon)}_{(v-1\times n)}$ contains a specific “graph microgeometry” beyond the space-time approach.
3.2 On the proton mass {#on-the-proton-mass .unnumbered}
----------------------
The fact is that the initial $S$-matrix theory described above has in practice grown into a specific physical graph kinematics formalism, which allows one to carry out calculations for discrete physical microobjects without recourse to the traditional space-time treatment.
We start with the consideration of the proton mass problem within the framework of the results of QCD theory \[11\]. It was shown above (in 3.1) that the “internal microgeometry” of the discrete physical microobjects reflects their inner structure and therefore must correspond to the many-point vertex Green function (without free tails). These points determine the number of vertices of the root trees filling the graph microgeometry of the discrete physical microobjects (in the Riemann \[15\] sense).
If, as is usually supposed, the full proton mass is concentrated in the internal gluon field (Refs. \[12–13\]), the case of pure gluodynamics, then from the Gell-Mann–Low function (Ref. \[14\]) $$\displaystyle \beta (g^{2}) =-b \frac{g^{4}}{16 \pi^{2}} +{\rm O}
\left(\frac{g^{2}}{4 \pi}\right)$$
where $g$ is the coupling constant, we obtain a dimensionless constant $b=11$ characterizing the number of vertices of the corresponding root trees. To set up the graph equivalent of the electron mass $m_{e}$ we choose the simplest root tree graph with $v=2$ vertices, since the number of such root trees is $T_{v=2}=1$ (the case $v=1$ is the trivial graph, an isolated vertex). Therefore the number of root trees with $v=11$ vertices, $T_{v=11}=1842$ (see Ref. \[2\]), determines the proton mass
$$M_{p}=1842 m_{e}.$$
In the standard QCD model with $n_{f}$ quark flavours we have the dimensionless constant $b=11-\frac{2}{3}\, n_{f}$, which at $n_{f}=3$ gives $b=9$ and is responsible for the pion mass: $T_{v=9}=286$ (see Ref. \[2\]), or
$$m_{\pi}=286~m_{e}.$$
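The counts $T_{v}$ quoted above can be reproduced from the classical recurrence for the number of root (rooted) trees on $v$ vertices (Ref. \[2\]). The following sketch is our own illustration:

```python
def root_trees(v_max):
    """T[v] = number of root trees with v vertices, via the classical
    recurrence (n-1) T(n) = sum_{k=1}^{n-1} s(k) T(n-k), where
    s(k) = sum of d*T(d) over the divisors d of k."""
    T = [0, 1]  # T[0] unused; T[1] = 1 is the trivial one-vertex tree
    for n in range(2, v_max + 1):
        total = sum(
            sum(d * T[d] for d in range(1, k + 1) if k % d == 0) * T[n - k]
            for k in range(1, n))
        T.append(total // (n - 1))
    return T

T = root_trees(11)
# T[9] = 286 (pion case, b = 9) and T[11] = 1842 (proton case, b = 11)
```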
3.3 On the “double - and triple - splitting”\
mechanism in the physics and the biology hierarchy {#on-the-double--and-triple--splitting-mechanism-in-the-physics-and-the-biology-hierarchy .unnumbered}
--------------------------------------------------
The investigation of the inner structure of the discrete physical microobjects through the analysis of a specific “graph microgeometry” may be continued on the basis of all kinds of root $v$-trees from the $v$-sequences (see Table 1), which could permit one to classify the various discrete
Table 1. The number $T_{v}$ of root trees with $v$ vertices and the ratio $T_{v+1}/T_{v}$ (see Ref. \[2\]).

$$\begin{array}{r|r|c}
v & T_{v} & T_{v+1}/T_{v} \\
\hline
1 & 1 & 1 \\
2 & 1 & 2 \\
3 & 2 & 2 \\
4 & 4 & 2.250 \\
5 & 9 & 2.22(2) \\
6 & 20 & 2.400 \\
7 & 48 & 2.396 \\
8 & 115 & 2.487 \\
\hline
9 & 286 & 2.514 \\
10 & 719 & 2.562 \\
11 & 1842 & 2.587 \\
\hline
12 & 4766 & 2.620 \\
13 & 12486 & 2.641 \\
14 & 32973 & 2.663 \\
15 & 87811 & 2.681 \\
16 & 235381 & 2.697 \\
17 & 634847 & 2.711 \\
18 & 1721159 & 2.724 \\
19 & 4688676 & 2.736 \\
20 & 12826228 & 2.746 \\
21 & 35221832 & 2.756 \\
22 & 97055181 & 2.764 \\
23 & 268282855 & 2.772 \\
24 & 743724984 & 2.779 \\
25 & 2067174645 & 2.786 \\
26 & 5759636510 & \\
\end{array}$$
microobjects and to estimate their masses. It is easy to see from Table 1 that the ratio $T_{v+1}/T_{v}$ lies in the interval
$2\leqslant T_{v+1}/T_{v}<3$
(for $v\geqslant 2$), and therefore we can perform double- and triple-splitting operations on the “mass-values” of the various $T_{v}$ before fitting to experiment. In Table 1 we may conventionally distinguish three zones:

the interval 1$\leqslant T_{v} \leqslant $115 for the stable atomic and nuclear discrete object quantities,

the interval 286 $\leqslant T_{v} \leqslant $1842 for the pion–nucleon discrete object quantities,

the interval 4766 $\leqslant T_{v}$ for the heavy physical and complicated biological hierarchical object quantities.
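The ratio interval and the three zones can be checked directly; in this sketch (our own addition) the recurrence for $T_{v}$ is again the classical rooted-tree recurrence of Ref. \[2\], and the zone boundaries are those read off from Table 1:

```python
def root_trees(v_max):
    # number of root trees with v vertices (classical recurrence, Ref. [2])
    T = [0, 1]
    for n in range(2, v_max + 1):
        total = sum(
            sum(d * T[d] for d in range(1, k + 1) if k % d == 0) * T[n - k]
            for k in range(1, n))
        T.append(total // (n - 1))
    return T

T = root_trees(26)
# the ratio T_{v+1}/T_v stays inside [2, 3) from v = 2 onward
ratios = {v: T[v + 1] / T[v] for v in range(2, 26)}
# the three conventional zones of Table 1
zones = {
    "atomic/nuclear": [v for v in range(1, 27) if T[v] <= 115],
    "pion-nucleon": [v for v in range(1, 27) if 286 <= T[v] <= 1842],
    "heavy/biological": [v for v in range(1, 27) if T[v] >= 4766],
}
```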
4 Conclusions {#conclusions .unnumbered}
=============
In conclusion, it is important to note that the development of the graph kinematics formalism for discrete physical objects (in the Riemann \[15\] sense), which admits a two-layer matrix description (incidence + loop matrices) with a natural “graph microgeometry” beyond the traditional space-time consideration, leads naturally to more general results in the direction of a Heisenberg–Dyson two-layer physics (see, for example, \[16\]) with its proper “internal geometry”. In the upper layer of this scheme we have the formalism for real physical objects: their momenta, energies, forces, etc. In the under layer we have the symbolic field quantities, such as field strength, induction, intensity, etc., which may be discovered only through their energies and forces in the upper layer.
[99]{}
R. J. Eden, P. V. Landshoff, D. I. Olive, J. C. Polkinghorne, [*The Analytic S-Matrix*]{} (Cambridge University Press, 1966)
Frank Harary, [*Graph Theory*]{} (Addison-Wesley Publishing Company, 1969)
L. D. Landau, Nucl. Phys. [**13**]{}, 181 (1959)
G. Kirchhoff, Poggendorf Ann. Phys. [**72**]{}, 497 (1847)
S. Seshu, M. Reed, [*Linear Graph and Electrical Networks*]{} (Addison-Wesley Publishing Company, 1970)
V. E. Asribekov, Zh. Eksp. Teor. Fiz. [**48**]{}, 1328 (1965) \[Sov. Phys. JETP [**21**]{}, 887 (1965)\]
V. E. Asribekov, Nucl. Phys. [**34**]{}, 461 (1962); Phys. Lett. [**2**]{}, 284 (1962); Zh. Eksp. Teor. Phys. [**43**]{}, 1826 (1962) \[Sov. Phys. JETP [**16**]{}, 1289 (1962)\]
F. Rohrlich, Nucl. Phys. [**67**]{}, 659 (1965); Nuovo Cimento [**38**]{}, 673 (1965)
J. Tarski, Journ. Math. Phys. [**1**]{}, 149 (1960)
E. Byckling, K. Kajantie, [*Particle Kinematics*]{} (John Wiley and Sons, 1973)
W. Marciano, H. Pagels, [*Quantum Chromodynamics*]{}, Phys. Rep. [**36C**]{}, 139 (1978)
L. B. Okun, [*Elementary Particles Physics*]{} (Nauka, Moscow, 1984)
M. B. Voloshin, Priroda \[Sov. Nature\], 1, 54 (1979)
M. Gell-Mann, F. Low, Phys. Rev. [**95**]{}, 1300 (1954)
B. Riemann, [*Über die Hypothesen, welche der Geometrie zu Grunde liegen*]{}, Gött. Abhandlungen. [**13**]{} (1868)
W. Heisenberg, [*Physik und Philosophie*]{} (Frankfurt am Main, 1959)
---
abstract: |
We look at some dynamic geometries produced by scalar fields with both the “right" and the “wrong" sign of the kinetic energy. We start with anisotropic homogeneous universes with closed, open and flat spatial sections. A non-singular solution to the Einstein field equations representing an open anisotropic universe with the ghost field is found. This universe starts collapsing from $t \to -\infty$ and then expands to $t \to
\infty$ without encountering singularities on its way. We further generalize these solutions to ones describing the inhomogeneous evolution of ghost fields. Some interesting solutions with plane symmetry are discussed. These have the property that the same line element solves the Einstein field equations in two mirror regions $\left|t\right|\geq z$ and $\left|t\right|\leq z$, but in one region the solution has the *right* and in the other the *wrong* sign of the kinetic energy. We argue, however, that a physical observer cannot reach the mirror region in a finite proper time. The self-similar collapse/expansion of these fields is also briefly discussed.
author:
- Alexander Feinstein and Sanjay Jhingan
title: Ghosts in a Mirror
---
Introduction {#intro}
============
Recently several authors have discussed scalar fields with negative kinetic energies (NKE) [@Carroll; @Gibbons; @Nojiri]. In [@Carroll] these fields were studied in connection with the dark energy problem in the universe. Also, since observational evidence does not exclude cosmological models with a pressure-to-density ratio $<-1$, various models with negative energy densities have recently been studied by several groups [@negative].
In [@Gibbons], to motivate these studies, it was argued that one may find several physical examples, such as Lifshitz transitions in condensed matter physics or an unusual dispersion relation for rotons in liquid helium, suggesting that the NKE may appear in nature. Also, negative energy densities may appear in quantum particle creation processes in curved backgrounds [@Candelas] or in squeezed states of the electromagnetic fields [@Slusher]. Their appearance, however, signals usually the unhealthy vacuum instability. Therefore, if the time scale of this instability is too short, the matter cannot serve as viable candidate for the dark energy component [@Carroll]. A different reasoning towards NKE, however, may be applied if one approaches the cosmological singularity problem.
One believes that the emergence of spacetime singularities in General Relativity suggests the breakdown of the theory at its natural scales. At these scales one expects the quantum corrections to take a leading role and save the situation. Different approaches to regularize the singularity have been undertaken. Phenomenologically, and in the light of all the standard singularity theorems, what appears to be the simplest way to tackle the singularity problem is to allow for negative energy densities, and probably some degree of anisotropy and/or inhomogeneity. This would produce the repulsive gravity effects and may smoothen the singularity. Phenomenologically, again, one can associate the negative energy densities with the back reaction of the quantum fluctuations, so that the idea in itself is not that ridiculous. Thus, allowing for NKE may prove to be an interesting approach to spacetime singularities in General Relativity. Here, the problem of the vacuum instability should be irrelevant, and even may become a blessing, for if these exotic fields decay rapidly, after having smoothed the singularity, this well could be a reason as to why we live in a ghost-free world. Consequently, it is worthwhile to have a closer look at some geometries produced by such fields, and we suggest here that one should keep her/his eyes wide open, just in case.
We will be especially interested in dynamical spacetimes in this setting; some static solutions in the ghost sector were presented elsewhere [@Gibbons; @Gibbons-Rasheed]. To concentrate on the simplest examples, we consider the energy-momentum tensor of a massless scalar field in the following form, $$\label{eq:emtensor}
T_{\mu \nu} = \epsilon(2 \varphi_{,\mu} \varphi_{,\nu} - g_{\mu\nu}
\varphi_{,\alpha}\varphi^{,\alpha}) ,$$ which is derived from the Lagrangian $$\label{eq:action}
{\cal L} = {\cal R} -2 \epsilon\varphi_{,\alpha}\varphi^{,\alpha} ,$$ and where $\varphi$ is a scalar, ${\cal R}$ is the Ricci scalar and $\epsilon$ may take values $1$ and $-1$. Our metric signature convention is $(-,+,+,+)$. When $\epsilon = 1$, the energy-momentum tensor is of a standard form, while $\epsilon =
-1$ stands for the ghost field.
HOMOGENEOUS ANISOTROPIC MODELS {#H-A-M}
==============================
Our starting point is the family of Kantowski-Sachs metrics. We choose these space-times because they combine both the non-trivial curvature as well as the effects of anisotropy. We should note, moreover, that we couldn’t find any other non-pathological, dynamical homogeneous and isotropic spacetime with the ghost fields.
With the energy-momentum tensor given by (\[eq:emtensor\]) we find the following formal solution to the Einstein field equations, $$\begin{aligned}
\label{Kantowski-Sachs}
ds^2=-(\frac{2\eta}{t}-k)^{-\alpha}dt^2+
(\frac{2\eta}{t}-k)^{\alpha}dr^2 \nonumber \\
+ t^2(\frac{2\eta}{t}-k)^{1-\alpha}(d\theta^2+f_k(\theta)^2d\phi^2),\end{aligned}$$ with the scalar field given by $$\label{eq:KS-sol}
\varphi(t)=\frac{1}{2}\sqrt{\frac{1-\alpha^2}{\epsilon}}
\log\left(\frac{2\eta}{t}-k\right).$$ Here $k$ is spatial curvature $(k = -1,0,1)$, and $$\label{eq:ftheta}
f_k(\theta)=\left\{
\begin{array}{ll}
\sin \theta & k=1 , \\
\theta & k=0 , \\
\sinh \theta & k=-1 .
\end{array}\right.$$ Now, for $\epsilon=1$ and $|\alpha| < 1$, we recover the usual Einstein-dilaton solution given, for example, in [@AF-VM].
When $\epsilon=-1$ one must continue the scalar field analytically in order to end up with a real field and a non-pathological metric. In this case one may obtain real solutions simply by considering $|\alpha|>1$. When $\alpha=0$, however, we must analytically continue the $\log$ function. It follows that real solutions in the homogeneous case are only possible when $k\neq0$. On the other hand, when a certain amount of inhomogeneity is introduced, the extension of the $k=0$ solutions is also possible. We will discuss these cases in the next section.
The two ghost solutions with $\alpha=0$ are:
- $k=1$ $$\begin{aligned}
\label{eq:Kone}
ds^2& = &-dt^2+dr^2-(\eta^2+t^2)(d\theta^2+\sin^2\theta d\phi^2) ,\\
& & \varphi(t)=\arctan\left(\frac{\eta}{t}\right) \nonumber,
\end{aligned}$$
- $k=-1$ $$\begin{aligned}
\label{eq:Kmnone}
ds^2&=&-dt^2+dr^2+(\eta^2+t^2)(d\theta^2+\sinh^2\theta d\phi^2) ,\\
& & \varphi(t) = \arctan\left(\frac{\eta}{t}\right) \nonumber.
\end{aligned}$$
The solution (\[eq:Kone\]) is just the Gibbons-Rasheed massless “ghost” wormhole [@Gibbons-Rasheed], with $t$ now being the spacelike and $r$ the timelike co-ordinate. The solution (\[eq:Kmnone\]), however, represents a *non-singular* anisotropic universe with open spatial sections. Its volume expansion $\Theta$ is: $$\label{eq:expansion}
\Theta=\frac{2t}{\eta^2+t^2} .$$
![[Volume expansion of nonsingular universe]{} \[fig:expansion\]](expansion.eps){width="7cm" height="4cm"}
The shear reads: $$\label{eq:shear}
\sigma= -\frac{t}{\sqrt{3}(\eta^2+t^2)}$$ The average scale factor $a(t)=(\eta^2+t^2)^{1/6}$ behaves as $a\propto t^{1/3}$ for $\left|t\right|\gg \eta$, as is proper for a stiff fluid.
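As a quick consistency check (our own sketch, with the free parameter set to $\eta=1$), the expansion (\[eq:expansion\]) is just the logarithmic time derivative of the comoving volume element $\sqrt{g_{3}}\propto(\eta^{2}+t^{2})$:

```python
from math import log

eta = 1.0  # illustrative choice of the free parameter of (eq:Kmnone)

def volume(t):
    # comoving volume element: sqrt(det g_3) ~ (eta^2 + t^2)
    # (the angular factor sinh(theta) is time independent and drops out)
    return eta**2 + t**2

def expansion(t):
    # eq. (eq:expansion): Theta = 2t / (eta^2 + t^2)
    return 2 * t / (eta**2 + t**2)

# Theta should equal d(ln V)/dt; verify by central differences
h = 1e-6
for t in (-3.0, -0.4, 0.8, 5.0):
    numeric = (log(volume(t + h)) - log(volume(t - h))) / (2 * h)
    assert abs(numeric - expansion(t)) < 1e-6
```

The expansion is negative for $t<0$ (collapse), vanishes at $t=0$ and is positive for $t>0$ (expansion), never diverging: the bounce is singularity-free.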
The non-singular universe is flat and static as $t\to -\infty$, with a vanishing scalar field. It collapses towards a flat universe at $t=0$, where the scalar stabilizes to a constant, and then expands again to a flat universe as $t\to \infty$, with a vanishing scalar field. The solution (\[eq:Kmnone\]) is certainly too crude to represent even a toy cosmology. But imagine that the massless scalar field with the NKE is supplemented by ordinary matter, or by a potential as in [@Carroll]. Then it may be possible that the evolution of the model at different stages is dominated either by the ghost field or by the ordinary matter. Near the singularity the kinetic terms always dominate the potential, and therefore a short period of ghost domination may give rise to a regular solution. There could also be situations where both ingredients conspire to produce a cosmological constant, as in [@Gibbons], during some epoch. Such a model could then be an interesting test bench to address the singularity, inflation and coincidence problems in one go. Realistic model building, however, is far beyond the scope of the present paper.
In a different setting, one can make a connection between the solution (\[eq:Kmnone\]) and the singularity problem in low energy string cosmology. The related models are those of the pre-Big Bang scenario (for a review see, [@Lidsey; @Gasperini]). One of the main difficulties in these models is the smooth exit from the pre-Big Bang into the post-Big Bang regime. The possibilities of the smooth transitions in the context of the lowest-order effective string action were discussed in [@Risi]. There it was concluded that the transitional singularities at $t=0$ can be avoided for anisotropic backgrounds provided one accepts the sources with negative energy densities. The solution (\[eq:Kmnone\]) therefore represents an exact evolution of such a model near $t=0$.
Inhomogeneous solutions {#I-S}
=======================
When $k=0$, the spatially flat case, the two dimensional line element $(d\theta^2 + \theta^2 d\phi^2)$ can be cast into an explicitly flat form: $dx^2+dy^2$. In this case the analytical continuation of the scalar field simply does not exist. We, therefore, will allow for a certain degree of inhomogeneity and consider the following form of the plane symmetric solutions, $$\label{eq:plane}
ds^2=\frac{1}{\sqrt{t}} e^{f(t,z)} (-dt^2+dz^2)+t(dx^2+dy^2) .$$ It should be noted that the discussion here can be generalized in a straightforward manner to any $G_2$ spacetime with two commuting spacelike Killing vectors by introducing transversal gravitational degrees of freedom in the following form $(e^{P(t,z)}dx^2+e^{-P(t,z)}dy^2)$. The off-diagonal terms may also be included. For the sake of clarity, however, we stick to a simple plane symmetric case, which we feel is sufficient to make our point.
Assuming the geometry (\[eq:plane\]) the Klein-Gordon equation reads $$\label{eq:K-G}
{\varphi_{,tt}}+\frac{1}{t}{\varphi_{,t}}-{\varphi_{,zz}}=0,$$ and when $\varphi(t,z)$ is the solution of this equation, the metric function $f(t,z)$ can be obtained by quadratures: $$\begin{aligned}
\label{eq:quadrature}
f_{,t}&=&2 \epsilon t ({\varphi_{,t}}^2+{\varphi_{,z}}^2) \\
f_{,z}&=& -4 \epsilon {\varphi_{,t}}{\varphi_{,z}} .\end{aligned}$$ Again, if $\epsilon$ is $1$ we are dealing with the standard scalar field, and when $\epsilon=-1$ it is a ghost. Thus, the NKE solutions are related to the positive kinetic energy solutions by $f\to -f$ transformation.
We consider now two possible solutions to (\[eq:K-G\]): $$\label{eq:sol_zgt}
\varphi_1= b \; \hbox{arccosh}\left(\frac{z}{t}\right), \qquad{|z|
\geq t}$$ and $$\label{eq:sol_zlt}
\varphi_2= b \; \hbox{arccos}\left(\frac{z}{t}\right), \qquad{|z|
\leq t} .$$ Note that $\varphi_1$ relates to $\varphi_2$ by $b\to i b$ transformation even though they “operate” in two different, mirror, regions of spacetime. The positive energy solutions of Einstein equations for $\varphi_1$ and $\varphi_2$ are: $$\label{eq:phi_1}
ds^2 [1]=\frac{1}{{\sqrt t}}
(\frac{z^2}{t}-t)^{-2 b^2}(-dt^2+dz^2)+t(dx^2+dy^2),$$ for $|z| \geq t$, and $$\label{eq:phi_2}
ds^2[2]=\frac{1}{{\sqrt t}}
(t-\frac{z^2}{t})^{2 b^2}(-dt^2+dz^2)+t(dx^2+dy^2),$$ for $ |z| \leq t$. It is amazing, however, that the positive and the negative energy solutions interchange when the kinetic energy sign flips over and the power $2 b^2$ is even: the negative energy solution for $\varphi_1$, is given by the line element (\[eq:phi\_2\]), while the negative energy solution for $\varphi_2$ is given by (\[eq:phi\_1\]). Therefore, the same geometry (\[eq:phi\_1\]) is induced by either the normal scalar field in region $|z| \geq t$, or by a ghost field in its mirror region $|z| \leq t$. The same applies to (\[eq:phi\_2\]).
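One can check numerically, by finite differences, that both scalars solve equation (\[eq:K-G\]) in their respective regions. This sketch is our own addition, with the amplitude set to $b=1$:

```python
from math import acosh, acos

def kg_residual(phi, t, z, h=1e-4):
    # residual of phi_tt + (1/t) phi_t - phi_zz = 0, central differences
    phi_tt = (phi(t + h, z) - 2 * phi(t, z) + phi(t - h, z)) / h**2
    phi_t = (phi(t + h, z) - phi(t - h, z)) / (2 * h)
    phi_zz = (phi(t, z + h) - 2 * phi(t, z) + phi(t, z - h)) / h**2
    return phi_tt + phi_t / t - phi_zz

phi_1 = lambda t, z: acosh(z / t)  # eq. (sol_zgt), region |z| >= t
phi_2 = lambda t, z: acos(z / t)   # eq. (sol_zlt), region |z| <= t

assert abs(kg_residual(phi_1, 1.0, 2.0)) < 1e-5
assert abs(kg_residual(phi_2, 2.0, 1.0)) < 1e-5
```

Note that $\varphi_1$ must be sampled strictly inside $|z|>t$ and $\varphi_2$ inside $|z|<t$, so that the finite-difference stencil never leaves the domain of the inverse functions.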
Now, for the solution (\[eq:phi\_1\]), the Ricci scalar is $$\label{eq:Ricci_1}
{\cal R} = -2 b^2 t^{-3/2}(\frac{z^2}{t}-t)^{2 b^2},$$ and, for the solution (\[eq:phi\_2\]), $$\label{eq:Ricci_2}
{\cal R} = 2 b^2 t^{-3/2} (t-\frac{z^2}{t})^{-2 b^2}.$$ Therefore, the solutions (\[eq:phi\_2\]) are singular at $t=z$, but the solutions (\[eq:phi\_1\]) are not. Of course, there are also singularities at $t=0$ in these expressions, but these are not of interest in this section. We have further checked other curvature scalars for (\[eq:phi\_1\]), such as the squares and cubes of the Ricci and Riemann tensors, and found them regular at $t=z$. The possibility then arises to continue the solution (\[eq:phi\_1\]) as it stands into the region $|z|<t$ by flipping the sign of the action. The spacetime would then look as depicted in fig. \[fig:mirror\] below.
![Mirror solutions[]{data-label="fig:mirror"}](umirror.eps){width="10cm" height="5cm"}
The picture looks quite unexpected. One has equivalent geometry on both sides of the mirror line $z=t$, produced by the standard scalar field in region II, while the same geometry in region I is produced by a ghost. Technically this happens because the purely imaginary transformation of the scalar field $\varphi_1$ into $\varphi_2$ produces a real scalar field in a mirror region. It is tempting to call such scalar fields mirror-ghosts. A different pair of scalar fields which solve the equation (\[eq:K-G\]) and have the mirror-ghost properties are $\varphi =b/\sqrt{t^2-z^2}$ and $\varphi =b/\sqrt{z^2-t^2}$.
On a closer inspection of the geometry, described by the fig. \[fig:mirror\], one finds that the co-ordinates $t$ and $z$ are not that appropriate to describe the situation. It is convenient to introduce the following chart $t=u^n +v^n$ and $z=u^n - v^n$, where $n=1/(1-2b^{2})$ [@Feinstein], and since we have insisted that $2b^2$ is even, the power $n$ must be negative. It is then easy to see that the surface $z=t$ corresponds to $v^n=0$ and it cannot be reached by a physical observer in a finite proper time. Thus the two regions are *worlds apart* for their internal observers. The “mirror" and the “real" world do not mix up![^1]
We now consider inhomogeneous evolution with nontrivial spatial curvature. To do so, we generalize the $\alpha=0$ solutions of the previous section using the following ansatz:
$$\label{roberts}
ds^2=-dudv + R^2(u,v) (d\theta^2+f_k(\theta)^2 d\phi^2) .$$
with $f_{k}(\theta)$ as defined in Eq. (\[eq:ftheta\]).
With the energy-momentum tensor given by (\[eq:emtensor\]) we find the following solution: $$\begin{aligned}
\label{R-sol}
\varphi(u,v) &=& \frac{1}{2{\sqrt \epsilon}} \ln\left[\frac{(1-
{\sqrt \epsilon}p) kv - u}{(1+{\sqrt \epsilon}p)k v-u}\right] , \\
R^2(u,v) &=&\frac{1}{4}[(1-\epsilon p^2)k^2v^2-2kvu+u^2] ,\end{aligned}$$ where the parameter $p$ represents the strength of the scalar field. For $k=1$ and $\epsilon=1$ the solution reduces to the one found by Roberts [@Roberts]; it was subsequently investigated thoroughly by various authors in connection with spherically symmetric scalar field collapse (see, for example, [@ONT]). In the standard scalar field case the $k=0$ solutions are flat spacetimes written in plane-wave co-ordinates, while $k=-1$ represents scalar field collapse/expansion with open geometry, where one does not expect black hole formation since the trapped surfaces, if formed, are non-compact. Thus, in the $k=-1$ case one has either a naked singularity or a regular collapse/expansion similar to the $k=1$ case [@ONT].
We now consider the phantom solutions ($\epsilon=-1$). These are given by $$\label{eq:phantom3}
\varphi(u,v)= \hbox{arctan}\left(\frac{pkv}{kv-u}\right) .$$ One can see that the scalar field remains regular everywhere. As a consequence, the scalar curvature is also regular everywhere except at the geometrical centre $u=0$, $v=0$: $$\label{cscalepmn1}
{\cal R} = -\frac{8k^2 p^2 u v}{[p^2k^2v^2+(kv-u)^2]^2}.$$ The higher order curvature invariants behave in a similar way. It is further informative to look at the character of the gradient of the area coordinate $R(u,v)$, i.e., $R_{\alpha}(u,v)R^{\alpha}(u,v)$ [@Senovilla], which is given by $$\label{apparent}
R_{\alpha}R^{\alpha}=k-k^2\frac{p^2uv}{p^2k^2v^2+(kv-u)^2} .$$ The character of the gradient of the area coordinate tells one about the trappedness of the surfaces. It follows that, for a given $k$, the character of the area gradient never changes sign (this does not happen in the non-ghost case, where the area gradient is sensitive to the parameter $p$). Note that the change from $k=1$ to $k=-1$ corresponds to the change $v\to-v$, or, in the $t$ and $r$ co-ordinates ($t=u+v$ and $r=u-v$), to an $r \leftrightarrow t$ interchange, exactly as in the homogeneous cases (\[eq:Kone\]), (\[eq:Kmnone\]). Therefore the solutions with $k=1$ should be seen as dynamical generalizations of the Gibbons-Rasheed wormhole [@Gibbons-Rasheed]; in fact, looked at carefully in ($t,r$) coordinates, they represent a time sequence of static wormholes. The solutions with $k=-1$ are inhomogeneous generalizations of the solution (\[eq:Kmnone\]). This interpretation is further supported by the behaviour of the area gradient, which is globally timelike (or null) in the cosmological case and globally spacelike (or null) in the $k=1$ case.
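The closed form (\[apparent\]) can be verified numerically against the phantom area coordinate, for which $(1-\epsilon p^{2})=1+p^{2}$. This sketch is our own addition; the value $p=0.5$ and the sample points are arbitrary illustrative choices. For the metric $ds^{2}=-du\,dv$ one has $g^{uv}=-2$, hence $R_{\alpha}R^{\alpha}=-4R_{,u}R_{,v}$.

```python
from math import sqrt

p = 0.5  # illustrative scalar-field strength

def R(u, v, k):
    # phantom (eps = -1) area coordinate from eq. (R-sol)
    return 0.5 * sqrt((1 + p**2) * k**2 * v**2 - 2 * k * v * u + u**2)

def grad_sq_numeric(u, v, k, h=1e-6):
    # R_alpha R^alpha = -4 R_u R_v for the metric ds^2 = -du dv
    R_u = (R(u + h, v, k) - R(u - h, v, k)) / (2 * h)
    R_v = (R(u, v + h, k) - R(u, v - h, k)) / (2 * h)
    return -4 * R_u * R_v

def grad_sq_formula(u, v, k):
    # closed form, eq. (apparent)
    return k - k**2 * p**2 * u * v / (p**2 * k**2 * v**2 + (k * v - u)**2)

for k in (1, -1):
    for u, v in [(0.3, 2.0), (-1.0, 0.7), (2.0, -0.5)]:
        assert abs(grad_sq_numeric(u, v, k) - grad_sq_formula(u, v, k)) < 1e-6
```

Agreement of the two expressions at generic sample points supports the closed form (\[apparent\]).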
One can as well look at the scalar curvature singularity at $u=0$ $\cap$ $v=0$ in $t$ and $r$ coordinates. The singularity occurs at $R(t,r)=0$. In the $k=1$ case this surface is given by the equation $ r^2 +p^2(t-r)^{2}/4 =0$, while in the $k=-1$ case, it is $ t^2 +p^2(t-r)^{2}/4 =0$. Therefore, the singularity occurs at $t=r=0$.
One may interpret the $k=1$ solution along the lines of [@ONT]. These solutions then represent non-singular ghost field collapse. The ghost fields should be thought of as imploding from past null infinity ($u\to -\infty$). One can easily check that the scalar field vanishes on $v=0$ and is constant along the $u=0$ null hypersurfaces. The mass function vanishes on both, and so does the flux across these null hypersurfaces. Therefore one can match the $v<0$ and $u>0$ regions to flat spacetime [@ONT], avoiding singularities. We note that the parameter $p$ plays no essential role here.
Conclusions
===========
We have looked at various geometries produced by massless scalar fields with both positive and negative kinetic energies. In the homogeneous case, we have shown the existence of a non-singular anisotropic solution with open spatial sections and NKE. Assuming inhomogeneous plane symmetry, we have found mirror images with exactly the same geometry in different regions of spacetime, produced by scalar fields with positive kinetic energy in one region and negative kinetic energy in the other. We have shown, however, that physical observers from the positive-energy region cannot reach the phantom geometry in a finite proper time. We have further generalized the self-similar scalar field dynamical solutions to include both nontrivial curvature and NKE. These generalize the static ghost worm hole solution to the dynamical case ($k=1$), and the homogeneous anisotropic universe to the inhomogeneous one ($k=-1$). The generalizations introduce singularities which can probably be removed by cutting and pasting methods. In particular, if one interprets these solutions as a phantom scalar field collapse, the singularities can be easily removed.
There seems to be much prejudice and suspicion against fields with NKE, especially due to their possible vacuum instability. Whether stability matters, however, depends on the problem being addressed. If we approach the cosmological singularity, the decay of the fields with NKE should not pose conceptual difficulties. One needs these fields only for short times to smooth the singularity, and when this is achieved they may as well decay: the ghost appears and then disappears, leaving the world ghost-free. It would be interesting to see whether the phenomenological approach considered here can be obtained from field/string theories; we leave this, however, for future work.
Acknowledgements
================
We are grateful to José Senovilla for helpful discussions and valuable comments. A.F. was supported by the University of the Basque Country Grants 9/UPV00172.310-14456/2002, and The Spanish Science Ministry Grant 1/CI-CYT 00172. 310-0018-12205/2000. S.J. acknowledges support from the Basque Government research fellowship.
[99]{}
S. M. Carroll, M. Hoffman and M. Trodden, *Can the dark energy equation-of-state parameter w be less than -1?*, arXiv:astro-ph/0301273.
G. W. Gibbons, *Phantom Matter and the Cosmological Constant*, arXiv: hep-th/0302199.
S. Nojiri and S.D. Odintsov, *Quantum deSitter cosmology and phantom matter*, arXiv:hep-th/0303117 .
R. R. Caldwell, Phys. Lett. B [**545**]{}, 23 (2002); T. Chiba, T. Okabe and M. Yamaguchi, Phys. Rev. D [**62**]{} 023511 (2000); B. Boisseau, G. Esposito-Farese, D. Polarski and A. A. Starobinsky, Phys. Rev. Lett. [**85**]{}, 2236 (2000); V. Faraoni, Int. J. Mod. Phys. D [**11**]{}, 471 (2002); L. Parker and A. Raval, Phys. Rev. D [**60**]{}, 123502 (1999) \[Erratum-ibid. D [ **67**]{}, 029902 (2003)\]; I. Maor, R. Brustein, J. McMahon and P. J. Steinhardt, Phys. Rev. D [**65**]{}, 123003 (2002); J. G. Hao and X. Z. Li, *An attractor solution of phantom field*, arXiv:gr-qc/0302100; Varun Sahni and Yuri Shtanov, *Braneworld models of dark energy*, arXiv:astro-ph/0202346 .
P. Candelas, Phys. Rev. D [**21**]{}, 2185 (1980); D. W. Sciama, P. Candelas and D. Deutsch, Adv. Phys. [**30**]{}, 327 (1981).
R. E. Slusher, L. W. Hollberg, B. Yurke, J. C. Mertz, J. F. Valley, Phys. Rev. Lett. [**55**]{}, 2409 (1985).
G. W. Gibbons and D. A. Rasheed, Nucl. Phys. B [**476**]{}, 515 (1996).
A. Feinstein and M. A. Vazquez-Mozo, Phys. Lett. B [**441**]{}, 40 (1998); A. Buonanno, T. Damour and G. Veneziano, Nucl. Phys. B [**543**]{}, 275 (1999).
J. E. Lidsey, D. Wands and E. J. Copeland, Phys. Rept. [**337**]{}, 343 (2000).
M. Gasperini and G. Veneziano, Phys. Rept. [ **373**]{}, 1 (2003).
G. De Risi and M. Gasperini, Phys. Lett. B [**521**]{}, 335 (2001).
A. Feinstein and J. Ibañez, Phys. Rev. D [ **39**]{}, 470 (1989).
Yu. Kozarev, L. Okun and I. Pomeranchuk, Yad. Fiz. [**3**]{}, 1154 (1966).
S. L. Glashow, Phys. Lett. [B **167**]{}, 35 (1986).
M. D. Roberts, Gen. Rel. Grav. [**21**]{}, 907 (1989).
Y. Oshiro, K. Nakamura and A. Tomimatsu, Prog. Theor. Phys. [**91**]{}, 1265 (1994).
J. M. Senovilla, Class. Quant. Grav. [**19**]{}, L113 (2002).
[^1]: The idea of mirror worlds was introduced many years ago by Yu. Kozarev, L. Okun and I. Pomeranchuk [@Pomeranchuk]. Glashow [@Glashow] has shown that the particles of the two worlds cannot interact. It is interesting that we come to a similar conclusion and from a completely different direction.
---
abstract: |
We search for pair-produced Dirac magnetic monopoles in $35.7~\rm{pb}^{-1}$ of proton-antiproton collisions at $\sqrt{s} = 1.96~\rm{TeV}$ with the Collider Detector at Fermilab (CDF). We find no monopole candidates corresponding to a $95\%$ confidence-level cross-section limit $\sigma <
0.2~{\rm pb}$ for a monopole with mass between $200$ and $700~{\rm
GeV}/c^2$. Assuming a Drell-Yan pair production mechanism, we set a mass limit $m > 360~{\rm GeV}/c^2$.
bibliography:
- 'monopole-prl.bib'
title: ' Direct Search for Dirac Magnetic Monopoles in $p\bar{p}$ Collisions at $\sqrt{s} = 1.96~\rm{TeV}$'
---
The existence of magnetic monopoles would add symmetry to Maxwell’s equations without breaking any known physical law. More dramatically, it would make charge quantization a consequence of angular momentum quantization, as first shown by Dirac [@DIRAC]. With such appeal, monopoles continue to excite interest and have been the subject of numerous experimental searches.
Grand unified theories predict monopole masses of about $10^{17}~\rm{GeV/c^2}$, so cosmic ray experiments have searched extensively for high-mass monopoles produced in the early universe. Accelerator searches for low-mass monopoles have looked for the effects of virtual monopole loops [@MONOVIRTUAL; @MONOVIRTUAL_L3; @MONOVIRTUAL_D0], but the results have been questioned [@MONOVIRTUAL_PROBS]. Detector materials exposed to radiation from $p \bar{p}$ collisions at the Tevatron have been examined for trapped monopoles, but the limit obtained depends on the model for the trapping of monopoles in matter [@MONOTRAPPED]. Despite these efforts, magnetic monopoles have not been discovered [@MONOSUMMARY].
Magnetic monopoles have magnetic charge $g$ satisfying the Dirac quantization condition: $$\frac{g e}{\hbar c} = \frac{n}{2} \Longleftrightarrow
\frac{g}{e} = \frac{n}{2\alpha} \approx 68.5 \cdot n$$ where $n$ is an integer and $\alpha$ is the fine structure constant. In this search, we consider an $n=1$ monopole with mass less than $1~{\rm TeV}/c^2$, spin $\frac{1}{2}$, and no hadronic interactions. Monopoles are accelerated by a magnetic field and are highly ionizing due to the large value of $g/e$.
This search uses a $35.7~{\rm pb}^{-1}$ sample of $p\overline{p}$ collisions at $\sqrt{s} = 1.96~{\rm TeV}$ produced by the Fermilab Tevatron and collected by the CDF II detector during 2003 using a special trigger. The detector consists of a magnetic spectrometer including silicon strip and drift-chamber tracking detectors and a scintillator time-of-flight system, surrounded by electromagnetic and hadronic calorimeters and muon detectors [@CDF]. CDF uses a superconducting solenoid to produce a $1.4~{\rm T}$ magnetic field. The field is parallel to the beam direction, which is taken as the $z$ direction, with $\phi$ the azimuthal angle, and $r$ the radial distance in the transverse plane.
The important detector components for this search are the central outer tracker (COT) [@COT] and the time-of-flight (TOF) detector [@TOF], both positioned inside the solenoid. The coverage of the cylindrical COT extends from a radius of $40~{\rm cm}$ to $137~{\rm cm}$ and to pseudo-rapidity $|\eta| \sim 1$. The COT consists of eight superlayers, each containing 12 layers of sense wires. The COT makes timing measurements for track reconstruction as well as integrated charge measurements for determining a particle’s ionization energy loss $dE/dx$. The COT is surrounded by 216 TOF scintillator bars, which run parallel to the beam line and form a cylinder of radius $140~{\rm cm}$. Each TOF bar is instrumented with a photomultiplier tube (PMT) on each end. The TOF measures both the time and height of PMT pulses; the pulse height is typically used to correct for discriminator-threshold time-slewing. Due to their large ionization and massive production of delta rays, monopoles in scintillator with velocity $\beta > 0.2$ are expected to produce more than $500$ times the light from a minimum-ionizing particle (MIP) [@MONOSUMMARY; @MACRO].
We have built and commissioned a highly ionizing particle trigger that requires large light pulses at both ends of a TOF scintillator bar. The trigger was designed to detect monopoles efficiently while consuming less than $1~{\rm Hz}$ of the CDF data acquisition bandwidth. The electronics response of the TOF has been calibrated [@TOF_TRIGGER; @THESIS_MIKE] to account for non-linearities and channel-to-channel differences. The trigger thresholds of about $30~{\rm MIPs}$ are well below the expected response to a monopole and have a negligible effect on the trigger efficiency.
In the CDF detector, a monopole is accelerated along the uniform solenoidal magnetic field in a parabola slightly distorted by relativistic effects. Because no other particle mimics this behavior, the TOF acceptance must be estimated from Monte Carlo simulation. We have extended the GEANT simulation [@GEANT; @GEANT_MONOPOLES; @THESIS_PHIL; @THESIS_MIKE] to handle magnetic monopoles, including the acceleration from the magnetic field, energy loss and multiple scattering [@AHLEN].
Because the monopole-photon coupling is large and non-perturbative, there is no universally accepted field-theoretic calculation of magnetic-monopole production. However, monopole interactions with matter, such as scattering, require only a replacement of the electric charge with the monopole’s effective charge $g \beta$. This has led the authors of Ref. [@MONOTRAPPED] to adopt a heuristic production model by making the same replacement for Drell-Yan monopole pair production, which we take as our primary benchmark.
Either a monopole or anti-monopole must reach the TOF detector in order to cause a trigger. To calculate the TOF acceptance for the heuristic pair production mechanism, we produce lepton Drell-Yan events with Pythia [@PYTHIA] with the lepton mass replaced by the monopole mass, and weight events according to the additional velocity dependence. The TOF acceptance for monopole pairs simulated with GEANT is shown in Figure \[fig:tofacc\]. Light monopoles, accelerated strongly by the magnetic field, tend to be swept out of the detector before reaching the TOF. Heavy monopoles, produced near threshold, suffer the same fate.
Because we are unable to test experimentally the model for material interactions, we assign a systematic uncertainty of one half the total calculated effect, measured by comparing the TOF acceptance for the full simulation with a fictitious detector consisting of the TOF only. The material in the detector lowers the acceptance due to energy loss and multiple scattering, a $3\%$ systematic error for intermediate-mass monopoles. This method likely overestimates the uncertainty; varying the energy-loss model between a naive model where $e \rightarrow g\beta$ and the full treatment of Ref. [@AHLEN] has a negligible effect.
The TOF acceptance depends on the monopole production kinematics. To quantify this dependence, we consider separately the Drell-Yan mechanism without the additional velocity dependence and with monopole production uniform in the cosine of the polar angle in the center of mass frame. The total variation in the acceptance is $10\%$. We therefore present results for our benchmark mechanism only, with the understanding that mass limits for other production mechanisms can be inferred from the cross-section limit with reasonable accuracy.
During each event, the TOF electronics makes a single measurement for each PMT. Light from other particles, called spoilers, can reach a PMT before the light from monopoles, starting the charge integration. If the monopole light does not reach the PMT within the $20~{\rm ns}$ charge integration window, the monopole’s light will not be integrated and the trigger will not fire. Our studies show that a pure Monte Carlo simulation underestimates the effect of spoilers seen in data. We therefore estimate the spoiler fraction by embedding Monte Carlo produced monopoles in real $Z \rightarrow e^+e^-$ data. Because these are high-mass central events produced by a Drell-Yan mechanism, we expect the distribution of other particles in the event to be similar to that of a monopole-pair production event. We exclude the bars with signals from the electrons and count the number of spoiler events, which have real pulses arriving more than $20~{\rm ns}$ before the simulated pulse from a magnetic monopole.
The systematic uncertainty is dominated by the uncertainty in the time needed to integrate enough of the monopole’s charge to cause a trigger. To quantify this effect, we note that rise times for TOF pulses are typically less than $1~{\rm ns}$ and redo the calculation with a $15~{\rm ns}$ integration window. We take one-half the difference as a systematic uncertainty. Other effects, such as the dependence on luminosity, are much smaller for our sample. For a $400~{\rm GeV}/c^2$ monopole, the spoiler fraction is $2\% \pm 1\%$ with a $3\%$ systematic uncertainty.
Massive monopoles can have low velocities, causing them to arrive at the TOF too late to cause a trigger. The timing acceptance is calculated with a Monte Carlo simulation by requiring pulses to arrive within the $54~{\rm ns}$ timing window. Only heavy monopoles move slowly enough to be affected: a $900~{\rm GeV}/c^2$ monopole is out of time in $10\%$ of events. The effect on lighter monopoles is negligible.
Monopoles curve in the $r z$ plane, in sharp contrast to electrically charged particles, which curve in the $r \phi$ plane. A specialized reconstruction program isolates monopole candidates using data from the COT. Candidates consist of coincident track segments composed entirely of hits with large ionization, consistent with a straight line in the plane perpendicular to the magnetic field.
The COT electronics encodes the integrated charge as the width of a hit, which is the ionization measurement used for monopole candidate selection. A typical MIP produces hit widths of about $20~{\rm ns}$. An extrapolation of the non-linear COT response for ordinary particles predicts that monopoles would produce hit widths of about $230~{\rm ns}$ ($1000$ MIPs), still within the dynamic range of the COT. We do not use this extrapolation. Instead we cut in the tail of the width distribution from ordinary tracks, found to be at $140~{\rm ns}$ ($50$ MIPs) in minimum-bias data collected with an open trigger highly efficient for inelastic $p \bar{p}$ collisions. Hits with charge below this amount are not considered by the monopole reconstruction. As magnetic monopoles have much greater ionization than the tracks used to determine this cut, it has a negligible effect on the efficiency.
The default COT tracking algorithm first reconstructs track segments in each of eight superlayers. It checks for hits loosely consistent with a straight-line, using a tolerance of $20~{\rm ns}$. The identified hits in each segment are then fit to a circular trajectory. In the monopole algorithm, the segments are required to be composed entirely of high-ionization hits. Also, because a monopole can be as slow as $\beta
\sim 0.1$ with changing transverse velocity, the usual timing assumption ($t_{{\rm flight}} = r / c$) cannot be used. Instead, the time of flight to each superlayer is varied between $r/c$ and $10 r/c$ in $5~{\rm ns}$ increments.
A monopole candidate consists of several $\phi$-coincident, low-curvature segments. From Monte Carlo simulation, we choose a loose cut on the segment curvature $\rho < 0.001~{\rm cm}^{-1}$, which for an electron would correspond to $p_{{\rm T}}> 4~{\rm GeV/c}$. Likewise, the $\phi$ tolerance is a loose $0.2~{\rm radians}$. The remaining cuts are on the minimum number of hits needed in a segment and on the total number of $\phi$-coincident segments required for a monopole candidate. Ignoring the width cut, we measure the segment-finding efficiency in an independent data sample using high-$p_{{\rm T}}$ tracks. In this manner, we choose a highly efficient cut requiring seven coincident superlayers with at least eight hits in each segment. This has a $94\%$ efficiency with a $1\%$ statistical uncertainty. For these cuts, the efficiency for finding high-mass monopole pairs calculated with the Monte Carlo simulation is nearly $100\%$. The efficiency for high-$p_{{\rm T}}$ electrons in simulation, after removing the width cut, is also nearly $100\%$; real detector effects contribute a small inefficiency.
As an ionizing particle passes through matter, the most energetic electrons form delta rays. For highly relativistic low-mass monopoles, the large number of delta rays confuses the segment finding algorithm, lowering the efficiency. We check that GEANT is properly producing delta-rays by comparing the efficiency of monopoles to kinematically equivalent heavy-ions simulated in the absence of a magnetic field. We scale the efficiency determined from Monte Carlo simulation to make the high-mass monopole efficiency agree with the high-$p_{{\rm T}}$ track efficiency. As the small inefficiency from real detector effects cannot be measured directly on monopoles, we take one half of the total inefficiency as a systematic uncertainty: $3\%$ for $400~{\rm GeV}/c^2$ monopoles.
To estimate how effectively the monopole reconstruction rejects background, we use minimum-bias data. In $8 \times 10^5$ events, the event most like a monopole has two coincident superlayers with seven hits per segment. Our monopole requirements are much more stringent: we require a seven-fold coincidence of eight hits or more, resulting in an extremely small background. In the trigger sample the background is similarly small; the event most like a monopole has two coincident superlayers with six hits per segment. In Fig. \[fig:background\], we count the number of monopole candidates passing looser cuts on the hit width.
Effect Efficiency
--------------------- -----------------------------
TOF geometric (MC) $ 70\% \pm 3\% \pm 3\% $
TOF response $ 100\% $
TOF spoilers $ 98\% \pm 1\% \pm 3\% $
TOF timing (MC) $ 99\% \pm 1\% \pm 1\% $
COT width cut $ 100\% $
COT segment finding $ 94\% \pm 1\% \pm 3\% $
: \[tab:eff\_sum\] Efficiency of the monopole search with statistical and systematic uncertainties for a monopole mass of $400~{\rm GeV}/c^2$. The full mass dependence is accounted for in the limit.
None of the $130,000$ events from the monopole trigger sample passes the candidate requirements, and we report a limit [@THESIS_MIKE]. Monopole production limits are typically reported as a cross-section upper limit as a function of monopole mass, to minimize the dependence on a particular production model. The expected number of events $N$ from a process with cross section $\sigma$, detection efficiency and acceptance $\epsilon$, and integrated luminosity $L$ is given by $N = L \epsilon \sigma$. We calculate the cross-section limit for zero observed events, based on the efficiency summarized in Table \[tab:eff\_sum\] and a $6\%$ uncertainty in the luminosity measurement [@LUMI]. We find the cross section for which pseudo-experiments, with efficiency and luminosity chosen randomly according to their uncertainties, yield one or more measured events $95\%$ of the time.
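The pseudo-experiment procedure can be sketched in a few lines. In this sketch (ours), the nominal efficiency and its uncertainty are illustrative placeholders rather than the mass-dependent values of Table \[tab:eff\_sum\], so the result only indicates the scale of the limit:

```python
import math
import random

def frac_with_events(sigma_pb, n_toys=20000, seed=1):
    """Fraction of pseudo-experiments yielding >= 1 event at cross section
    sigma_pb.  The nominal efficiency below is an illustrative placeholder,
    not the mass-dependent CDF value; the luminosity uncertainty is the
    quoted 6%."""
    rng = random.Random(seed)
    eff, d_eff = 0.60, 0.05         # assumed efficiency and uncertainty
    lum, d_lum = 35.7, 35.7 * 0.06  # pb^-1
    hits = 0
    for _ in range(n_toys):
        e = max(rng.gauss(eff, d_eff), 0.0)
        lumi = max(rng.gauss(lum, d_lum), 0.0)
        mu = e * lumi * sigma_pb
        if rng.random() > math.exp(-mu):  # P(n >= 1) = 1 - exp(-mu)
            hits += 1
    return hits / n_toys

# Bisect for the cross section excluded at 95% C.L. given zero observed events.
lo, hi = 0.0, 2.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if frac_with_events(mid) < 0.95:
        lo = mid
    else:
        hi = mid
print(0.5 * (lo + hi))  # of order 0.1 pb for these illustrative inputs
```

Without smearing the answer would be $\sigma = -\ln(0.05)/(L\epsilon) \approx 3/(L\epsilon)$; the smearing over efficiency and luminosity raises it slightly.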
Our cross-section exclusion limit is shown in Figure \[fig:sensitivity\]. Our limit excludes monopole pair production for cross sections greater than $0.2~{\rm pb}$ at the $95\%$ confidence level for monopole masses between $200$ and $700~{\rm GeV}/c^2$. For the Drell-Yan mechanism, this implies a mass limit of $m > 360~{\rm GeV}/c^2$ at the $95\%$ confidence level. This is currently the best limit from a direct search. Additional Run II data will improve the sensitivity: another $300~{\rm pb^{-1}}$ extends the mass reach by $100~{\rm GeV}/c^2$.
---
abstract: 'Errors quoted on results are often given in asymmetric form. An account is given of the two ways these can arise in an analysis, and the combination of asymmetric errors is discussed. It is shown that the usual method has no basis and is indeed wrong. For asymmetric systematic errors, a consistent method is given, with detailed examples. For asymmetric statistical errors a general approach is outlined.'
author:
- Roger Barlow
title: Asymmetric Errors
---
=cmcsc12
Asymmetric Errors
=================
In the reporting of results from particle physics experiments it is common to see values given with errors with different positive and negative numbers, to denote a 68% central confidence region which is not symmetric about the central estimate. For example (one of many) the Particle Data Group[@PDG] quote $$B.R. (f_2(1270) \to \pi\pi) = (84.7 ^{+2.4}_{-1.3}) \%.$$
The purpose of this note is to describe how such errors arise and how they can properly be handled, particularly when two contributions are combined. Current practice is to combine such errors separately, i.e. to add the $\sigma^+$ values together in quadrature, and then do the same thing for the $\sigma^-$ values. This is not, to my knowledge, documented anywhere and, as will be shown, is certainly wrong.
There are two separate sources of asymmetry, which unfortunately require different treatments. We call these ‘statistical’ and ‘systematic’; the label is fairly accurate though not entirely so, and they could equally well be called ‘frequentist’ and ‘Bayesian’.
Asymmetric statistical errors arise when the log likelihood curve is not well described by a parabola [@Eadie]. The one sigma values (or, equivalently, the 68% central confidence level interval limits) are read off the points at which $\ln L$ falls from its peak by ${1 \over 2}$ – or, equivalently, when $\chi^2$ rises by 1. This is not strictly accurate, and corrections should be made using Bartlett functions[@Frodesen], but that lies beyond the scope of this note.
Asymmetric systematic errors arise when the dependence of a result on a ‘nuisance parameter’ is non-linear. Because the dependence on such parameters – theoretical values, experimental calibration constants, and so forth – is generally complicated, involving Monte Carlo simulation, this study generally has to be performed by evaluating the result $x$ at the $-\sigma$ and $+\sigma$ values of the nuisance parameter $a$ (see [@Durham] for a fuller account) giving $\sigma_x^-$ and $\sigma_x^+$. ($a \pm \sigma$ gives $\sigma_x^\pm$ or $\sigma_x^\mp$ according to the sign of ${dx \over da}$.)
This note summarises a full account of the procedure for asymmetric systematic errors which can be found in [@asymmetricpreprint] and describes what has subsequently been achieved for asymmetric statistical errors. For another critical account see [@dAgostini].
Asymmetric Systematic Errors
============================
If $\sigma_x^-$ and $\sigma_x^+$ are different then this is a sign that the dependence of $x$ on $a$ is non-linear and the symmetric distribution in $a$ gives an asymmetric distribution in $x$. In practice, if the difference is not large, one might be well advised to assume a straight-line dependence and take the error as symmetric; however, we will assume that such a linear approximation is not appropriate here. We consider cases where a non-linear effect is not small enough to be ignored entirely, but not large enough to justify a long and intensive investigation. Such cases are common enough in practice.
Models
------
For simplicity we transform $a$ to the variable $u$ described by a unit Gaussian, and work with $X(u)=x(u)-x(0)$. It is useful to define the mean $\sigma$, the difference $\alpha$, and the asymmetry $A$: $$\sigma = {\sigma^+ + \sigma^- \over 2}\qquad
\alpha = {\sigma^+ - \sigma^- \over 2}\qquad
A={\sigma^+ - \sigma^- \over \sigma^+ + \sigma^-}\label{Eq1}$$ There are infinitely many non-linear relationships between $u$ and $X$ that will go through the three determined points. We consider two. We make no claim that either of these is ‘correct’. But working with asymmetric errors must involve some model of the non-linearity. Practitioners must select one of these two models, or some other (to which the same formalism can be applied), on the basis of their knowledge of the problem, their preference and experience.
- Model 1: Two straight lines
Two straight lines are drawn, meeting at the central value $$\begin{aligned}
X&=\sigma^+ u \qquad u\geq 0 \nonumber \\
&=\sigma^- u \qquad u \leq 0.\end{aligned}$$
- Model 2: A quadratic function
The parabola through the three points is
$$X=\sigma u + \alpha u^2=\sigma u + A\sigma u^2.$$
These forms are shown in Figure \[figmodels\] for a small asymmetry of 0.1, and a larger asymmetry of 0.4.
![ Some nonlinear dependencies \[figmodels\]](plot1.eps){width="50mm"}
Model 1 is shown as a solid line, and Model 2 is dashed. Both go through the 3 specified points. The differences between them within the range $-1\leq u \leq 1$ are not large; outside that range they diverge considerably.
The distribution in $u$ is a unit Gaussian, $G(u)$, and the distribution in $X$ is obtained from $P(X)={G(u) \over | dX/du |}$. Examples are shown in Figure \[figcom1\]. For Model 1 (again a solid line) this gives a dimidated Gaussian - two Gaussians with different standard deviation for $X>0$ and $X<0$. This is sometimes called a ‘bifurcated Gaussian’, but this is inaccurate. ‘Bifurcated’ means ‘split’ in the sense of forked. ‘Dimidated’ means ‘cut in half’, with the subsidiary meaning of ‘having one part much smaller than the other’ [@OED]. For Model 2 (dashed) with small asymmetries the curve is a distorted Gaussian, given by ${G(u) \over |\sigma + 2 \alpha u |}$ with $u={\sqrt{\sigma^2 + 4 \alpha X} - \sigma \over 2 \alpha}$. For larger asymmetries and/or larger $|X|$ values, the second root also has to be considered.
![ Probability Density Functions from Figure \[figmodels\] []{data-label="figcom1"}](plot2.eps){width="50mm"}
It can be seen that the Model 1 dimidated Gaussian and Model 2 distorted Gaussian are not dissimilar if the asymmetry is small, but are very different if the asymmetry is large.
Bias
----
If a nuisance parameter $u$ is distributed with a Gaussian probability distribution, and the quantity $X(u)$ is a nonlinear function of $u$, then the expectation $\langle X \rangle$ is not $X(\langle u \rangle )$.
For model 1 one has $$<X> =
{\sigma^+ - \sigma^- \over \sqrt{2 \pi}}
\label{eqnbias1}$$
For model 2 one has $$<X> =
{\sigma^+ - \sigma^- \over 2 }
=\alpha
\label{eqnbias2}$$
Hence in these models (or any others), if the result quoted is $X(0)$, it is not the mean: it differs from it by an amount of the order of the difference between the positive and negative errors. It is perhaps defensible as the number to quote as the result, since it is still the median - there is a 50% chance that the true value is below it and a 50% chance that it is above.
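The bias expressions above are straightforward to verify by simulation. The sketch below is our illustration (the $\sigma^\pm$ values are arbitrary): it pushes a unit Gaussian through each model and compares the sample mean with Equations \[eqnbias1\] and \[eqnbias2\].

```python
import math
import random

def model1(u, sm, sp):
    """Model 1: two straight lines meeting at the central value."""
    return sp * u if u >= 0 else sm * u

def model2(u, sm, sp):
    """Model 2: the parabola X = sigma*u + alpha*u**2."""
    sig = 0.5 * (sp + sm)
    alp = 0.5 * (sp - sm)
    return sig * u + alp * u * u

rng = random.Random(42)
sm, sp = 0.8, 1.2  # illustrative sigma^-, sigma^+
us = [rng.gauss(0.0, 1.0) for _ in range(400000)]
mean1 = sum(model1(u, sm, sp) for u in us) / len(us)
mean2 = sum(model2(u, sm, sp) for u in us) / len(us)
print(mean1, (sp - sm) / math.sqrt(2 * math.pi))  # Model 1 bias
print(mean2, (sp - sm) / 2)                       # Model 2 bias
```

In both models the sample mean agrees with the analytic bias to within the Monte Carlo statistics; for these $\sigma^\pm$ the two biases are $\approx 0.16$ and $0.2$ respectively.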
Adding Errors
-------------
If a derived quantity $z$ contains parts from two quantities $x$ and $y$, so that $z=x+y$, the distribution in $z$ is given by the convolution:
$$f_z(z)=\int dx f_x(x) f_y(z-x)
\label{eqnConvolute}$$
![ Examples of the distributions from combined asymmetric errors using Model 1. []{data-label="figcom2"}](plot34.eps){width="50mm"}
With Model 1 the convolution can be done analytically. Some results for typical cases are shown in Figure \[figcom2\]. The solid line shows the convolution, the dashed line is obtained by adding the positive and negative standard deviations separately in quadrature (the ‘usual procedure’). The dotted line is described later.
The solid and dashed curves disagree markedly. The ‘usual procedure’ curve has a larger skew than the convolution. This is obvious. If two distributions with the same asymmetry are added the ‘usual procedure’ will give a distribution just scaled by $\sqrt 2$, with the same asymmetry. This violates the Central Limit Theorem, which says that convoluting identical distributions must result in a combined distribution which is more Gaussian, and therefore more symmetric, than its components. This shows that the ‘usual procedure’ for adding asymmetric errors is inconsistent.
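The Central Limit Theorem argument can be checked numerically. In this sketch (ours, with arbitrary $\sigma^\pm$), we sample the Model 1 distribution, add two independent copies, and compare normalised skewnesses: the sum is visibly more symmetric, whereas the ‘usual procedure’ would leave the skewness unchanged.

```python
import random

def model1_sample(rng, sm, sp):
    """Draw X from the Model 1 (dimidated Gaussian) distribution."""
    u = rng.gauss(0.0, 1.0)
    return sp * u if u >= 0 else sm * u

def skewness(xs):
    """Sample normalised skewness m3 / m2^(3/2)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

rng = random.Random(7)
sm, sp = 0.5, 1.5  # a strongly asymmetric example
single = [model1_sample(rng, sm, sp) for _ in range(200000)]
summed = [model1_sample(rng, sm, sp) + model1_sample(rng, sm, sp)
          for _ in range(200000)]
# The usual procedure just rescales the distribution, keeping its
# normalised skewness; the true convolution reduces it by about 1/sqrt(2).
print(skewness(single), skewness(summed))
```

Since the unnormalised skew adds while the variance doubles, the normalised skewness of the sum of two identical distributions is smaller by a factor $\sqrt 2$, exactly as the Central Limit Theorem requires.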
A consistent addition technique
-------------------------------
If a distribution for $x$ is described by some function, $f(x;x_0,\sigma^+,\sigma^-)$, which is a Gaussian transformed according to Model 1 or Model 2 or anything else, then ‘combination of errors’ involves a convolution of two such functions according to Equation \[eqnConvolute\]. This combined function is not necessarily a function of the same form: it is a special property of the Gaussian that the convolution of two Gaussians gives a third. The (solid line) convolution of two dimidated Gaussians is not itself a dimidated Gaussian. Figure \[figcom2\] is a demonstration of this. Although the form of the function is changed by a convolution, some things are preserved. The semi-invariant cumulants of Thiele (the coefficients of the power series expansion of the log of the Fourier Transform) add under convolution. The first two of these are the usual mean and variance. The third is the unnormalised skew: $$\gamma = <x^3> - 3<x><x^2> + 2 <x>^3
\label{eqnskew}$$ Within the context of any model, a consistent approach to the combination of errors is to find the mean, variance and skew: $\mu$, $V$ and $\gamma$, for each contributing function separately. Adding these up gives the mean, variance and skew of the combined function. Working within the model one then determines the values of $\sigma_-, \sigma_+$, and $x_0$ that give this mean, variance and skew.
Model 1
-------
For Model 1, for which $\langle x^3 \rangle ={2 \over \sqrt{2 \pi}} (\sigma_+^3 - \sigma_-^3)$ we have $$\begin{aligned}
&\mu=x_0+{1 \over \sqrt{2 \pi}}
(\sigma^+ - \sigma^-)\nonumber\\
&V=
\sigma^2 + \alpha^2\left( 1-{2 \over \pi}\right)\nonumber\\
&\gamma=
{1 \over \sqrt{2 \pi}} \big[
2
({\sigma^+}^3 - {\sigma^-}^3)
-{3 \over 2}
({\sigma^+} - {\sigma^-})
({\sigma^+}^2 + {\sigma^-}^2)
\nonumber\\
&+{1 \over \pi}
({\sigma^+} - {\sigma^-}) ^3
\big]
\label{eqnDict1}\end{aligned}$$ Given several error contributions the Equations \[eqnDict1\] give the cumulants $\mu$, $V$ and $\gamma$ of each. Adding these up gives the first three cumulants of the combined distribution. Then one can find the set of parameters $\sigma^-, \sigma^+, x_0$ which give these values by using Equations \[eqnDict1\] in the other sense.
It is convenient to work with $\Delta$, the difference between the final $x_0$ and the sum of the individual ones. This parameter is needed because of the bias mentioned earlier. Even though each contribution may have $x_0=0$, i.e. it describes a spread about the quoted result, it has non-zero $\mu_i$ through the bias effect (cf. Equations \[eqnbias1\] and \[eqnbias2\]). The $\sigma^+$ and $\sigma^-$ of the combined distribution, obtained from the total $V$ and $\gamma$, will in general not give the right $\mu$ unless a location shift $\Delta$ is added. [*The value of the quoted result will shift.*]{}
Recalling section B, for the original distribution one could defend quoting the central value as it was the median, even though it was not the mean. The convoluted distribution not only has a non-zero mean, it also (as can be seen in Figure \[figcom2\] ) has non-zero median. If you want to combine asymmetric errors then you have to accept that the quoted value will shift. To make this correction requires a real belief in the asymmetry of the error values. At this point practitioners, unless they are sure that their errors really do have a significant asymmetry, may be persuaded to revert to quoting symmetric errors.
Solving Equations \[eqnDict1\] for $\sigma^-, \sigma^+$ and $x_0$ given $\mu$, $V$ and $\gamma$ has to be done numerically. A program for this is available on . Some results are shown in the dotted curve of Figure \[figcom2\] and Table \[table1\].
$\sigma_x^-$ $\sigma_x^+ $ $\sigma_y^- $ $\sigma_y^+ $ $\sigma^{-} $ $\sigma^{+} $ $\Delta$
-------------- --------------- --------------- --------------- --------------- --------------- ----------
1.0 1.0 0.8 1.2 1.32 1.52 0.08
0.8 1.2 0.8 1.2 1.22 1.61 0.16
0.5 1.5 0.8 1.2 1.09 1.78 0.28
0.5 1.5 0.5 1.5 0.97 1.93 0.41
: Adding errors in Model 1
\[table1\]
It is apparent that the dotted curve agrees much better with the solid one than the ‘usual procedure’ dashed curve does. It is not an exact match, but does an acceptable job given that there are only 3 adjustable parameters in the function. If the shape of the solid curve is to be represented by a dimidated Gaussian, then it is plausible that the dotted curve is the ‘best’ such representation.
Model 2
-------
The equivalent of Equations \[eqnDict1\] are $$\begin{aligned}
&\mu=x_0+\alpha\nonumber\\
&V=\sigma^2 + 2\alpha^2\nonumber\\
&\gamma=6\sigma^2\alpha + 8 \alpha^3
\label{eqnDict2}\end{aligned}$$
As with Model 1, these are used to find the cumulants of each contributing distribution, which are summed to give the three totals, and then Equations \[eqnDict2\] are used again to find the parameters of the distorted Gaussian with this mean, variance and skew. The web program will also do these calculations.
Some results are shown in Figure \[figcom3\] and Table \[table2\]. The true convolution cannot be done analytically but can be done by a Monte Carlo calculation.
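A minimal sketch of such a Monte Carlo check: sample each contribution as $x=\sigma u + \alpha u^2$ with $u$ standard normal, add them, and compare the moments of the sum with the summed cumulants from Equations \[eqnDict2\] (the values here are illustrative, corresponding to $\sigma^-=0.5$, $\sigma^+=1.5$ for both contributions):

```python
import random

def model2_cumulants(sigma, alpha):
    """Mean offset, variance and skew of x = sigma*u + alpha*u**2,
    u ~ N(0,1), as in Equations [eqnDict2] (x0 = 0)."""
    mu = alpha
    V = sigma**2 + 2.0 * alpha**2
    gamma = 6.0 * sigma**2 * alpha + 8.0 * alpha**3
    return mu, V, gamma

def sample_model2(sigma, alpha):
    u = random.gauss(0.0, 1.0)
    return sigma * u + alpha * u * u

random.seed(1)
# sigma^- = 0.5, sigma^+ = 1.5  ->  sigma = 1.0, alpha = 0.5
pars = [(1.0, 0.5), (1.0, 0.5)]
xs = [sum(sample_model2(s, a) for s, a in pars) for _ in range(200_000)]
n = len(xs)
mean = sum(xs) / n
var = sum((x - mean)**2 for x in xs) / n
skew = sum((x - mean)**3 for x in xs) / n

# cumulants add under convolution
mu_t = sum(model2_cumulants(s, a)[0] for s, a in pars)
V_t = sum(model2_cumulants(s, a)[1] for s, a in pars)
g_t = sum(model2_cumulants(s, a)[2] for s, a in pars)
```

The simulated mean, variance and third central moment of the sum reproduce the summed cumulants, confirming that cumulants (unlike quoted $\sigma^\pm$) add under convolution.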
$\sigma_x^-$ $\sigma_x^+ $ $\sigma_y^- $ $\sigma_y^+ $ $\sigma^{-} $ $\sigma^{+} $ $\Delta$
-------------- --------------- --------------- --------------- --------------- --------------- ----------
1.0 1.0 0.8 1.2 1.33 1.54 0.10
0.8 1.2 0.8 1.2 1.25 1.64 0.20
0.5 1.5 0.8 1.2 1.12 1.88 0.35
0.5 1.5 0.5 1.5 1.13 2.07 0.53
: Adding errors in Model 2
\[table2\]
![ Examples of combined errors using Model 2. \[figcom3\]](plot5.eps){width="50mm"}
Again the true curves (solid) are not well reproduced by the ‘usual procedure’ (dashed) but the curves with the correct cumulants (dotted) do a good job. (The sharp behaviour at the edge of the curves is due to the turning point of the parabola.)
Evaluating $\chi^2$
-------------------
For Model 1 the $\chi^2$ contribution from a discrepancy $\delta$ is just $\delta^2/{\sigma^+}^2$ or $\delta^2/{\sigma^-}^2$ as appropriate. This is manifestly inelegant, especially for minimisation procedures, as the form changes discontinuously when the discrepancy goes through zero.
For Model 2 one has $$\delta=\sigma u + A \sigma u^2.$$
This can be considered as a quadratic in $u$, whose solution, when squared, gives $u^2$, the $\chi^2$ contribution, as $$u^2={2+4A{\delta \over \sigma} -2 (1+4A{\delta \over \sigma})^{1 \over 2}\over 4 A^2}$$ This is not exact, in that it takes only one branch of the solution (the one approximating the straight line) and ignores the possibility that the $\delta$ value could come from an improbable $u$ value on the other side of the turning point of the parabola. Given this imperfection it makes sense to expand the square root as a Taylor series, which, neglecting correction terms above the second power, leads to $$\chi^2=({\delta \over \sigma})^2 \left(1-2A ({\delta \over \sigma})+
5 A^2 ({\delta \over \sigma})^2\right).$$
This provides a sensible form for $\chi^2$ from asymmetric errors. It is important to keep the $\delta^4$ term rather than stopping at $\delta^3$, to ensure that $\chi^2$ stays positive! Adding higher orders does not have a great effect. We recommend this form for consideration whenever it is required (e.g. in fitting parton distribution functions) to construct a $\chi^2$ from asymmetric errors.
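As a concrete illustration, the quartic form can be coded directly alongside the exact one-branch expression; the function names and the $(\sigma^+,\sigma^-)\to(\sigma,A)$ conversion below are ours:

```python
import math

def chi2_exact(delta, sigma, A):
    """u^2 from the branch of delta = sigma*u + A*sigma*u**2 that
    approximates the straight line (requires A != 0)."""
    d = delta / sigma
    return (2.0 + 4.0 * A * d - 2.0 * math.sqrt(1.0 + 4.0 * A * d)) / (4.0 * A * A)

def chi2_asym(delta, sig_m, sig_p):
    """Taylor-expanded chi^2 for asymmetric errors (Model 2),
    keeping terms through delta^4 so the result stays positive."""
    sigma = 0.5 * (sig_p + sig_m)
    A = 0.5 * (sig_p - sig_m) / sigma     # A = alpha / sigma
    d = delta / sigma
    return d * d * (1.0 - 2.0 * A * d + 5.0 * A * A * d * d)
```

For symmetric errors this reduces to $\delta^2/\sigma^2$, and for mild asymmetry the expansion tracks the exact branch closely; note also that a positive discrepancy on the side with the larger error is penalised less, as it should be.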
Weighted means
--------------
The ‘best’ estimate (i.e. unbiassed and with smallest variance) from several measurements $x_i$ with different (symmetric) errors $\sigma_i$ is given by a weighted sum with $w_i = 1/\sigma_i^2$. We wish to find the equivalent for asymmetric errors.
As noted earlier, when sampling from an asymmetric distribution the result is biassed towards the tail. The expectation value $\langle x \rangle$ is not the location parameter $x_0$. So for an unbiassed estimator one must take $$\hat x= \sum w_i(x_i-b_i) / \sum w_i$$ where $$b={\sigma^+-\sigma^- \over \sqrt{2 \pi}} \quad \hbox{(Model 1)} \qquad
b=\alpha \quad \hbox{(Model 2)}$$ The variance of this is given by $$V={\sum w_i^2 V_i \over \left( \sum w_i \right)^2}$$ where $V_i$ is the variance of the $i^{th}$ measurement about its mean. Differentiating with respect to $w_i$ to find the minimum gives $${2 w_i V_i \over \left( \sum w_j\right)^2} - {2 \sum w_j^2 V_j \over \left( \sum w_j \right)^3}=0\qquad \forall i$$ which is satisfied by $w_i = 1/V_i$. This is the equivalent of the familiar weighting by $1/\sigma^2$. The weights are given, depending on the Model, by (see Equations \[eqnDict1\] and \[eqnDict2\]) $$V=\sigma^2 +(1-{2\over \pi})\alpha^2 \qquad \hbox{or}\qquad V=\sigma^2 + 2 \alpha^2$$
Note that this is not the Maximum Likelihood estimator - writing down the likelihood in terms of the $\chi^2$ and differentiating does not give a nice form - so in principle there may be better estimators, but they will not have the simple form of a weighted sum.
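The resulting recipe for Model 1 can be sketched as follows (the function name is ours; for Model 2 one would instead use $b=\alpha$ and $V=\sigma^2+2\alpha^2$):

```python
import math

def weighted_mean_model1(measurements):
    """Combine measurements given as (x, sig_minus, sig_plus) tuples.
    Each value is first corrected for the bias b = (sig+ - sig-)/sqrt(2*pi),
    then weighted by w = 1/V with V = sigma^2 + (1 - 2/pi)*alpha^2 (Model 1)."""
    num = den = 0.0
    for x, sm, sp in measurements:
        b = (sp - sm) / math.sqrt(2.0 * math.pi)
        sigma = 0.5 * (sp + sm)
        alpha = 0.5 * (sp - sm)
        V = sigma**2 + (1.0 - 2.0 / math.pi) * alpha**2
        w = 1.0 / V
        num += w * (x - b)
        den += w
    return num / den
```

With symmetric errors the bias vanishes and this reduces to the familiar $1/\sigma^2$ weighting; with asymmetric errors each input is shifted towards its shorter tail before averaging.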
Asymmetric Statistical Errors
=============================
As explained earlier, (log) likelihood curves are used to obtain the maximum likelihood estimate for a parameter and also the 68% central interval – taken as the values at which $\ln L$ falls by ${1 \over 2}$ from its peak. For large $N$ this curve is a parabola, but for finite $N$ it is generally asymmetric, and the two points are not equidistant about the peak.
The bias, if any, is not connected to the form of the curve, which is a likelihood and not a pdf. Evaluating a bias is done by integrating over the measured value not the theoretical parameter. We will assume for simplicity that these estimates are bias free. This means that when combining errors there will be no shift of the quoted value.
Combining asymmetric statistical errors
---------------------------------------
Suppose estimates $\hat a$ and $\hat b$ are obtained by this method for variables $a$ and $b$. $a$ could typically be an estimate of the total number of events in a signal region, and $b$ the (scaled and negated) estimate of background, obtained from a sideband. We are interested in $u=a+b$, taking $\hat u=\hat a+\hat b$. What are the errors to be quoted on $\hat u$?
Likelihood functions known {#fullsection}
--------------------------
We first consider the case where the likelihood functions $L_a(\vec x|a)$ and $L_b(\vec x|b)$ are given.
For the symmetric Gaussian case, the answer is well known. Suppose that the likelihoods are both Gaussian, and further that $\sigma_a=\sigma_b=\sigma$. The log likelihood term $$\left( {\hat a - a \over \sigma} \right)^2
+ \left( {\hat b - b \over \sigma} \right)^2$$ can be rewritten $${1 \over 2}
\left(
{\hat a+\hat b - (a+b) \over \sigma} \right)^2
+{1 \over 2}
\left(
{\hat a-\hat b - (a-b) \over \sigma} \right)^2$$ so the likelihood is the product of Gaussians for $u=a+b$ and $v=a-b$, with standard deviations $\sqrt 2 \sigma$.
Picking a particular value of $v$, one can then trivially construct the 68% confidence region for $u$ as $[\hat u - \sqrt 2 \sigma,\hat u + \sqrt 2 \sigma]$. Picking another value of $v$, indeed any other value of $v$, one obtains the same region for $u$. We can therefore say with 68% confidence that these limits enclose the true value of $u$, whatever the value of $v$. The uninteresting part of $a$ and $b$ has been ‘parametrised away’. This is, of course, the standard result from the combination of errors formula, but derived in a frequentist way using Neyman-style confidence intervals. We could construct the limits on $u$ by finding $\hat u+\sigma_u^+$ such that the integrated probability of a result as small as or smaller than the data be 16%, and similarly for $\sigma_u^-$, rather than taking the $\Delta \ln L=-{1 \over 2}$ shortcut, and it would not affect the argument.
The question now is how to generalise this. For this to be possible the likelihood must factorise $$L(\vec x|a,b)=L_u(\vec x|u) L_v(\vec x|v)$$ with a suitable choice of the parameter $v$ and the functions $L_u$ and $L_v$. Then we can use the same argument: for any value of $v$ the limits on $u$ are the same, depending only on $L_u(\vec x|u)$. Because they are true for any $v$ they are true for all $v$, and thus in general.
There are cases where this can clearly be done. For two Gaussians with $\sigma_a \neq \sigma_b$ the result is the same as above but with $v=a \sigma_b^2 -b \sigma_a^2 $. For two Poisson distributions $v$ is $a/b$. There are cases (with multiple peaks) where it cannot be done, but let us hope that these are artificially pathological.
On the basis that if it cannot be done, the question is unanswerable, let us assume that it is possible in the case being studied, and see how far we can proceed. Finding the form of $v$ is liable to be difficult, and as it is not actually used in the answer we would like to avoid doing so. The limits on $u$ are read off from the $\Delta \ln L(\vec x|u,v) = -{1 \over 2}$ points where $v$ can have any value provided it is fixed. Let us choose $v=\hat v$, the value at the peak. This is the value of $v$ at which $L_v(v)$ is a maximum. Hence when we consider any other value of $u$, we can find $v=\hat v$ by finding the point at which the likelihood is a maximum, varying $a-b$, or $a$, or $b$, or any other combination, always keeping $a+b$ fixed. We can read the limits off a 1 dimensional plot of $\ln L_{max}(\vec x|u)$, where the ‘max’ suffix denotes that at each value of $u$ we search the subspace to pick out the maximum value.
This generalises to more complicated situations. If $u=a+b+c$ we again scan the $\ln L_{max}(\vec x|u)$ function, where the subspace is now 2 dimensional.
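For the Gaussian case, where the answer $\sqrt{\sigma_a^2+\sigma_b^2}$ is known, the $\ln L_{max}$ scan can be sketched as follows (a brute-force grid maximisation over the subspace, purely illustrative):

```python
def lnL(a, b, ahat=0.0, bhat=0.0, sa=1.0, sb=2.0):
    """Log likelihood for two independent Gaussian measurements."""
    return -0.5 * ((ahat - a) / sa) ** 2 - 0.5 * ((bhat - b) / sb) ** 2

def lnL_profile(u, n=2001, lo=-10.0, hi=10.0):
    """ln L_max(u): maximise over the subspace a + b = u by scanning a."""
    best = -float("inf")
    for i in range(n):
        a = lo + (hi - lo) * i / (n - 1)
        best = max(best, lnL(a, u - a))
    return best

def interval(target=-0.5, lo=0.0, hi=10.0, tol=1e-6):
    """Find u > 0 where the profile falls to target (bisection);
    the peak here is at u = 0 with lnL_profile(0) = 0."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lnL_profile(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma_u = interval()   # approaches sqrt(sa^2 + sb^2) = sqrt(5)
```

Reading the $\Delta \ln L_{max}=-{1\over2}$ point off the profile recovers $\sigma_u=\sqrt{1^2+2^2}$, the standard combination-of-errors result, without ever constructing $v$ explicitly.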
Likelihood functions not completely known
-----------------------------------------
In many cases the likelihood functions for $a$ and $b$ will not be given, merely estimates $\hat a$ and $\hat b$ and their asymmetric errors $\sigma^+_a$, $\sigma^-_a$, $\sigma^+_b$ and $\sigma^-_b$. All we can do is use these to provide best-guess functions $L_a(\vec x|a)$ and $L_b(\vec x|b)$. A parametrisation of suitable shape, which for $\sigma^+ \sim \sigma^-$ approximates to a parabola, must be provided. Choosing a suitable parametrisation is not trivial. The obvious choice of introducing small higher-order terms fails as these dominate far from the peak. A likely candidate is: $$\ln{ L}(a)=-{1 \over 2} \left( \ln{\left(1+a/\gamma \right)}\over \ln{\beta} \right)^2
\label{parametrise}$$ where $\beta = {\sigma_+ /\sigma_-}$ and $\gamma={\sigma_+ \sigma_- \over \sigma_+ - \sigma_-}$. This describes the usual parabola, but with the x-axis stretched by an amount that changes linearly with distance. Figure \[figparab\] shows two illustrative results.
![ Approximations using Equation \[parametrise\][]{data-label="figparab"}](plot9.eps){width="40mm"}
The first is the Poisson likelihood from 5 observed events (solid line) for which the estimate using the $\Delta \ln L={1 \over 2}$ points is $\mu = 5^{+2.58}_{-1.92}$, as shown. The dashed line is that obtained inserting these numbers into Equation \[parametrise\]. The second considers a measurement of $x=100 \pm 10$, of which the logarithm has been taken, to give a value $4.605^{+0.095}_{-0.105}$. Again, the solid line is the true curve and the dashed line the parametrisation. In both cases the agreement is excellent over the range $\approx \pm 1 \sigma$ and reasonable over the range $\approx \pm 3 \sigma$.
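The comparison just described is easy to reproduce. By construction the parametrisation of Equation \[parametrise\] passes through $\Delta \ln L = -{1\over2}$ exactly at $+\sigma^+$ and $-\sigma^-$, and for the 5-event Poisson example it tracks the true curve closely near the peak. A sketch, with both likelihoods written relative to their peaks:

```python
import math

def lnL_param(a, sig_m, sig_p):
    """Parametrised log likelihood of Equation [parametrise],
    with the peak at a = 0 (requires sig_p != sig_m)."""
    beta = sig_p / sig_m
    gamma = sig_p * sig_m / (sig_p - sig_m)
    return -0.5 * (math.log(1.0 + a / gamma) / math.log(beta)) ** 2

def lnL_poisson(mu, n=5):
    """Poisson log likelihood relative to its peak at mu = n."""
    return n * math.log(mu / n) - (mu - n)
```

In the exactly symmetric limit the formula degenerates ($\ln\beta=0$) and one should use the parabola directly; the quoted Poisson errors $+2.58/-1.92$ are taken from the example in the text.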
To check the correctness of the method we can use the combination of two Poisson numbers, for which the result is known. First indications are that the errors obtained from the parametrisation are indeed closer to the true Poisson errors than those obtained from the usual technique.
Combination of Results
----------------------
A related problem is to find the combined estimate $\hat u$ given estimates $\hat a$ and $\hat b$ (which have asymmetric errors). Here $a$ and $b$ could be results from different channels or different experiments. This can be regarded as a special case, constrained to $a=b$, i.e. $v=0$, but this is rather contrived. It is more direct just to say that one uses the log likelihood which is the sum of the two separate functions, and determines the peak and the $\Delta \ln L=-{1 \over 2}$ points from that. If the functions are known this is unproblematic, if only the errors are given then the same parametrisation technique can be used.
Conclusions
===========
If asymmetric errors cannot be avoided they need careful handling.
A method is suggested and a program provided for combining asymmetric systematic errors. It is not ‘rigorously correct’ but such perfection is impossible. Unlike the usual method, it is at least open about its assumptions and mathematically consistent.
Formulæ for $\chi^2$ and weighted sums are given.
A method is proposed for combining asymmetric statistical errors if the likelihood functions are known. Work is in progress to enable it to be used given only the results and their errors.
The author gratefully acknowledges the support of the Fulbright Foundation.
[9]{} D.E. Groom [*et al.*]{}, Eur. Phys. J. [**C15**]{} 1 (2000).
W. T. Eadie [*et al.*]{}, “Statistical Methods in Experimental Physics”, North Holland, 1971.
A.G. Frodesen [*et al.*]{}, “Probability and Statistics in Particle Physics”, Universitetsforlaget, Bergen-Oslo-Tromso (1979), pp 236-239.
R. J. Barlow, “Systematic Errors: Facts and Fictions” in Proc. Durham conference on Advanced Statistical Techniques in Particle Physics, M. R. Whalley and L. Lyons (Eds). IPPP/02/39. 2002.
R. J. Barlow, “Asymmetric Systematic Errors”, preprint MAN/HEP/03/02, arXiv:physics/0306138.
G. D’Agostini “Bayesian Reasoning in Data Analysis: a Critical Guide”, World Scientific (2003).
The Shorter Oxford English Dictionary, Vol I (A-M) p 190 and p 551 of the 3rd edition (1977).
---
abstract: 'We observed the brightest central galaxy (BCG) in the nearby ($z=0.0821$) cool core galaxy cluster Abell 2597 with the IRAC and MIPS instruments on board the Spitzer Space Telescope. The BCG was clearly detected in all Spitzer bandpasses, including the 70 and 160 $\mu$m wavebands. We report aperture photometry of the BCG. The spectral energy distribution exhibits a clear excess in the FIR over a Rayleigh-Jeans stellar tail, indicating a star formation rate of $\sim 4-5$ solar masses per year, consistent with the estimates from the UV and from its H$\alpha$ luminosity. This FIR luminosity is comparable to that of a starburst or a Luminous Infrared Galaxy (LIRG), although it coexists with a very massive, old population of stars that dominates the energy output of the galaxy. If the dust is at a single temperature, the ratio of 70 to 160 micron fluxes indicates that the dust in this source is somewhat hotter than that in two BCGs at higher redshift ($z\sim0.2-0.3$) and higher FIR luminosity observed earlier by Spitzer, in the clusters Abell 1835 and Zwicky 3146.'
author:
- 'Megan Donahue, Andrés Jordán, Stefi A. Baum, Patrick Côté, Laura Ferrarese, Paul Goudfrooij, Duccio Macchetto, Sangeeta Malhotra, Christopher P. O’Dea, James E. Pringle, James E. Rhoads, William B. Sparks, G. Mark Voit'
title: Infrared Emission from the Nearby Cool Core Cluster Abell 2597
---
Introduction
============
The brightest cluster galaxies (BCGs) in the cores of the most X-ray luminous clusters of galaxies are the most massive galaxies in the universe. Their unusually extended stellar envelopes, high optical luminosities, and red colors pose a challenge to models of galaxy formation, which, conversely, predict that the brightest cluster galaxies should be even more luminous than observed and should be blue, not red [@2004MNRAS.347.1093B]. According to such models, galaxies at the centers of clusters should be highly luminous and blue because supernova feedback is unable to prevent runaway cooling, condensation, and star formation in the gas at the center of a cluster. However, even though the galaxies at the centers of X-ray luminous clusters are red, Multiband Imaging Photometer for Spitzer (MIPS) imaging of moderate redshift ($z\sim 0.2-0.3$) clusters with cool cores and large H$\alpha$ luminosities has revealed that they can still be surprisingly luminous infrared sources, $\sim 10^{44-45}$ erg s$^{-1}$ [@2006ApJ...647..922E]. Their mid-infrared luminosity turns out to be $\sim0.1-1\%$ of the total X-ray luminosity from the hot intracluster medium (ICM), and puts these galaxies in the same luminosity class as LIRGs (Luminous Infrared Galaxies).
Historically, clusters in which the central gas radiates enough energy to cool and condense within a Hubble time have been categorized as cooling-flow clusters. Such clusters exhibit highly peaked X-ray fluxes and, usually, ICM temperature gradients that decrease into the cluster core. Cool core clusters are rather common at $z<0.4$, accounting for $\sim50\%$ of the X-ray luminous population, indicating that this phase is not short-lived. Yet, the central gas in these clusters does not appear to be cooling and condensing at the rates implied by naive interpretations of X-ray imaging data, which could exceed 100 M$_{\odot} \, {\rm yr}^{-1}$. Central star formation rates are much lower than this, and the central galaxies do not appear to contain the $\sim 10^{12} M_\odot$ of cold gas that should have collected over a Hubble time. This was the notorious “cooling flow” problem: How can these systems be so common, when there is no obvious source of heat to replenish the energy these systems radiate so quickly?
High-resolution X-ray spectroscopy has given the cooling-flow problem an additional twist by showing that most of the X-ray gas with a short cooling time does not even cool below $10^7$ K. Hot gas is detected at a range of temperatures down to $\sim 1/3$ of the virial temperature but does not appear to emit at lower temperatures. This result confirms that the gas condensation rates in these systems are not very large but does not explain what suppresses cooling. Recent Chandra X-ray observations have suggested that AGN feedback might be what inhibits cooling in cluster cores, because some (but not all) cooling flow clusters host central AGN which excavate kpc-scale cavities in the ICM [e.g. @2001ApJ...562L.149M]. Still more clusters exhibit elevated central entropy levels which may have been produced by a now-inactive AGN [@Donahue2005A; @2006ApJ...643..730D]. There is increasing support for the idea that the central AGN is the culprit providing the “extra” heating, not only countering radiative cooling in cluster cores, but also providing a self-regulating feedback mechanism in massive galaxies in general [e.g., @2006MNRAS.368L..67B]. These observations and theories have sparked a renewed interest in the role of AGN in the formation of galaxies, and in particular, their role in heating the ICM and stifling star formation.
Interestingly, the mechanism that counteracts uninhibited radiative cooling in cluster cores does not appear to shut off star formation altogether. Cooler gas is not completely absent in the central galaxies, a few of which have substantial quantities ($\sim 10^{10-11}$ M$_\odot$) of molecular gas at a range of temperatures. Extended, vibrationally-excited H$_2$ 2-micron emission lines have been detected in these sources [@1994iaan.conf..169E; @2001MNRAS.324..443J; @2006ApJ...652L..21E], and about $10^{10-11}$ M$_\odot$ of cold molecular hydrogen has been inferred from CO observations in many of these systems. If some of the ICM cools and forms stars, that could explain why most cool core systems exhibit luminous optical emission-line nebulae, which in turn nearly always accompany the presence of molecular gas. Such nebulae produce bright H$\alpha$ and forbidden line emission [@1989ApJ...338...48H]. Emission-line spectroscopy suggests that at least some of the emission can be explained as the result of photoionization and heating by hot stars [@1997ApJ...486..242V]. Tantalizing possible detections by FUSE of million-degree gas, via OVI emission, suggest that at least some of the gas does cool below X-ray emitting temperatures [@2001ApJ...560..187O; @2006ApJ...642..746B]. However, the interpretation of such spectra is complicated, sometimes by the presence of a low-luminosity AGN and possibly by slow shocks or heating by cosmic rays or local X-rays.
In this paper, we present Spitzer observations of mid-infrared emission from the BCG in Abell 2597, a nearby cool-core cluster, that address an important question about the cool-core phenomenon: what is the dusty star formation rate in this cluster? These data also stimulate new questions about the star formation in BCGs: (1) Is the infrared emission from a BCG in a cool-core cluster typical of a starburst in a spiral galaxy? (2) Is the dust spectrum characteristic of relatively unprocessed Milky Way-type dust that has been transported to the center of the BCG during a merger, or is it consistent with the dust having been recently exposed to the harsh environment of the intracluster medium? (3) If a modest amount of gas is condensing from the ICM, is the condensation rate consistent with the observed star-formation activity and molecular gas content? Our Spitzer observations show that the IR spectrum of this BCG is fairly typical of a normal starburst, with a far-IR peak at $\sim$70$\mu$m and a mid-IR excess consistent with emission from polycyclic aromatic hydrocarbons (PAHs). The star formation rate implied by the IR and UV emission from this galaxy are also consistent with observational limits on the cooling and condensation rate of gas from the central ICM. In our analysis, we assume a cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.3$, and $\Omega_\Lambda=0.7$. At the redshift of the BCG in A2597 ($z=0.0821\pm0.0002$, [@VD1997]) the scale is 1.551 kpc arcsec$^{-1}$.
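For reference, the quoted angular scale follows from a short numerical integration of the comoving distance in this cosmology; a pure-Python sketch (no cosmology library assumed):

```python
import math

C_KM_S = 299792.458   # speed of light in km/s

def E(z, om=0.3, ol=0.7):
    """Dimensionless Hubble parameter for a flat LCDM cosmology."""
    return math.sqrt(om * (1.0 + z) ** 3 + ol)

def angular_scale_kpc_per_arcsec(z, h0=70.0, steps=1000):
    # comoving distance by the trapezoidal rule
    dz = z / steps
    integral = sum(0.5 * dz * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz))
                   for i in range(steps))
    d_c = (C_KM_S / h0) * integral        # comoving distance, Mpc
    d_a = d_c / (1.0 + z)                 # angular diameter distance, Mpc
    return d_a * 1000.0 * math.pi / (180.0 * 3600.0)

scale = angular_scale_kpc_per_arcsec(0.0821)   # ~1.55 kpc per arcsec
```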
The Brightest Cluster Galaxy in Abell 2597
==========================================
The brightest cluster galaxy in Abell 2597 (Abell richness class 0) contains a well-studied FRI radio source PKS2322-12 and is conveniently positioned in a region of the sky with very low Galactic extinction ($A_B = 0.131$ mag; @1998ApJ...500..525S), confirmed by @2003MNRAS.343..315B. The optical spectrum of the central galaxy was studied intensively by @VD1997, who placed the first model-independent reddening, temperature, and metallicity constraints on a cluster emission-line nebula using faint forbidden lines. They also determined that the excitation mechanism of the emission lines could not be shocks. The preferred excitation source was stars, plus an additional, unidentified, source of heat. This conclusion was supported by analysis of the infrared emission line spectrum of Paschen alpha lines and vibrationally excited H$_2$ [@2005MNRAS.360..748J]. One slight difference between these analyses is that the @VD1997 study also placed limits on the UV spectral shape, from the lack of a measurable He II recombination line.
@Cardiel1998 found radial gradients in the 4000 Å break and Mg$_2$ indices of Abell 2597’s BCG, indicating recent (0.1 Gyr) star formation. @1999ApJ...518..167M found that its U-band light is unlikely to be from scattered AGN light, based on polarization limits on the continuum. Most recently, it has been studied in the far UV using Hubble Space Telescope STIS observations [@ODea2004] and WFPC2 blue and emission-line images [@Koekemoer1999]. Vibrationally excited molecular hydrogen was discovered to trace the same features as the optical emission lines by @Donahue2000. The Chandra X-ray Observatory has revealed two-sided radio bubbles, surrounded by hot ICM [@McNamaraA2597_2001]. This BCG also has one of the most convincing FUSE detections of OVI [@2001ApJ...560..187O]. Further, $\sim4\times10^9$ M$_\odot$ of cold H$_2$ has been inferred from CO detections [@Edge2001].
The most recent X-ray observations of A2597 are from a 120-kilosecond XMM observation. Analysis of both EPIC and RGS spectroscopy by @2005MNRAS.358..585M suggests a classical cooling flow at the level of $90\pm15$ solar masses per year in the central region, dropping to about 20 M$_\odot$ yr$^{-1}$ in the central 40 kpc surrounding the BCG. This result is based on the joint analysis of vanishingly weak ($\sim 1-2\sigma$) FeXVII features in the RGS and the extremely difficult interpretation of soft X-ray excess in the CCD spectrum. While this result is at best only suggestive of cooling, it makes the BCG in Abell 2597 the only source with published emission-line detections from gas at $10^{5-7}$K, suggesting a possible connection between condensation from the hot gas and star formation.
Observations and Data Reduction
===============================
Infrared Array Camera
---------------------
The Infrared Array Camera (IRAC) has four wavelength channels, 3.6, 4.5, 5.8, and 8 $\mu$m [@2004ApJS..154...10F]. The Astronomical Observing Request (AOR) number for the IRAC observation was 13372160. The total observing time with all 4 detectors available for IRAC was 270 minutes. Each frame was 100 seconds, and thirty-six dither positions were observed, for a total of 3600 seconds per bandpass.
We have used the flux-calibrated images from the SSC IRAC pipeline (software version S14.0.0) for our analysis. The standard pipeline subtracts dark current based on laboratory measurements and a dark-sky frame based on observations of the darkest, star-free parts of the sky. A flat field, based on observations of zodiacal background and cleaned of cosmic rays, is then divided out of the data. Finally, the data are flux calibrated, producing images with flux units of MJy per steradian. See the IRAC data handbook[^1] and the IRAC calibration paper of @2005PASP..117..978R for more detail.
Multiband Imaging Photometer
----------------------------
The Multiband Imaging Photometer (MIPS) has three bands, with weighted wavelength averages of 23.68 $\mu$m, 71.42 $\mu$m, and 155.9 $\mu$m. For convenience, we will refer to these bands as 24, 70, and 160 $\mu$m. The FWHM of the PSF in those bands is $6\arcsec$, $18\arcsec$, and $40\arcsec$ respectively. Our total observing time with the MIPS was 36 minutes (total exposure time of 10 seconds per pixel at 24 $\mu$m and 15 seconds per pixel for 70 and 160 $\mu$m each). The AOR number for the MIPS observation was 13371904.
We used the standard Spitzer Science Center (SSC) pipeline processing (software version S14.4.0) of the 24, 70, and 160 $\mu$m data. We investigated whether the 70 and 160 $\mu$m data would benefit from additional work. We reprocessed the raw 70 and 160 $\mu$m data with the GeRT software package[^2], following the algorithms derived by the MIPS Instrument Team and the MIPS Instrument Support team [@2005PASP..117..503G]. We time-filtered the resulting 70 $\mu$m basic data by subtracting a smoothed version of the signal obtained with a boxcar median filter of 30 frames. This procedure subtracts the residual time-variations of the response. We then used the MOPEX package[^3] to co-add the filtered images. In Figure \[figure:70\] we show the GeRT-filtered 70 $\mu$m image side by side with the standard pipeline image. Here we note that there is some evidence for a faint extended region in the filtered data. However, since this region is aligned with the higher-noise streak in the pipeline image, subtracted from the GeRT data, and since a mosaic of the individual pipeline-filtered exposures does not show this feature, we do not make strong claims about its reality.
We reduced the 160 $\mu$m data in a very similar way, using GeRT. However, since there are very few pixels and the central source is marginally extended in an odd wedge-shape, additional filtering in the time domain did not result in significantly cleaner 160 $\mu$m images. The exclusion of individual images taken directly after the periodic “stim” images had no effect on the final product. The MOPEX package was used to combine the 160 $\mu$m basic calibration products.
Data Analysis
-------------
We measured the flux in all seven wavebands within a circular aperture of $r=25\arcsec$ ($\sim39 h_{70}^{-1}$ kpc), centered on the coordinates $\alpha=351.33199$ and $\delta = -12.124612$ (J2000). (RA of $23^h$ $25^m$ $19.6^s$, DEC of -12 07’ 29"). The background was estimated from pixels in an annulus between $38.5\arcsec$ and $42.5\arcsec$ from the center. We also measured the flux in a larger aperture ($r=35\arcsec$) for the MIPS 70 and 160 $\mu$m images. A histogram of the background values was fit to a Gaussian initially centered on the median background per pixel. The mean background value was subtracted from the aperture flux. The net fluxes were averaged and multiplied by the aperture sky area in steradians to yield the flux in Janskys. No aperture correction was required for the IRAC photometry.
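The conversion from surface brightness in MJy sr$^{-1}$ to a background-subtracted aperture flux in Jy can be sketched as follows (a simplified illustration of the procedure: pixel-center membership, a mean annulus background rather than a Gaussian fit to the background histogram, and no aperture correction):

```python
import math

def aperture_flux_jy(image, pix_arcsec, cx, cy, r_ap, r_in, r_out):
    """Background-subtracted aperture flux for an image in MJy/sr.
    Background per pixel is the mean in an annulus [r_in, r_out);
    radii and the pixel scale are in arcsec."""
    sr_per_pix = (pix_arcsec * math.pi / (180.0 * 3600.0)) ** 2
    ap_sum, ap_n, bkg_vals = 0.0, 0, []
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            r = math.hypot((x - cx) * pix_arcsec, (y - cy) * pix_arcsec)
            if r <= r_ap:
                ap_sum += val
                ap_n += 1
            elif r_in <= r < r_out:
                bkg_vals.append(val)
    bkg = sum(bkg_vals) / len(bkg_vals)
    return (ap_sum - bkg * ap_n) * sr_per_pix * 1.0e6   # MJy -> Jy

# synthetic test image: flat 1 MJy/sr background plus one bright pixel
img = [[1.0] * 101 for _ in range(101)]
img[50][50] = 101.0
flux = aperture_flux_jy(img, pix_arcsec=1.0, cx=50, cy=50,
                        r_ap=25.0, r_in=38.5, r_out=42.5)
```

On the synthetic image the recovered flux is exactly the 100 MJy sr$^{-1}$ excess of the bright pixel times one pixel solid angle, converted to Jy.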
Since the MIPS images are nearly point sources, we used the APEX package to fit point response functions (PRFs) and obtain total fluxes for the MIPS bands. For comparison, we report both the large aperture flux and the PRF flux for each band. We obtain reasonable agreement except in the case of the 160 $\mu$m aperture flux, for which we estimate the aperture correction, even at $r=35\arcsec$, to be $\sim1.5$, based on the convolution of the standard PRF with the source. The IRAC flux uncertainties are $\sim5\%$. The color terms at 3.6 and 4.5 $\mu$m are unlikely to be significant; however, if the spectrum is dominated by PAHs at 5.8 and 8 $\mu$m, the color-term corrections could be significant, up to $\sim50\%$. MIPS absolute flux uncertainties are 10% at 24 $\mu$m and 20% at 70 and 160 $\mu$m. We report raw and aperture-corrected aperture fluxes in Table \[table:fluxes\].
We extracted photometry for this galaxy from the Two Micron All Sky Survey (2MASS) extended source catalog [@2000AJ....119.2498J][^4] in J, H, and K infrared bands. Absolute photometric calibration from @2003AJ....126.1090C converts the 2MASS magnitudes to Janskys. The 2MASS total apertures (the apertures that measured the total light from the extended source) of $25.35\arcsec$ were similar to that used for the Spitzer data. We show all images (2MASS and Spitzer) in Figure \[figure:all\] together with the $r=25\arcsec$ aperture in the 1-24 micron images and the $r=25\arcsec$ and $35\arcsec$ aperture for the 160 micron image, for scale.
Discussion
==========
The total amount of far-IR emission from the BCG in Abell 2597 is $\nu L_\nu \sim 1 \times 10^{44}$ erg s$^{-1}$, corresponding to $\sim 4 M_\odot \, {\rm yr}^{-1}$ according to @Kennicutt1998. This FIR luminosity is as high as that of a LIRG (Luminous Infrared Galaxy), and the broad-band spectrum is consistent with that of a starburst. The IR-inferred star formation rate is also consistent with the conclusion of @ODea2004 that hot stars, detected in the UV, forming at the rate of a few solar masses per year, are the dominant source of ionization of the optical nebula. The emission we detect with IRAC is extended, but it appears to be mainly associated with the central galaxy. The mid-IR sources, except at 24 $\mu$m, are not well-fit by point sources, but those too are completely contained within the boundaries of the stellar light of the BCG. Because of Spitzer’s diffraction limit, we cannot say much about its mid-IR morphology. It is also very difficult to place any flux limits on extremely extended emission, beyond the galaxy, such as emission from the cluster ICM itself, due to the nature of mid-IR observations, which require multiple, offset exposures to assess the background, and to the current state of knowledge about absolute Spitzer backgrounds. Because of the positional association of the Spitzer-detected IR emission with the BCG, we suspect that what we detect is entirely interstellar, not intracluster, in nature.
Figure \[figure:SED\] compares the broad-band spectral energy distribution of this galaxy with the spectrum expected from an old stellar population with zero dust. The near-infrared spectrum is fairly flat, as one would expect from an old stellar population at this redshift, and the stellar mass implied by the near-IR luminosity is $\sim 3.12 \times 10^{11}$ M$_\odot$. Stellar continuum emission is expected to dominate in the 3.5 and 4.5 $\mu$m bands, and the ratio of those two bands to the near-IR emission is consistent with that expectation. Those bands are well-fit by a simple giant elliptical spectral template, obtained from the SED template library of the Hyper-z photometric redshift code. In the far-IR, one can see the prominent 70 - 160 $\mu$m peak observed with MIPS. Such a peak is expected from dust in active star-forming regions. Excess emission over an old, dust-free population is also found in the 5.8 $\mu$m and particularly in the 8 $\mu$m IRAC bands. The excess at 8 $\mu$m is likely due to polycyclic aromatic hydrocarbons (PAHs) transiently heated by UV photons from hot stars. Such emission features are commonly found in the spectra of star-forming galaxies and usually accompany a far-infrared spectral peak at $\sim 70 \mu$m. Adding a nuclear starburst SED model from Siebenmorgen & Krügel (2007) with a total starburst luminosity of $0.95 \times 10^{44}$ erg s$^{-1}$ to the emission from old stars yields an adequate fit (solid line) to the combined 2MASS and Spitzer photometry for this galaxy. Note that PAH features are not expected from dust that has been exposed to a harsh X-ray radiation environment, since tiny grains are easily disrupted by X-rays (Voit 1992). This excess, if confirmed to arise from PAH features, argues against the dusty gas originating in condensations from the hot ICM.
The 70 $\mu$m/160 $\mu$m flux ratio is larger than that observed for higher redshift ($z=0.25-0.3$) BCGs by Egami et al. (2006). The ratios observed for Abell 1835 and Zwicky 3146 by Egami et al. (2006) are 0.4-0.6, corresponding to rest-frame black body temperatures of $\sim35-40$ K. The observed ratio for Abell 2597 is $\sim1.5$, corresponding to a rest-frame black body temperature of $65-75$ K. This finding may mean that the dust in Abell 2597 is hotter than in higher redshift BCGs that are forming stars more quickly than the BCG in Abell 2597. It may seem puzzling that a galaxy with a lower star formation rate (SFR) has hotter dust than large-SFR galaxies. One possible explanation is that galaxies with very low SFRs may have only cool dust (if any), those with an intermediate SFR may have warmer dust, and those with a very high SFR may destroy the dust in the star-forming regions themselves. The dust in such systems might then be farther from the heat sources, generating a higher IR luminosity but at a lower dust temperature. Another explanation is suggested by the radiative transfer models of Siebenmorgen & Krügel (2007), which indicate that the very luminous ($L>10^{12.5}L_\odot$) sources are cooler because, for a given $A_V$, the dust mass $M_d$ increases like $R^2$, while the dust temperature scales like $L/M_d$. Also, a large fraction of buried OB stars increases the near-IR flux. Mid-IR spectra of these sources, with more than 2-3 points per spectrum, will provide an interesting discriminant between these model SEDs.
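The quoted temperatures follow from matching the observed flux-density ratio to a blackbody. A minimal sketch (assuming $z \approx 0.082$ for Abell 2597 and a pure blackbody with no emissivity correction; the derived temperature depends only weakly on the exact redshift, and a dust emissivity law $\propto \nu^\beta$ would lower it somewhat):

```python
import math

def bb_flux_ratio(T, lam1_um=70.0, lam2_um=160.0, z=0.082):
    """F_nu(70um)/F_nu(160um) for a blackbody at temperature T [K],
    with observed wavelengths shifted to the rest frame of the source."""
    h_over_k = 4.799e-11                    # h/k_B in s*K
    c_um = 2.998e14                         # speed of light in um/s
    nu1 = c_um / lam1_um * (1.0 + z)        # rest-frame frequencies, Hz
    nu2 = c_um / lam2_um * (1.0 + z)
    planck = lambda nu: nu**3 / math.expm1(h_over_k * nu / T)
    return planck(nu1) / planck(nu2)

# Bisect for the temperature that reproduces the observed ratio of ~1.5;
# the ratio increases monotonically with T.
target, lo, hi = 1.5, 20.0, 200.0
while hi - lo > 0.01:
    mid = 0.5 * (lo + hi)
    if bb_flux_ratio(mid) < target:
        lo = mid
    else:
        hi = mid
print(f"T_dust ~ {lo:.0f} K")   # ~68 K, inside the quoted 65-75 K range
```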
Assuming for the moment that the far-IR peak is indeed from star-forming regions, we can compare the implied star-formation rate with that inferred from the H$\alpha$ emission. The diameter of the H$\alpha$ nebula is about $16\arcsec$ [@HBvM1989], although the high-surface brightness structure visible in the HST image from @Donahue2000 has a diameter similar to that of the radio source ($\sim5-6\arcsec$). Corrected to $H_0=70$ km s$^{-1}$ Mpc$^{-1}$, @HBvM1989 measure a total H$\alpha$ + \[N II\] luminosity of $3.1 \times 10^{42}$ erg s$^{-1}$, which corresponds to $\sim 1.3-1.5 \times 10^{42}$ erg s$^{-1}$ in H$\alpha$ alone. (The ratio H$\alpha/$\[N II\]6584Å varies in this source between 0.8-1.0, and the \[N II\]6548Å line flux is 1/3 that of the 6584Å line.) A reddening analysis of 5 hydrogen Balmer lines in the spectrum of Abell 2597 by @1997ApJ...486..242V implies that the optical depth at H$\beta$ is about 1.2. The corresponding correction to the total H$\alpha$ luminosity is approximately 25%, yielding $\sim 1.6-1.8 \times 10^{42}$ erg s$^{-1}$. Such an analysis is insensitive to absorption by so-called “grey” dust, i.e. a population of large grains whose absorption is independent of wavelength. This luminosity corresponds to a total star formation rate of $\sim 12-14$ M$_\odot$ yr$^{-1}$ using the conversion from [@Kennicutt1998]. This rate is somewhat larger than the $4 M_\odot \, {\rm yr}^{-1}$ indicated by the far-IR emission but could be an overestimate if any of the H$\alpha$ arises from processes other than star formation (e.g. AGN, shocks).
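The chain of corrections above can be written out explicitly; a minimal sketch using mid-range values and the @Kennicutt1998 H$\alpha$ calibration, ${\rm SFR} = 7.9\times10^{-42}\,L_{{\rm H}\alpha}$:

```python
# Mid-range H-alpha luminosity after removing the [N II] contribution (erg/s)
L_Ha = 1.4e42
# ~25% upward correction for extinction, from the Balmer-decrement analysis
L_Ha_corr = 1.25 * L_Ha             # ~1.75e42 erg/s, within the quoted 1.6-1.8e42
# Kennicutt (1998) calibration: SFR [M_sun/yr] = 7.9e-42 * L(H-alpha) [erg/s]
sfr_ha = 7.9e-42 * L_Ha_corr
print(f"SFR(H-alpha) ~ {sfr_ha:.1f} M_sun/yr")   # ~13.8 M_sun/yr
```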
The OVI detection reported by @2001ApJ...560..187O suggests a gas cooling rate of $20\pm5$ solar masses per year in the central 26 kpc (quantities converted to $H_0=70$ km s$^{-1}$ Mpc$^{-1}$). This rate is higher than the star formation rate inferred from the Spitzer data by a factor of $\sim 5\pm3$ but is consistent with the local cooling rate of $\sim 20~ \rm{M}_\odot ~\rm{yr}^{-1}$ inferred from X-ray spectra by @2005MNRAS.358..585M.
An alternative source of energy for both the H$\alpha$ and the far-IR emission is conduction [@1989ApJ...345..153S]. If conduction is not completely suppressed by magnetic fields, it must transfer at least some energy from the X-ray gas to the nebula and the cooler dusty gas associated with it. If we assume saturated heat conduction [@1977ApJ...211..135C] from the surrounding $kT=4$ keV ICM through a spherical surface of radius $r=10 r_{10}$ kpc, we obtain a maximum heating rate of a few times $10^{43} r_{10}^2$ erg s$^{-1}$, which represents a significant fraction of the mid-IR luminosity of $10^{44}$ erg s$^{-1}$. However, many factors could reduce the conductive heating rate, such as suppression of conduction by tangled magnetic fields or the deposition of energy into the ionized nebular gas instead of into the grains. Additional theoretical work is needed to explore more quantitatively the effect of the surrounding hot ICM on the dusty clouds responsible for the far-IR emission.
In summary, the UV observations of [@ODea2004] place a lower limit on the star-formation rate, because extinction corrections revise the UV-inferred rate upwards; the H$\alpha$ luminosity provides only an approximate limit, since other processes can generate H$\alpha$, and H$\alpha$ can be attenuated by grey absorption. These far-IR observations place the best upper limit on the obscured star formation, because alternative contributions (such as conduction from the ICM) would revise the inferred star formation rate downward. Because we have estimated above that conduction could in principle provide up to $\sim 50$% of the IR luminosity, the uncertainty of the IR-inferred star formation rate is at least a factor of two, and better theoretical models and more detailed spectra are required to improve our estimates.
Conclusions
===========
We have detected mid-infrared emission from the central galaxy of Abell 2597 with the Spitzer Space Telescope, from 3.6 to 160 $\mu$m. This galaxy has the luminosity and spectral shape of a LIRG embedded in a giant elliptical galaxy. The far-infrared luminosity rivals the local X-ray luminosity. We have constructed a broad-band spectrum of the galaxy, which is most simply interpreted as a dust-enshrouded stellar population forming stars at a rate of $\sim 4 M_\odot \, {\rm yr}^{-1}$ inside the central $35\arcsec$ ($\sim 54$ kpc). This estimate is consistent to within a factor of two with estimates from optical, H$\alpha$, and ultraviolet observations. We cannot, however, rule out additional heating of the dust by electron thermal conduction from the hot gas, and we suggest the development of more quantitative physical models of this process. The presence of UV continuum light argues in favor of a substantial fraction of the mid-IR flux being associated with star formation. The star formation rate inferred from the mid-IR emission is somewhat lower than the cooling rate inferred from OVI and recent X-ray observations, but we cannot rule out the hypothesis that cooling gas may feed the star formation implied by the UV, optical, and mid-IR data.
Finally, we have been able to model the broadband infrared SED of the BCG with a basic giant elliptical template and a standard nuclear starburst model from Siebenmorgen & Krügel (2007), including PAH features. We therefore have no evidence, based on these data, that the dusty gas condensed from the hot ICM. Further spectroscopic detail is required to test the hypothesis that PAHs are responsible for the mid-IR excess.
Support for Donahue was provided by a NASA Spitzer contract (JPL 1268128) and a NASA LTSA grant (NASA NNG-05GD82G). MD acknowledges useful discussions about Spitzer photometry with the Spitzer helpdesk personnel and with Dr. Grant Tremblay. WBS acknowledges support from NASA Spitzer contract JPL 1269604.
, D. G., & [Nulsen]{}, P. E. J. 2003, , 343, 315
, P. N., [Kaiser]{}, C. R., [Heckman]{}, T. M., & [Kauffmann]{}, G. 2006, , 368, L67
, J. 2004, , 347, 1093
, M., [Miralles]{}, J.-M., & [Pell[ó]{}]{}, R. 2000, , 363, 476
, J. N., [Fabian]{}, A. C., [Miller]{}, E. D., & [Irwin]{}, J. A. 2006, , 642, 746
, N., [Gorgas]{}, J., & [Aragon-Salamanca]{}, A. 1998, , 298, 977
, M., [Wheaton]{}, W. A., & [Megeath]{}, S. T. 2003, , 126, 1090
, L. L., & [McKee]{}, C. F. 1977, , 211, 135
, M., [Horner]{}, D. J., [Cavagnolo]{}, K. W., & [Voit]{}, G. M. 2006, , 643, 730
, M., [Mack]{}, J., [Voit]{}, G. M., [Sparks]{}, W., [Elston]{}, R., & [Maloney]{}, P. R. 2000, , 545, 670
, M., [Voit]{}, G. M., [O’Dea]{}, C. P., [Baum]{}, S. A., & [Sparks]{}, W. B. 2005, , 630, L13
, A. C. 2001, , 328, 762
, A. C., & [Frayer]{}, D. T. 2003, , 594, L13
, E., [Misselt]{}, K. A., [Rieke]{}, G. H., [Wise]{}, M. W., [Neugebauer]{}, G., [Kneib]{}, J.-P., [Le Floc’h]{}, E., [Smith]{}, G. P., [Blaylock]{}, M., [Dole]{}, H., [Frayer]{}, D. T., [Huang]{}, J.-S., [Krause]{}, O., [Papovich]{}, C., [P[é]{}rez-Gonz[á]{}lez]{}, P. G., & [Rigby]{}, J. R. 2006, , 647, 922
, E., [Rieke]{}, G. H., [Fadda]{}, D., & [Hines]{}, D. C. 2006, , 652, L21
, R., & [Maloney]{}, P. 1994, in ASSL Vol. 190: Astronomy with Arrays, The Next Generation, ed. I. S. [McLean]{}, 169–+
, A. C. 1994, , 32, 277
, G. G., [et al.]{} 2004, , 154, 10
, K. D., [et al.]{} 2005, , 117, 503
, T. M., [Baum]{}, S. A., [van Breugel]{}, W. J. M., & [McCarthy]{}, P. 1989, , 338, 48
, W., [Bremer]{}, M. N., & [Baker]{}, K. 2005, , 360, 748
, W., [Bremer]{}, M. N., & [van der Werf]{}, P. P. 2001, , 324, 443
, T. H., [Chester]{}, T., [Cutri]{}, R., [Schneider]{}, S., [Skrutskie]{}, M., & [Huchra]{}, J. P. 2000, , 119, 2498
, Jr., R. C. 1998, , 498, 541
, A. M., [O’Dea]{}, C. P., [Sarazin]{}, C. L., [McNamara]{}, B. R., [Donahue]{}, M., [Voit]{}, G. M., [Baum]{}, S. A., & [Gallimore]{}, J. F. 1999, , 525, 621
, B. R., [Jannuzi]{}, B. T., [Sarazin]{}, C. L., [Elston]{}, R., & [Wise]{}, M. 1999, , 518, 167
, B. R., [Wise]{}, M. W., [Nulsen]{}, P. E. J., [David]{}, L. P., [Carilli]{}, C. L., [Sarazin]{}, C. L., [O’Dea]{}, C. P., [Houck]{}, J., [Donahue]{}, M., [Baum]{}, S., [Voit]{}, M., [O’Connell]{}, R. W., & [Koekemoer]{}, A. 2001, , 562, L149
, R. G., & [Fabian]{}, A. C. 2005, , 358, 585
, C. P., [Baum]{}, S. A., [Mack]{}, J., [Koekemoer]{}, A. M., & [Laor]{}, A. 2004, , 612, 131
, W. R., [Cowie]{}, L., [Davidsen]{}, A., [Hu]{}, E., [Hutchings]{}, J., [Murphy]{}, E., [Sembach]{}, K., & [Woodgate]{}, B. 2001, , 560, 187
, J. R., [Paerels]{}, F. B. S., [Kaastra]{}, J. S., [Arnaud]{}, M., [Reiprich]{}, T. H., [Fabian]{}, A. C., [Mushotzky]{}, R. F., [Jernigan]{}, J. G., & [Sakelliou]{}, I. 2001, , 365, L104
, W. T., [Megeath]{}, S. T., [Cohen]{}, M., [Hora]{}, J., [Carey]{}, S., [Surace]{}, J., [Willner]{}, S. P., [Barmby]{}, P., [Wilson]{}, G., [Glaccum]{}, W., [Lowrance]{}, P., [Marengo]{}, M., & [Fazio]{}, G. G. 2005, , 117, 978
, P., [Combes]{}, F., [Edge]{}, A. C., [Crawford]{}, C., [Erlund]{}, M., [Fabian]{}, A. C., [Hatch]{}, N. A., [Johnstone]{}, R. M., [Sanders]{}, J. S., & [Wilman]{}, R. J. 2006, , 454, 437
, D. J., [Finkbeiner]{}, D. P., & [Davis]{}, M. 1998, , 500, 525
, R., & [Kr[ü]{}gel]{}, E. 2007, , 461, 445
, W. B., [Macchetto]{}, F., & [Golombek]{}, D. 1989, , 345, 153
, G. M., & [Donahue]{}, M. 1997, , 486, 242
[lcccccccccc]{} Band ($\mu$m) & 1.235 & 1.662 & 2.159 & 3.6 & 4.5 & 5.8 & 8.0 & 24 & 70 & 160\
Flux (Unc. mJy) & 9.7 & 9.3 & 9.64 & 6.0 & 4.0 & 3.1 & 2.7 & 2.1 & 89 & 35 (52)\
Error (mJy) & 0.6 & 1.1 & 0.96 & 0.8 & 0.2 & 0.2 & 0.01 & 0.2 & 4 & 2 (3)\
PRF Flux (mJy) & & & & & & & & 2.09 & 86 & 57.0\
Error (mJy) & & & & & & & & 0.06 & 1 & 1.6\
![ This grey-scale figure shows two matched versions of the MIPS 70 micron image of the brightest cluster galaxy (BCG) in Abell 2597. North is towards the top of the image, and East is towards the left. A one-arcminute scale bar is displayed. The units on the grey scale color bar at the bottom of the figure are MJy per steradian. The left image is of the standard pipeline product, and the right image shows the same object and data, where we subtracted median sky images from the individual exposures using GeRT routines (see text for details), then we co-added using MOPEX. Only the BCG and the point source in the lower left hand corner were masked during this procedure. The result was a cleaner image, and an intriguing hint of an extended 70-micron source, extending approximately 1 arcminute south-east of the brighter compact source near the center of the BCG. This feature is likely a residual of the stripes in the original image, as discussed in the text. A point source $1\arcmin$ north of the BCG in the original image vanishes in the filtered image. \[figure:70\]](f1.pdf){width="180mm"}
![This grey-scale figure shows matched multiwavelength infrared images of the brightest cluster galaxy in Abell 2597. The angular scale of each image is $130\arcsec$ horizontal and $86\arcsec$ vertically. North is up and East is to the left. All subimages have the same angular scale. Left to right, top row: J-band from 2MASS, IRAC 3.6 and 4.5 microns. Second row: IRAC 5.8, 8.0 microns, MIPS 24 microns. Third row: MIPS 70 micron image from the SSC pipeline, the median-sky-subtracted 70 micron image (70F), and the MIPS 160 micron image. An $r=25\arcsec$ aperture is plotted for scale over each subimage. The 160 micron image also shows an $r=35\arcsec$ aperture. \[figure:all\]](f2.pdf){width="180mm"}
![ \[figure:SED\] Spectral energy distribution (SED) of the central galaxy of Abell 2597. The solid symbols indicate the 2MASS and *Spitzer* IRAC and MIPS measurements presented in Table \[table:fluxes\]. The error bars include systematic uncertainties as described in the text. The solid line is the best-fit two-component SED model composed of a giant elliptical SED taken from the SED template library of the Hyper-z photometric redshift code (dashed line) and a nuclear starburst SED model (dotted line) from Siebenmorgen & Krügel (2007). The inferred mass of the giant elliptical is $\sim 3.12 \times 10^{11} ~\rm{M}_\odot$. The best-fit component from the Siebenmorgen & Krügel library corresponds to a mid-IR starburst nucleus of $r=0.35$ kpc in which 60% of the luminosity comes from hot spots, dense clouds ($n=10^2$ cm$^{-3}$) around buried OB stars, with a visual extinction $A_V=36$ mag. The total luminosity of this component is $L=10^{10.4}\,\, L_{\odot} \sim 0.95 \times 10^{44}$ erg sec$^{-1}$. ](f3.pdf){width="180mm"}
[^1]: <http://ssc.spitzer.caltech.edu/data/hb>
[^2]: GeRT is available from the SSC at <http://ssc.spitzer.caltech.edu/mips/gert/>
[^3]: MOPEX: MOsaicking and Point source EXtraction, available from the SSC at <http://ssc.spitzer.caltech.edu/postbcd/download-mopex.html>.
[^4]: Described in <http://spider.ipac.caltech.edu/staff/jarrett/2mass/XSC/> and query service available through GATOR <http://irsa.ipac.caltech.edu/applications/Gator/>
---
abstract: 'We consider 6-manifolds endowed with a symplectic half-flat SU(3)-structure and acted on by a transitive Lie group G of automorphisms. We review a classical result allowing us to show the non-existence of compact non-flat examples. In the noncompact setting, we classify such manifolds under the assumption that G is semisimple. Moreover, in each case we describe all invariant symplectic half-flat SU(3)-structures up to isomorphism, showing that the Ricci tensor is always Hermitian with respect to the induced almost complex structure. This last condition is characterized in the general case.'
address: |
Dipartimento di Matematica e Informatica “U. Dini”\
Università degli Studi di Firenze\
Viale Morgagni 67/a\
50134 Firenze\
Italy
author:
- Fabio Podestà and Alberto Raffero
title: 'Homogeneous symplectic half-flat 6-manifolds'
---
Introduction
============
This is the first of two papers aimed at studying symplectic half-flat 6-manifolds acted on by a Lie group ${{\mathrm G}}$ of automorphisms. Here, we focus on the homogeneous case, i.e., on transitive ${{\mathrm G}}$-actions, while in a forthcoming paper we shall consider cohomogeneity one actions.
An ${{\mathrm{SU}}}(3)$-structure on a six-dimensional manifold $M$ is given by an almost Hermitian structure $(g,J)$ and a complex volume form $\Psi = {\psi}+i{\widehat{\psi}}$ of constant length. By [@Hit1], the whole data depend only on the fundamental 2-form $\omega\coloneqq g(J\cdot,\cdot)$ and on the real 3-form ${\psi}$, provided that they fulfill suitable conditions.
The obstruction for the holonomy group of $g$ to reduce to ${{\mathrm{SU}}}(3)$ is represented by the intrinsic torsion, which is encoded in the exterior derivatives of $\omega$, ${\psi}$, and ${\widehat{\psi}}$ [@ChSa]. When all such forms are closed, the intrinsic torsion vanishes identically and the ${{\mathrm{SU}}}(3)$-structure is said to be [*torsion-free*]{}.
In this paper, we focus on 6-manifolds endowed with an ${{\mathrm{SU}}}(3)$-structure $(\omega,{\psi})$ such that $d\omega=0$ and $d{\psi}=0$, known as [*symplectic half-flat*]{} in the literature (SHF for short). These structures are half-flat in the sense of [@ChSa], and their underlying almost Hermitian structure $(g,J)$ is almost Kähler.
Being half-flat, SHF structures can be used to construct local metrics with holonomy contained in ${{\mathrm G}}_2$ by solving the so-called Hitchin flow equations [@Hit1]. Moreover, it is known that every oriented hypersurface $M$ of a ${{\mathrm G}}_2$-manifold is endowed with a half-flat ${{\mathrm{SU}}}(3)$-structure, which is SHF when $M$ is minimal with $J$-invariant second fundamental form [@MarCab]. Starting with a SHF 6-manifold $(M,\omega,{\psi})$, it is also possible to obtain examples of closed ${{\mathrm G}}_2$-structures on the Riemannian product $M\times {\mathbb S}^1$, and on the mapping torus of a diffeomorphism of $M$ preserving $\omega$ and ${\psi}$ (see e.g. [@ManTh]).
In theoretical physics, compact SHF 6-manifolds arise as solutions of type IIA supersymmetry equations [@FiUg].
SHF 6-manifolds were first considered in [@DeB], and then in [@DeBTom; @DeBTom0]. In [@DeBTom0], equivalent characterizations of SHF structures in terms of the Chern connection $\nabla$ were given, showing that $\mathrm{Hol}(\nabla)\subseteq{{\mathrm{SU}}}(3)$. Moreover, as ${\psi}$ defines a calibration on $M$ in the sense of [@HarLaw], the authors introduced and studied special Lagrangian submanifolds in this setting.
In [@BedVez], the Ricci tensor of an ${{\mathrm{SU}}}(3)$-structure was described in full generality. Using this result, it was proved that SHF structures cannot induce an Einstein metric unless they are torsion-free. It is then interesting to investigate the existence of SHF structures whose Ricci tensor has special features. By the results in [@BlIa], the Ricci tensor being $J$-Hermitian seems to be a meaningful condition. In Proposition \[JRic\], we characterize this property in terms of the intrinsic torsion.
Recently, A. Fino and the second author showed that SHF structures fulfilling some extra conditions can be used to obtain explicit solutions of the Laplacian ${{\mathrm G}}_2$-flow on the product manifold $M\times{\mathbb S}^1$ [@FiRa]. In particular, the class of SHF structures satisfying the required hypothesis includes those having $J$-Hermitian Ricci tensor.
Most of the known examples of SHF 6-manifolds consist of six-dimensional simply connected Lie groups endowed with a left-invariant SHF structure. The classification of nilpotent Lie groups admitting such structures was given in [@ConTom], while the classification in the solvable case was obtained in [@FMOU]. Previously, some examples on unimodular solvable Lie groups appeared in [@DeBTom; @FrSch; @TomVez]. Moreover, in [@TomVez] a family of non-homogeneous SHF structures on the 6-torus was constructed.
In the present paper, we look for new examples in the homogeneous setting. We first show that compact homogeneous SHF 6-manifolds with invariant SHF structure are exhausted by flat tori (Corollary \[inexistencecpt\]). This result is based on a classical theorem concerning compact almost Kähler manifolds acted on transitively by a semisimple automorphism group [@WoGrII]. We then focus on the noncompact case ${{\mathrm G}}/{{\mathrm K}}$ with ${{\mathrm G}}$ semisimple. We provide a full classification in Theorem \[classThm\], showing that only the twistor spaces of ${{\mathbb R}}{{\mathbb H}}^4$ and ${{\mathbb C}}{{\mathbb H}}^2$ occur. Furthermore, we prove that the former admits a unique invariant SHF structure up to homothety, while the latter is endowed precisely with a one-parameter family of invariant SHF structures which are pairwise non-homothetic and non-isomorphic. We point out that all almost Kähler structures underlying the SHF structures in this family share the same Chern connection, which coincides with the canonical connection of the homogeneous space. Finally, in both cases a representation theory argument allows us to conclude that the Ricci tensor is $J$-Hermitian.
Throughout the paper, we shall denote Lie groups by capital letters, e.g. ${{\mathrm G}}$, and the corresponding Lie algebras by gothic letters, e.g. ${\mathfrak{g}}$.
Preliminaries
=============
Stable 3-forms in six dimensions
--------------------------------
A $k$-form on an $n$-dimensional vector space $V$ is said to be [*stable*]{} if its orbit under the natural action of ${{\mathrm {GL}}}(V)$ is open in $\Lambda^k(V^*)$. Among all possible situations that may occur (see e.g. [@Hit; @Hit1; @Rei]), in this paper we will be concerned with stable 3-forms in six dimensions.
Assume that $V$ is real six-dimensional, and fix an orientation by choosing a volume form $\Omega\in\Lambda^6(V^*)$. Then, every 3-form $\rho\in\Lambda^3(V^*)$ gives rise to an endomorphism $S_\rho:V\rightarrow V$ via the identity $$\label{P}
\iota_v\rho{\wedge}\rho{\wedge}\eta = \eta(S_\rho(v))\,\Omega,$$ for all $\eta\in\Lambda^1(V^*)$, where $\iota_v\rho$ denotes the contraction of $\rho$ by the vector $v\in V.$ By [@Hit], $S_\rho^2=P(\rho)\mathrm{Id}_V$ for some irreducible polynomial $P(\rho)$ of degree 4, and $\rho$ is stable if and only if $P(\rho)\neq0$. The space $\Lambda^3(V^*)$ contains two open orbits of stable forms defined by the conditions $P>0$ and $P<0$. The ${{\mathrm {GL}}}^{{\scriptscriptstyle}+}(V)$-stabilizer of a 3-form $\rho$ belonging to the latter is isomorphic to ${{\mathrm {SL}}}(3,{{\mathbb C}})$. In this case, $\rho$ induces a complex structure $$\label{Jpsi}
J_\rho:V\rightarrow V,\quad J_\rho\coloneqq \frac{1}{\sqrt{-P(\rho)}}\,S_\rho,$$ and a complex $(3,0)$-form $\rho+i\widehat\rho$, where $\widehat\rho\coloneqq J_\rho\rho =\rho(J_\rho\cdot,J_\rho\cdot,J_\rho\cdot)=-\rho(J_\rho\cdot,\cdot,\cdot)$. Moreover, the 3-form $\widehat\rho$ is stable, too, and $J_{\widehat\rho}=J_\rho$.
Note that $S_\rho$, $P(\rho)$ and $J_\rho$ depend both on $\rho$ and on the volume form $\Omega$. In particular, after a scaling $(\rho,\Omega)\mapsto(c\rho,\lambda\Omega)$, $c,\lambda\in{{\mathbb R}}\smallsetminus\{0\}$, they transform as follows $$\frac{c^2}{\lambda}\, S_\rho,\qquad \frac{c^4}{\lambda^2}\,P(\rho),\qquad \frac{\left|\lambda\right|}{\lambda}\,J_\rho.
$$ Thus, the sign of $P(\rho)$ does not depend on the choice of the orientation.
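The scaling behavior above follows directly from the defining identity: replacing $(\rho,\Omega)$ by $(c\rho,\lambda\Omega)$ gives $$\iota_v(c\rho){\wedge}(c\rho){\wedge}\eta = c^2\,\eta(S_\rho(v))\,\Omega = \eta\!\left(\frac{c^2}{\lambda}\,S_\rho(v)\right)\lambda\Omega,$$ so that $S_{c\rho}=\frac{c^2}{\lambda}\,S_\rho$ with respect to $\lambda\Omega$. Squaring yields the stated scaling of $P(\rho)$, while $J_\rho=S_\rho/\sqrt{-P(\rho)}$ picks up the factor $\frac{c^2}{\lambda}\cdot\frac{\left|\lambda\right|}{c^2}=\frac{\left|\lambda\right|}{\lambda}$.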
Symplectic half-flat 6-manifolds
--------------------------------
Let $M$ be a connected six-dimensional manifold. An ${{\mathrm{SU}}}(3)$-structure on $M$ is an ${{\mathrm{SU}}}(3)$-reduction of the structure group of its frame bundle. By [@Hit1], this is characterized by the existence of a non-degenerate 2-form $\omega\in\Omega^2(M)$ and a stable 3-form $\psi\in\Omega^3(M)$ with $P(\psi_x)<0$ for all $x\in M,$ fulfilling the following three properties. First, the [*compatibility condition*]{} $$\label{compcond}
\omega{\wedge}\psi=0,$$ which guarantees that $\omega$ is of type $(1,1)$ with respect to the almost complex structure $J\in\operatorname{End}(TM)$ induced by $\psi$ and by the volume form $\frac{\omega^3}{6}$. Second, the [*normalization condition*]{} $$\label{normalization}
{\psi}{\wedge}{\widehat{\psi}}= \frac23\,\omega^3,$$ where ${\widehat{\psi}}\coloneqq J{\psi}$. Finally, the positive definiteness of the symmetric bilinear form $$g\coloneqq\omega(\cdot,J\cdot).$$ Note that the pair $(g,J)$ is an almost Hermitian structure with fundamental form $\omega$, and that ${\psi}+i{\widehat{\psi}}$ is a complex volume form on $M.$
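The compatibility and normalization conditions can be checked concretely on the standard flat ${{\mathrm{SU}}}(3)$-structure of ${{\mathbb R}}^6$, for which ${\psi}+i{\widehat{\psi}}=(e^1+ie^2){\wedge}(e^3+ie^4){\wedge}(e^5+ie^6)$. The following self-contained sketch (the exterior-algebra helpers are ours, for illustration only) represents forms as dictionaries over sorted basis-index tuples and verifies $\omega{\wedge}{\psi}=0$ and ${\psi}{\wedge}{\widehat{\psi}}=\frac23\,\omega^3$:

```python
from itertools import product

def sign_sort(idx):
    """Sign of the permutation sorting idx (0 if an index repeats)."""
    idx = list(idx)
    sign = 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] == idx[j]:
                return 0
            if idx[i] > idx[j]:
                sign = -sign
    return sign

def wedge(a, b):
    """Wedge product of forms given as {sorted index tuple: coefficient}."""
    out = {}
    for (ia, ca), (ib, cb) in product(a.items(), b.items()):
        s = sign_sort(ia + ib)
        if s:
            key = tuple(sorted(ia + ib))
            out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

# Flat SU(3)-structure on R^6: omega = e^12 + e^34 + e^56, and psi, psihat
# are the real and imaginary parts of (e^1+ie^2)^(e^3+ie^4)^(e^5+ie^6).
omega  = {(1, 2): 1, (3, 4): 1, (5, 6): 1}
psi    = {(1, 3, 5): 1, (1, 4, 6): -1, (2, 3, 6): -1, (2, 4, 5): -1}
psihat = {(1, 3, 6): 1, (1, 4, 5): 1, (2, 3, 5): 1, (2, 4, 6): -1}

print(wedge(omega, psi))                  # {}  -- compatibility
print(wedge(psi, psihat))                 # {(1, 2, 3, 4, 5, 6): 4}
print(wedge(omega, wedge(omega, omega)))  # {(1, 2, 3, 4, 5, 6): 6}
```

Since ${\psi}{\wedge}{\widehat{\psi}}=4\,e^{123456}$ and $\omega^3=6\,e^{123456}$, the normalization condition holds for this structure.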
Given an ${{\mathrm{SU}}}(3)$-structure $(\omega,{\psi})$ on $M,$ we denote by $*:\Omega^k(M)\rightarrow\Omega^{6-k}(M)$ the Hodge operator defined by the Riemannian metric $g$ and the volume form $\frac{\omega^3}{6}$, and we indicate by $\left|\,\cdot\,\right|$ the induced pointwise norm on $\Omega^k(M)$. When $k=3,4$, the irreducible decompositions of the ${{\mathrm{SU}}}(3)$-modules $\Lambda^{k}\left({{{\mathbb R}}^6}^*\right)$ give rise to the splittings $$\label{3forms}
\Omega^{3}(M) = \mathcal{C}^\infty(M)\,{\psi}\oplus \mathcal{C}^\infty(M)\,{\widehat{\psi}}\oplus \left\llbracket\Omega^{2,1}_{0}(M)\right\rrbracket
\oplus \Omega^{1}(M){\wedge}\omega,$$ $$\label{4forms}
\Omega^{4}(M) = \mathcal{C}^\infty(M)\,\omega^2 \oplus \left[\Omega^{1,1}_{0}(M)\right] {\wedge}\omega \oplus\Omega^{1}(M){\wedge}{\psi},
$$ where $$\left[\Omega^{1,1}_{0}(M)\right] \coloneqq \left\{\kappa\in\Omega^{2}(M){\ |\ }\kappa{\wedge}\omega^{2}=0,~J\kappa=\kappa\right\}$$ is the space of primitive 2-forms of type $(1,1)$, and $$\left\llbracket\Omega^{2,1}_{0}(M)\right\rrbracket \coloneqq \left\{{\varphi}\in\Omega^{3}(M) {\ |\ }{\varphi}{\wedge}\omega=0,~{\varphi}{\wedge}\psi={\varphi}{\wedge}\widehat\psi=0 \right\}$$ is the space of primitive 3-forms of type $(2,1)+(1,2)$ (see e.g. [@BedVez; @ChSa]).
By [@ChSa], the intrinsic torsion of $(\omega,{\psi})$ is determined by $d\omega$, $d{\psi}$, and $d{\widehat{\psi}}$. In particular, it vanishes identically if and only if all such forms are zero. When this happens, the Riemannian metric $g$ is Ricci-flat, ${\rm Hol}(g)$ is a subgroup of ${{\mathrm{SU}}}(3)$, and the ${{\mathrm{SU}}}(3)$-structure is said to be [*torsion-free*]{}.
A six-dimensional manifold $M$ endowed with an ${{\mathrm{SU}}}(3)$-structure $(\omega,{\psi})$ is called [*symplectic half-flat*]{} (SHF for short) if both $\omega$ and ${\psi}$ are closed. By [@ChSa Thm. 1.1], in this case the intrinsic torsion can be identified with a unique 2-form ${\sigma}\in{\left[\Omega^{\scriptscriptstyle 1,1}_{\scriptscriptstyle0}(M)\right]}$ such that $$\label{SHFeqn}
d{\widehat{\psi}}={\sigma}{\wedge}\omega$$ (cf. the decomposition of $\Omega^{4}(M)$ above). We shall refer to ${\sigma}$ as the [*intrinsic torsion form*]{} of the [*SHF structure*]{} $(\omega,{\psi})$. It is clear that ${\sigma}$ vanishes identically if and only if the ${{\mathrm{SU}}}(3)$-structure is torsion-free. When the intrinsic torsion is not zero, the almost complex structure $J$ is non-integrable, and the underlying almost Hermitian structure $(g,J)$ is (strictly) almost Kähler.
Since ${\sigma}$ is a primitive 2-form of type $(1,1)$, it satisfies the identity $*{\sigma}=-{\sigma}{\wedge}\omega$. Using this together with the identity $d{\widehat{\psi}}={\sigma}{\wedge}\omega$, it is possible to show that ${\sigma}$ is coclosed, and that its exterior derivative has the following expression with respect to the decomposition of $\Omega^{3}(M)$ $$\label{dw2}
d{\sigma}= \frac{\left|{\sigma}\right|^2}{4}{\psi}+\nu,$$ for a unique $\nu\in{\left\llbracket\Omega^{\scriptscriptstyle 2,1}_{\scriptscriptstyle0}(M)\right\rrbracket}$ (see e.g. [@FiRa Lemma 5.1] for explicit computations).
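The coclosedness of ${\sigma}$ admits a one-line verification: since $*{\sigma}=-{\sigma}{\wedge}\omega=-d{\widehat{\psi}}$, we have $$d*{\sigma}= -\,d(d{\widehat{\psi}}) = 0,$$ so ${\sigma}$ is coclosed.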
Symplectic half-flat SU(3)-structures with $J$-Hermitian Ricci tensor
=====================================================================
In this section, we discuss the curvature properties of a SHF 6-manifold $(M,\omega,{\psi})$. We begin reviewing some known facts from [@BedVez].
By [@BedVez Thm. 3.4], the scalar curvature of the metric $g$ induced by $(\omega,{\psi})$ is given by $$\label{ScalSHF}
\mbox{Scal}(g) = -\frac12\left|{\sigma}\right|^2.$$ Therefore, it is zero if and only if the ${{\mathrm{SU}}}(3)$-structure is torsion-free.
The Ricci tensor of $g$ belongs to the space $\mathcal{S}^{2}(M)$ of symmetric 2-covariant tensor fields on $M.$ The ${{\mathrm{SU}}}(3)$-irreducible decomposition of $\mathcal{S}^{2}({{{\mathbb R}}^6}^*)$ induces the splitting $$\mathcal{S}^{2}(M)= C^\infty(M)\,g\oplus\mathcal{S}^{2}_{{\scriptscriptstyle}+}(M)\oplus\mathcal{S}^{2}_{{\scriptscriptstyle}-}(M),$$ where $$\mathcal{S}^{2}_{{\scriptscriptstyle}+}(M) \coloneqq \left\{h\in\mathcal{S}^{2}(M){\ |\ }Jh=h \mbox{ and } {\rm tr}_{g}h=0\right\},\quad
\mathcal{S}^{2}_{{\scriptscriptstyle}-}(M) \coloneqq \left\{h\in\mathcal{S}^2(M){\ |\ }Jh=-h\right\}.$$ Consequently, we can write $${\rm Ric}(g) = \frac16\,{\rm Scal}(g)g + {\rm Ric}^0(g),$$ and the traceless part ${\rm Ric}^0(g)$ of the Ricci tensor belongs to $\mathcal{S}^2_{\scriptscriptstyle+}(M)\oplus\mathcal{S}^2_{\scriptscriptstyle-}(M)$. It follows from [@BedVez Thm. 3.6] that for a SHF structure $$\label{ric0}
{\rm Ric}^0(g) = \pi_{{\scriptscriptstyle}+}^{-1}\left(\frac14*({\sigma}{\wedge}{\sigma})+\frac{1}{12}\left|{\sigma}\right|^2\omega\right) + \pi_{{\scriptscriptstyle}-}^{-1}(2\nu),$$ where $\nu$ is the ${\left\llbracket\Omega^{\scriptscriptstyle 2,1}_{\scriptscriptstyle0}(M)\right\rrbracket}$-component of $d{\sigma}$ (cf. ), and the maps $\pi_{{\scriptscriptstyle}+}: \mathcal{S}^2_{{\scriptscriptstyle}+}(M)\rightarrow{\left[\Omega^{\scriptscriptstyle 1,1}_{\scriptscriptstyle0}(M)\right]}$ and $\pi_{{\scriptscriptstyle}-}:\mathcal{S}^2_{{\scriptscriptstyle}-}(M)\rightarrow{\left\llbracket\Omega^{\scriptscriptstyle 2,1}_{\scriptscriptstyle0}(M)\right\rrbracket}$ are induced by the pointwise ${{\mathrm{SU}}}(3)$-module isomorphisms given in [@BedVez $\S$2.3].
The above expression for ${\rm Ric}^0(g)$, together with a representation theory argument, shows that the Riemannian metric $g$ induced by a SHF structure is Einstein, i.e., ${\rm Ric}^0(g)=0$, if and only if the intrinsic torsion vanishes identically [@BedVez Cor. 4.1]. In light of this result, it is natural to ask which distinguished properties $g$ might satisfy. Since the almost Hermitian structure $(g,J)$ underlying a SHF structure is almost Kähler, the Ricci tensor of $g$ being $J$-Hermitian seems a meaningful condition. Indeed, on a compact symplectic manifold $(M,\omega)$, almost Kähler structures with $J$-Hermitian Ricci tensor are the critical points of the Hilbert functional restricted to the space of all almost Kähler structures with fundamental form $\omega$ (see [@ApDr; @BlIa]).
Using the above decomposition of ${\rm Ric}(g)$, we can show that SHF structures with $J$-Hermitian Ricci tensor are characterized by the expression of $d{\sigma}$.
\[JRic\] The Ricci tensor of the metric $g$ induced by a SHF structure $(\omega,{\psi})$ is Hermitian with respect to the corresponding almost complex structure $J$ if and only if $$\label{dw2Jric}
d{\sigma}= \frac{\left|{\sigma}\right|^2}{4}{\psi}.$$ When this happens, the scalar curvature of $g$ is constant.
The Ricci tensor of $g$ is $J$-Hermitian if and only if it has no component in $\mathcal{S}^2_{{\scriptscriptstyle}-}(M)$. By , this happens if and only if $\nu=0$, i.e., if and only if $d{\sigma}$ is given by .
Taking the exterior derivative of both sides of and using $d{\psi}=0$, we get $d\left|{\sigma}\right|^2{\wedge}{\psi}=0$. This implies that $\left|{\sigma}\right|^2$ is constant, since wedging 1-forms with ${\psi}$ is injective. The second assertion then follows from .
Examples of SHF 6-manifolds with $J$-Hermitian Ricci tensor include the twistor space of an oriented self-dual Einstein 4-manifold of negative scalar curvature (cf. [@DavMus] and [@Xu $\S$1.2]).
Homogeneous symplectic half-flat 6-manifolds
============================================
In this section, we focus on the homogeneous case. More precisely, we shall consider the following class of SHF 6-manifolds.
A [*homogeneous symplectic half-flat manifold*]{} is the data of a SHF 6-manifold $(M,\omega,{\psi})$ and a connected Lie group ${{\mathrm G}}$ acting transitively and almost effectively on $M$ preserving the SHF structure $(\omega,{\psi})$.
Since the pair $(g,J)$ induced by $(\omega,{\psi})$ is a ${{\mathrm G}}$-invariant almost Kähler structure, the homogeneous manifold $M$ is ${{\mathrm G}}$-equivariantly diffeomorphic to the quotient ${{\mathrm G}}/{{\mathrm K}}$, where ${{\mathrm K}}$ is a compact subgroup of ${{\mathrm G}}$ [@KNI Ch. I, Cor. 4.8].
In what follows, we review some basic facts on homogeneous symplectic and almost complex manifolds, and then focus on invariant SHF structures on compact and noncompact homogeneous spaces.
Invariant almost Kähler structures on homogeneous spaces {#SCH}
--------------------------------------------------------
Let ${{\mathrm G}}/{{\mathrm K}}$ be a homogeneous space with ${{\mathrm K}}$ compact. It is well-known that there exists an $\operatorname{Ad}({{\mathrm K}})$-invariant subspace ${\mathfrak{m}}$ of ${\mathfrak{g}}$ such that ${\mathfrak{g}}={\mathfrak{k}}\oplus{\mathfrak{m}}$. Moreover, there is a natural identification of $T_{[{{\mathrm K}}]}({{\mathrm G}}/{{\mathrm K}})$ with ${\mathfrak{m}}$, and every ${{\mathrm G}}$-invariant tensor on ${{\mathrm G}}/{{\mathrm K}}$ corresponds to an $\operatorname{Ad}({{\mathrm K}})$-invariant tensor of the same type on ${\mathfrak{m}}$, which we will denote by the same letter.
From now on, we assume that ${{\mathrm G}}$ is semisimple. Recall that in such a case the Cartan-Killing form $B$ of ${\mathfrak{g}}$ is non-degenerate.
Given a ${{\mathrm G}}$-invariant symplectic form ${\omega}$ on ${{\mathrm G}}/{{\mathrm K}}$, the corresponding $\operatorname{Ad}({{\mathrm K}})$-invariant $2$-form ${\omega}\in \Lambda^2({\mathfrak{m}}^*)$ can be written as ${\omega}(\cdot,\cdot) = B(D\cdot,\cdot)$, where $D\in \operatorname{End}({\mathfrak{m}})$ is a $B$-skew-symmetric endomorphism.
Extend $D$ to an endomorphism of ${\mathfrak{g}}$ by setting $D|_{{\mathfrak{k}}}\equiv 0$. Then, $d{\omega}=0$ if and only if $D$ is a derivation of ${\mathfrak{g}}$ (see e.g. [@BFR]). Since ${\mathfrak{g}}$ is semisimple, there exists a unique $z\in {\mathfrak{g}}$ such that $D=\operatorname{ad}(z)$. By the $\operatorname{Ad}({{\mathrm K}})$-invariance of ${\omega}$, $z$ is centralized by ${\mathfrak{k}}$, and since ${\omega}$ is non-degenerate on ${\mathfrak{m}}$, the Lie algebra ${\mathfrak{k}}$ coincides with the centralizer of $z$ in ${\mathfrak{g}}$. Consequently, ${{\mathrm K}}$ is connected.
Since ${{\mathrm K}}$ is compact, there exists a maximal torus ${\rm T}\subseteq {{\mathrm K}}$ whose Lie algebra ${\mathfrak{t}}$ contains the element $z$. Using the results of [@Hel Ch. IX, $\S$4], a standard argument allows one to show that the complexification ${\mathfrak{g}}^{{\scriptscriptstyle}{{\mathbb{C}}}}$ has a Cartan subalgebra ${\mathfrak{h}}$ given by ${\mathfrak{t}}^{{\scriptscriptstyle}{{\mathbb{C}}}}$. We can then consider the root space decomposition ${\mathfrak{g}}^{{\scriptscriptstyle}{{\mathbb{C}}}} = {\mathfrak{h}}\oplus \bigoplus_{{\alpha}\in R}{\mathfrak{g}}_{\alpha}$ with respect to ${\mathfrak{h}}$, where $R$ is the corresponding root system and ${\mathfrak{g}}_{\alpha}$ is the root space of the root ${\alpha}\in R$. For any pair ${\alpha},{\beta}\in R$ satisfying ${\alpha}+{\beta}\neq0$, the root spaces ${\mathfrak{g}}_{\alpha}$ and ${\mathfrak{g}}_{\beta}$ are $B$-orthogonal. Moreover, for each ${\alpha}\in R$ we can always choose an element $E_{\alpha}$ of ${\mathfrak{g}}_{\alpha}$ so that ${\mathfrak{g}}_{{\alpha}}= {{\mathbb{C}}} E_{\alpha}$, $B(E_{\alpha},E_{-{\alpha}}) = 1$, and $$[E_{\alpha},E_{\beta}]=
\left\{
\begin{array}{ll}
N_{{\alpha},{\beta}}E_{{\alpha}+{\beta}}, &\mbox{if } {\alpha}+{\beta}\in R, \\
H_{\alpha}, &\mbox{if } {\beta}=-{\alpha}, \\
0 &\mbox{otherwise},
\end{array} \right.$$ with $N_{{\alpha},{\beta}}\in{{\mathbb R}}\smallsetminus\{0\}$, and $H_{\alpha}\in {\mathfrak{h}}$ defined by ${\alpha}(H) = B(H_{\alpha},H)$ for every $H\in {\mathfrak{h}}$ (see e.g. [@Hel p. 176]).
Since ${\mathfrak{k}}$ contains a maximal torus, we have the decompositions ${\mathfrak{k}}^{{\scriptscriptstyle}{{\mathbb{C}}}} = {\mathfrak{h}}\oplus \bigoplus_{{\alpha}\in R_{\mathfrak{k}}}{\mathfrak{g}}_{\alpha}$ and ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb{C}}}} = \bigoplus_{{\beta}\in R_{\mathfrak{m}}}{\mathfrak{g}}_{\beta}$, for two disjoint subsets $R_{\mathfrak{k}},R_{\mathfrak{m}}\subset R$ such that $$R = R_{\mathfrak{k}}\cup R_{\mathfrak{m}},\qquad (R_{\mathfrak{k}}+ R_{\mathfrak{k}})\cap R \subseteq R_{\mathfrak{k}},\qquad (R_{\mathfrak{k}}+ R_{\mathfrak{m}})\cap R \subseteq R_{\mathfrak{m}}.$$
Let $J\in \operatorname{End}({\mathfrak{m}})$ be an $\operatorname{Ad}({{\mathrm K}})$-invariant complex structure on ${\mathfrak{m}}$. Then, its complex linear extension $J\in \operatorname{End}({\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb{C}}}})$ is $\operatorname{ad}({\mathfrak{h}})$-invariant and commutes with the antilinear involution $\tau$ given by the real form ${\mathfrak{g}}$ of ${\mathfrak{g}}^{{\scriptscriptstyle}{{\mathbb{C}}}}$. Moreover, the $\operatorname{ad}({\mathfrak{h}})$-invariance implies that $J$ preserves each root space ${\mathfrak{g}}_{\alpha}$, ${\alpha}\in R_{\mathfrak{m}}$, and determines a splitting ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb{C}}}} = {\mathfrak{m}}^{1,0}\oplus{\mathfrak{m}}^{0,1}$, where $${\mathfrak{m}}^{1,0} = \bigoplus_{{\beta}\in R_{\mathfrak{m}}^{{\scriptscriptstyle}+}}{\mathfrak{g}}_{\beta},\qquad {\mathfrak{m}}^{0,1} = \bigoplus_{{\beta}\in R_{\mathfrak{m}}^{{\scriptscriptstyle}-}}{\mathfrak{g}}_{\beta},$$ and $R_{\mathfrak{m}}= R_{\mathfrak{m}}^{{\scriptscriptstyle}+}\cup R_{\mathfrak{m}}^{{\scriptscriptstyle}-}$, $R_{\mathfrak{m}}^{{\scriptscriptstyle}-} = - R_{\mathfrak{m}}^{{\scriptscriptstyle}+}$. The full $\operatorname{Ad}({{\mathrm K}})$-invariance is equivalent to $$(R_{\mathfrak{k}}+R_{\mathfrak{m}}^{{\scriptscriptstyle}+})\cap R \subseteq R_{\mathfrak{m}}^{{\scriptscriptstyle}+}.$$
Non-existence of compact non-flat homogeneous SHF 6-manifolds {#nocptSHF}
-------------------------------------------------------------
We begin by reviewing a general result on compact homogeneous almost Kähler manifolds ${{\mathrm U}}/{{\mathrm K}}$, which was proved in [@WoGrII Thm. 9.4] for semisimple ${{\mathrm U}}$.
\[cptaK\] A compact homogeneous almost Kähler manifold $(M,g,J)$ is Kähler.
Let ${{\mathrm U}}$ be a compact connected Lie group acting transitively and almost effectively by automorphisms on $(M,g,J)$, and let ${\omega}$ be the fundamental form. The group ${{\mathrm U}}$ is (locally) isomorphic to the product of its semisimple part ${{\mathrm G}}$ and a torus ${\mathrm Z}$, and the manifold $M$ splits as a symplectic product $M_1\times {\mathrm Z}$, where $M_1={{\mathrm G}}/{{\mathrm K}}$ and ${{\mathrm K}}$ is the centralizer of a torus in ${{\mathrm G}}$ (see [@ZwBo $\S$5]). The splitting is also holomorphically isometric, since the tangent spaces to $M_1$ and ${\mathrm Z}$ are inequivalent as ${{\mathrm K}}$-modules.
Keeping the same notation as in $\S$\[SCH\], we recall that when ${\mathfrak{g}}$ is a compact semisimple Lie algebra, one has $\overline{E}_{\alpha}\coloneqq \tau(E_{\alpha}) = - E_{-{\alpha}}$ for every root ${\alpha}\in R$.
Now, for every ${\alpha}\in R_{\mathfrak{m}}$, we have $E_{\alpha}- E_{-{\alpha}}\in {\mathfrak{m}}$ and $$0 < g(E_{\alpha}- E_{-{\alpha}}, E_{\alpha}- E_{-{\alpha}}) = -2 g(E_{\alpha},E_{-{\alpha}}).$$ Therefore, when ${\alpha}\in R_{\mathfrak{m}}^{{\scriptscriptstyle}+}$, $$0 < -2 g(E_{\alpha},E_{-{\alpha}}) = -2 {\omega}(E_{\alpha},JE_{-{\alpha}}) = 2i {\omega}(E_{\alpha},E_{-{\alpha}}) = 2i {\alpha}(z).$$ This means that ${\alpha}\in R_{\mathfrak{m}}^{{\scriptscriptstyle}+}$ if and only if $i{\alpha}(z) > 0$. Hence, if ${\alpha},{\beta}\in R_{\mathfrak{m}}^{{\scriptscriptstyle}+}$ and ${\alpha}+{\beta}\in R$, then $i({\alpha}+{\beta})(z) = i{\alpha}(z)+i{\beta}(z)>0$; in particular ${\alpha}+{\beta}\notin R_{\mathfrak{k}}$, and thus $(R_{\mathfrak{m}}^{{\scriptscriptstyle}+}+R_{\mathfrak{m}}^{{\scriptscriptstyle}+})\cap R \subseteq R_{\mathfrak{m}}^{{\scriptscriptstyle}+}$. This last condition is equivalent to the integrability of $J$ (see e.g. [@BFR (3.49)]).
An immediate consequence of the previous proposition is the following.
\[inexistencecpt\] Let $(M,\omega,{\psi})$ be a compact homogeneous SHF 6-manifold. Then, the ${{\mathrm{SU}}}(3)$-structure $(\omega,{\psi})$ is torsion-free and $M$ is a flat torus.
Consider the almost Kähler structure $(g,J)$ underlying $(\omega,{\psi})$. By Proposition \[cptaK\], the almost complex structure $J$ is integrable. Then, the ${{\mathrm{SU}}}(3)$-structure is torsion-free. In particular, the metric $g$ is Ricci-flat, and thus flat by [@AK].
Noncompact homogeneous SHF 6-manifolds
--------------------------------------
Motivated by the result of $\S$\[nocptSHF\], we now look for examples of noncompact homogeneous SHF 6-manifolds. In particular, assuming that the transitive group of automorphisms ${{\mathrm G}}$ is semisimple, we shall prove the following classification result.
\[classThm\] Let $(M,\omega,{\psi})$ be a noncompact ${{\mathrm G}}$-homogeneous SHF 6-manifold, and assume that the group ${{\mathrm G}}$ is semisimple. Then, one of the following situations occurs:
1) $M = {{\mathrm{SU}}}(2,1)/{\mathrm T}^2$, and there exists a 1-parameter family of pairwise non-homothetic and non-isomorphic invariant SHF structures;
2) $M = {{\mathrm {SO}}}(4,1)/{{\mathrm U}}(2)$, and there exists a unique invariant SHF structure up to homothety.
Moreover, in both cases the Riemannian metric induced by the SHF structure has $J$-Hermitian Ricci tensor.
Observe that the two examples are precisely the twistor spaces of $\mathbb{C H}^2$ and $\mathbb{R H}^4$. The existence of a SHF structure on the latter was already known (see e.g. [@Xu $\S$1.2]).
For the sake of clarity, we divide the proof of Theorem \[classThm\] into several steps. We begin by proving a preliminary lemma.
\[simplelemma\] Let $({{\mathrm G}}/{{\mathrm K}},{\omega},{\psi})$ be a homogeneous SHF 6-manifold with ${{\mathrm G}}$ semisimple. Then, ${{\mathrm G}}$ is simple.
Suppose that ${{\mathrm G}}$ is not simple. Then, ${\mathfrak{g}}$ splits as the sum of two non-trivial ideals ${\mathfrak{g}}={\mathfrak{g}}'\oplus{\mathfrak{g}}''$. Since ${\mathfrak{k}}$ is the centralizer of an element $z\in{\mathfrak{g}}$, it splits as ${\mathfrak{k}}= ({\mathfrak{k}}\cap{\mathfrak{g}}')\oplus({\mathfrak{k}}\cap{\mathfrak{g}}'')$, and the manifold ${{\mathrm G}}/{{\mathrm K}}$ is the product of homogeneous symplectic manifolds of lower dimension, say ${{\mathrm G}}/{{\mathrm K}}={{\mathrm G}}'/{{\mathrm K}}' \times {{\mathrm G}}''/{{\mathrm K}}''$. Without loss of generality, we may assume that $\dim({{\mathrm G}}'/{{\mathrm K}}')=2$ and $\dim({{\mathrm G}}''/{{\mathrm K}}'')=4$. The tangent space ${\mathfrak{m}}$ splits as ${\mathfrak{m}}'\oplus{\mathfrak{m}}''$, and a simple computation shows that $[\Lambda^3({\mathfrak{m}}'\oplus{\mathfrak{m}}'')]^{{\mathrm K}}= \{0\}$, since the isotropy representations of ${{\mathrm K}}'$ and ${{\mathrm K}}''$ have no non-trivial fixed vectors. This contradicts the existence of the nonzero invariant $3$-form ${\psi}$.
By the previous lemma, we can focus on the case when the Lie group ${{\mathrm G}}$ is simple and noncompact. Let ${\mathrm L}\subset{{\mathrm G}}$ be a maximal compact subgroup containing ${{\mathrm K}}$. Then $({{\mathrm G}},{\mathrm L})$ is a symmetric pair, and ${{\mathrm K}}$ is strictly contained in ${\mathrm L}$. Indeed, if ${\mathrm L} = {{\mathrm K}}$, then $({{\mathrm G}},{\mathrm L})$ would be a Hermitian symmetric pair, and every invariant almost complex structure on ${{\mathrm G}}/{\mathrm L}$ would be integrable. In particular, every invariant SHF structure on ${{\mathrm G}}/{\mathrm L}$ would be torsion-free, hence flat. This contradicts the simplicity of ${{\mathrm G}}$. Moreover, the space ${\mathrm L}/{{\mathrm K}}$ is symplectic, as ${{\mathrm K}}$ is the centralizer of a torus in ${\mathrm L}$; in particular, $\dim({\mathrm L}/{{\mathrm K}})$ is even and positive, so $\dim({{\mathrm G}}/{\mathrm L})$ is either $2$ or $4$. In the former case, ${{\mathrm G}}/{\mathrm L}$ would be the real hyperbolic plane and ${{\mathrm G}}$ locally isomorphic to $\mathrm{SL}(2,{{\mathbb R}})$, which is impossible since $\dim({{\mathrm G}})\geq 6$. Hence, $\dim({{\mathrm G}}/{\mathrm L})=4$.
Therefore, we have to consider the list of symmetric pairs $({\mathfrak{g}},{\mathfrak{l}})$ of noncompact type, where ${\mathfrak{g}}$ is simple, ${\mathfrak{l}}$ is of maximal rank in ${\mathfrak{g}}$, and $\dim({\mathfrak{g}})-\dim({\mathfrak{l}})=4$. After an inspection of all potential cases in [@Hel Ch. X, $\S$6], we are left with two possibilities, which are summarized in Table \[ncptSHF\].
${{\mathrm G}}$ ${\mathrm L}$ ${{\mathrm K}}$
------------------------- ------------------------------------------------------- --------------------
${{\mathrm{SU}}}(2,1)$ ${\mathrm S}({{\mathrm U}}(2)\times{{\mathrm U}}(1))$ ${\mathrm T}^2$
${{\mathrm {SO}}}(4,1)$ ${{\mathrm {SO}}}(4)$ ${{\mathrm U}}(2)$
: The two possibilities for $({{\mathrm G}},{\mathrm L},{{\mathrm K}})$.[]{data-label="ncptSHF"}
We now deal with the two cases separately.
[**1)**]{} [$M = {{\mathrm{SU}}}(2,1)/{\mathrm T}^2$]{}\
Here ${\mathfrak{g}}^{{\scriptscriptstyle}{{\mathbb{C}}}} = {\mathfrak{sl}}(3,{\mathbb{C}})$, and we may think of ${\mathfrak{t}}$ as the abelian subalgebra $${\mathfrak{t}}=\left\{{{\rm diag}}(ia,ib,-ia-ib)\in {\mathfrak{g}}^{{\scriptscriptstyle}{{\mathbb{C}}}}{\ |\ }a,b\in{{\mathbb R}}\right\}.$$ The root system $R$ relative to the Cartan subalgebra ${\mathfrak{t}}^{{\scriptscriptstyle}{{\mathbb{C}}}}$ is given by $\{\pm{\alpha},\pm{\beta},\pm({\alpha}+{\beta})\}$. Without loss of generality, we assume that $\pm{\alpha}$ are the compact roots, i.e., ${\mathfrak{l}}^{{\scriptscriptstyle}{{\mathbb{C}}}} = {\mathfrak{t}}^{{\scriptscriptstyle}{{\mathbb{C}}}} \oplus {\mathfrak{g}}_{\alpha}\oplus {\mathfrak{g}}_{-{\alpha}}$. Notice that $\overline{E}_{\gamma}=-E_{-{\gamma}}$ for a compact root ${\gamma}\in R$, while $\overline{E}_{\gamma}=E_{-{\gamma}}$ when ${\gamma}$ is noncompact. We can then define the vectors $$\label{realvect}
v_{\gamma}\coloneqq E_{\gamma}+ \overline{E}_{{\gamma}},\quad w_{\gamma}\coloneqq i\left(E_{\gamma}- \overline{E}_{{\gamma}}\right),\quad {\gamma}\in\{{\alpha},{\beta},{\alpha}+{\beta}\},$$ so that if ${\mathfrak{m}}_{\gamma}\coloneqq \mathrm{span}_{{\mathbb R}}(v_{\gamma},w_{\gamma})$, we have ${\mathfrak{m}}= {\mathfrak{m}}_{\alpha}\oplus{\mathfrak{m}}_{\beta}\oplus{\mathfrak{m}}_{{\alpha}+{\beta}}$.
An invariant symplectic form ${\omega}$ is determined by an element $z\in {\mathfrak{t}}\smallsetminus\{0\}$, and for every root ${\gamma}\in R$ the only nonzero components of ${\omega}$ on ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}}$ are given by $${\omega}(E_{\gamma},E_{-{\gamma}}) = B([z,E_{\gamma}],E_{-{\gamma}}) = {\gamma}(z).$$ If we fix $z_{a,b}\coloneqq{{\rm diag}}(ia,ib,-i(a+b))\in{\mathfrak{t}}$, we have $$\renewcommand{1}{1.2}
\begin{array}{rcl}
{\omega}(E_{\alpha},E_{-{\alpha}}) &=& {\alpha}(z_{a,b})~=~i(a-b),\\
{\omega}(E_{{\beta}},E_{-{\beta}}) &=& {\beta}(z_{a,b})~=~i(a+2b),\\
{\omega}(E_{{\alpha}+{\beta}},E_{-{\alpha}-{\beta}}) &=& ({\alpha}+{\beta})(z_{a,b})~=~i(2a+b).
\end{array}
\renewcommand{1}{1}$$
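These three pairings can be checked independently by realizing the roots of ${\mathfrak{sl}}(3,{\mathbb{C}})$ as differences of diagonal entries of $z_{a,b}$; the labelling ${\alpha}=\varepsilon_1-\varepsilon_2$, ${\beta}=\varepsilon_2-\varepsilon_3$ is our choice, consistent with the formulas above though not fixed explicitly in the text. A minimal symbolic sketch:

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)
I = sp.I

# z_{a,b} = diag(ia, ib, -i(a+b)) in the Cartan subalgebra t of sl(3,C)
z = [I*a, I*b, -I*(a + b)]

# roots of sl(3,C) act on diagonal matrices as differences of entries;
# we label alpha = eps_1 - eps_2 and beta = eps_2 - eps_3 (our choice)
def root(j, k, h):
    return h[j] - h[k]

alpha_z = sp.expand(root(0, 1, z))   # alpha(z_{a,b}) = i(a-b)
beta_z = sp.expand(root(1, 2, z))    # beta(z_{a,b}) = i(a+2b)
ab_z = sp.expand(root(0, 2, z))      # (alpha+beta)(z_{a,b}) = i(2a+b)

print(alpha_z, beta_z, ab_z)
```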
Let $\{E^{\gamma}\}_{{\gamma}\in R}$ denote the basis of $({{\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb{C}}}}})^*$ which is dual to the basis given by the root vectors $\{E_{\gamma}\}_{{\gamma}\in R}$. Then, we can write $$\label{omegaab}
\omega = i(a-b)\,E^{\alpha}{\wedge}E^{-{\alpha}} + i(a+2b)\,E^{\beta}{\wedge}E^{-{\beta}} + i(2a+b)\,E^{{\alpha}+{\beta}} {\wedge}E^{-{\alpha}-{\beta}},$$ and the volume form induced by $\omega$ on ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}}$ is $$\label{volomg}
\frac{\omega^3}{6} = i(b-a)(a+2b)(2a+b) E^{\alpha}{\wedge}E^{-{\alpha}} {\wedge}E^{\beta}{\wedge}E^{-{\beta}} {\wedge}E^{{\alpha}+{\beta}} {\wedge}E^{-{\alpha}-{\beta}}.$$ We introduce the real volume form $$\label{Omega}
\Omega\coloneqq iE^{\alpha}{\wedge}E^{-{\alpha}} {\wedge}E^{\beta}{\wedge}E^{-{\beta}} {\wedge}E^{{\alpha}+{\beta}} {\wedge}E^{-{\alpha}-{\beta}}\in\Lambda^6(({\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}})^*).$$ Observe that $\omega^3$ and $\Omega$ define the same orientation if and only if $(b-a)(a+2b)(2a+b)>0$.
In the next lemma, we describe closed invariant $3$-forms on $M.$
\[Lpsi\] Let $\psi\in \Lambda^3({\mathfrak{m}}^*)$ be a nonzero $\operatorname{Ad}({\mathrm T}^2)$-invariant 3-form whose corresponding form on $M$ is closed. Then, the 3-form $\psi$ on ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}}$ can be written as $$\label{psi}
\psi = i q \left( E^{{\alpha}}\wedge E^{{\beta}}\wedge E^{-{\alpha}-{\beta}} + E^{-{\alpha}}\wedge E^{-{\beta}}\wedge E^{{\alpha}+{\beta}}\right),$$ for a suitable $q\in {{\mathbb R}}\smallsetminus\{0\}$.
The invariance of $\psi$ under the adjoint action of the Cartan subalgebra implies that $$({\gamma}_1 + {\gamma}_2 + {\gamma}_3)(H)\, \psi(E_{{\gamma}_1},E_{{\gamma}_2},E_{{\gamma}_3}) = 0,$$ for all ${\gamma}_1,{\gamma}_2,{\gamma}_3\in R$ and for all $H\in {\mathfrak{t}}$. Thus, $\psi$ is completely determined by the values $$\label{psi1}
\psi(E_{\alpha},E_{\beta},E_{-{\alpha}-{\beta}}) \coloneqq p+iq,$$ and $$\label{psi2}
\psi(E_{-{\alpha}},E_{-{\beta}},E_{{\alpha}+{\beta}}) = - \overline{\psi(E_{\alpha},E_{\beta},E_{-{\alpha}-{\beta}})} = -p + iq,$$ for suitable $p,q\in{{\mathbb R}}$.
Using the Koszul formula for the differential of invariant forms on ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}}$, we have $$d\psi(X_0,X_1,X_2, X_3) = \sum_{i<j} (-1)^{i+j}\ \psi\left([X_i,X_j]_{{\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}}},X_k,X_l\right),\quad X_0,\ldots, X_3\in {\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}},$$ where $\{i,j\} \cup\{k,l\} = \{0,1,2,3\}$ for each $0\leq i<j\leq3$ and $k<l$.
By the $\operatorname{ad}({\mathfrak{t}}^{{\scriptscriptstyle}{{\mathbb{C}}}})$-invariance, we only need to check the values $d\psi\left(E_{{\gamma}_1},E_{-{\gamma}_1},E_{{\gamma}_2},E_{-{\gamma}_2}\right)$, with ${\gamma}_1,{\gamma}_2\in R$. From , , and the identity $N_{-{\alpha},-{\beta}} = - N_{{\alpha},{\beta}}$ (cf. e.g. [@Hel p.176]), we get $$\begin{aligned}
d\psi(E_{\alpha},E_{-{\alpha}},E_{\beta},E_{-{\beta}}) &=& \psi\left([E_{\alpha},E_{\beta}],E_{-{\alpha}},E_{-{\beta}}\right) + \psi\left([E_{-{\alpha}},E_{-{\beta}}], E_{\alpha},E_{\beta}\right) \\
&=& N_{{\alpha},{\beta}}\, \psi(E_{-{\alpha}},E_{-{\beta}},E_{{\alpha}+{\beta}}) + N_{-{\alpha},-{\beta}}\, \psi(E_{\alpha},E_{\beta},E_{-{\alpha}-{\beta}})\\
&=& N_{{\alpha},{\beta}}\, [-p+iq -(p+iq)] \\
&=& -2p N_{{\alpha},{\beta}}. \end{aligned}$$ Similarly, we obtain $$d{\psi}(E_{\alpha},E_{-{\alpha}},E_{{\alpha}+{\beta}},E_{-{\alpha}-{\beta}}) = 2pN_{{\alpha},{\beta}},\quad d{\psi}(E_{\beta},E_{-{\beta}},E_{{\alpha}+{\beta}},E_{-{\alpha}-{\beta}}) = 2pN_{{\alpha},{\beta}}.$$ Hence, the condition $d\psi=0$ is equivalent to $p=0$.
Throughout the following, we will consider a closed invariant $3$-form ${\psi}$ as in Lemma \[Lpsi\]. The next result proves the compatibility condition and the stability of ${\psi}$.
\[Lstab\] Let $\psi$ be a closed invariant $3$-form on ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}}$ as in . Then, $\psi$ is compatible with every invariant symplectic form ${\omega}$. Moreover, $\psi$ is always stable, and it induces an invariant almost complex structure $J\in\operatorname{End}({\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}})$ such that $$\label{J}
J(E_{\alpha}) =- i{\delta}_{a,b}\, E_{\alpha},\quad J(E_{{\beta}}) = -i{\delta}_{a,b}\, E_{{\beta}},\quad J(E_{{\alpha}+{\beta}}) = i{\delta}_{a,b}\, E_{{\alpha}+{\beta}},$$ where ${\delta}_{a,b}$ is the sign of $(b-a)(a+2b)(2a+b)$.
First, we observe that ${\omega}\wedge\psi=0$, since there are no non-trivial invariant $5$-forms (or, equivalently, 1-forms) on ${\mathfrak{m}}$.
In order to check the stability of $\psi$ and compute the almost complex structure induced by it and $\omega^3$, we complexify the relation for the endomorphism $S_{\psi}$ and we fix the real volume form ${\delta}_{a,b}\,\Omega$ (cf. and ). In this way, we obtain a map $S_{\psi}\in\operatorname{End}({\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}})$ such that $S_{\psi}({\mathfrak{m}})\subseteq{\mathfrak{m}}$ and $S_{\psi}^2 = P(\psi) \mathrm{Id}$. A simple computation shows that for every ${\gamma}\in R$ $$S_{\psi}(E_{\gamma}) = c_{\gamma}E_{\gamma},$$ where $$c_{\alpha}= c_{\beta}= - c_{{\alpha}+{\beta}} = - {\delta}_{a,b}\,i\,q^2,\quad\mbox{ and }\quad c_{-{\gamma}} = - c_{\gamma}.$$ Consequently, $P(\psi) = - q^4<0$. The expression of $J$ can be obtained from .
Since $\omega{\wedge}{\psi}=0$, we can consider the $J$-invariant symmetric bilinear form $g \coloneqq {\omega}(\cdot,J\cdot)$. It is positive definite if and only if $$\begin{aligned}
0 &<& g(v_{\alpha},v_{\alpha}) = 2\,{\omega}(JE_{\alpha},E_{-{\alpha}}) = -2i{\delta}_{a,b}\,{\omega}(E_{\alpha},E_{-{\alpha}}) = 2{\delta}_{a,b}\,(a-b),\\
0 &<& g(v_{\beta},v_{\beta}) = -2\,{\omega}(JE_{\beta},E_{-{\beta}}) = 2i{\delta}_{a,b}\,{\omega}(E_{\beta},E_{-{\beta}}) = -2{\delta}_{a,b}\,(a+2b),\\
0 &<& g(v_{{\alpha}+{\beta}},v_{{\alpha}+{\beta}}) = -2\,{\omega}(JE_{{\alpha}+{\beta}},E_{-{\alpha}-{\beta}}) = -2i{\delta}_{a,b}\,{\omega}(E_{{\alpha}+{\beta}},E_{-{\alpha}-{\beta}}) = 2{\delta}_{a,b}\,(2a+b).\end{aligned}$$ Therefore, the set ${\mathcal{Q}}$ of admissible real parameters $(a,b)$ can be written as ${\mathcal{Q}}={\mathcal{A}}\cup (-{\mathcal{A}})$, where $${\mathcal{A}}\coloneqq \left\{(a,b)\left|\ 0< -\frac{a}{2} < b < -2a \right.\right\}.$$ Note that ${\delta}_{a,b}<0$ and ${\delta}_{-a,-b}>0$ for $(a,b)\in {\mathcal{A}}$.
The last condition we need is the normalization . Using and , we see that $$\label{psim1}
{\widehat{\psi}}= -{\delta}_{a,b}\,q \left( E^{{\alpha}}\wedge E^{{\beta}}\wedge E^{-{\alpha}-{\beta}} - E^{-{\alpha}}\wedge E^{-{\beta}}\wedge E^{{\alpha}+{\beta}}\right).$$ Thus, $${\psi}\wedge{\widehat{\psi}}= 2\,{\delta}_{a,b}\,q^2\,\Omega.$$ Combining this identity with and gives $$\label{normalab}
q^2 = 2 \left|(b-a)(a+2b)(2a+b)\right|,$$ which determines $q$ up to a sign. This provides two invariant SHF structures, namely $(\omega,{\psi})$ and $(\omega,-{\psi})$, which induce isomorphic ${{\mathrm{SU}}}(3)$-reductions. Hence, we assume $q$ to be positive.
Summing up, for any choice of real numbers $(a,b)\in{\mathcal{Q}}$, there is an $\operatorname{Ad}({\mathrm T}^2)$-invariant SHF structure on ${\mathfrak{m}}$ defined by the 2-form $\omega$ and the 3-form ${\psi}$ , with $q>0$ satisfying . Moreover, the Ricci tensor of any metric $g$ in this family is $J$-Hermitian. Indeed, ${\mathfrak{m}}$ is the sum of mutually inequivalent ${\mathrm T}^2$-modules, and on each module the invariant bilinear form ${{\rm Ric}}(g)$ and the metric $g$ are a multiple of each other.
Now, we investigate when two invariant SHF structures corresponding to different values of the real parameters $(a,b)\in{\mathcal{Q}}$ are isomorphic.
Since the transformation $(a,b)\mapsto (-a,-b)$ maps the $2$-form $\omega$ corresponding to $(a,b)$ into its opposite, it leaves the metric ${\omega}(\cdot,J\cdot)$ invariant (cf. ). Note that the standard embedding of ${\mathfrak{g}}$ into ${\mathfrak{sl}}(3,{\mathbb{C}})$ (see e.g. [@Hel p. 446]) is invariant under the action of the conjugation $\theta$ of ${\mathfrak{sl}}(3,{\mathbb{C}})$ with respect to the real form ${\mathfrak{sl}}(3,{\mathbb{R}})$. The involution $\theta$ preserves ${\mathfrak{t}}$, and $\theta|_{{\mathfrak{t}}} =-\mathrm{Id}$. The induced map $\hat\theta:M\to M$ is a diffeomorphism with $\hat\theta^*({\omega})=-{\omega}$ and $\hat\theta^*({\psi})={\psi}$. Thus, the SHF structures corresponding to the pairs $(a,b)$ and $(-a,-b)$ are isomorphic, and we can reduce to considering $(a,b)\in{\mathcal{A}}$.
For any nonzero $\lambda\in{{\mathbb R}}^{{\scriptscriptstyle}+}$, the SHF structures associated with $(a,b)$ and $(\lambda a,\lambda b)$ are homothetic, i.e., the defining differential forms and the induced metrics are homothetic. Then, we can restrict to a subset of ${\mathcal{A}}$ where the volume form is fixed, e.g. $${\mathcal{V}}\coloneqq \left\{(a,b)\in {\mathcal{A}}{\ |\ }(b-a)(a+2b)(2a+b)=-1\right\}.$$
We now claim that the SHF structures corresponding to the pairs $(a,b)$ and $(b,a)$ in ${\mathcal{A}}$ are isomorphic. Indeed, the conjugation in ${{\mathrm G}}$ by the element $$u\coloneqq \left(
\begin{array}{cc:c}
0 & 1 & 0\\
1 & 0 & 0\\ \hdashline
0 & 0 & -1
\end{array}
\right)
\in \mathrm{S}({{\mathrm U}}(2)\times{{\mathrm U}}(1))$$ preserves the isotropy $\mathrm{T}^2$ mapping $z_{a,b}$ into $z_{b,a}$. Consequently, it induces a diffeomorphism $\phi_{u}:M\rightarrow M,$ which is easily seen to be an isomorphism of the considered SHF structures. Therefore, we can further reduce to the set $${{\mathcal{V}}_{{\scriptscriptstyle}\mathrm{SHF}}}\coloneqq \left\{(a,b)\in{\mathcal{V}}\ \left|\ 0< -a \leq b <-2a \right.\right\},$$ which is represented in Figure \[figureset\].
To conclude our investigation, we prove that the SHF structures corresponding to different points in ${{\mathcal{V}}_{{\scriptscriptstyle}\mathrm{SHF}}}$ are pairwise non-isomorphic by showing that the induced metrics have different scalar curvature. From the expression of ${\widehat{\psi}}$ and the identity $d{\widehat{\psi}}=\sigma{\wedge}\omega$, we can determine the intrinsic torsion form $\sigma\in\left[\Lambda^{{\scriptscriptstyle}1,1}_{{\scriptscriptstyle}0}({\mathfrak{m}}^*)\right]$ explicitly. Then, by we have $${{\rm Scal}}(g) = -\frac{1}{2}|\sigma|^2 = -24\,N_{{\alpha},{\beta}}^2\left(a^2+ab+b^2\right).$$ Using the method of Lagrange multipliers, it is straightforward to check that the function ${{\rm Scal}}(g)$ subject to the constraint $(b-a)(a+2b)(2a+b)=-1$ has a unique critical point at $C = \left(-\frac{1}{\sqrt[3]{2}},\frac{1}{\sqrt[3]{2}}\right)\in{{\mathcal{V}}_{{\scriptscriptstyle}\mathrm{SHF}}}$. Moreover, ${{\rm Scal}}(g)$ is easily seen to be strictly decreasing when the point $(a,b)\in{{\mathcal{V}}_{{\scriptscriptstyle}\mathrm{SHF}}}$ moves away from $C$.
(-4.5,0) – (0.5,0); (0.4,0) node\[above\] [$a$]{}; (0,-0.5) – (0,4.5); (0,4.35) node\[right\] [$b$]{}; plot(, -0.5\*); plot(, -2\*); plot(, -); plot([-1/((2\*\*\*+3\*\*-3\*-2)\^(1/3))]{}, [-/((2\*\*\*+3\*\*-3\*-2)\^(1/3))]{}); plot([-1/((2\*\*\*+3\*\*-3\*-2)\^(1/3))]{}, [-/((2\*\*\*+3\*\*-3\*-2)\^(1/3))]{});
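The Lagrange-multiplier claim can be verified symbolically: the point $C$ lies on the constraint curve, and the gradients of ${{\rm Scal}}(g)$ (up to the positive factor $24N_{{\alpha},{\beta}}^2$) and of the constraint function are parallel at $C$. A sketch:

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)

f = -(a**2 + a*b + b**2)                # Scal(g) divided by the positive factor 24*N^2
g_constr = (b - a)*(a + 2*b)*(2*a + b)  # constraint: g_constr = -1 on V

# the claimed critical point C = (-2^{-1/3}, 2^{-1/3})
C = {a: -1/sp.cbrt(2), b: 1/sp.cbrt(2)}

# C satisfies the constraint
on_curve = sp.expand(g_constr.subs(C) + 1)

# Lagrange condition: grad f and grad g_constr are parallel at C,
# i.e. the 2x2 determinant of the two gradients vanishes
J = sp.Matrix([[sp.diff(f, a), sp.diff(f, b)],
               [sp.diff(g_constr, a), sp.diff(g_constr, b)]])
lagrange = sp.expand(J.det().subs(C))

print(on_curve, lagrange)
```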
Using the properties of the Chern connection $\nabla$ of a homogeneous almost Hermitian space (see e.g. [@Pod $\S$2]), it is possible to check that the natural operator $\Lambda_{{\mathfrak{m}}}:{\mathfrak{m}}\rightarrow \operatorname{End}({\mathfrak{m}})$ associated with $\nabla$ is identically zero for all almost Kähler structures underlying the SHF structures parametrized by ${{\mathcal{V}}_{{\scriptscriptstyle}\mathrm{SHF}}}$. Consequently, all $(g,J)$ in this family share the same Chern connection, which coincides with the canonical connection of the homogeneous space ${{\mathrm{SU}}}(2,1)/\mathrm{T}^2$.
[**2)**]{} [$M = {{\mathrm {SO}}}(4,1)/{{\mathrm U}}(2)$]{}\
In this case, ${\mathfrak{g}}^{{\scriptscriptstyle}{{\mathbb{C}}}}={\mathfrak{so}}(5,\mathbb C)$. We fix the standard maximal abelian subalgebra ${\mathfrak{t}}$ of the compact real form ${\mathfrak{so}}(5)$ and the corresponding root system $R=\{\pm {\alpha},\pm{\beta},\pm({\alpha}+{\beta}),\pm({\alpha}+2{\beta})\}$. Without loss of generality, we may choose $R_{\mathfrak{k}}= \{\pm({\alpha}+2{\beta})\}$ and $\{\pm{\alpha}\}$ as compact roots, and $\{\pm({\alpha}+{\beta}),\pm {\beta}\}$ as noncompact roots. Note that $R_{\mathfrak{k}}\cup\{\pm{\alpha}\}$ is the root system of ${\mathfrak{l}}^{{\scriptscriptstyle}{{\mathbb{C}}}}\cong{\mathfrak{so}}(4,{\mathbb{C}})$. The tangent space ${\mathfrak{m}}$ splits as the sum of two inequivalent ${{\mathrm U}}(2)$-submodules ${\mathfrak{m}}={\mathfrak{m}}_1\oplus{\mathfrak{m}}_2$, with $\dim_{{{\mathbb R}}}{\mathfrak{m}}_1 =2$ and $\dim_{{{\mathbb R}}}{\mathfrak{m}}_2 = 4$. In particular, if we define the vectors $v_{\gamma},w_{\gamma}$ as in , then ${\mathfrak{m}}_1=\mathrm{span}_{{\mathbb R}}(v_{\alpha},w_{\alpha})$ and ${\mathfrak{m}}_2=\mathrm{span}_{{\mathbb R}}(v_{\beta},w_{\beta},v_{{\alpha}+{\beta}},w_{{\alpha}+{\beta}})$.
Any invariant symplectic form ${\omega}$ on ${\mathfrak{m}}$ is determined by a nonzero element $z$ in the one-dimensional center ${\mathfrak{z}}$ of ${\mathfrak{k}}\cong \mathfrak{u}(2)$. Since the root ${\alpha}+2{\beta}\in R_{\mathfrak{k}}$ vanishes on $z$, we have ${\alpha}(z) = -2{\beta}(z)$. Setting ${\alpha}(z) = ia$, $a\in {{\mathbb R}}\smallsetminus\{0\}$, we obtain the following expression for the complexified $\omega$ on ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}}$ $$\omega = ia\,E^{\alpha}{\wedge}E^{-{\alpha}} -\frac12\,ia\,E^{\beta}{\wedge}E^{-{\beta}} + \frac12\,ia\,E^{{\alpha}+{\beta}} {\wedge}E^{-{\alpha}-{\beta}},$$ $\{E^{\gamma}\}_{{\gamma}\in R}$ being the basis of $({{\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb{C}}}}})^*$ dual to $\{E_{\gamma}\}_{{\gamma}\in R}$.
We consider an invariant 3-form $\psi$ on ${\mathfrak{m}}$ and its complexification on ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb{C}}}}$. As in Lemma \[Lpsi\], the $\operatorname{ad}({\mathfrak{t}}^{{\scriptscriptstyle}{{\mathbb{C}}}})$-invariance implies that $\psi$ is completely determined by the value $$\psi(E_{\alpha},E_{\beta},E_{-{\alpha}-{\beta}}) \coloneqq p + i q,$$ and its conjugate $$\psi(E_{-{\alpha}},E_{-{\beta}},E_{{\alpha}+{\beta}}) = - \overline{\psi(E_{\alpha},E_{\beta},E_{-{\alpha}-{\beta}})} = -p + iq,$$ for some $p,q\in {\mathbb{R}}$. In this case, we also have to check the invariance under $\operatorname{Ad}({{\mathrm U}}(2))$. This follows from the vanishing of $\psi(E_{\alpha},E_{\beta},[E_{{\alpha}+2{\beta}},E_{-{\alpha}-{\beta}}])$, $\psi(E_{\alpha},[E_{-{\alpha}-2{\beta}},E_{\beta}],E_{-{\alpha}-{\beta}})$, and $\psi(E_{-{\alpha}},E_{-{\beta}},[E_{-{\alpha}-2{\beta}},E_{{\alpha}+{\beta}}])$.
Using the same arguments as in the proofs of Lemma \[Lpsi\] and Lemma \[Lstab\], we can show the following.
Let $\psi\in \Lambda^3({\mathfrak{m}}^*)$ be a nonzero $\operatorname{Ad}({{\mathrm U}}(2))$-invariant 3-form whose corresponding form on $M$ is closed. Then, the complexified $\psi$ on ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}}$ can be written as $$\label{psiu2}
\psi = i q\, \left( E^{{\alpha}}\wedge E^{{\beta}}\wedge E^{-{\alpha}-{\beta}} + E^{-{\alpha}}\wedge E^{-{\beta}}\wedge E^{{\alpha}+{\beta}}\right),$$ for a suitable $q\in{{\mathbb R}}\smallsetminus\{0\}$. Consequently, $\psi$ is compatible with every invariant symplectic form ${\omega}$, it is always stable, and it induces an invariant almost complex structure $J\in\operatorname{End}({\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}})$ such that $$J(E_{\alpha}) = -i{\delta}_a\, E_{\alpha},\quad J(E_{{\beta}}) = -i{\delta}_a\, E_{{\beta}},\quad J(E_{{\alpha}+{\beta}}) = i{\delta}_a\, E_{{\alpha}+{\beta}},$$ where ${\delta}_a$ is the sign of $a$.
The $J$-invariant symmetric bilinear form $g \coloneqq {\omega}(\cdot,J\cdot)$ is positive definite for all $a\in{{\mathbb R}}\smallsetminus\{0\}$. Indeed $$g(v_{\alpha},v_{\alpha}) = 2{\delta}_a a,\quad g(v_{\beta},v_{\beta}) = {\delta}_a a,\quad g(v_{{\alpha}+{\beta}},v_{{\alpha}+{\beta}}) = {\delta}_a a.$$ Finally, we observe that $${\omega}^3 = \frac32a^3\,\Omega,\quad {\psi}\wedge{\widehat{\psi}}= 2{\delta}_a\,q^2\,\Omega,$$ where $\Omega$ is a real volume form on ${\mathfrak{m}}^{{\scriptscriptstyle}{{\mathbb C}}}$ defined as in . Therefore, the normalization condition gives $$q^2 = \frac{1}{2}\,{\delta}_a\, a^3.$$
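The last step can be made explicit. Assuming the usual normalization condition for SU(3)-structures in Hitchin's stable-forms formalism, $\psi\wedge\widehat{\psi} = \tfrac{2}{3}\,{\omega}^3$ (this convention is an assumption here, chosen to match the quoted result), the constraint on $q$ follows in one line from the two volume-form identities above:

```latex
\psi\wedge\widehat{\psi} = \tfrac{2}{3}\,\omega^{3}
\;\Longleftrightarrow\;
2{\delta}_a\,q^{2}\,\Omega = \tfrac{2}{3}\cdot\tfrac{3}{2}\,a^{3}\,\Omega = a^{3}\,\Omega
\;\Longleftrightarrow\;
q^{2} = \tfrac{1}{2}\,{\delta}_a\,a^{3}.
```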
Summarizing, we have obtained a 1-parameter family of invariant SHF structures on $M$ which are clearly pairwise homothetic. As the tangent space ${\mathfrak{m}}$ has two mutually inequivalent ${{\mathrm U}}(2)$-submodules, on each module the Ricci tensor of the SHF structure is a multiple of the metric. Hence, it is $J$-Hermitian.
[**Acknowledgements.**]{} The authors would like to thank Anna Fino for useful comments.
---
abstract: 'We demonstrate that in a triangular configuration of an optical lattice of two atomic species a variety of novel spin-$1/2$ Hamiltonians can be generated. They include effective three-spin interactions resulting from the possibility of atoms tunneling along two different paths. This motivates the study of ground state properties of various three-spin Hamiltonians in terms of their two-point and n-point correlations as well as the localizable entanglement. We present a Hamiltonian with a finite energy gap above its unique ground state for which the localizable entanglement length diverges for a wide interval of applied external fields, while at the same time the classical correlation length remains finite.'
address: |
${}^{1}$ Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB3 0WA, UK,\
${}^{2}$ Quantum Optics and Laser Science Group, Blackett Laboratory, Imperial College, London SW7 2BW, UK.
author:
- 'Jiannis K. Pachos${}^{1}$ and Martin B. Plenio${}^{2}$'
title: 'Three-spin interactions in optical lattices and criticality in cluster Hamiltonians'
---
The combination of cold atom technology with optical lattices [@Raithel; @Mandel] gives rise to a variety of possibilities for constructing spin Hamiltonians [@Kuklov; @Duan]. This is particularly appealing as the high degree of isolation from the environment that can be achieved in these systems allows for the study of these Hamiltonians under idealised laboratory conditions. In parallel, techniques have been developed for minimising imperfections and impurities [@Cirac1; @Carl] in the implementation of the desired structures and for their subsequent probing and measurement [@Roberts]. These achievements permit the experimental investigation of Hamiltonians that are of interest in areas such as quantum information or condensed matter physics with the added advantage of a remarkable freedom in the choice of external parameters. Presently, attention both in condensed matter physics and in cold atom research is focusing on two-spin interactions as these are most readily accessible experimentally. However, the unique experimental capability provided by cold atom technology allows us to relax this restriction. Here we demonstrate that cold atom technology provides a laboratory to generate and study higher order effects such as three-spin interactions that give rise to unique entanglement properties.
The present work serves two purposes. Firstly, it demonstrates that in a two-species Bose-Hubbard model in a triangular configuration a wide range of Hamiltonians can be generated that include effective three-spin interactions. These result from the possibility of atomic tunneling through different paths from one vertex to the other. The construction can be extended to a one-dimensional spin chain with three-spin interactions. Secondly, we take this novel experimental capability as a motivation to study unique ground state properties of Hamiltonians that include three-spin interactions. In this context one can study possible quantum phase transitions by considering both the classical correlation properties and the entanglement properties of these systems. Specifically, we consider the so-called cluster Hamiltonian and its ground state, the cluster state, which has previously been shown to play an important role as a resource in the context of quantum computation [@Briegel; @R; @99]. Subject to an additional Zeeman term, the combined Hamiltonian possesses a finite energy gap above its unique ground state in a finite parameter range, hence exhibiting no critical behaviour in the classical correlations in that regime. We shall show that at the same time it exhibits critical behaviour in its entanglement properties due to its three spin-1/2 interaction term. This is manifested by a diverging entanglement length of the localizable entanglement [@Verstraete; @PC; @03]. Our example demonstrates that divergences in entanglement properties are not necessarily related to the existence of classical critical points; contrary to popular belief, the latter give a rather incomplete description of the long-range quantum correlations [@Sachdev]. A related example was arrived at independently in [@Verstraete; @MC; @03].
Consider an ensemble of ultracold bosonic atoms confined in an optical lattice formed by several standing wave laser beams [@Kuklov; @Duan; @Jaksch]. Each atom is assumed to have two relevant internal states, denoted with the index $\sigma=a,b$, which are trapped by independent standing wave laser beams differing in polarisation. We are interested in the regime where the atoms are sufficiently cooled and the periodic potential is high enough so that the atoms will be confined to the lowest Bloch band and the low energy evolution can be described by the two species Bose-Hubbard Hamiltonian [@Jaksch]. The tunneling couplings $J^\sigma$ and the collisional couplings $U_{\sigma
\sigma'}$ can be widely varied by adjusting the amplitude of the lattice laser fields. For the generation of the multi-particle interactions discussed here we require large collisional couplings in order to have a significant effect within the decoherence time of the system. This can be achieved experimentally by Feshbach resonances [@Inouye; @Donley; @Kokkelmans], for which first theoretical [@Mies] and experimental [@Donley1] advances are already promising.
Let us begin by considering the case of only three sites in a triangular configuration (see Figure \[chain\]) with tunneling coupling activated between all three of them. We are interested in the regime where the tunneling couplings are much smaller than the collisional ones, $ J^\sigma \ll U_{\sigma \sigma'}$ which corresponds to the Mott insulating phase and we demand that we have on average one atom per lattice site. Hence, the basis of states of site $i$ can be defined by $|\!\!\uparrow\rangle \equiv
|n^a=1, n^b=0\rangle$ and $|\!\!\downarrow \rangle \equiv|n^a=0,
n^b=1\rangle$, where $n^a$ and $n^b$ are the number of atoms in state $a$ or $b$ respectively. It is possible to expand the resulting evolution generated by the Bose-Hubbard Hamiltonian in terms of the small parameters $J^\sigma/U_{\sigma \sigma'}$. In an interaction picture with respect to the collisional Hamiltonian, $H^{(0)}=\frac{1}{2} \sum _{i \sigma \sigma'} U_{\sigma \sigma'}
a^{\dagger}_{i\sigma}a^{\dagger}_{i\sigma'}a_{i\sigma'}a_{i\sigma}$, one obtains the effective evolution from the perturbation expansion up to the third order with respect to the tunneling interaction, $V=-\sum_{i\sigma} (J^\sigma_{i} a_{i\sigma}^\dagger
a_{i+1 \sigma} +\text{H.c.})$, given by $$H=-\sum _\gamma {V_{\alpha \gamma} V_{\gamma \beta} \over
E_\gamma} + \sum _{\gamma \delta} {V_{\alpha \gamma} V_{\gamma
\delta} V_{\delta \beta} \over E_\gamma E_\delta}.$$ The indices $\alpha$, $\beta$ refer to states with one atom per site while $\gamma$, $\delta$ refer to states with two or more atomic populations per site, $E_\gamma$ are the eigenvalues of the collisional part, $H^{(0)}$, while we neglected fast rotating terms effective for long time intervals [@Pachos]. Written explicitly in terms of spin operators we obtain $$\begin{aligned}
&&H = \sum_{i=1}^3 \Big[ \vec{B} \cdot \vec{\sigma}_i
+\lambda^{(1)} \sigma^z_i \sigma^z_{i+1} + \lambda^{(2)}
(\sigma^x_i \sigma^x_{i+1} +\sigma^y_i \sigma^y_{i+1}) \nonumber\\ &&
+\lambda^{(3)} \sigma^z_i \sigma^z_{i+1} \sigma^z_{i+2} +
\lambda^{(4)} (\sigma^x_i \sigma^z_{i+1} \sigma^x_{i+2} +
\sigma^y_i \sigma^z_{i+1} \sigma^y_{i+2}) \Big]. \label{ham1}\end{aligned}$$ The couplings $\lambda^{(i)}$ are given as an expansions in ${J^\sigma}/U_{\sigma \sigma'}$ by $$\begin{aligned}
\lambda^{(1)} = &&- {{J^a}^2 \over U_{aa}} - {{J^b}^2 \over
U_{bb}} - {9 \over 2} {{J^a}^3 \over U_{aa}^2} - {9 \over
2}{{J^b}^3 \over U_{bb}^2} + {1\over 2} {{J^a}^2+{J^b}^2 \over
U_{ab}} \nonumber\\ && +{1 \over 2} {{J^a}^3+ {J^b}^3 \over
U_{ab}^2} +{ 1\over U_{ab}} \big( {{J^a}^3 \over U_{aa}} + {
{J^b}^3 \over U_{bb}} \big),\nonumber\\
\lambda^{(2)}=&&- {J^a J^b \over U_{ab}}\big(1+{J^a \over U_{aa}}
+ { J^b \over U_{bb}} +{3 \over 2} { J^a + J^b \over U_{ab}}\big)
\nonumber\\ &&- {J^a J^b \over 2} \big( {J^a \over U_{aa}^2} +{J^b
\over U_{bb}^2} \big), \nonumber\\
\lambda^{(3)}=&&- {3 \over 2} {{J^a}^3 \over U_{aa}^2}+ {3 \over
2} {{J^b}^3 \over U_{bb}^2} + {1 \over U_{ab}} \big( {{J^a}^3
\over U_{aa}} - {{J^b}^3 \over U_{bb}} \big),\nonumber\\
\lambda^{(4)}=&& -{{J^a} J^b \over U_{ab}} \big({{J^a} \over
U_{aa}} - {J^b \over U_{bb}} \big)- {J^a J^b \over 2} \big({J^a \over
U_{aa}^2} -{J^b \over U_{bb}^2} \big).
\nonumber\end{aligned}$$ The local field $\vec{B}$ can be arbitrarily tuned by applying appropriately detuned laser fields while we need to compensate for single particle phase rotations of the form $B_z \sum_i\sigma^z_i$ with $$B_z = -{{J^a}^2\over U_{aa}}(2+\frac{ 9J^a}{2U_{aa}}+\frac{J^a}{U_{ab}})
+ {{J^b}^2\over U_{bb}}(2+\frac{
9J^b}{2U_{bb}}+\frac{J^b}{U_{ab}}).$$ One can isolate different parts from Eq. (\[ham1\]), each one including a three-spin interaction term, by varying the tunneling and/or the collisional couplings appropriately so that particular $\lambda^{(i)}$ terms such as the two spin interactions vanish, while others can be varied freely.
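The tunability of the couplings is easy to explore numerically. The following sketch transcribes the $\lambda^{(i)}$ expansions printed above into plain Python (the compensation field $B_z$ is omitted for brevity); the parameter values are illustrative only, not taken from any experiment. In particular, it shows that at a species-symmetric point ($J^a = J^b$, $U_{aa}=U_{bb}$) the three-spin couplings $\lambda^{(3)}$ and $\lambda^{(4)}$ vanish identically, as is apparent from their antisymmetry under $a \leftrightarrow b$:

```python
# Transcription of the lambda^(i) expansions given in the text
# (third order in J/U).  Parameter values below are illustrative only.
def couplings(Ja, Jb, Uaa, Ubb, Uab):
    lam1 = (-Ja**2 / Uaa - Jb**2 / Ubb
            - 4.5 * Ja**3 / Uaa**2 - 4.5 * Jb**3 / Ubb**2
            + 0.5 * (Ja**2 + Jb**2) / Uab
            + 0.5 * (Ja**3 + Jb**3) / Uab**2
            + (Ja**3 / Uaa + Jb**3 / Ubb) / Uab)
    lam2 = (-(Ja * Jb / Uab) * (1 + Ja / Uaa + Jb / Ubb
                                + 1.5 * (Ja + Jb) / Uab)
            - 0.5 * Ja * Jb * (Ja / Uaa**2 + Jb / Ubb**2))
    lam3 = (-1.5 * Ja**3 / Uaa**2 + 1.5 * Jb**3 / Ubb**2
            + (Ja**3 / Uaa - Jb**3 / Ubb) / Uab)
    lam4 = (-(Ja * Jb / Uab) * (Ja / Uaa - Jb / Ubb)
            - 0.5 * Ja * Jb * (Ja / Uaa**2 - Jb / Ubb**2))
    return lam1, lam2, lam3, lam4

# Species-symmetric point: the three-spin couplings vanish identically.
l1, l2, l3, l4 = couplings(0.1, 0.1, 1.0, 1.0, 1.0)
print(l3, l4)   # both exactly 0

# Breaking the a/b symmetry switches the three-spin terms on.
print(couplings(0.1, 0.05, 1.0, 1.2, 0.8))
```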
By employing additional Raman transitions in such a way as to couple the states $a$ and $b$ during tunneling it is possible to obtain variations of the above Hamiltonian [@Duan]. Indeed, Raman transitions can activate tunneling of the states $|+ \rangle\equiv {1 \over \sqrt{2}}
(|\!\!\uparrow\rangle+|\!\!\downarrow\rangle)$, while the tunneling of the states $|- \rangle\equiv {1 \over \sqrt{2}}
(|\!\!\uparrow\rangle-|\!\!\downarrow\rangle)$ is obstructed. Hence, it is possible to generate different coefficients in front of the two-spin interaction terms $\sigma^x_i \sigma^x_{i+1}$ or $\sigma^y_i \sigma^y_{i+1}$, as these are diagonal in the corresponding bases $|+\rangle$ and $|-\rangle$, respectively. Considering the effect of the Raman transitions on the three-spin interactions, it is possible to generate additional terms of the form $\sigma^x_i \sigma^x_{i+1} \sigma^x_{i+2}$ or $\sigma^y_i
\sigma^y_{i+1} \sigma^y_{i+2}$ with couplings similar to $\lambda^{(3)}$. Note that the effective spin interactions produced by Raman transitions [*do not*]{} preserve the number of particles in each of the species.
In particular, we are interested in obtaining a whole chain of triangles in a zig-zag one-dimensional pattern as in Fig. \[chain\]. Indeed, with this configuration we can extend from a single triangle to a whole triangular ladder. Nevertheless, a careful consideration of the two-spin interactions shows that terms of the form $\sigma^z_i \sigma^z_{i+2}$ also appear, due to the triangular configuration (see Fig. \[chain\]). Hamiltonians involving nearest- and next-to-nearest-neighbour interactions are of interest in their own right (see e.g. Chapter 14 of [@Sachdev] and [@Sachdev1]), but we will not address these systems here. It is possible to introduce a longitudinal optical lattice with half of the initial wavelength and an appropriate amplitude such that it cancels those interactions exactly, finally generating chains with only nearest-neighbour couplings.
[Fig. \[chain\]: zig-zag chain of triangles with sites labeled $i$, $i+1$, $i+2$.]
In a similar fashion it is possible to avoid generation of the term $\sigma^x_i \sigma^x_{i+2} + \sigma^y_i \sigma^y_{i+2}$ by deactivating the longitudinal tunnelling coupling in one of the modes, e.g. the $a$ mode which deactivates the corresponding exchange interaction. We are particularly interested in three-spin interactions and would like to isolate the chain term $\sum_i
(\sigma^x_i \sigma^z_{i+1} \sigma^x_{i+2}+\sigma^y_i
\sigma^z_{i+1} \sigma^y_{i+2})$ from the $\lambda^{(4)}$ term (see Hamiltonian (\[ham1\])), which in addition includes all possible triangular permutations. To achieve that, we could now deactivate the non-longitudinal tunnelling for one of the two modes, e.g. the $a$ mode. With the above procedures we finally obtain a chain Hamiltonian as in (\[ham1\]), where the summation runs up to the total number $N$ of sites. A variety of different Hamiltonians can be generated by different combinations of the above techniques.
In the past, Hamiltonians describing three-spin interactions have received limited attention [@threespinpapers], as they were difficult to implement and control experimentally. The above results demonstrate that Hamiltonians with three-spin interactions can be implemented and controlled across a wide parameter range. One may suspect that ground states of three-spin interaction Hamiltonians exhibit unique properties compared to ground states generated merely by two-spin interactions. This motivates the study of the properties of the ground state of a particular three-spin Hamiltonian in different parametric regimes. Possible phase transitions induced by varying these parameters are explored employing two signatures of critical behaviour that are quite different in nature. In particular, new critical phenomena in three-spin Hamiltonians that cannot be detected on the level of classical correlations will be demonstrated.
\(i) A traditional approach to criticality of the ground state studies two-point correlation functions between spins $1$ and $L$, given by $ C_{1L}^{\alpha \beta} \equiv
\langle \sigma_1 ^{\alpha} \sigma_L^{\beta} \rangle
-\langle \sigma_1 ^{\alpha}\rangle
\langle \sigma_L^{\beta}\rangle
$, for varying $L$, where $\alpha,\beta=x,y,z$. These two-point correlations may exhibit two types of generic behaviours, namely (a) exponential decay in $L$, i.e. the correlation length $\xi$, defined as $$\xi^{-1} \equiv -\lim_{L\rightarrow\infty} \frac{1}{L}\log\,
|C_{1L}^{\alpha \beta}|,
\label{corlength1}$$ is finite or, (b), power-law decay in $L$, i.e. $C_{1L}^{\alpha
\beta}\sim L^{-q}$ for some $q$, which implies an infinite correlation length $\xi$ indicating a critical point in the system [@Sachdev].
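Both generic behaviours are easy to illustrate numerically on synthetic data. The sketch below uses the conventional sign of the estimator, $\xi^{-1} = -\tfrac{1}{L}\log|C_{1L}|$ at finite $L$, so that $\xi$ comes out positive for decaying correlations; the values of $\xi$ and $q$ are arbitrary illustrative choices:

```python
import math

xi_true = 3.0

# (a) Exponential decay C_L = exp(-L/xi): the finite-L estimate
# -L/log|C_L| converges to the finite correlation length xi_true.
for L in (10, 100, 1000):
    C = math.exp(-L / xi_true)
    print(L, -L / math.log(abs(C)))

# (b) Power-law decay C_L ~ L^(-q): the estimate L/(q log L) grows
# without bound, signalling an infinite correlation length, i.e. a
# critical point.
q = 2.0
for L in (10, 100, 1000):
    C = L ** (-q)
    print(L, -L / math.log(abs(C)))
```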
\(ii) While the two-point correlation functions ${\cal
C}_{1L}^{\alpha \beta}$ are a possible indicator for critical behaviour, they provide an incomplete view of the quantum correlations between spins $1$ and $L$. Indeed they ignore correlations through all the other spins by tracing them out. Already the GHZ state $|GHZ\rangle = (|000\rangle +
|111\rangle)/\sqrt{2}$ shows that this loses important information. Tracing out particle $2$ leaves particles $1$ and $3$ in an unentangled state. However, measuring the second particle in the $\sigma_x$-eigenbasis leaves particles $1$ and $3$ in a maximally entangled state. Therefore one may define the localizable entanglement $E^{(loc)}_{1L}$ between spins $1$ and $L$ as the largest average entanglement that can be obtained by performing optimised local measurements on all the other spins [@Verstraete; @PC; @03]. In analogy to Eq. (\[corlength1\]) one can define the entanglement length $$\xi_E^{-1} \equiv -\lim_{L\rightarrow\infty} \frac{1}{L} \log\,
E^{(loc)}_{1L}.
\label{corlength2}$$
It is an interesting question whether criticality according to one of these indicators implies criticality according to the other. The localizable entanglement length is always larger than or equal to the two-point correlation length and, indeed, it has been shown that there are cases where critical behaviour is revealed only by a diverging localizable entanglement length while the classical correlation length remains finite [@Verstraete; @MC; @03]. Such behaviour is also expected to appear when we consider particular three-spin interaction Hamiltonians. To see this, consider the Hamiltonian $$H = \sum_{i} \big( -\sigma^x_{i-1}\sigma^z_i\sigma^x_{i+1} +
B\sigma^z_i \big),
\label{xzxmodel}$$ where we assume periodic boundary conditions. The fact that $\sigma^x_{i-1}\sigma^z_i\sigma^x_{i+1}$ commute for different $i$ and employing raising operator $L^{\dagger}_k=\sigma^x_k -i
\sigma^x_{k-1} \sigma^y_{k}\sigma^x_{k+1}$ allows to determine the entire spectrum of $H$ for $B=0$. The unique ground state of $H$ for $B=0$ is the well-known cluster state [@Briegel; @R; @99; @Verstraete], which has previously been studied as a resource in the context of quantum computation. It possesses a finite energy gap of $\Delta E=2$ above its ground state [@comment]. For finite $B$ the energy eigenvalues of the system can still be found using the Jordan-Wigner transformation and a lengthy but straightforward calculation shows that the energy gap persists for $|B|\neq 1$. The exact solution also shows that the system has critical points for $|B|=1$ at which the two-point correlation length and the entanglement length diverges. For any other value of $B$ and in particular for $B=0$ the system does not exhibit a diverging two-point correlation length as is expected from the finite energy gap above the ground state. Indeed, correlation functions such as $$\begin{aligned}
\label{zzcorrelations}
C^{zz}_{1L} &=& \big(\frac{1}{4\pi}\int_{-2\pi}^{2\pi}\!
\frac{\sin r}{\sqrt{B^2 + 1 + 2B \cos r}}\sin
\frac{(L-1)r}{2} dr \big)^2 \nonumber\\[0.2cm]
-&& \!\!\!\!\!\!\! \big(\frac{1}{4\pi}\int_{-2\pi}^{2\pi}\!
\frac{B+\cos r}{\sqrt{B^2 + 1 + 2B \cos r}}\cos
\frac{(L-1)r}{2} dr \big)^2\end{aligned}$$ can be computed, and the corresponding correlation length can be determined analytically using standard techniques (see, e.g., Fig. \[entanglementlength\]) [@Barouch; @M; @71]. Two-point correlation functions such as Eq. (\[zzcorrelations\]) exhibit a power-law decay at the critical points $|B|=1$, while they decay exponentially for all other values of $B$. This is in contrast to the anisotropic $XY$ model, whose $C^{xx}_{1L}$ correlation function tends to a finite constant in the limit $L\rightarrow\infty$ for $|B|<1$ [@Barouch; @M; @71]. This discrepancy is due to the finite energy gap that the model in Eq. (\[xzxmodel\]) exhibits above a non-degenerate ground state in the interval $|B|<1$.
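The integrals in Eq. (\[zzcorrelations\]) can also be evaluated by straightforward numerical quadrature to check the decay away from criticality. The sketch below uses a simple midpoint rule and an illustrative field value $B=0.5$ (well inside the gapped interval $|B|<1$); it is a sanity check, not a substitute for the analytic treatment:

```python
import math

def Czz(B, L, n=4000):
    # Midpoint-rule evaluation of the two integrals in the text's
    # expression for C^zz_{1L}, both over r in [-2*pi, 2*pi].
    a, b = -2 * math.pi, 2 * math.pi
    h = (b - a) / n
    I1 = I2 = 0.0
    for k in range(n):
        r = a + (k + 0.5) * h
        den = math.sqrt(B * B + 1 + 2 * B * math.cos(r))
        I1 += math.sin(r) / den * math.sin((L - 1) * r / 2) * h
        I2 += (B + math.cos(r)) / den * math.cos((L - 1) * r / 2) * h
    return (I1 / (4 * math.pi)) ** 2 - (I2 / (4 * math.pi)) ** 2

# Away from |B| = 1 the magnitude decays quickly with distance L.
for L in (3, 5, 7, 9, 11):
    print(L, Czz(0.5, L))
```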
When we study three-spin interactions it is natural to consider the behaviour of higher-order correlations. For the ground state at magnetic field $B=0$, all three-point correlations except, obviously, $\langle\sigma^x_{i-1}\sigma^z_{i}\sigma^x_{i+1}\rangle$ vanish. Indeed, if we consider $n>4$ neighbouring sites and choose for each of these randomly one of the operators $\sigma_x,\sigma_y,\sigma_z$ or ${\bf 1}$, then the probability that the resulting correlation will be non-vanishing is given by $p=2^{-(2+n)}$. For $|B|>0$, however, far more correlations are non-vanishing and the rate of non-vanishing correlations scales approximately as $0.858^n$. This marked difference, which distinguishes $B=0$, is due to the higher symmetry that the Hamiltonian exhibits at that point.
In the following we shall consider the localizable entanglement and the corresponding length as described in (ii). Compared to the two-point correlations, the computation of the localizable entanglement is considerably more involved due to the optimization process. Nevertheless, it is easy to show that the entanglement length diverges for $B=0$. In that case the ground state of the Hamiltonian (\[xzxmodel\]) is a cluster state with the property that any two spins can be made deterministically maximally entangled by measuring the $\sigma_z$ operator on each spin in between the target spins, while measuring the $\sigma_x$ operator on the remaining spins. Indeed, this property underlies its importance for quantum computation, as it allows one to propagate a quantum computation through the lattice via local measurements [@Briegel; @R; @99].
For finite values of $B$ it is difficult to obtain the exact value of the localizable entanglement. Nevertheless, to establish a diverging entanglement length it is sufficient to provide lower bounds that can be obtained by prescribing specific measurement schemes. Indeed, for the ground state of (\[xzxmodel\]) in the interval $|B|<1$ consider two spins $1$ and $L=2k+1$ where $k\in
{\bf N}$. Measure the $\sigma_x$ operator on spin $2$ and, on all remaining spins other than $1$ and $L$, the $\sigma_z$ operator. Knowing the analytic form of the ground state, one can obtain the average entanglement over all possible measurement outcomes in terms of the concurrence, which tends to $ E_{\infty} =
\left(1-|B|^2\right)^{1/4}$ for $k\rightarrow\infty$. This demonstrates that the localizable entanglement length is infinite in the full interval $|B|<1$. This surprising critical behaviour for the whole interval $|B|<1$ is [*not*]{} revealed by the two-point or n-point correlation function which exhibit finite correlation lengths. For $|B|>1$ however, numerical results, employing a simulated annealing technique to find the optimal measurement for a chain of 16 spins, show that the localizable entanglement exhibits a finite length scale.
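The asymptotic bound $E_{\infty} = (1-|B|^2)^{1/4}$ quoted above can be tabulated directly. The minimal sketch below simply evaluates it for a few illustrative field values; the point is that the bound stays strictly positive throughout $|B|<1$ and only closes at the critical points $|B|=1$, which is exactly why the localizable entanglement does not decay with distance anywhere in this interval:

```python
# Asymptotic (k -> infinity) lower bound on the localizable
# entanglement, measured by the concurrence, valid for |B| < 1.
def E_inf(B):
    assert abs(B) < 1
    return (1 - abs(B) ** 2) ** 0.25

for B in (0.0, 0.5, 0.9, 0.99):
    print(B, E_inf(B))
# E_inf decreases monotonically with |B| and vanishes only as |B| -> 1,
# so the entanglement length is infinite for every |B| < 1.
```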
In Fig. \[entanglementlength\], both the two-point correlation length and the localizable entanglement length are plotted versus the magnetic field. In the interval $|B|<1$ the entanglement length diverges while the correlation length remains finite. For finite temperatures the localizable entanglement length becomes finite everywhere but, for temperatures much smaller than the gap above the ground state, it remains considerably larger than the classical correlation length. This demonstrates the resilience of this phenomenon against thermal perturbations.
To summarize, we have demonstrated that various Hamiltonians describing three-spin interactions can be created in triangular optical lattices in a two-species Bose-Hubbard model. They can be realized in the laboratory with near-future cold atom technology. In fact, a study of the required experimental values reveals that with a tunneling coupling $J/\hbar\sim$10 kHz [@Mandel] an experimentally achievable collisional coupling of $U/\hbar \sim$100 kHz is required. With these values a full numerical study demonstrates that the perturbative truncation is valid within a $4\%$ error and a significant effect of the three-spin interactions is obtained within the decoherence time of the system, taken here to be $10\,$ms. Previously, the systematic experimental creation of three-spin interaction Hamiltonians has been extremely difficult. The new capability for the systematic creation of three-spin Hamiltonians and their possible isolation from other interactions motivates the study of the properties of their ground states, and here in particular of their phase transitions. Motivated by this, we presented a particular three-spin cluster Hamiltonian that exhibits a novel kind of critical behaviour that is not revealed by two-point correlation functions. In addition, interactions such as $\sigma^z_1
\sigma^z_2 \sigma^z_3$ presented here have proved to be of interest for quantum computation. They can implement multi-qubit gates, like the Toffoli gate, in essentially one step [@Pachos; @K; @03] reducing dramatically the experimental resources.
[*Acknowledgements.*]{} We thank Derek Lee for inspiring conversations. This work was supported by a Royal Society University Research Fellowship, a Royal Society Leverhulme Trust Senior Research Fellowship, the EU Thematic Network QUPRODIS and the QIP-IRC of EPSRC.
A. Kastberg, [*et. al.*]{}, Phys. Rev. Lett. [**74**]{}, 1542 (1995); G. Raithel, [*et. al.*]{}, Phys. Rev. Lett. [**81**]{}, 3615 (1998).
M. Greiner, [*et. al.*]{}, Nature [**415**]{}, 39 (2002); M. Greiner, [*et. al.*]{}, ibid [**419**]{}, 51 (2002); O. Mandel, [*et. al.*]{}, ibid [**425**]{}, 937 (2003).
A. B. Kuklov and B. V. Svistunov, Phys. Rev. Lett. [**90**]{}, 100401 (2003).
L.-M. Duan, [*et. al.*]{}, Phys. Rev. Lett. [**91**]{}, 090402 (2003).
P. Rabl, [*et al.*]{}, cond-mat/0304026.
S. E. Sklarz, [*et. al.*]{}, Phys. Rev. A [**66**]{}, 053620 (2002).
D. C. Roberts and K. Burnett, Phys. Rev. Lett. [**90**]{}, 150401 (2003).
F. Verstraete, [*et. al.*]{}, Phys. Rev. Lett. [**92**]{}, 027901 (2004).
S. Sachdev, [*Quantum Phase Transitions*]{}, Cambridge University Press (1999).
F. Verstraete, [*et. al.*]{}, Phys. Rev. Lett. [**92**]{}, 087201 (2004).
D. Jaksch, [*et. al.*]{}, Phys. Rev. Lett. [**81**]{}, 3108 (1998).
S. Inouye, [*et. al.*]{}, Nature [**392**]{}, 151 (1998).
A. Donley, [*et. al.*]{}, Nature [**412**]{}, 295 (2001).
S. J. J. M. F. Kokkelmans, and M. J. Holland, Phys. Rev. Lett. [**89**]{}, 180401 (2002); T. Koehler, T. Gasenzer, and K. Burnett, cond-mat/0209100.
F. H. Mies, [*et. al.*]{}, Phys. Rev. A [**61**]{}, 022721 (2000).
E. A. Donley, [*et. al.*]{}, Nature (London) [ **417**]{}, 529 (2002).
J. K. Pachos and E. Rico, quant-ph/0404048.
P. Fendley, [*et. al.*]{}, Phys. Rev. B [**69**]{}, 075106 (2004).
K. A. Penson, [*et. al.*]{}, Phys. Rev. B [**26**]{}, 6334 (1982); K. A. Penson, [*et. al.*]{}, Phys. Rev. B [**37**]{}, 7884 (1988); J. C. A. d’Auriac, and F. Iglói, Phys. Rev. E [**58**]{}, 241 (1998).
R. Raussendorf, [*et. al.*]{}, Phys. Rev. A [**68**]{}, 022312 (2003).
F. Verstraete, and J. I. Cirac, quant-ph/0311130.
In contrast to Hamiltonian Eq. (\[xzxmodel\]) the two-spin system $H = \sum_{i}
-\sigma^x_{i}\sigma^x_{i+1}$ does not exhibit a finite gap for an infinite chain and possesses a two-fold degenerate ground state. As a consequence the ground state will not be stable (see, e.g., G. Gallavotti, [*Statistical Mechanics: A Short Treatise*]{}, Springer 1999).
E. Barouch and B. M. McCoy, Phys. Rev. A [**3**]{}, 786 (1971).
J. K. Pachos and P. L. Knight, Phys. Rev. Lett. [**91**]{}, 107902 (2003).
---
abstract: 'We extend the continuous-time interaction-expansion quantum Monte Carlo method with respect to measuring observables for fermion-boson lattice models. Using generating functionals, we express expectation values involving boson operators, which are not directly accessible because simulations are done in terms of a purely fermionic action, as integrals over fermionic correlation functions. We also demonstrate that certain observables can be inferred directly from the vertex distribution, and present efficient estimators for the total energy and the phonon propagator of the Holstein model. Furthermore, we generalize the covariance estimator of the fidelity susceptibility, an unbiased diagnostic for phase transitions, to the case of retarded interactions. The new estimators are applied to half-filled spinless and spinful Holstein models in one dimension. The observed renormalization of the phonon mode across the Peierls transition in the spinless model suggests a soft-mode transition in the adiabatic regime. The critical point is associated with a minimum in the phonon kinetic energy and a maximum in the fidelity susceptibility.'
author:
- Manuel Weber
- 'Fakher F. Assaad'
- Martin Hohenadler
title: |
Continuous-time quantum Monte Carlo for fermion-boson lattice models:\
Improved bosonic estimators and application to the Holstein model
---
Introduction
============
Quantum Monte Carlo (QMC) methods are among the most established and powerful tools to solve the quantum many-body problem of correlated electrons. In particular, the auxiliary-field QMC method [@PhysRevD.24.2278] and the stochastic series expansion (SSE) representation [@PhysRevB.43.5950] are widely used to simulate lattice models, whereas more recent continuous-time (CTQMC) methods [@PhysRevB.72.035122; @PhysRevLett.97.076405] are predominantly applied as impurity solvers in dynamical mean-field theory (DMFT) [@RevModPhys.83.349]. Recently, progress has been made in the development of new methods to simulate fermionic lattice models [@PhysRevB.91.241118; @PhysRevB.91.235151], the solution of the fermionic sign problem for specific models [@PhysRevD.82.025007; @Chandrasekharan2013; @PhysRevB.91.241117; @PhysRevLett.115.250601], and the calculation of novel observables such as the entanglement entropy [@PhysRevLett.104.157201; @PhysRevB.86.235116; @PhysRevLett.111.130402; @PhysRevB.89.125121; @PhysRevB.91.125146; @2014JSMTE..08..015B; @PhysRevLett.113.110401; @PhysRevB.92.125126] and the fidelity susceptibility [@PhysRevE.76.022101; @PhysRevLett.103.170501; @PhysRevB.81.064418; @PhysRevX.5.031007].
For a large class of QMC methods ([e.g.]{}, SSE and CTQMC), the partition function is calculated stochastically in a series expansion and operators that are sampled can be measured directly from the Monte Carlo configurations. In this paper, we consider the continuous-time interaction expansion (CT-INT) method [@PhysRevB.72.035122]. In CT-INT, the configurations are sets of interaction vertices and expectation values are usually calculated from the single-particle Green’s function using Wick’s theorem [@PhysRevB.81.024509]. However, it can be advantageous to exploit the information contained in the distribution of vertices, an important example being the fidelity susceptibility [@PhysRevX.5.031007].
The action-based formulation of the CT-INT method in particular allows efficient simulations of fermion-boson lattice models [@PhysRevB.76.035116], and has been successfully applied to electron-phonon problems [@PhysRevB.83.115105; @PhysRevLett.109.116407; @PhysRevB.88.064303; @PhysRevB.91.245147]. If the action is quadratic in the bosonic fields, the latter can be integrated out exactly [@PhysRev.97.660], resulting in a fermionic action with retarded interactions. Remarkably, autocorrelations, which can be prohibitively strong in cases where the bosons are sampled explicitly [@Hohenadler2008], are significantly reduced in the fermionic representation.
An apparent disadvantage of the fermionic approach is the loss of access to bosonic observables. However, as shown here, the latter can be systematically calculated from fermionic correlation functions using sum rules derived from generating functionals. Information about the bosonic fields is also encoded in the distribution of vertices. For a local fermion-boson interaction ([e.g.]{}, the Holstein model [@Ho59a]), the bosonic contributions to the total energy as well as the local bosonic propagator can be calculated efficiently from the vertex distribution. Moreover, with the help of auxiliary Ising fields [@PhysRevB.28.4059] originally introduced to avoid the sign problem [@PhysRevB.76.035116], even nonlocal correlation functions such as the full bosonic propagator become accessible. Similar techniques have been applied to solve fermion-boson problems with DMFT and the hybridization expansion (CT-HYB) method [@PhysRevLett.97.076405] to understand dynamical screening effects [@PhysRevLett.104.146401; @PhysRevB.89.235128; @2016arXiv160200584W], and in extended DMFT calculations [@PhysRevB.66.085120; @PhysRevB.87.125149]. The usefulness of such techniques for computationally expensive lattice problems had so far been unclear; here, we demonstrate it. Finally, we derive an estimator for the fidelity susceptibility applicable to retarded boson-mediated interactions that can be used to identify phase transitions.
We apply these (improved) estimators to one-dimensional Holstein models [@Ho59a]. These fundamental models for the effects of electron-phonon interaction constitute a significant numerical challenge due to the infinite phonon Hilbert space, and the different time scales for the fermion and boson dynamics. In the half-filled case considered here, they describe a quantum phase transition from a metallic phase to a Peierls insulator with long-range charge-density-wave order [@PhysRevLett.49.402; @PhysRevB.60.7950]. We investigate two important open questions, namely, the renormalization of the phonon spectrum across the Peierls transition in the adiabatic regime, and two alternative diagnostics (phonon kinetic energy, fidelity susceptibility) to locate the critical point. Importantly, our methodological developments can also be applied in higher dimensions and for other models.
The paper is organized as follows. In Sec. \[Sec:Method\], we discuss the calculation of observables from the vertex distribution in a general formulation of the CT-INT method. In Sec. \[Sec:PathInt\], we derive the effective fermionic action for fermion-boson models and obtain estimators for the total energy and the phonon propagator of the Holstein model. The calculation of bosonic observables from the vertex distribution with the CT-INT method is discussed in Sec. \[Sec:CT-INT\_conf\]. A performance test and results for Holstein models are presented in Sec. \[Sec:Results\]. We conclude in Sec. \[Sec:Conclusions\], and provide appendices on the relation between bosonic observables and the dynamic charge-structure factor as well as on further improvements of the estimators.
Quantum Monte Carlo method {#Sec:Method}
==========================
General formulation of the CT-INT method
----------------------------------------
The CT-INT method [@PhysRevB.72.035122] is based on the path-integral formulation of the grand-canonical partition function $$\begin{aligned}
\label{Eq:Z}
Z
=
\! \int \! \! {\mathcal{D}(\bar{c},c)}\,
e^{-S_0[{\bar{c}_{}},{c_{}}]-S_1[{\bar{c}_{}},{c_{}}]} \, ,\end{aligned}$$ where the fermions are given in the Grassmann coherent-state representation ${\hat{c}^{\vphantom\dagger}_{}}\ket{c} = {c_{}}\ket{c}$ and time-ordering is implicit. We split the action into the free-fermion part $S_0$ and the interaction $S_1$. The weak-coupling perturbation expansion of Eq. (\[Eq:Z\]) is $$\begin{aligned}
\label{Eq:Zexpansion_gen}
\frac{Z}{Z_0}
=
\sum_{n=0}^{\infty} \frac{\left(-1\right)^n}{n!} {\left\langle S_1^n \right\rangle_0} \, ,\end{aligned}$$ where we have defined ${\langle O \rangle_0} = Z_0^{-1} \int {\mathcal{D}(\bar{c},c)}\,
e^{-S_0} O$ with $Z_0 = \int {\mathcal{D}(\bar{c},c)}\, e^{-S_0}$. In the CT-INT method, the expansion in Eq. (\[Eq:Zexpansion\_gen\]) is calculated stochastically by sampling configurations of interaction vertices. For this purpose, we write the interaction in the vertex notation $$\begin{aligned}
\label{Eq:Vertex_notation}
S_1
=
\sum_{\nu} w_{\nu} h_{\nu} \, .\end{aligned}$$ A vertex is represented by an instance of the superindex $\nu$ that contains both discrete ([e.g.]{}, lattice sites) and continuous variables ([e.g.]{}, imaginary times), a weight $w_{\nu}$, and the Grassmann representation of the operators $h_{\nu}[{\bar{c}_{}},{c_{}}]$. The perturbation expansion becomes $$\begin{aligned}
\label{Zexpansion}
\frac{Z}{Z_0}
=
\sum_{n=0}^{\infty}
\underbrace{
\sum_{\nu_1 \dots \nu_n}
}_{\sum_{C_n}}
\underbrace{
\frac{\left(-1\right)^n}{n!} \,
w_{\nu_1} \dots w_{\nu_n}
{\left\langle h_{\nu_1} \dots h_{\nu_n} \right\rangle_0}
\vphantom{ \sum_{n=0}^{\infty} \sum_{\nu_1 \dots \nu_n}}
}_{W[C_n]} \, .\end{aligned}$$ The sum runs over the expansion order $n$ and all configurations of vertices $C_n = \{\nu_1,\dots,\nu_n\}$ for a given $n$. We can identify the weight $W[C_n]$ to be sampled with the Metropolis-Hastings algorithm [@1953JChPh..21.1087M; @10.2307/2334940], which involves the determinant ${\left\langle h_{\nu_1} \dots h_{\nu_n} \right\rangle_0}=\det M[C_n]$ of the $\mathcal{O}(n)\times \mathcal{O}(n)$ matrix $M[C_n]$ whose entries are noninteracting Green’s functions. Updates correspond to the addition or removal of individual vertices, and involve matrix-vector multiplications with $\mathcal{O}(n^2)$ operations. Since $\mathcal{O}(n)$ updates are necessary to reach an independent configuration, the algorithm scales as $\mathcal{O}(n^3)$. The average expansion order $\langle n\rangle$ scales linearly with the system size $L$ and the inverse temperature $\beta=(k_BT)^{-1}$ [@PhysRevB.72.035122] (see below). Expectation values ${\langle O \rangle} = Z^{-1} \int {\mathcal{D}(\bar{c},c)}\, e^{-S_0-S_1} O$ are calculated via $$\begin{aligned}
\label{Eq:Obs_MC}
{\left\langle O \right\rangle} = \sum_{n=0}^{\infty} \sum_{C_n} \, p[C_n] {\left\llangle O \right\rrangle_{C_n}} \, ,\end{aligned}$$ where $p[C_n] = W[C_n]/ \sum_n \sum_{C_n} W[C_n]$ and ${\llangle O \rrangle_{C_n}}$ is the value of the observable for configuration $C_n$. For any $C_n$, Wick’s theorem [@PhysRevB.81.024509] can be used to calculate ${\llangle O \rrangle_{C_n}}$ from the single-particle Green’s function. However, especially the calculation of the time-displaced Green’s function can be expensive because a matrix-vector multiplication of $\mathcal{O}(n^2)$ must be performed for each imaginary time $\tau$ and each pair of lattice sites. For further details, see Ref. [@RevModPhys.83.349].
Estimators from the vertex distribution
---------------------------------------
In the SSE method [@PhysRevB.43.5950], operators contained in the operator string are accessible from the Monte Carlo configurations, whereas in the CT-HYB method [@PhysRevLett.97.076405] the single-particle Green’s function can be obtained directly from the perturbation expansion. Similarly, in CT-INT, expectation values of operators $h_{\nu}$ contained in the interaction $S_1$ can be calculated efficiently from the distribution of vertices [@PhysRevB.56.14510]. To this end, $h_{\nu}$ is regarded as an additional vertex written as $h_{\nu} = w_{\nu}^{-1}
\sum_{\nu_{n+1}} w_{\nu_{n+1}} h_{\nu_{n+1}} \delta_{\nu,\nu_{n+1}}$ and absorbed into the perturbation expansion: $$\begin{aligned}
{\left\langle h_{\nu} \right\rangle}
&=
\frac{Z_0}{Z}
\sum_{n=0}^{\infty}
\sum_{C_n}
\frac{\left(-1\right)^n}{n!} \,
w_{\nu_1} \dots w_{\nu_n}
{\left\langle h_{\nu_1} \dots h_{\nu_n} h_{\nu} \right\rangle_0}
\nonumber\\
&=
- \frac{1}{w_{\nu}}
\sum_{n=0}^{\infty}
\sum_{C_{n+1}}
\!\!
\left( n+1 \right) p[C_{n+1}] \,
\delta_{\nu,\nu_{n+1}}
\\
&=
\sum_{n=0}^{\infty}
\sum_{C_n} \, p[C_n]
\left[
-\frac{1}{w_{\nu}}
\sum_{k=1}^{n} \delta_{\nu,\nu_k}
\right]
\, .
\nonumber\end{aligned}$$ Here, we first identified the probability distribution $p[C_{n+1}]$ of a configuration with $n+1$ vertices and then shifted the summation index to obtain $p[C_n]$. Finally, we included the $n=0$ contribution to the sum and replaced the factor of $n$ by a sum over the equivalent vertices. Comparison with Eq. (\[Eq:Obs\_MC\]) yields $$\begin{aligned}
\label{Eq:Obs_vert_1}
{\left\llangle h_{\nu} \right\rrangle_{C_n}} = - \frac{1}{w_{\nu}} \sum_{k=1}^{n} \delta_{\nu,\nu_k} \ .\end{aligned}$$ From Eq. (\[Eq:Obs\_vert\_1\]) we obtain the familiar relation between the interaction term and the average expansion order, ${\left\langle S_1 \right\rangle}=-{\left\langle n \right\rangle}$ [@PhysRevB.72.035122]. Because ${\left\langle S_1 \right\rangle}$ is an extensive thermodynamic quantity, the average expansion order ${\left\langle n \right\rangle} \sim \beta L$. In the same way, we can obtain higher-order correlation functions, [e.g.]{}, $$\begin{aligned}
\label{Eq:Obs_vert_2}
{\left\llangle h_{\nu}h_{\nu'} \right\rrangle_{C_n}} = \frac{1}{w_{\nu}w_{\nu'}} \sum_{k\neq l} \delta_{\nu,\nu_k} \delta_{\nu',\nu_l} \ .\end{aligned}$$ Each variable contained in $\nu$ can be resolved from a configuration $C_n$, but continuous variables ([e.g.]{}, imaginary time $\tau$) have to be integrated over (at least on a small interval) to make sense of the corresponding delta functions. The evaluation of observables via Eqs. (\[Eq:Obs\_vert\_1\]) and (\[Eq:Obs\_vert\_2\]) only requires $\mathcal{O}(n)$ operations since $$\begin{aligned}
\label{Eq:sumtrick}
\sum_{k \neq l} f_{ik} f_{jl}
=
\sum_k f_{ik}
\sum_l f_{jl}
- \sum_k f_{ik} f_{jk} \, .\end{aligned}$$ Because only operators that appear in the interaction can be measured, the cheaper vertex measurements cannot completely replace the more expensive calculation of the single-particle Green’s function. However, the class of accessible observables grows with the complexity of the interaction, as demonstrated below for the fermion-boson problem.
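The factorization in Eq. (\[Eq:sumtrick\]) is easily verified numerically; the snippet below (our own check, with randomly generated $f_{ik}$) compares the $\mathcal{O}(n^2)$ double sum over pairs $k\neq l$ with the $\mathcal{O}(n)$ evaluation as a product of single sums minus the diagonal contribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
f = rng.normal(size=(2, n))      # f[0] plays f_{ik}, f[1] plays f_{jl}

# O(n^2) reference: sum over all pairs k != l
brute = sum(f[0, k] * f[1, l] for k in range(n) for l in range(n) if k != l)

# O(n) evaluation: product of the two sums minus the diagonal k = l terms
fast = f[0].sum() * f[1].sum() - (f[0] * f[1]).sum()

assert np.isclose(brute, fast)
```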
Fidelity susceptibility
-----------------------
Recently, Wang [*et al.*]{} [@PhysRevX.5.031007] derived a universal QMC estimator for the fidelity susceptibility $\chi_\text{F}$ based on the distribution of vertices. We briefly summarize their results, focusing on the CT-INT method.
The fidelity susceptibility is a geometrical tool originating from quantum information theory [@2008arXiv0811.3127G]. It can be used to detect quantum critical points without prior knowledge of the order parameter from the change of the ground state upon changing the Hamiltonian $\hat{H}(\alpha) = \hat{H}_0 + \alpha \, \hat{H}_1$ via a driving parameter $\alpha$. In Refs. [@PhysRevE.76.022101; @PhysRevLett.103.170501; @PhysRevB.81.064418], $\chi_\text{F}$ was generalized to finite temperatures in terms of the structure factor $$\begin{aligned}
\label{Eq:FS_finiteT}
\chi_\text{F}(\alpha)
=
\int_0^{\beta/2} \! \! d\tau
\left[
{\left\langle \hat{H}_1(\tau) \hat{H}_1(0) \right\rangle}
- {\left\langle \hat{H}_1(0) \right\rangle}^2
\right]
\tau
\, .\end{aligned}$$ Wang [*et al.*]{} [@PhysRevX.5.031007] recognized that Eq. (\[Eq:FS\_finiteT\]) can be recovered from the distribution of vertices using Eqs. (\[Eq:Obs\_vert\_1\]) and (\[Eq:Obs\_vert\_2\]), leading to the covariance estimator $$\begin{aligned}
\label{Eq:FS_MC}
\chi_\text{F}
=
\frac{{\left\langle n_\text{L} n_\text{R} \right\rangle} - {\left\langle n_\text{L} \right\rangle} {\left\langle n_\text{R} \right\rangle}}{2\alpha^2} \, .\end{aligned}$$ For each vertex configuration, ${n}_\text{L}$ and ${n}_\text{R}$ count the number of vertices in the intervals $[0,\beta/2)$ and $[\beta/2,\beta)$, respectively. The calculation of $\chi_\text{F}$ via Eq. (\[Eq:FS\_MC\]) is restricted to fermionic actions that are local in time and related to a Hamiltonian, [i.e.]{}, $S_1 = \alpha \int \! d\tau H_1(\tau)$. A generalization to retarded boson-mediated interactions is given below.
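Given a set of sampled vertex configurations, the covariance estimator (\[Eq:FS\_MC\]) amounts to counting vertices in the two halves of the imaginary-time axis. A minimal sketch (function name and input format are ours; each configuration is given as a list of vertex times):

```python
import numpy as np

def fidelity_susceptibility(configs, alpha, beta):
    """Covariance estimator of Eq. (FS_MC): for each sampled vertex
    configuration, count vertices in [0, beta/2) and [beta/2, beta)
    and form the covariance of the two counts."""
    nL = np.array([sum(t < beta / 2 for t in taus) for taus in configs])
    nR = np.array([sum(t >= beta / 2 for t in taus) for taus in configs])
    cov = (nL * nR).mean() - nL.mean() * nR.mean()
    return cov / (2 * alpha ** 2)
```

For example, the two configurations `[0.1, 0.2, 3.0]` and `[0.3, 2.5, 2.6]` at $\beta=4$ give $(n_\text{L}, n_\text{R}) = (2,1)$ and $(1,2)$, so the covariance is negative.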
Path-integral formulation of the fermion-boson problem {#Sec:PathInt}
======================================================
In the following, we derive an effective fermionic action for a generic fermion-boson model that can be simulated with the CT-INT method. With the help of generating functionals, any bosonic observable can be recovered from fermionic correlation functions. In particular, we derive sum rules for the phonon propagator and the total energy of the Holstein model.
Fermion-boson models
--------------------
We consider a generic one-dimensional fermion-boson Hamiltonian of the form $$\begin{aligned}
\label{Eq:Ham_gen}
\hat{H}
=
\hat{H}_0
+ \sum_q {\omega_q}{\hat{b}^{\dagger}_{q}} {\hat{b}^{\vphantom\dagger}_{q}}
+ \sum_q \gamma_q \left( {\hat{\rho}^{\vphantom\dagger}_{q}} {\hat{b}^{\dagger}_{q}} + {\hat{\rho}^{\dagger}_{q}} {\hat{b}^{\vphantom\dagger}_{q}} \right) \,,\end{aligned}$$ with fermionic (bosonic) creation and annihilation operators ${\hat{c}^{\dagger}_{}}$, ${\hat{c}^{\vphantom\dagger}_{}}$ (${\hat{b}^{\dagger}_{}}$, ${\hat{b}^{\vphantom\dagger}_{}}$) and the free-fermion part $\hat{H}_0[{\hat{c}^{\dagger}_{}} \!\!, {\hat{c}^{\vphantom\dagger}_{}}]$. $\hat{H}$ is restricted to be quadratic in the bosons, but we allow a general dispersion $\omega_q$ and a coupling to an arbitrary fermionic operator ${\hat{\rho}^{\vphantom\dagger}_{q}}[{\hat{c}^{\dagger}_{}} \!\!, {\hat{c}^{\vphantom\dagger}_{}}]$ with coupling parameter $\gamma_q$.
As an example, we consider the Holstein model [@Ho59a] $$\begin{aligned}
\label{Eq:Holstein_model}
\hat{H}
=
\hat{H}_0
+\sum_i \left( \frac{1}{2M} {\hat{P}_{i}}^2 + \frac{K}{2} {\hat{Q}_{i}}^2 \right)
+ g \sum_i {\hat{Q}_{i}} \hat{\rho}_i
\, ,\end{aligned}$$ where the electronic part is given by the nearest-neighbor hopping of spinful fermions with amplitude $t$, $$\begin{aligned}
\hat{H}_0
=
-t \sum_{i\sigma} \left( {\hat{c}^{\dagger}_{i\sigma}} {\hat{c}^{\vphantom\dagger}_{i+1\sigma}}
+ {\hat{c}^{\dagger}_{i+1\sigma}} {\hat{c}^{\vphantom\dagger}_{i\sigma}} \right) \, .\end{aligned}$$ The phonons are described by local harmonic oscillators with displacements ${\hat{Q}_{i}}$ and momenta ${\hat{P}_{i}}$; $M$ is the oscillator mass and $K$ the spring constant. The displacements couple to the charge density $\hat{\rho}_i= \sum_{\sigma} (
\hat{n}_{i\sigma} - 1/2)$ (here ${\hat{n}_{i\sigma}}={\hat{c}^{\dagger}_{i\sigma}}{\hat{c}^{\vphantom\dagger}_{i\sigma}}$) with coupling parameter $g$. The spinless Holstein model is obtained by dropping spin indices.
The Holstein model follows from the generic model (\[Eq:Ham\_gen\]) by dropping the momentum dependence of the bosons, [i.e.]{}, ${\omega_q}\to{\omega_0}$ and $\gamma_q
\to\gamma$, and assuming a density-displacement coupling so that ${\hat{\rho}^{\dagger}_{q}} = {\hat{\rho}^{\vphantom\dagger}_{-q}}$. The same simplifications arise in electron-phonon models with nonlocal density-displacement [@PhysRevLett.109.116407] or bond-displacement couplings [@PhysRevB.91.245147]. Therefore, the formulas derived below for the Holstein model can be easily transferred to other models. For the Holstein case, ${\omega_0}= \sqrt{K/M}$, $\gamma = g/\sqrt{2M{\omega_0}}$, and we also introduce the dimensionless coupling parameter $\lambda =
\gamma^2/(2\omega_0t) = g^2/(4Kt)$. Simulations were performed at half-filling, but the estimators are general.
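The parameter relations above can be collected in a small helper (our own utility function) that derives $\omega_0$, $\gamma$, and $\lambda$ from the Hamiltonian parameters and checks the equivalence $\lambda = \gamma^2/(2\omega_0 t) = g^2/(4Kt)$:

```python
import math

def holstein_params(t, K, M, g):
    """Derived quantities for the Holstein model: phonon frequency omega0,
    coupling gamma, and the dimensionless coupling lambda."""
    omega0 = math.sqrt(K / M)
    gamma = g / math.sqrt(2 * M * omega0)
    lam = gamma ** 2 / (2 * omega0 * t)
    return omega0, gamma, lam

# consistency check: lambda = gamma^2/(2 omega0 t) must equal g^2/(4 K t)
t, K, M, g = 1.0, 0.25, 1.0, 0.5
omega0, gamma, lam = holstein_params(t, K, M, g)
assert abs(lam - g ** 2 / (4 * K * t)) < 1e-12
```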
Effective fermionic action for the bosons and observables from generating functionals
-------------------------------------------------------------------------------------
For the generic fermion-boson model (\[Eq:Ham\_gen\]), the partition function takes the form $$\begin{aligned}
\label{Eq:pathint_eph}
Z
= \!
\int \! \! {\mathcal{D}(\bar{c},c)}\, e^{-S_0[{\bar{c}_{}},{c_{}}]} \!
\int \! \! {\mathcal{D}(\bar{b},b)}\,
e^{-S_{\mathrm{ep}}[{\bar{c}_{}},{c_{}},{\bar{b}_{}},{b_{}}]} \, .\end{aligned}$$ We use the coherent-state representation ${\hat{c}^{\vphantom\dagger}_{}} \ket{c} =
{c_{}} \ket{c}$ with Grassmann variables ${c_{}}$ for the fermions, and ${\hat{b}^{\vphantom\dagger}_{}} \ket{b} = {b_{}} \ket{b}$ with complex variables ${b_{}}$ for the bosons. The action is split into the fermionic part $S_0$ and the remainder $S_{\mathrm{ep}}$ containing the free-boson part and the interaction, $$\begin{aligned}
\label{Eq:Sep}
S_{\mathrm{ep}}
= \!
\int_0^{\beta} \! \! d\tau \sum_q
&\left\{ \,
{\bar{b}_{q}}(\tau)
\left[ \partial_{\tau} + {\omega_q}\right]
{b_{q}}(\tau)
\right. \\
& \, \, \left.
+ \gamma_q
\left[ {\rho_{q}}(\tau) \, {\bar{b}_{q}}(\tau) +
{\bar{\rho}_{q}}(\tau) \, {b_{q}}(\tau) \right]
\right\} \, .
\nonumber\end{aligned}$$ The bosons can be integrated out exactly [@PhysRev.97.660], leading to an effective fermionic interaction $$\begin{aligned}
\label{Eq:Seff_gen}
S_1
=
- \sum_q \frac{\gamma_q^2}{{\omega_q}}
\iint_{0}^{\beta} d\tau d\tau'
{\bar{\rho}_{q}}(\tau) \,
{P_{\! q}}(\tau-\tau') \, {\rho_{q}}(\tau')\end{aligned}$$ mediated by the noninteracting bosonic Green’s function ${P_{\! q}}(\tau-\tau') = {\omega_q}{\langle {\bar{b}_{q}}(\tau){b_{q}}(\tau') \rangle_0}$. Here, ${\langle \dots \rangle_0}$ also denotes expectation values with respect to the free-boson part of the action. For $0 \leq \tau < \beta$, ${P_{\! q}}(\tau)$ is given by $$\begin{aligned}
\label{Eq:ph_green_q}
{P_{\! q}}(\tau)
=
{\omega_q}\,\frac{ e^{-{\omega_q}\tau}}{1-e^{-{\omega_q}\beta}} \end{aligned}$$ and we impose ${P_{\! q}}(\tau+\beta)={P_{\! q}}(\tau)$. With the factor of ${\omega_q}$, the adiabatic and antiadiabatic limits of ${P_{\! q}}(\tau)$ are $$\begin{aligned}
\label{Eq:ph_green_limits}
\lim_{{\omega_q}\to0} {P_{\! q}}(\tau) = \frac{1}{\beta} \, , \qquad
\lim_{{\omega_q}\to\infty} {P_{\! q}}(\tau) = \delta(\tau) \, .\end{aligned}$$
In principle, the fermionic interaction (\[Eq:Seff\_gen\]) can be simulated with the CT-INT method if transformed into real space. However, for any nontrivial dispersion ${\omega_q}$ the transformed bosonic propagator has negative contributions that cause a sign problem [@PhysRevB.91.245147]. Therefore, we focus on models with optical bosons, [i.e.]{}, ${\omega_q}={\omega_0}$.
To obtain estimators for bosonic correlation functions in the CT-INT method, we add the source term $$\begin{aligned}
\label{Eq:gen_func_gen}
S_{\mathrm{source}}
=
- \! \int_0^{\beta} \! \! d\tau \sum_q
\left[ {\eta_{q}}(\tau) \, {\bar{b}_{q}}(\tau) + {\bar{\eta}_{q}}(\tau) \, {b_{q}}(\tau) \right]\end{aligned}$$ to $S_{\mathrm{ep}}$. After integrating out the bosons, the complex source fields ${\eta_{q}}(\tau)$ and ${\bar{\eta}_{q}}(\tau)$ appear in $S_1$, [i.e.]{}, $$\begin{aligned}
\label{Eq:Seff_gen_func}
S_{1,\mathrm{source}}
=
- \sum_q \frac{\gamma_q^2}{{\omega_q}} \iint_0^{\beta} &d\tau d\tau'
\left[{\bar{\rho}_{q}}(\tau) - \gamma_q^{-1} {\bar{\eta}_{q}}(\tau) \right] \ \phantom{.} \\
\times \, &{P_{\! q}}(\tau-\tau')
\left[{\rho_{q}}(\tau') - \gamma_q^{-1} {\eta_{q}}(\tau')
\right] \, .
\nonumber\end{aligned}$$ From Eq. (\[Eq:Seff\_gen\_func\]), any bosonic correlation function can be expressed in terms of fermionic fields by taking functional derivatives and the limit $\eta\to0$.
Application to the Holstein model {#sec:obs:hol}
---------------------------------
In the following, we illustrate the use of this formalism for the Holstein model (\[Eq:Holstein\_model\]). The notation is kept as general as possible to facilitate applications to other models. Replacing ${P_{\! q}}(\tau)\to {P}(\tau)$, the effective interaction $$\begin{aligned}
\label{Eq:Seff_HS}
S_1
=
- 2\lambda t
\iint_0^{\beta} d\tau d\tau' \sum_i
{\rho_{i}}(\tau)
\, {P}(\tau-\tau') \, {\rho_{i}}(\tau')\end{aligned}$$ becomes diagonal in real space. To express bosonic observables in terms of the displacements ${q_{i}}(\tau)$ or the momenta ${p_{i}}(\tau)$ we rewrite the source term (\[Eq:gen\_func\_gen\]) as $$\begin{aligned}
S_{\mathrm{source}}
=
- \! \int_0^{\beta} \! \! d\tau \sum_i
\left[
{\xi_{i}}(\tau) \, {q_{i}}(\tau) + {\zeta_{i}}(\tau) \, {p_{i}}(\tau)
\right] \, ,
\label{gen1}\end{aligned}$$ with real fields ${\xi_{i}}(\tau)$ and ${\zeta_{i}}(\tau)$. Transformation of the source fields in Eq. (\[Eq:Seff\_gen\_func\]) leads to the action $$\begin{aligned}
S_{1,\mathrm{source}}
=
S_1
+ S_{\xi\rho}^+
+ S_{\xi\xi}^+
+ S_{\zeta\rho}^-
+ S_{\zeta\zeta}^+
+ S_{\xi\zeta}^- \, ,
\label{Eq:genac2}\end{aligned}$$ where the individual contributions are given by $$\begin{aligned}
\label{Eq:gen_precise}
S_{\mu\nu}^{\pm}
=
- \alpha_{\mu\nu}
\iint_0^{\beta} d\tau d\tau' \sum_i
\mu_i(\tau) \, {P_{\! \pm}}(\tau-\tau') \, \nu_i(\tau')\end{aligned}$$ with $\alpha_{\xi\rho} = -2 \sqrt{\lambda t/K}$, $\alpha_{\xi\xi} = 1/(2K)$, $\alpha_{\zeta\rho} = 2i \sqrt{M\lambda t}$, $\alpha_{\zeta\zeta} = M/2$, and $\alpha_{\xi\zeta} = i/{\omega_0}$. Here, we defined the phonon propagators ${P_{\! \pm}}(\tau) = \frac{1}{2}\left[ {P}(\tau) \pm {P}(\beta-\tau) \right]$, corresponding to ${P_{\! +}}(\tau-\tau') = K \, {\langle {q_{i}}(\tau) {q_{i}}(\tau') \rangle_0} = M^{-1}
{\langle {p_{i}}(\tau) {p_{i}}(\tau') \rangle_0}$ and ${P_{\! -}}(\tau-\tau') = -i \, {\omega_0}{\langle {q_{i}}(\tau) {p_{i}}(\tau') \rangle_0}$.
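By construction, ${P_{\! +}}(\tau)$ is symmetric and ${P_{\! -}}(\tau)$ antisymmetric under $\tau \to \beta-\tau$, which is used repeatedly below. A quick numerical confirmation (our own check):

```python
import numpy as np

def P(tau, omega, beta):
    # free propagator on [0, beta], Eq. (ph_green_q)
    return omega * np.exp(-omega * tau) / (1.0 - np.exp(-omega * beta))

def P_pm(tau, omega, beta, sign):
    # symmetrized (+) / antisymmetrized (-) combinations P_+/-
    return 0.5 * (P(tau, omega, beta) + sign * P(beta - tau, omega, beta))

omega, beta = 1.0, 8.0
tau = np.linspace(0.0, beta, 1001)

# P_+ is symmetric and P_- antisymmetric about tau = beta/2
assert np.allclose(P_pm(beta - tau, omega, beta, +1), P_pm(tau, omega, beta, +1))
assert np.allclose(P_pm(beta - tau, omega, beta, -1), -P_pm(tau, omega, beta, -1))
```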
With the help of the generating functionals in Eqs. (\[Eq:genac2\]) and (\[Eq:gen\_precise\]), we get access to the phonon propagators $$\begin{aligned}
\label{Eq:ph_prop_int_Q}
K {\left\langle {q_{i}}(\tau){q_{j}}(\tau') \right\rangle}
&=
{P_{\! +}}(\tau-\tau') \, \delta_{i,j}
+ X_{ij}^{++}(\tau,\tau') \, ,
\\
\label{Eq:ph_prop_int_P}
\frac{1}{M}
{\left\langle {p_{i}}(\tau){p_{j}}(\tau') \right\rangle}
&=
{P_{\! +}}(\tau-\tau') \, \delta_{i,j}
+ X_{ij}^{--}(\tau,\tau') $$ consisting of the free propagator ${P_{\! +}}$ and the interaction contributions $$\begin{aligned}
X_{ij}^{\pm\pm}(\tau,\tau')
=
4\lambda t
\iint_0^{\beta} d&\tau_1 d\tau'_1 \,
{P_{\! \pm}}(\tau-\tau_1) \\
&\times {\left\langle {\rho_{i}}(\tau_1) {\rho_{j}}(\tau'_1) \right\rangle}
{P_{\! \pm}}(\tau'_1 - \tau') \, .
\nonumber\end{aligned}$$ The total energy is $E = {E_{\mathrm{e\vphantom{ph}}}^{\mathrm{kin}\vphantom{\mathrm{pk}}}}+ {E_{\mathrm{ph}}^{\mathrm{kin}\vphantom{\mathrm{pk}}}}+ {E_{\mathrm{ph}}^{\mathrm{pot}\vphantom{\mathrm{pk}}}}+ {E_{\mathrm{eph}}^{\vphantom{\mathrm{p}}}}$, with [$$\begin{aligned}
{E_{\mathrm{ph}}^{\mathrm{kin}\vphantom{\mathrm{pk}}}}&=
\frac{{E_{\mathrm{ph}}^{0}}}{2}
- 2 \lambda t \iint_0^{\beta} d\tau d\tau'
{P_{\! -}}(\tau) \, {P_{\! -}}(\tau') \, C_{\rho}(\tau-\tau') \, ,
\label{Eq:Epkin}
\\
{E_{\mathrm{ph}}^{\mathrm{pot}\vphantom{\mathrm{pk}}}}&=
\frac{{E_{\mathrm{ph}}^{0}}}{2}
+ 2 \lambda t \iint_0^{\beta} d\tau d\tau'
{P_{\! +}}(\tau) \, {P_{\! +}}(\tau') \, C_{\rho}(\tau-\tau') \, ,
\label{Eq:Eppot}
\\
{E_{\mathrm{eph}}^{\vphantom{\mathrm{p}}}}&=
- 4 \lambda t \! \int_0^{\beta} \! \! d\tau \,
{P_{\! +}}(\tau) \, C_{\rho}(\tau) \, .
\label{Eq:Eep}\end{aligned}$$ ]{} Here, ${E_{\mathrm{ph}}^{0}}= L {P_{\! +}}(0)$ and $C_{\rho}(\tau-\tau') = \sum_i
{\left\langle {\rho_{i}}(\tau){\rho_{i}}(\tau') \right\rangle}$. ${E_{\mathrm{ph}}^{\mathrm{pot}}}$ and ${E_{\mathrm{ph}}^{\mathrm{kin}}}$ follow from Eqs. (\[Eq:ph\_prop\_int\_Q\]) and (\[Eq:ph\_prop\_int\_P\]) by fixing the interaction to $X^{\pm\pm}_{ii}(0,0)$. In Appendix \[Sec:SumRulesEnergies\], we provide further information on the relation between the bosonic observables and the dynamic charge structure factor.
The observables (\[Eq:ph\_prop\_int\_Q\])–(\[Eq:Eep\]) can be recovered from the charge correlation function $\langle\rho_i(\tau)\rho_j(\tau')\rangle$, which is accessible in CT-INT via Wick’s theorem. In Ref. [@PhysRevB.91.235150], we calculated ${\left\langle {\rho_{i}}(\tau){\rho_{j}}(0) \right\rangle}$ on an equidistant $\tau$ grid with spacing ${\Delta\tau_{\mathrm{obs}}}=0.1$ and performed the remaining integrals numerically. However, as shown below, it is more efficient to use the distribution of vertices.
CT-INT for the Holstein model {#Sec:CT-INT_conf}
=============================
Vertex notation for the effective interaction
---------------------------------------------
For the Holstein model, the interaction term sampled with the CT-INT method takes the form $$\begin{aligned}
S_1
=
- \lambda t \iint_0^{\beta} d\tau d\tau' \!\! \sum_{i\sigma\sigma's}
&\left[{\rho_{i\sigma}}(\tau) - s \delta \right] \\
&\times {P_{\! +}}(\tau-\tau')
\left[{\rho_{i\sigma'}}(\tau') - s
\delta \right]
\, .
\nonumber\end{aligned}$$ Compared to Eq. (\[Eq:Seff\_HS\]), we introduced an auxiliary Ising variable $s=\pm 1$ (and $\delta = 0.51$) to avoid the sign problem [@PhysRevB.76.035116], and used the symmetrized phonon propagator ${P_{\! +}}(\tau)$. In the notation of Eq. (\[Eq:Vertex\_notation\]), $\nu=\{i,\tau,\tau',\sigma,\sigma',s\}$, $w_{\nu} = -\lambda t \,
{P_{\! +}}(\tau-\tau')$, and $$\begin{aligned}
\label{Eq:HS_vert_1}
h_{\nu} = {\rho_{i\sigma}}(\tau) {\rho_{i\sigma'}}(\tau')
+ \delta^2
- s \delta \left[ {\rho_{i\sigma}}(\tau)
+ {\rho_{i\sigma'}}(\tau') \right]
\, .\end{aligned}$$ The QMC simulation is performed as described before. The acceptance rate for adding a new vertex can be optimized by proposing $\tau-\tau'$ according to ${P_{\! +}}(\tau-\tau')$ via inverse transform sampling.
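Because ${P_{\! +}}(\tau)$ is an equal mixture of ${P}(\tau)$ and ${P}(\beta-\tau)$, the inverse transform sampling of time differences can be implemented by drawing $\tau$ from the truncated-exponential ${P}(\tau)$ via its analytic inverse CDF and then reflecting $\tau \to \beta-\tau$ with probability $1/2$. The sketch below is our own minimal realization of this idea:

```python
import math, random

def sample_Pplus(omega, beta, rng):
    """Draw tau in [0, beta] with density P_+(tau) = [P(tau) + P(beta-tau)]/2.
    Sample the truncated exponential P by inverse transform, then reflect
    tau -> beta - tau with probability 1/2 (P_+ is an equal mixture)."""
    u = rng.random()
    tau = -math.log(1.0 - u * (1.0 - math.exp(-omega * beta))) / omega
    return beta - tau if rng.random() < 0.5 else tau

rng = random.Random(12345)
beta, omega = 6.0, 1.0
draws = [sample_Pplus(omega, beta, rng) for _ in range(200000)]

# P_+ is symmetric about beta/2, so the sample mean should be close to beta/2
mean = sum(draws) / len(draws)
```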
Observables from the distribution of vertices
---------------------------------------------
The operators contained in Eq. (\[Eq:HS\_vert\_1\]) can be measured from the distribution of vertices. In particular, we have access to the dynamical charge correlations required for the calculation of the bosonic observables in Sec. \[sec:obs:hol\]. In the following, we use Eqs. (\[Eq:Obs\_vert\_1\]) and (\[Eq:Obs\_vert\_2\]) to derive improved estimators for the total energy, the fidelity susceptibility, and the phonon propagator.
### Total energy
The kinetic energy of the electrons is calculated from the single-particle Green’s function. To recover the phononic contributions (\[Eq:Epkin\])–(\[Eq:Eep\]) from the distribution of vertices, we sum over the auxiliary Ising variable $s$ in Eq. (\[Eq:HS\_vert\_1\]) and use Eq. (\[Eq:Obs\_vert\_1\]) to obtain the estimator $$\begin{aligned}
\label{Eq:den_loc_vert}
&{\left\llangle {\rho_{i\sigma}}(\tau) {\rho_{i\sigma'}}(\tau') \right\rrangle_{C_n}}
+ \delta^2
\\
&\hspace{1.4cm}
=
\sum_{k=1}^n
\frac{\delta_{i,i_k} \delta_{\sigma,\sigma_k}
\delta_{\sigma' \!\!,\sigma'_k} \delta(\tau-\tau_k) \,
\delta(\tau'-\tau'_k)}{2\lambda t \, {P_{\! +}}(\tau_k -\tau'_k)}
\nonumber\end{aligned}$$ for the local charge-charge correlation function. From Eq. (\[Eq:den\_loc\_vert\]) we get the estimators $$\begin{aligned}
{E_{\mathrm{ph}}^{\mathrm{kin}\vphantom{\mathrm{pk}}}}[C_n]
&=
\frac{{E_{\mathrm{ph}}^{0}}}{2}
- \sum_{k=1}^{n}
\frac{{P_{\! -}}(\tau_k) {P_{\! -}}(\tau'_k)}{{P_{\! +}}(\tau_k - \tau'_k)} \, ,
\label{Eq:Epkin_vert}
\\
{E_{\mathrm{ph}}^{\mathrm{pot}\vphantom{\mathrm{pk}}}}[C_n]
&=
\frac{{E_{\mathrm{ph}}^{0}}}{2}
+ \sum_{k=1}^{n}
\frac{{P_{\! +}}(\tau_k) {P_{\! +}}(\tau'_k)}{{P_{\! +}}(\tau_k - \tau'_k)}
- {2\lambda t L N_{\sigma}^2\delta^2} \, ,
\label{Eq:Eppot_vert}
\\
{E_{\mathrm{eph}}^{\vphantom{\mathrm{p}}}}[C_n]
&=
- \frac{2 n}{\beta} + 4 \lambda t L N_{\sigma}^2\delta^2 \, .
\label{Eq:Eep_vert}\end{aligned}$$ For the kinetic energy, the term $\sim\delta^2$ vanishes due to the antisymmetry of ${P_{\! -}}(\tau)$. $N_{\sigma}$ counts the number of spin components of the Holstein model, [i.e.]{}, $N_{\sigma}=1$ for the spinless and $N_{\sigma}=2$ for the spinful model.
The estimators (\[Eq:Epkin\_vert\]) and (\[Eq:Eppot\_vert\]) can be further improved by exploiting the global translational invariance of all vertices, [i.e.]{}, $\tau_k\to \tau_k+\Delta\tau$ and $\tau'_k\to \tau'_k+\Delta\tau$ with $\Delta\tau\in[0,\beta)$. We integrate over $\Delta\tau$ to treat all the translations exactly, see Appendix \[Sec:TranslVert\] for details. Thereby, especially ${E_{\mathrm{ph}}^{\mathrm{kin}}}[C_n]$ is substantially improved, as shown in Sec. \[Sec:PerformanceTest\].
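For a given vertex configuration $\{(\tau_k,\tau'_k)\}$, the estimators (\[Eq:Epkin\_vert\])–(\[Eq:Eep\_vert\]) reduce to simple sums over vertices. A direct transcription (function names and input format are ours; the version without the translational-invariance improvement):

```python
import math

def P(tau, omega, beta):
    # free propagator, Eq. (ph_green_q), evaluated for tau in [0, beta]
    return omega * math.exp(-omega * tau) / (1.0 - math.exp(-omega * beta))

def P_plus(tau, omega, beta):
    tau = tau % beta                      # fold into [0, beta)
    return 0.5 * (P(tau, omega, beta) + P(beta - tau, omega, beta))

def P_minus(tau, omega, beta):
    tau = tau % beta
    return 0.5 * (P(tau, omega, beta) - P(beta - tau, omega, beta))

def phonon_energies(vertices, omega, beta, lam, t, L, N_sigma, delta):
    """Estimators (Epkin_vert)-(Eep_vert) for one configuration;
    `vertices` is a list of imaginary-time pairs (tau_k, tau_k')."""
    E0 = L * P_plus(0.0, omega, beta)     # E_ph^0 = L P_+(0)
    const = 2 * lam * t * L * N_sigma ** 2 * delta ** 2
    e_kin = E0 / 2 - sum(P_minus(tk, omega, beta) * P_minus(tkp, omega, beta)
                         / P_plus(tk - tkp, omega, beta) for tk, tkp in vertices)
    e_pot = E0 / 2 + sum(P_plus(tk, omega, beta) * P_plus(tkp, omega, beta)
                         / P_plus(tk - tkp, omega, beta) for tk, tkp in vertices) - const
    e_eph = -2 * len(vertices) / beta + 2 * const
    return e_kin, e_pot, e_eph
```

For the empty configuration ($n=0$) the sums vanish and the estimators reduce to the free-phonon values ${E_{\mathrm{ph}}^{0}}/2$ plus the $\delta$-dependent constants, which provides a simple consistency check.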
### Fidelity susceptibility
To calculate the fidelity susceptibility for a retarded interaction we start from Eq. (\[Eq:FS\_finiteT\]) and identify the electron-phonon coupling as the driving term with $\alpha=g$ and $\hat{H}_1 =
\sum_i {\hat{Q}_{i}} \hat{\rho}_i$. The displacements ${\hat{Q}_{i}}$ entering the expectation values of the Hamiltonian in Eq. (\[Eq:FS\_finiteT\]) can be replaced with fermionic operators using the source terms introduced before. ${\langle H_1 \rangle}$ is given by Eq. (\[Eq:Eep\]), and $$\begin{aligned}
\begin{split}
{\left\langle H_1(\tau) H_1(\tau') \right\rangle}
=
2 \sum_{\nu_1} w_{\nu_1} {\left\langle h_{\nu_1} \right\rangle} \delta(\tau-\tau_1) \,
\delta(\tau'-\tau'_1) \\
+ 4 \sum_{\nu_1 \nu_2} w_{\nu_1} w_{\nu_2} {\left\langle h_{\nu_1} h_{\nu_2} \right\rangle}
\delta(\tau - \tau'_1) \, \delta(\tau' - \tau'_2)
\end{split}\end{aligned}$$ in the vertex notation of the Holstein model. Continuing the derivation as in Ref. [@PhysRevX.5.031007], we obtain an estimator very similar to Eq. (\[Eq:FS\_MC\]), $$\begin{aligned}
\label{Eq:FS_MC_HS}
\chi_\text{F}
=
\frac{
{\left\langle \tilde{n}_\text{L} \tilde{n}_\text{R} \right\rangle}
- {\left\langle \tilde{n}_\text{L} \right\rangle} {\left\langle \tilde{n}_\text{R} \right\rangle}
}{2g^2}
\, .\end{aligned}$$ However, in the present case, each vertex contains two bilinears with times $\tau_k$ and $\tau'_k$, and $\tilde{n}_\text{L}$ and $\tilde{n}_\text{R}$ count the numbers of these bilinears in the left and right half of the partitioned imaginary-time axis. For simplicity, we omitted a constant shift in Eq. (\[Eq:FS\_MC\_HS\]) that arises from the $\delta$-dependent terms in Eq. (\[Eq:HS\_vert\_1\]). Taking it into account leads to $\chi_\text{F} \to \chi_\text{F} - {2\lambda t L N_{\sigma}^2 \delta^2
\tanh(\beta{\omega_0}/4)}/{({\omega_0}g^2)}$.
### Phonon propagator
Equation (\[Eq:den\_loc\_vert\]) only gives access to local charge-charge correlations. For the Holstein model, we can also obtain nonlocal correlation functions from the distribution of vertices, including the phonon propagator. For this purpose, we exploit the information provided by the Ising variable $s$. If we consider $\sum_s s \, h_{\nu}$, the first two terms in Eq. (\[Eq:HS\_vert\_1\]) drop out and only individual charge operators are left. Analogously, by taking $$\begin{aligned}
\label{Eq:nonloc_vert}
\begin{split}
\sum_{s_1 s_2} s_1 s_2 \, h_{\nu_1} h_{\nu_2}
=
4 \delta^2
&\left[ {\rho_{i_1\sigma_1}}(\tau_1) + {\rho_{i_1\sigma'_1}}(\tau'_1) \right] \\
\times &\left[ {\rho_{i_2\sigma_2}}(\tau_2) + {\rho_{i_2\sigma'_2}}(\tau'_2) \right]
\, ,
\end{split}\end{aligned}$$ we can recover nonlocal charge correlations from Eq. (\[Eq:Obs\_vert\_2\]). The simplest estimator is the charge susceptibility $$\begin{aligned}
\label{Eq:suscharge_vert}
\chi_{ij}[C_n]
&=
\frac{1}{\beta}
\iint d\tau d\tau'
{\left\llangle {\rho_{i}}(\tau){\rho_{j}}(\tau') \right\rrangle_{C_n}}
\\
&=
\frac{1}{16 (\lambda t)^2 N_{\sigma}^2 \delta^2 \beta^3}
\sum_{k \neq l}
\frac{s_k \, \delta_{i,i_k}}{{P_{\! +}}(\tau_k-\tau'_k)}
\frac{s_l \, \delta_{j,i_l}}{{P_{\! +}}(\tau_l-\tau'_l)}
\, ,
\nonumber\end{aligned}$$ which is obtained from the summation over all variables except for the lattice sites. Similarly, the (spin-resolved) charge correlation function can be calculated directly in Matsubara frequencies. The phonon propagators (\[Eq:ph\_prop\_int\_Q\]) and (\[Eq:ph\_prop\_int\_P\]) take the form
$$\begin{aligned}
\label{Eq:ph_prop_Q_vert}
K {\left\llangle q_i(\tau) q_j(\tau') \right\rrangle_{C_n}}
&=
{P_{\! +}}(\tau-\tau') \, \delta_{i,j}
+ \frac{1}{4\lambda t N_{\sigma}^2 \delta^2}
\sum_{k \neq l}
\frac{{P_{\! +}}(\tau-\tau_k) {P_{\! +}}(\tau-\tau'_k) \, s_k \, \delta_{i,i_k}}{{P_{\! +}}(\tau_k - \tau'_k)}
\frac{{P_{\! +}}(\tau'-\tau_l) {P_{\! +}}(\tau'-\tau'_l) \, s_l \, \delta_{j,i_l}}{{P_{\! +}}(\tau_l - \tau'_l)}
\, ,
\\
\label{Eq:ph_prop_P_vert}
\frac{1}{M} {\left\llangle p_i(\tau) p_j(\tau') \right\rrangle_{C_n}}
&=
{P_{\! +}}(\tau-\tau') \, \delta_{i,j}
- \frac{1}{\lambda t N_{\sigma}^2 \delta^2 \beta^2}
\sum_{k \neq l}
\frac{{P_{\! -}}(\tau-\tau_k) \, s_k \, \delta_{i,i_k}}{{P_{\! +}}(\tau_k - \tau'_k)}
\frac{{P_{\! -}}(\tau'-\tau_l) \, s_l \, \delta_{j,i_l}}{{P_{\! +}}(\tau_l - \tau'_l)}
\, .\end{aligned}$$
To arrive at Eq. (\[Eq:ph\_prop\_Q\_vert\]), we multiplied Eq. (\[Eq:nonloc\_vert\]) with the symmetrized propagator ${P_{\! +}}$ for each of the four times on the right-hand side before integrating over the imaginary times. For Eq. (\[Eq:ph\_prop\_P\_vert\]), we included the antisymmetrized propagator ${P_{\! -}}$ only for one pair of times, but the estimator can be further improved by considering the remaining three combinations. Similar to ${E_{\mathrm{ph}}^{\mathrm{kin}}}[C_n]$, the estimator (\[Eq:ph\_prop\_P\_vert\]) can be substantially improved by exploiting translational invariance of the vertices, see Appendix \[Sec:TranslVert\].
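The estimator (\[Eq:suscharge\_vert\]) factorizes into independent per-site sums over vertices, which is what makes its $\mathcal{O}(n)$ evaluation possible. A minimal sketch of this measurement, in which the data layout and the externally supplied ${P_{\! +}}$ are illustrative assumptions:

```python
import numpy as np

def charge_susceptibility(vertices, L, P_plus, prefactor):
    """chi_ij from the vertex distribution, cf. Eq. (suscharge_vert).

    vertices: iterable of (site i_k, Ising spin s_k, tau_k, tau'_k);
    P_plus:   symmetrized free phonon propagator (supplied externally);
    prefactor: the constant 1/(16 (lambda t)^2 N_sigma^2 delta^2 beta^3).
    """
    v = np.zeros(L)   # per-site sums sum_k s_k / P_plus(tau_k - tau'_k)
    d = np.zeros(L)   # k = l contributions, to be subtracted
    for i, s, t1, t2 in vertices:
        w = s / P_plus(t1 - t2)
        v[i] += w
        d[i] += w * w
    chi = np.outer(v, v)
    chi[np.diag_indices(L)] -= d   # enforce k != l in the double sum
    return prefactor * chi
```

The $k=l$ terms excluded from the double sum are accumulated separately and subtracted from the diagonal.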
Results {#Sec:Results}
=======
Performance of the vertex measurements {#Sec:PerformanceTest}
--------------------------------------
In the CT-INT method, the computation of the single-particle Green’s function for the calculation of observables via Wick’s theorem requires $\mathcal{O}(n^2LN_{\tau})$ operations, where $N_{\tau}$ is the number of $\tau$ points. If $N_\tau$ is scaled with $\beta$, the calculation of dynamical correlation functions is of the same order as the Monte Carlo updates. For fermion-boson problems, even the bosonic energies in Eqs. (\[Eq:Epkin\])–(\[Eq:Eep\]) require the full time dependence of ${\left\langle {\rho_{i}}(\tau) {\rho_{j}}(0) \right\rangle}$. On the other hand, the calculation from the vertex distribution involves only $\mathcal{O}(n)$ operations for the energies and $\mathcal{O}(n N_{\tau})$ for the phonon propagator. For the latter, exploiting translational invariance leads to another $\mathcal{O}(L^2N_{\tau}^2)$ operations to set up the final estimator, cf. Appendix \[Sec:TranslVert\]. For large $n$, the computational cost for the vertex measurements becomes negligible.
The above considerations were verified for the spinless Holstein model with ${\omega_0}/t=0.4$, $L=\beta t = 22$, $\lambda=1.5$, and $1000$ QMC steps between measurements. The average expansion order was ${\left\langle n \right\rangle} \approx 660$ and we used ${\Delta\tau_{\mathrm{obs}}}=0.1$ ($N_{\tau} =220$). The computation of dynamical correlation functions using Wick’s theorem took $26\%$ of the total time, of which $86\%$ went into the matrix-vector multiplications necessary to calculate the Green’s function. Only $1\%$ of the total time was used for the vertex measurements, most of which went into the $\mathcal{O}(L^2N_{\tau}^2)$ operations necessary to set up the translation-invariant phonon propagator. If we omitted this last operation, the vertex measurements only took $0.02\%$ of the total time, and were dominated by the exact evaluation of ${P_{\! \pm}}(\tau)$ for each vertex. Approximately the same time would be needed for equal-time measurements from Wick’s theorem using $N_{\tau}=1$. Hence, further improvements through tabulation of ${P_{\! \pm}}(\tau)$ seem unnecessary.
Aside from the significant speed-up, another advantage of the vertex measurements is the exact calculation of imaginary-time integrals. In contrast, Wick’s theorem provides ${\left\langle {\rho_{i}}(\tau) {\rho_{j}}(0) \right\rangle}$ only on a finite grid so that systematic errors from numerical integration can arise. For ${\omega_0}/t=0.4$, using Simpson’s rule on an equidistant grid with ${\Delta\tau_{\mathrm{obs}}}=0.1$ was sufficient to make systematic errors irrelevant. However, more elaborate integration schemes may be necessary for larger ${\omega_0}$.
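As an illustration of this integration step, the following sketch applies a composite Simpson rule on an equidistant grid to a correlator whose integral is known in closed form, $\int_0^\beta [e^{-\omega\tau} + e^{-(\beta-\tau)\omega}]\, d\tau = 2(1-e^{-\beta\omega})/\omega$ (the values of $\beta$ and $\omega$ are illustrative, not the simulation parameters):

```python
import numpy as np

def simpson(y, dx):
    """Composite Simpson rule on an equidistant grid (even number of intervals)."""
    n = len(y) - 1
    assert n % 2 == 0, "need an even number of intervals"
    return dx / 3 * (y[0] + y[-1] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-1:2]))

beta, omega = 10.0, 0.4
tau = np.linspace(0.0, beta, 201)                      # grid spacing 0.05
corr = np.exp(-omega * tau) + np.exp(-(beta - tau) * omega)
integral = simpson(corr, tau[1] - tau[0])
exact = 2 * (1 - np.exp(-beta * omega)) / omega
```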
Table \[Tab:Statistics\] reports ratios of statistical errors of averages obtained from either the vertex distribution or Wick’s theorem, as determined in the same simulation and hence for the same number of bins. We considered different bosonic energies, as well as the charge susceptibility $\chi(q)$ at $q=\pi$, which tracks charge-density-wave order. For ${E_{\mathrm{ph}}^{\mathrm{pot}}}$ and ${E_{\mathrm{ph}}^{\mathrm{kin}}}$ we compared three different estimators: the simple estimators (\[Eq:Epkin\_vert\]) and (\[Eq:Eppot\_vert\]) from one set of vertices, the improved estimators using translational invariance \[Eq. (\[Eq:prop\_repl\])\], and the estimators for the phonon propagators using the Ising spins, Eqs. (\[Eq:ph\_prop\_Q\_vert\]) and (\[Eq:ph\_prop\_P\_vert\]).
The reference results are for the spinless Holstein model with ${\omega_0}/t=0.4$, $\lambda = 0.5$, $L=\beta t = 22$, and $\delta =
0.51$. For the resulting rather small expansion order ${\left\langle n \right\rangle} \approx 151$, the estimators from Wick’s theorem have better statistics, [i.e.]{}, the ratios in Table \[Tab:Statistics\] are larger than one. The vertex estimators improve significantly upon exploiting translational invariance, especially ${E_{\mathrm{ph}}^{\mathrm{kin}}}$. Increasing the number of vertices per phase-space volume via the interaction parameter $\lambda$ levels out the differences between estimators, except for ${E_{\mathrm{ph}}^{\mathrm{kin}}}$ at $\lambda=1.5$. In contrast, changing ${\left\langle n \right\rangle}$ via the phase-space parameters $L$ and $\beta$ leaves most of the ratios essentially unchanged. The same is true when increasing the number of vertices via the Ising-spin parameter $\delta$. Finally, Table \[Tab:Statistics\] confirms that ${\left\langle n \right\rangle}\sim \beta L$, whereas the dependence on $\lambda$ is nonlinear.
  ------------------- ---------------------- ------------------------------------------------------------------------- ------------------------------------------------------------------------- -------------------------- ----------------------------------
  observable           ${E_{\mathrm{eph}}}$   ${E_{\mathrm{ph}}^{\mathrm{pot}}}$                                        ${E_{\mathrm{ph}}^{\mathrm{kin}}}$                                        $\chi(\pi)$                ${\left\langle n \right\rangle}$
  from Eq.             (\[Eq:Eep\_vert\])     (\[Eq:Eppot\_vert\]) / (\[Eq:prop\_repl\]) / (\[Eq:ph\_prop\_Q\_vert\])   (\[Eq:Epkin\_vert\]) / (\[Eq:prop\_repl\]) / (\[Eq:ph\_prop\_P\_vert\])   (\[Eq:suscharge\_vert\])
  reference            2.6                    4.0 / 2.6 / 2.5                                                           20 / 5.6 / 5.9                                                             1.2                        151
  $\lambda=1.0$        1.2                    1.4 / 1.1 / 1.2                                                           4.8 / 1.6 / 1.3                                                            1.0                        371
  $\lambda=1.5$        1.1                    1.3 / 1.1 / 1.6                                                           18 / 3.3 / 2.9                                                             1.0                        661
  $L = \beta t =14$    3.2                    4.0 / 3.2 / 0.2                                                           19 / 6.4 / 4.0                                                             1.2                        62
  $L = \beta t =30$    2.6                    5.0 / 2.7 / 2.8                                                           23 / 5.4 / 13                                                              1.3                        282
  $\delta=1.0$         3.7                    7.0 / 4.0 / 2.1                                                           32 / 8.5 / 4.4                                                             1.2                        510
  ------------------- ---------------------- ------------------------------------------------------------------------- ------------------------------------------------------------------------- -------------------------- ----------------------------------
: \[Tab:Statistics\] Ratios of statistical errors for averages from vertex measurements and Wick’s theorem for different simulation parameters. The reference point is the spinless Holstein model with ${\omega_0}/t=0.4$, $\lambda = 0.5$, $L=\beta t =
22$, and $\delta = 0.51$. The first two rows indicate the observable and estimator used. The last column reports the average expansion order.
Although the dependence of the statistical errors on the simulation parameters is not completely systematic, the vertex measurements become advantageous especially at large expansion orders. The errors are of the same order of magnitude, but the vertex estimators are much faster and avoid systematic integration errors.
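The error ratios in Table \[Tab:Statistics\] compare binned standard errors of two estimators measured in the same run. A generic sketch of such a binning analysis (helper names are ours, not from an actual CT-INT code):

```python
import numpy as np

def binned_error(samples, n_bins=20):
    """Standard error of the mean estimated from bin averages."""
    samples = np.asarray(samples, dtype=float)
    m = len(samples) // n_bins              # measurements per bin
    bins = samples[: m * n_bins].reshape(n_bins, m).mean(axis=1)
    return bins.std(ddof=1) / np.sqrt(n_bins)

def error_ratio(series_a, series_b, n_bins=20):
    """Ratio of statistical errors of two estimators of the same observable."""
    return binned_error(series_a, n_bins) / binned_error(series_b, n_bins)
```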
Peierls transition in Holstein models
-------------------------------------
![\[Fig:Energies\_adiabatic\] (Color online) Phonon kinetic energy per site for (a) the spinless Holstein model with ${\omega_0}/t=0.4$ and (b) the spinful Holstein model with ${\omega_0}/t=0.5$. The inset in (b) shows a closeup of the region around the minimum. ](fig1.pdf){width="1.0\linewidth"}
![\[Fig:ph\_softening\] (Color online) Phonon spectral functions $B_Q(q,\omega)$ \[(a)–(c)\] and $B_P(q,\omega)$ \[(d)–(f)\] across the Peierls transition.]{width="\linewidth"}
The Peierls quantum phase transition in half-filled spinful Holstein and Holstein-Hubbard models has been studied with a number of numerical techniques (see Ref. [@PhysRevB.92.245132] for a review). While early QMC results [@PhysRevB.27.4302] suggested the absence of a metallic phase, more recent work has established a phase transition at a nonzero critical value $\lambda_c$ [@PhysRevB.60.7950; @PhysRevB.87.075149], in accordance with functional renormalization group results [@Barkim2015]. However, the exact determination of the phase boundary, as well as the characterization of the metallic phase in terms of Luttinger liquid parameters, remain open problems [@PhysRevB.92.245132]. The difficulties are associated with the Berezinskii-Kosterlitz-Thouless (BKT) nature of the transition, which renders the gaps exponentially small near $\lambda_c$, and with a small but nonzero spin gap caused by attractive backscattering that is hard to resolve numerically [@PhysRevB.92.245132]. In particular, the spin gap renders the previously used charge susceptibility [@ClHa05] essentially useless for detecting long-range charge order [@PhysRevB.92.245132]. In contrast, no such complications are encountered for the spinless Holstein model. Although the quantum phonons still represent a significant numerical challenge, the phase diagram and the Luttinger liquid parameters have been determined quite accurately [@PhysRevB.73.245120; @0295-5075-87-2-27001].
Here, we consider alternative diagnostics to detect the Peierls transition, namely, the phonon kinetic energy and the fidelity susceptibility. In addition, we present significantly improved results for the phonon spectral function over the entire coupling range.
### Phonon kinetic energy
Figure \[Fig:Energies\_adiabatic\] shows the phonon kinetic energy for the spinless Holstein model with ${\omega_0}/t=0.4$ and the spinful Holstein model with ${\omega_0}/t=0.5$. For both models, ${E_{\mathrm{ph}}^{\mathrm{kin}}}$ exhibits a distinct minimum as a function of $\lambda$. In the spinless case, ${E_{\mathrm{ph}}^{\mathrm{kin}}}$ has almost converged for the largest system size considered ($L=30$) and the position of the minimum is consistent with the previous estimate $\lambda_c \approx 0.7$ [@PhysRevB.73.245120]. While the critical value of the spinful model is still under debate [@PhysRevB.92.245132], the position of the minimum in Fig. \[Fig:Energies\_adiabatic\](b) suggests a slightly larger value than in previous results where $\lambda_c\approx0.25$ [@0295-5075-84-5-57001; @PhysRevB.92.245132]. The nonmonotonic finite-size dependence of ${E_{\mathrm{ph}}^{\mathrm{kin}}}(L)$ near $\lambda_c$ in the spinful case is expected to arise from the small but nonzero spin gap in the metallic phase [@PhysRevB.92.245132].
The minimum in ${E_{\mathrm{ph}}^{\mathrm{kin}}}$ can be related to the behavior of the dynamic charge structure factor $S_{\rho}(q,\omega)$ using the sum rules derived in Appendix \[Sec:SumRulesEnergies\]. Because of the density-displacement coupling, $S_{\rho}(q,\omega)$ also contains contributions from the renormalized phonon dispersion $\tilde{\omega}(q)$. The minimum of ${E_{\mathrm{ph}}^{\mathrm{kin}}}$ near $\lambda_c$ arises from the softening and subsequent hardening of $\tilde{\omega}(q)$ near $q=\pi$ discussed below. Interestingly, a minimum of the phonon kinetic energy is also observed in the crossover from a large to a small polaron in the Holstein model [@PhysRevB.45.7730; @PhysRevB.69.024301].
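Locating the minimum of ${E_{\mathrm{ph}}^{\mathrm{kin}}}(\lambda)$ from discrete samples can be done, e.g., with a parabola fit through the lowest point and its neighbors. The sketch below is one such choice, assuming the data are smooth near the minimum (illustrative, not necessarily the procedure used for the figures):

```python
import numpy as np

def minimum_position(lam, e_kin):
    """Estimate the coupling at which E_ph^kin is minimal from sampled data."""
    lam = np.asarray(lam, dtype=float)
    e_kin = np.asarray(e_kin, dtype=float)
    i = int(np.argmin(e_kin))
    i = min(max(i, 1), len(lam) - 2)        # keep the three fit points in range
    a, b, _ = np.polyfit(lam[i - 1:i + 2], e_kin[i - 1:i + 2], 2)
    return -b / (2.0 * a)                   # vertex of the fitted parabola
```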
The renormalization of $\tilde{\omega}(q)$ was also used in Ref. [@Creffield2005] to estimate $\lambda_c$ from fits to the phonon Green’s function. In our results (see below and Ref. [@PhysRevB.91.235150]) the value of $\lambda$ at which complete softening of the phonon mode occurs matches the position of the minimum in ${E_{\mathrm{ph}}^{\mathrm{kin}}}$. The latter quantity is easier and faster to calculate with the CT-INT method. For the spinless Holstein model, we have also tested this estimator for other phonon frequencies. At ${\omega_0}/t=1$, the position of the minimum in ${E_{\mathrm{ph}}^{\mathrm{kin}}}$ approaches the critical coupling $\lambda_c\approx 1.3$ from density-matrix renormalization group calculations [@0295-5075-87-2-27001], but CT-INT simulations become difficult at these stronger couplings. At ${\omega_0}/t=0.1$, we find considerable finite-size effects even at $\beta t = L =42$ where the position of the minimum still deviates significantly from $\lambda_c\approx0.4$ [@0295-5075-87-2-27001].
### Phonon spectral function
Previous results for the spinless Holstein model suggest that in the adiabatic regime considered here, the phonon dispersion softens at and around $q=\pi$ (the ordering wavevector for the Peierls transition) on approaching $\lambda_c$ from the metallic phase [@PhysRevB.73.245120; @SyHuBeWeFe04; @Creffield2005; @PhysRevB.91.235150; @PhysRevB.83.115105]. For a soft-mode transition, the phonon mode should become completely soft at $q=\pi$ and $\lambda=\lambda_c$, and subsequently harden for $\lambda>\lambda_c$. Indications for such a hardening were recently observed for the spinful Holstein model [@PhysRevB.91.235150], but a clear identification is complicated by the dominant central peak in the Peierls phase [@PhysRevB.91.235150; @PhysRevB.83.115105] and—in the case of exact diagonalization—the small system sizes accessible at strong coupling [@PhysRevB.73.245120].
Here, we consider the phonon spectral functions $$\begin{aligned}
B_\alpha(q,\omega)
=
\frac{1}{Z}
\sum_{mn}
e^{-\beta E_m}
{\left| \bra{m} \hat{O}^{\alpha}_q \ket{n} \right|}^2
\delta(\omega - \Delta_{nm}) \end{aligned}$$ calculated either from the displacement \[$\alpha=Q$, Eq. (\[Eq:ph\_prop\_int\_Q\])\] or the momentum correlation function \[$\alpha=P$, Eq. (\[Eq:ph\_prop\_int\_P\])\], with $\hat{O}^Q=K^{1/2}{\hat{Q}_{}}$, $\hat{O}^P=M^{-1/2}{\hat{P}_{}}$, and $\Delta_{nm}=E_n-E_m$.
In principle, both spectral functions contain the same information, but spectral weights may differ significantly. In particular, the Monte Carlo estimators (\[Eq:ph\_prop\_Q\_vert\]) and (\[Eq:ph\_prop\_P\_vert\]) may be subject to different statistical fluctuations that affect the stochastic analytic continuation [@PhysRevB.57.10287; @2004cond.mat..3055B].
The displacement spectrum $B_Q(q,\omega)$ in Fig. \[Fig:ph\_softening\](a) reveals the softening of the phonons near $q=\pi$ in the metallic phase. Near the critical point, the dispersion appears completely soft at $q=\pi$ \[Fig. \[Fig:ph\_softening\](b)\], and the spectrum is dominated by a central peak at $\omega=0$ associated with the long-range charge order. This peak grows strongly with $\lambda$ and introduces strong fluctuations in the dynamic displacement correlation function (\[Eq:ph\_prop\_int\_Q\]) at all momenta $q$. The fluctuations cause a significant broadening of the spectrum obtained by analytic continuation, and in particular make it virtually impossible to resolve finite-frequency contributions at $q=\pi$, cf. Fig. \[Fig:ph\_softening\](c).
To follow the phonon dispersion in the ordered phase, we instead consider the spectral function $B_P(q,\omega)$ shown in Figs. \[Fig:ph\_softening\](d)–(f). The use of the momentum correlation function (\[Eq:ph\_prop\_int\_P\]) filters out the central mode, and allows us to unambiguously identify the hardening of the phonon dispersion at $q=\pi$ in the Peierls phase \[Fig. \[Fig:ph\_softening\](f)\]. Hence, the Peierls transition in the adiabatic regime can be classified as a soft-mode transition.
### Fidelity susceptibility
![\[Fig:susfid\] (Color online) Fidelity susceptibility per site for (a) the spinless Holstein model with ${\omega_0}/t=0.4$ and (b) the spinful Holstein model with ${\omega_0}/t=0.5$. Results were obtained from Eq. (\[Eq:FS\_MC\_HS\]) with $g^2\to\lambda=g^2/(4Kt)$ and including the shift discussed after Eq. (\[Eq:FS\_MC\_HS\]). ](fig3.pdf){width="\linewidth"}
Using the estimator (\[Eq:FS\_MC\_HS\]) we calculated the fidelity susceptibility $\chi_\text{F}$ for the spinless and the spinful Holstein model. The phonon frequencies were chosen as in Fig. \[Fig:Energies\_adiabatic\].
Figure \[Fig:susfid\](a) shows $\chi_\text{F}/L$ for the spinless Holstein model as a function of $\lambda$. We find a maximum that grows and shifts to smaller $\lambda$ with increasing $L$. In contrast, finite-size effects are smaller at weak and strong coupling. In the thermodynamic limit, a cusp at the critical coupling is expected for a BKT transition [@PhysRevB.91.014418]. For the accessible system sizes, the position of the maximum deviates significantly from the expected value $\lambda_c \approx 0.7$ [@PhysRevB.73.245120], in contrast to Fig. \[Fig:Energies\_adiabatic\](a). A slow convergence of the fidelity susceptibility with system size was previously observed for the BKT transition in the spin-$\frac{1}{2}$ XXZ chain [@PhysRevB.91.014418].
Results for the spinful Holstein model are shown in Fig. \[Fig:susfid\](b). We again observe a maximum at intermediate values of $\lambda$ that are significantly larger than previous estimates $\lambda_c\approx0.25$ [@0295-5075-84-5-57001; @PhysRevB.92.245132] and the position of the minimum in Fig. \[Fig:Energies\_adiabatic\](b). Finite-size effects appear to be less systematic than for the spinless case, which we attribute to the additional spin gap; the latter is not fully resolved for small $L$ [@PhysRevB.92.245132]. The results in Fig. \[Fig:susfid\](b) are consistent with a phase transition at a $\lambda_c>0$ and hence a metallic phase at weak coupling, as reported in previous work.
Conclusions {#Sec:Conclusions}
===========
The CT-INT quantum Monte Carlo method is particularly useful to simulate fermion-boson models because the bosons can be integrated out. While advantageous for simulations, this integration makes it nontrivial to calculate expectation values of bosonic variables. In this work, we presented estimators for arbitrary bosonic correlation functions using generating functionals. As a concrete example, we derived sum rules for the total energy and the phonon propagator of the Holstein model. Moreover, we showed that several observables of interest can be measured directly from the vertex distribution instead of using Wick’s theorem. Additionally, we generalized the QMC estimator for the fidelity susceptibility [@PhysRevX.5.031007] to retarded boson-mediated interactions, thereby providing a rather general diagnostic to detect phase transitions.
A comparison of different observables and simulation parameters showed that statistical errors are of the same order for the vertex estimators and the estimators based on Wick’s theorem. The vertex estimators are easy to implement, more efficient, and often avoid systematic errors from numerical integration. These findings complement previous applications in the context of impurity problems. Our results are general and can be applied to a variety of other lattice fermion-boson models. For example, the possibility of calculating the total energy provides access to the specific heat. Moreover, the calculation of the charge susceptibility from the auxiliary Ising spins may be advantageous to detect charge order in higher dimensions or in Hubbard-type models.
These methodological developments were applied to one-dimensional spinless and spinful Holstein models for electron-phonon interaction. The phonon kinetic energy was found to exhibit a minimum related to the renormalization (softening) of the phonon mode. For intermediate phonon frequencies, the location of the minimum is consistent with other estimates of the critical point. The phonon spectral function calculated from the phonon momentum correlator reveals the hardening of the phonon mode in the Peierls phase, and thereby provides evidence for the soft-mode nature of the Peierls transition. Finally, the fidelity susceptibility exhibits a broad maximum at intermediate coupling and significant finite-size effects. While it hence does not provide more accurate critical values in the one-dimensional case considered, the qualitatively similar behavior observed for the spinless and the spinful model may be regarded as additional evidence for an extended metallic phase in the latter.
The authors gratefully acknowledge the computing time granted by the John von Neumann Institute for Computing (NIC) and provided on the supercomputer JURECA [@Juelich] at Jülich Supercomputing Centre, as well as financial support from the DFG Grant Nos. AS120/10-1 and Ho 4489/4-1 (FOR 1807). We further thank J. Hofmann for helpful discussions.
Exact relations to the charge spectrum {#Sec:SumRulesEnergies}
======================================
For the Holstein model, the phonon propagators (\[Eq:ph\_prop\_int\_Q\]) and (\[Eq:ph\_prop\_int\_P\]) as well as the energies (\[Eq:Epkin\])–(\[Eq:Eep\]) are determined by the time-displaced charge correlation function $C_{\rho}(q,\tau-\tau')={\left\langle {\rho_{q}}(\tau){\rho_{-q}}(\tau') \right\rangle}$. The latter is related to the dynamic charge structure factor $$\begin{aligned}
S_{\rho}(q,\omega)
&=
\frac{1}{Z}
\sum_{mn}
e^{-\beta (E_m-\mu N_m)}
{\left| \bra{m} {\hat{\rho}^{\vphantom\dagger}_{q}} \ket{n} \right|}^2
\\\nonumber
&\hspace*{6em}\times\delta(E_n - E_m - \omega)\end{aligned}$$ via $
C_{\rho}(q,\tau)
=
\int_0^{\infty} \!\! d\omega \,
K(\tau,\omega) \, S_{\rho}(q,\omega) $, where $K(\tau,\omega) = \exp[-\tau\omega] +
\exp[-\left(\beta-\tau\right)\omega]$. Therefore, the entire single-particle dynamics of the phonons is contained in $S_{\rho}(q,\omega)$. In particular, $B(q,\omega)$ is directly related to $S_{\rho}(q,\omega)$ [@PhysRevB.91.235150]. The energies (\[Eq:Epkin\])–(\[Eq:Eep\]) can be calculated from $S_{\rho}(\omega) = \sum_q S_{\rho}(q,\omega)$ via [$$\begin{aligned}
{E_{\mathrm{ph}}^{\mathrm{kin}\vphantom{\mathrm{pk}}}}&=
\frac{{E_{\mathrm{ph}}^{0}}}{2}
- 2 \lambda t
\! \int_0^{\infty} \!\! d\omega \,
K_{--}(\omega/{\omega_0}, \beta {\omega_0}) \, S_{\rho}(\omega) \, ,
\label{Eq:Epkin_spec}
\\
{E_{\mathrm{ph}}^{\mathrm{pot}\vphantom{\mathrm{pk}}}}&=
\frac{{E_{\mathrm{ph}}^{0}}}{2}
+ 2 \lambda t
\! \int_0^{\infty} \!\! d\omega \,
K_{++}(\omega/{\omega_0}, \beta {\omega_0}) \, S_{\rho}(\omega) \, ,
\label{Eq:Eppot_spec}
\\
{E_{\mathrm{eph}}^{\vphantom{\mathrm{p}}}}&=
- 4 \lambda t
\! \int_0^{\infty} \!\! d\omega \,
K_{+}(\omega/{\omega_0}, \beta {\omega_0}) \, S_{\rho}(\omega) \, ,
\label{Eq:Eep_spec}\end{aligned}$$ ]{} with the kernels ($x=\omega/\omega_0$, $y=\beta\omega_0$, $\omega_0>0$)
$$\begin{aligned}
K_{\pm\pm}(x,y)
=
\frac{1}{4\pi \left(x^2-1\right)}
\left\{
x \tanh(xy/2) \coth(y/2)
\pm \frac{x y \tanh(xy/2)}{2 \sinh^2(y/2)}
\mp \frac{2x}{\left(x^2-1\right)}
\left[ \tanh(xy/2) \coth(y/2) - x^{\mp 1} \right]
\right\}\,,\end{aligned}$$
and $K_+ = K_{++} + K_{--}$, with $$\begin{aligned}
K_+(x,y)
=
\frac{x \tanh(xy/2) \coth(y/2) - 1}{2\pi \left( x^2 -1 \right)}
\, .\end{aligned}$$
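The closed forms of the kernels can be verified numerically; the sketch below implements $K_{\pm\pm}$ and $K_+$ and checks the identity $K_+ = K_{++} + K_{--}$ away from $x=1$:

```python
import numpy as np

def K_pmpm(x, y, sign):
    """K_{++} for sign=+1 and K_{--} for sign=-1 (x = w/w0, y = beta*w0)."""
    t = np.tanh(x * y / 2)
    c = 1.0 / np.tanh(y / 2)
    out = x * t * c
    out += sign * x * y * t / (2 * np.sinh(y / 2) ** 2)
    out -= sign * 2 * x / (x**2 - 1) * (t * c - x ** (-sign))
    return out / (4 * np.pi * (x**2 - 1))

def K_plus(x, y):
    """K_+ = K_{++} + K_{--} in closed form."""
    return (x * np.tanh(x * y / 2) / np.tanh(y / 2) - 1) / (2 * np.pi * (x**2 - 1))
```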
![\[Fig:Kernels\] The kernels $K_{--}$, $K_{++}$, and $K_+$. Solid lines correspond to $T=0$ results, whereas dashed lines correspond to $y=\beta{\omega_0}=\{10,5,3,2,1\}$ as shown from the top in (a) and from the bottom in (b)–(c). ](fig4.pdf){width="\linewidth"}
The kernels are plotted in Fig. \[Fig:Kernels\] for different temperatures. At $T=0$, $K_{++}$ and $K_{+}$ are largest at $\omega = 0$ and decrease monotonically with increasing $\omega$, whereas $K_{--}$ is zero at $\omega=0$ and has a maximum at $\omega={\omega_0}$. Therefore, ${E_{\mathrm{ph}}^{\mathrm{pot}}}$ and ${E_{\mathrm{eph}}}$ mainly capture the charge ordering. In contrast, because $K_{--}$ filters out the zero-frequency contributions to $S_\rho(\omega)$, ${E_{\mathrm{ph}}^{\mathrm{kin}}}$ reveals the softening of the phonons and the opening of the Peierls gap. The same reasoning applies to the phonon spectral function. If calculated from Eq. (\[Eq:ph\_prop\_int\_Q\]) it is dominated by the central mode in the Peierls phase. This mode is filtered out when using Eq. (\[Eq:ph\_prop\_int\_P\]). The kernels broaden significantly when the temperature becomes comparable to $\omega_0$ but the qualitative behavior for $\omega\ll{\omega_0}$ remains unchanged.
Translational invariance of the vertices {#Sec:TranslVert}
========================================
The bosonic estimators from the distribution of vertices can be substantially improved by exploiting translational invariance in imaginary time: replacing $\tau_k\to \tau_k+\Delta\tau$ and $\tau'_k\to \tau'_k+\Delta\tau$ for all vertices $k\in\{1,\dots,n\}$ leaves the weight $W[C_n]$ unchanged. Thereby, we can derive improved estimators for the bosonic energies (\[Eq:Epkin\_vert\]) and (\[Eq:Eppot\_vert\]) as well as the phonon propagator (\[Eq:ph\_prop\_P\_vert\]).
For the energies (\[Eq:Epkin\_vert\]) and (\[Eq:Eppot\_vert\]), translational invariance allows for the transformation $$\begin{aligned}
\label{Eq:prop_repl}
\frac{{P_{\! \pm}}(\tau_k) {P_{\! \pm}}(\tau'_k)}{{P_{\! +}}(\tau_k - \tau'_k)}
\, \longrightarrow \,
\underbrace{
\frac{1}{\beta} \int_0^{\beta} \!\! d\tau \,
\frac{
{P_{\! \pm}}(\tau_k +\tau) {P_{\! \pm}}(\tau'_k +\tau)
}{
{P_{\! +}}(\tau_k - \tau'_k)
}
}_{
{\bar{P}_{\!\pm}}(\tau_k - \tau'_k)
} \end{aligned}$$ to the averaged propagator ($\tau \in [-\beta,\beta]$) $$\begin{aligned}
\label{Eq:Pbar}
\begin{split}
{\bar{P}_{\!\pm}}(\tau)
=
\frac{1}{2\beta}
&\pm \frac{\omega_0}{4} \frac{\beta -{\left| \tau \right|}}{\beta}
\left[\coth(\omega_0\beta/2) - \frac{{P_{\! -}}(\tau)}{{P_{\! +}}(\tau)}\right] \\
&\pm \frac{\omega_0}{4} \frac{{\left| \tau \right|}}{\beta}
\left[\coth(\omega_0\beta/2) +\frac{{P_{\! -}}(\tau)}{{P_{\! +}}(\tau)}\right]
\, .
\end{split}\end{aligned}$$ Since the substitution (\[Eq:prop\_repl\]) applies to time differences of the same vertex, the computational cost to calculate the energies remains $\mathcal{O}(n)$. The improvement is particularly noticeable for ${E_{\mathrm{ph}}^{\mathrm{kin}}}$ (see Sec. \[Sec:PerformanceTest\]).
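The property underlying Eq. (\[Eq:prop\_repl\]) is that averaging a product of $\beta$-periodic functions over all translations leaves a dependence on the time difference only. This can be demonstrated with a generic $\beta$-periodic stand-in for ${P_{\! +}}$ (the explicit form below is an assumption for the demonstration, not the propagator used in the text):

```python
import numpy as np

beta, w0 = 10.0, 0.4

def P(tau):
    """Illustrative beta-periodic stand-in for a bosonic propagator."""
    return np.cosh(w0 * (beta / 2 - np.mod(tau, beta)))

def translation_average(t1, t2, n=40000):
    """(1/beta) * integral_0^beta P(t1 + tau) P(t2 + tau) dtau (midpoint rule)."""
    tau = (np.arange(n) + 0.5) * beta / n
    return np.mean(P(t1 + tau) * P(t2 + tau))
```

Shifting both arguments by the same amount leaves the average unchanged, so it is a function of $t_1 - t_2$ only; this is what keeps the improved estimators at $\mathcal{O}(n)$ cost.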
The simplest way to calculate the phonon propagators (\[Eq:ph\_prop\_Q\_vert\]) and (\[Eq:ph\_prop\_P\_vert\]) is to fix the second time argument to $\tau'=0$ and apply Eq. (\[Eq:sumtrick\]) to obtain the necessary information from the vertices in $\mathcal{O}(n N_{\tau})$ operations. Similar to the equal-time case, especially the estimator for the momentum correlations can be improved by using translational invariance. However, the rigorous approach of integrating over all translations increases the computational cost to $\mathcal{O}(n^2
N_{\tau})$ operations since the sums in the first term of Eq. (\[Eq:sumtrick\]) can no longer be calculated independently. This problem can be overcome by measuring the correlation functions on an equidistant grid with spacing ${\Delta\tau_{\mathrm{obs}}}$ so that translations of all vertices by multiples of ${\Delta\tau_{\mathrm{obs}}}$ are available and the computational cost remains $\mathcal{O}(n N_{\tau})$. Regardless, translational invariance can be applied rigorously to the second term in Eq. (\[Eq:sumtrick\]). Putting the contributions of the phonon propagator together requires another $\mathcal{O}(L^2N_{\tau}^2)$ operations, where an additional factor of $N_{\tau}$ comes from exploiting translational invariance. This last step dominates the computational time for vertex measurements (cf. Sec. \[Sec:PerformanceTest\]).
[61]{} Bibliography (author and title fields did not survive extraction; the recoverable identifiers, in order of appearance, are): doi:10.1103/PhysRevD.24.2278, doi:10.1103/PhysRevB.43.5950, doi:10.1103/PhysRevB.72.035122, doi:10.1103/PhysRevLett.97.076405, doi:10.1103/RevModPhys.83.349, doi:10.1103/PhysRevB.91.241118, doi:10.1103/PhysRevB.91.235151, doi:10.1103/PhysRevD.82.025007, doi:10.1140/epja/i2013-13090-y, doi:10.1103/PhysRevB.91.241117, doi:10.1103/PhysRevLett.115.250601, doi:10.1103/PhysRevLett.104.157201, doi:10.1103/PhysRevB.86.235116, doi:10.1103/PhysRevLett.111.130402, doi:10.1103/PhysRevB.89.125121, doi:10.1103/PhysRevB.91.125146, doi:10.1088/1742-5468/2014/08/P08015, doi:10.1103/PhysRevLett.113.110401, doi:10.1103/PhysRevB.92.125126, doi:10.1103/PhysRevE.76.022101, doi:10.1103/PhysRevLett.103.170501, doi:10.1103/PhysRevB.81.064418, doi:10.1103/PhysRevX.5.031007, doi:10.1103/PhysRevB.81.024509, doi:10.1103/PhysRevB.76.035116, doi:10.1103/PhysRevB.83.115105, doi:10.1103/PhysRevLett.109.116407, doi:10.1103/PhysRevB.88.064303, doi:10.1103/PhysRevB.91.245147, doi:10.1103/PhysRev.97.660, doi:10.1007/978-3-540-74686-7_11, doi:10.1103/PhysRevB.28.4059, doi:10.1103/PhysRevLett.104.146401, doi:10.1103/PhysRevB.89.235128, http://stacks.iop.org/0953-8984/28/i=38/a=383001, doi:10.1103/PhysRevB.66.085120, doi:10.1103/PhysRevB.87.125149, doi:10.1103/PhysRevLett.49.402, doi:10.1103/PhysRevB.60.7950, doi:10.1063/1.1699114, http://www.jstor.org/stable/2334940, doi:10.1103/PhysRevB.56.14510, doi:10.1103/PhysRevB.91.235150, doi:10.1103/PhysRevB.92.245132, doi:10.1103/PhysRevB.27.4302, doi:10.1103/PhysRevB.87.075149, doi:10.1103/PhysRevB.91.085114, doi:10.1103/PhysRevB.73.245120, http://stacks.iop.org/0295-5075/87/i=2/a=27001, doi:10.1103/PhysRevB.45.7730, doi:10.1103/PhysRevB.69.024301, doi:10.1140/epjb/e2005-00112-9, doi:10.1103/PhysRevB.57.10287, doi:10.1103/PhysRevB.91.014418. The remaining seven entries carried no recoverable identifier.
---
abstract: |
In this work, we investigate a novel semantic approach for pattern discovery in trajectories that, relying on ontologies, enhances object movement information with event semantics. The approach can be applied to the detection of movement patterns and behaviors whenever the semantics of the events occurring along the trajectory is, explicitly or implicitly, available. In particular, we tested it against a demanding case scenario in maritime surveillance, i.e., the discovery of suspicious container transportations.
The methodology we have developed entails the formalization of the application domain through a domain ontology, extending the Moving Object Ontology (MOO) described in this paper. Afterwards, movement patterns have to be formalized, either as Description Logic (DL) axioms or queries, enabling the retrieval of the trajectories that follow the patterns.
    In our experimental evaluation, we have considered a real-world dataset of 18 million container events describing the deeds undertaken in ports to accomplish shipping (e.g., loading on a vessel, export operation). Leveraging these events, we have reconstructed almost 300 thousand container trajectories, referring to 50 thousand containers travelling over three years. We have formalized the anomalous itinerary patterns as DL axioms, testing different ontology APIs and DL reasoners to retrieve the suspicious transportations.
    Our experiments demonstrate that the approach is feasible and efficient. In particular, the joint use of Pellet and SPARQL-DL enables the detection of the trajectories following a given pattern in reasonable time, even on large datasets.
author:
- Elena Camossi
- Paola Villa
- Luca Mazzola
-
bibliography:
- 'semTrj.bib'
title: ' Semantic-based Anomalous Pattern Discovery in Moving Object Trajectories[^1] '
---
Introduction {#intro}
============
[*Semantic trajectory*]{} is a research trend that has recently emerged in Geographical Information Science and Spatio-temporal Knowledge Discovery [@Alvares07; @Guc08; @spacca11; @Spaccapietra; @Yan2012], to enhance the modelling and analysis of moving object data, e.g., GPS trajectories, mobile telephone streams, data collected from sensor networks. In this domain, a moving object is an entity that changes position over time, such as a person that walks or cycles, a car, taxi or bus moving in a city, a vessel navigating by sea, etc.
In Semantic Trajectory, the goal is not the mere processing of the geographical trajectory for conventional GIS analysis, but the [*understanding*]{} of the motion of the moving object with respect to the application of interest. Therefore, the spatio-temporal modelling of object trajectory is enriched with semantic information that characterizes the application context, such as the points of interest, like museums, schools, shops, etc., or the annotation of parts of the trajectory to describe different movement behaviors, e.g., walking, cycling, driving. Semantics enhances the analysis of data and facilitates the discovery of semantically implicit patterns and behaviors [@Parent2013], useful for abstracting the modelling domain and for inferring new knowledge. In particular, the ontology-driven enrichment of moving object trajectories is a promising approach for the discovery of itinerary *patterns* [@BaglioniMRTW09], which can be applied for example to detect outliers in sequences of movements.
The analysis of moving object trajectories is a widely used tool in the field of maritime surveillance and security [@camossi2012; @etienne2012], for fighting commercial frauds [@ctrf] and for enforcing supply chain security against smuggling, counterfeiting and drug traffic. Beyond its importance from an economic and citizen security perspective, supply chain monitoring is a challenging application scenario, in particular because the number of containerized shipments to verify is enormous. Indeed, containers are used to ship 25% of world trade cargo and, even if recent legislation mandates an increase in the inspection rate, currently less than 2% of containers can be physically checked without causing expensive delays in the goods trade chain. Furthermore, 90% of containers, i.e., 19 million per year, travel by sea, with an estimated growth to 27 million by 2020. This, combined with the complexity of shipping operations and with the number of subjects involved, makes containerized transport particularly suitable for concealing illegal or hazardous materials.
In such a complex domain, effective Risk Analysis tools are essential to help Customs authorities identify suspicious transportations effectively. Route-based risk indicators (RRI), for example, target high-risk consignments of goods by evaluating the trajectories of cargos, ships and containers. RRIs analyse spatial information such as the ports where a container has been loaded and discharged, the logistics of transshipment operations, and the actual route followed by a container. RRIs complement more traditional risk factors, such as the name of the consignee, the carrier, and the value of the transported goods.
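As a toy illustration of the idea behind route-based indicators (the function, the port weights and the transshipment penalty below are invented for the example and do not come from any operational Customs system), one could combine the risk of the ports touched with the number of transshipments along the route:

```python
# Hypothetical route-based risk indicator: combine per-port risk weights
# with a penalty for each transshipment along the container's route.
# All weights are illustrative, not operational values.

def route_risk(ports, transshipments, port_risk, transshipment_weight=0.5):
    """ports: sequence of port codes; port_risk: port code -> risk weight."""
    port_score = sum(port_risk.get(p, 0.0) for p in ports)
    return port_score + transshipment_weight * transshipments
```

In practice such a score would only rank consignments for inspection; it complements, rather than replaces, the traditional risk factors mentioned above.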
In this work, we describe a novel methodology for semantic pattern discovery that relies on ontologies, and describe the tests we have run in the maritime surveillance scenario to detect suspicious containerized transportations. The approach we propose relies on a top-level ontology for modelling moving object trajectories, namely the Moving Object Ontology (MOO), that has to be extended to represent the properties of the specific application domain. On top of this formalization, movement patterns of interest may be defined as Description Logic (DL) [@DBLP:conf/dlog/2003handbook] axioms. The ontology instances that satisfy the axioms represent the trajectories with the modelled movement behaviour.
In our test scenario, we have defined a knowledge base for the domain of maritime containers, namely the Maritime Container Ontology (MCO) [@DBLP:conf/geos/VillaC11], and modelled [*anomalous*]{} container patterns that describe suspicious movement behaviors. We have run a set of experiments translating the axioms into DL queries, which can be easily tested with different ontology APIs and reasoners on the populated ontology, retrieving the suspicious shipments that follow the defined patterns. For our tests we consider two examples of suspicious patterns; the proposed formalization can be extended to any number of patterns. The patterns we considered, [*[[Loop]{}]{}*]{} and [*[[Unnecessary Transshipment]{}]{}*]{}, are well known in maritime risk analysis. They formalize irregular behaviors involving not only containers but also different vessels, because usually more than one vessel is used to accomplish a container shipment and containers are moved from one vessel to another during transshipment operations. Such patterns are complex enough to show the potential of the semantic approach we propose, and are a step forward with respect to existing approaches proposed in the literature to detect patterns in moving object trajectories [@Baglioni08]. However, despite their apparent complexity, they may be successfully discovered by integrating the knowledge of the locations where the events occur and the event semantics.
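Stripped of its ontological formalization, the intuition behind the Loop pattern can be sketched as a check over the sequence of port calls in a container itinerary (the flat list-of-port-codes representation is a simplification we introduce for illustration; the paper formalizes the pattern as DL axioms over the MCO):

```python
# Minimal sketch of the "Loop" anomalous pattern: a container itinerary
# whose sequence of port calls visits the same port more than once.
# The flat list representation is an illustrative simplification of the
# DL-based formalization used in the paper.

def has_loop(port_calls):
    """Return True if some port appears twice in the itinerary."""
    seen = set()
    for port in port_calls:
        if port in seen:
            return True
        seen.add(port)
    return False
```

The DL formalization goes beyond this sketch precisely because it must relate the container's itinerary to the trajectories of the vessels carrying it, rather than inspect a single flat sequence.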
The methodology we propose can be applied in every context where the event semantics can be explicitly described with respect to [*STOPs*]{} or [*MOVEs*]{} [@Spaccapietra]: specifically, STOPs are the places where a moving object stays for a minimum amount of time, while MOVEs are the subtrajectories between consecutive STOPs. In our application scenario, we modelled STOPs and enriched them semantically with information on container and vessel [*events*]{}. These describe the deeds undertaken on containers to accomplish shipment operations, as well as the arrival and departure operations of vessels in ports.
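In the general case, STOP/MOVE segmentation can be sketched as a minimal dwell-time rule over timestamped, place-labelled points (the point format and threshold are assumptions for the example; in our scenario STOPs instead come pre-labelled by container events):

```python
# Toy segmentation of a trajectory into STOPs and MOVEs in the sense of
# Spaccapietra et al.: a STOP is a place where the object dwells for at
# least `min_dwell` time units; MOVEs are the sub-trajectories between
# consecutive STOPs. Points are (timestamp, place) pairs (illustrative).

def segment(points, min_dwell):
    stops, moves, current_move = [], [], []
    i = 0
    while i < len(points):
        # extend j over the run of consecutive points at the same place
        j = i
        while j + 1 < len(points) and points[j + 1][1] == points[i][1]:
            j += 1
        dwell = points[j][0] - points[i][0]
        if dwell >= min_dwell:
            if current_move:          # flush the MOVE preceding this STOP
                moves.append(current_move)
                current_move = []
            stops.append((points[i][1], points[i][0], points[j][0]))
        else:
            current_move.extend(points[i:j + 1])
        i = j + 1
    if current_move:
        moves.append(current_move)
    return stops, moves
```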
The advantages of the semantic approach we propose in this paper are twofold. First, abstracting the properties of the domain to high-level semantic concepts, it simplifies the reasoning. For instance, every carrier company represents information on events using its own vocabulary but, within the ontology, we can abstract from different vocabularies and reason on generic categories of events that are relevant for the application, such as transshipment events. Moreover, our formalisation relies on DLs, a family of formal knowledge representation languages used to describe and classify concepts and their instances, that combine good expressivity and good computational properties, supporting the practical feasibility of the approach. Indeed, knowledge representation systems based on description logics have been proven useful for structurally representing the terminological knowledge of an application domain. Compared with first-order logic, DLs achieve a better trade-off between the computational complexity of reasoning and the expressiveness of the language. DLs are briefly introduced in Section \[sec:dls\]. The research presented in this paper relies on a previous work [@DBLP:conf/geos/VillaC11], where we introduced the MCO design and the application of axioms for anomalous patterns discovery in container itineraries. With respect to [@DBLP:conf/geos/VillaC11], in this work: (1) we abstract from the application domain to define a methodology for semantic pattern discovery that can be applied in other domains involving moving object trajectories; (2) we define DL-queries, semantically equivalent to ontology axioms, for the efficient retrieval of trajectories that verify the axioms conditions; (3) we run an extensive experimental evaluation on a real world dataset to test the feasibility of the approach.
In our experiments, we have tested different DL reasoners, i.e., Hermit [@hermit], Pellet [@pellet], and FaCT++ [@fact], and two of the most widely used APIs for DL querying: OWL-API [@Horridge2011-owlapi] and SPARQL-DL API [@Sirin07sparqldl], and run the queries implementing the anomalous patterns against four ontologies of increasing size. These have been populated with data taken from a dataset of eighteen million container events, preprocessed to define three hundred thousand container shipments. We have verified that the implementation solution combining the SPARQL-DL API and Pellet achieves the maximum query language expressivity with the best performance, enabling a suspicious pattern to be tested in a few minutes.
In the following, we use the term [*container trajectory*]{} to refer to the spatial trajectory a container follows along a shipment, while with the term [*itinerary*]{} we refer to the same trajectory, semantically annotated with information on the events that occur during the shipment.
The rest of the paper is organized as follows. We first provide the background of this research, discussing recent work on Semantic Trajectories in Section \[sec:related\] and introducing the basic concepts of DLs in Section \[sec:dls\]. In Section \[sec:methodology\], we present the methodology we propose for the discovery of patterns and behaviors in moving object trajectories, that we apply to the domain of containerized transportation in the next sections: we describe the domain knowledge base for maritime container MCO (Section \[sec:mco\]) and give the description logic formalisation of suspicious container itineraries (Section \[sec:axioms\]). Before introducing the experiments we have run in this domain (Section \[sec:exp\]), in Section \[sec:semtools\] we compare the different tools and API for ontology querying that we have evaluated for our experimental evaluation. Finally, in Section \[sec:conclusion\] we discuss the potential development and the shortcomings of the approach we are proposing, concluding the paper.
Semantic Trajectories {#sec:related}
=====================
Most of the research on Semantic Trajectory has originated in the community grown within the FP6 project GeoPKDD [@geopkdd], whose original focus was on privacy-aware exploitation of spatio-temporal data. To continue the investigation on the discovery of knowledge and exploitation of moving object data, GeoPKDD has been followed first by MODAP [@modap] and more recently by SEEK [@seek]. The same community has recently presented a survey of the research in this area [@Parent2013]. Among the active initiatives aiming at boosting the research on moving object modelling, analysis and visualization, a notable contribution has also come from the COST Action MOVE [@move].
Another recent overview has been presented by Spaccapietra and collaborators [@spacca11], the same group that originally proposed the first conceptual model for the representation of semantics in trajectories [@Spaccapietra], which has become a reference model for trajectory data analysis (for example, [@Alvares07; @Guc08; @BaglioniMRTW09; @bogornyIJGIS2009] refer to this model). This model relies on the conceptualization of STOPs and MOVEs in trajectories: a STOP is an interesting place in which a moving entity has stopped or reduced significantly its speed for a sufficient amount of time, likely to accomplish some activity; a MOVE is any subset of the object trajectory between consecutive STOPs, and can be classified, for example, with respect to the type of moving (e.g., running, cycling, driving) or by the mean of transportation used to move.
Most of the research advances on trajectories and semantics may be broadly classified into three research areas: Spatio-temporal Data Modelling for the representation of semantic trajectories; Knowledge Discovery from Data (KDD) for semantic trajectory mining; and Geographic Visualization and Visual Analytics for semantic trajectory visualization. In the rest of the section, we first overview work on semantic trajectories falling in the research areas above; then, we conclude by discussing how our approach differs from the existing state of the art.
Representing Semantic Trajectories
----------------------------------
For the representation and modelling of semantic trajectories, we can distinguish two different approaches: a traditional one that includes moving object semantics from the data design phase, and an a-posteriori approach in which trajectories are annotated by analyzing their raw features, such as the speed of the moving object or the intersection of the object trajectory with Places Of Interest (POI) previously extracted from the corresponding geographical layer.
The first approach is adopted in [@Christophe], where the authors introduce an algebraic model that represents a spatio-temporal trajectory as an Abstract Data Type (ADT) that encapsulates the semantic dimension. A series of trajectory states is potentially observed and measured, and the ADT representation combines a formal definition with manipulation operations, allowing the user to formulate queries on the semantics of the spatio-temporal trajectory data type. Close to this approach is also the work of Pfoser et al. [@Pfoser2003], who generate synthetic datasets of semantic trajectories. The second approach, which can also be referred to as (semantic) [*segmentation*]{} of trajectories, or [*episodes*]{} identification, is more frequent in the literature. The resulting representation is compliant with the model defined by Spaccapietra et al. [@Spaccapietra] whenever interesting places, activities or means of transportation are identified to annotate the STOPs and MOVEs of the trajectory. In particular, STOPs, sometimes called stay points, semantic places or locations, distinguish the different [*episodes*]{}, i.e., the significant segments of a trajectory that identify different phases of the object movement and can be assigned a clear semantics, relevant for the application domain.
Information on candidate STOPs is often encoded in the underlying geographical representation. For example, Cao et al. [@Cao2010] and Guc et al. [@Guc08] select STOPs from pre-encoded POIs crossing the moving object trajectory. Alvares et al. [@Alvares07] apply a similar approach, but selecting the Regions of Interest (ROI) in which the moving object stays for more than a given time, a temporal threshold that can differ for each ROI and is encoded within the ROI representation at a semantic level. Cao et al. [@Cao2010] also give a ranking of the top-k significant locations for each trajectory. The significance of locations for a user is discussed also by Zheng et al. [@Zheng2009], who adopt a hierarchical approach to detect important places and typical travel sequences from user trajectories.
Other works infer STOPs evaluating only the raw features of the trajectory, for example, the time the moving object does not move along the trajectory and the distance between these stops [@Zheng2011], the change of speed [@Palma2008] or direction [@Rocha2010].
The two approaches can be combined, validating and correcting the geographical position of the STOPs resulting by the trajectory features processing with contextual information, like in the work by Yan et al. [@Yan2012; @Yan2011]. Moreover, Yan et al. [@Yan2012; @Yan2011] abstract from the requirement of a specific application domain using POI, ROI and Lines of Interest to annotate STOPs, and enabling to annotate also MOVEs, both as activities, such as walking, driving, cycling, and transportation modes, like bus, car, taxi, etc.
Annotation of MOVEs is also addressed by Yan et al. [@SeTraStream], who realize [*online*]{} identification of episodes by detecting the alteration of patterns within the trajectory. The trajectory segmentation adopts an existing approach for the discovery of trends that evaluates correlation coefficients, and incorporates also modules for trajectory cleaning and compression. The episode tagging is done at a second stage by a classification model trained on trajectory features collected during the online segmentation, such as distance, duration, density, speed, acceleration, heading.
Annotation of MOVEs is also manually assisted by the visual tool developed by Guc et al. [@Guc08]. The work of Wannous et al. [@Wannous2013] is a case of MOVEs annotation for animal trajectories, specifically seals’, to distinguish travelling states (e.g., travelling, resting, foraging). They adopt ontologies to integrate the temporal knowledge needed to infer the different travelling states, which differ in duration and are defined in terms of temporal axioms. Zhu et al. [@Zhu2012] segment GPS trajectories of taxis to infer the taxi status, i.e., free, occupied or parked. Wang et al. [@Wang06] apply clustering on whole trajectories to distinguish among different trajectory types (e.g., pedestrians, vehicles) and activities (e.g., walking, cycling). In this case the labelling is done on an entire trajectory. The result of the clustering is used in particular to infer the structure of the scene in which the objects are moving.
Clustering is also used by Cao et al. [@Cao2010] for the extraction of semantic locations and by Palma et al. [@Palma2008], who adopt spatio-temporal clustering to classify trajectory with respect to their speed.
Finally, van Hage et al. [@vanHage2009] present an interesting approach for modelling and analysing ship trajectories for early awareness in Maritime Surveillance and Security, which takes into account the semantics of the trajectories. Taking as input Marine Automatic Identification System (AIS) messages sent by ships, they build trajectories and segment them by detecting the significant events that represent changes in ship behaviour, such as speeding up, anchored, stopped. Reasoning rules for event labelling are specified in SWI-Prolog, and the geographical knowledge relies on the GeoNames[^2] ontology.
Knowledge Discovery and Exploitation of Semantic Trajectories
-------------------------------------------------------------
As we have seen, some of the methods described above [@Wang06; @Cao2010; @Palma2008] adopt data mining, clustering in particular, for the semantic annotation of trajectories. However, there are also approaches that exploit semantic trajectory for knowledge discovery, in particular movement patterns. In this area, several works have been published by the communities collaborating within the project GeoPKDD and its followers.
Alvares et al. [@Alvares2007b] and Moreno et al. [@MorenoTRB10] take semantic trajectory with annotated STOPs and MOVEs and extract moving patterns considering also background geographical information. Bogorny et al. in [@Bogorny2011] present Weka-STPM, a data mining toolkit for geographical data that takes trajectories with annotated POIs and performs episode recognitions as pre-processing for analysis and visualization. Bogorny et al. in [@bogornyIJGIS2009; @Bogorny2010] formalize the idea of semantic trajectory pattern mining to boost data preprocessing and to mine data at a higher abstraction level. They discuss in particular the discovery of frequent and sequential patterns and association rules from trajectories. Relying on the results presented in [@Alvares07; @Palma2008], they preprocess trajectories to annotate STOPs and MOVEs. Then, mining can be applied directly on the annotated dataset.
Ying et al. [@Ying2010] compute similarity of user trajectories, taking into account trajectory semantics. The same authors in [@Ying2011] rely on user behaviour in similar clusters to predict the next location in a semantic trajectory.
Baglioni et al. [@Baglioni08; @BaglioniMRTW09] represent annotated trajectories in an ontology encompassing also geographical and application domain knowledge. Different kinds of STOPs are considered, and temporal knowledge is used to discriminate among them. Afterwards, they use ontology axioms to infer behavioural patterns.
Similarly to [@Baglioni08; @BaglioniMRTW09], Yan et al. [@YanQuery08] use an ontological approach for the representation of semantic trajectory. They define three different ontology modules for representing geometry, geography and the requirements of the application domain and apply their approach to the application case of traffic management. The geometric module includes a Trajectory Ontology compliant with the model defined by the same authors in [@Spaccapietra]. In their approach, the ABox of the ontology, containing the ontology instances, is stored in a database, specifically Oracle extended with Oracle Semantics, which includes the OWLPrime language, a DL subset, for ontology representation, querying and inference.
Based on a space-time ontology and an event-based approach, Boulmakoul et al. [@DBLP:journals/corr/abs-1205-1796] propose a generic meta-model for moving object trajectories that allows independent applications processing trajectory data to benefit from a high level of interoperability and information sharing, as well as efficient answering of a wide range of complex trajectory queries. Their approach is inspired by ontologies, but the resulting system they propose is database-based.
Apart from pure mining and knowledge discovery, there are also approaches that exploit trajectory semantics for different purposes. For example, Richter et al. [@RichterSL12] use geographical knowledge on POIs to compress trajectories while maintaining an acceptable information loss. Monreale et al. [@Monreale2010] discuss the privacy issues of semantic trajectories. Whenever a user trajectory crosses locations, such as a hospital, that may enable sensitive information about the user to be inferred, a privacy issue arises. To solve this problem, they propose a privacy model for semantic trajectories, and an algorithm that preserves user privacy by modifying the trajectory representation: in a safe trajectory, sensitive locations are abstracted along a place taxonomy to mask them, while preserving the trajectory semantics.
Visualization of Semantic Trajectories
--------------------------------------
Visual Analytics, together with Information Visualization, provides the instruments to empower the human capacity for distillation and knowledge extraction from very large data repositories. In particular, Visual Analytics develops intelligent visualization for data analysis. The research community in this area has proposed several tools to improve the visualization of geographical data, leading to the development of the areas of GeoVisualization and Geo Visual Analytics. Also notable is their contribution in stressing the contextual information attached to trajectories, which allows their refinement and classification [@andrienko2012visual].
One of the main advantages of these visual techniques is the possibility to confirm expected patterns by detecting them, but also to observe the emergence of unexpected ones. This can guide users in revising the collection, extraction, distillation or representation mechanism, or in updating the model. Another observed effect is the possibility to improve the effectiveness of people’s decision-making processes: this can result from the availability of filtering, aggregating and drilling-down functionalities in the visualisation interface. For the specific task of visualizing Geo-Spatial data enriched with temporal information – of which Semantic Trajectories are a subtype – a recent review by Andrienko et al. [@DBLP:journals/vlc/AndrienkoAG03] presents some possible techniques, working as a reference framework for choosing the techniques that better fit the specific characteristics of the data to be represented and the objectives of the analysis.
Other works that address visualization to offer knowledge to the user are present in the literature, such as the Weka-STPM tool [@Bogorny2011]. Beyond pre-processing data to semantically annotate trajectories and mining them, it also includes a visualization interface for the semantic patterns extracted, such as frequent STOPs, MOVEs, and sequential STOPs. Another approach, by Bakshev et al. [@BakshevSMVC11], proposes a framework for trajectory visualisation and querying, where the semantic context of trajectories is modelled as an application domain ontology.
In this area, the work of Andrienko and Andrienko is particularly relevant and a reference for the research community. In [@Andrienko:2011], Andrienko et al. discuss how visualization and the graphical representation of object movement can help understand its meaning, and present a conceptual framework about the possible types of information that can be extracted from movement data. Currently the established visualization techniques for geographical data are [*animated map*]{} and [*space-time cube*]{} (see, for example, [@DBLP:journals/vlc/AndrienkoAG03]), which enhance understanding by taking into account also the temporal dimension of data to support data analysis.
The space-time cube is also used by Zhong et al. [@Zhong2010] to design a method for semantic visualisation of trajectories based on the notion of events, that are modelled as ADTs. Each event is characterised by the actor that does it, and by the place and the time it occurs. Moreover, levels of detail (LOD) are associated to each event type.
Finally, [@Lau10] evaluates the importance of contextual information derived by geographical knowledge for visual analytics approaches to enhance the understanding of human behaviour.
Comparison with the proposed approach
-------------------------------------
With respect to the current state of the art in Semantic Trajectory, our work has some distinguishing characteristics and innovative aspects that we discuss in this section. Referring to the previous classification of the research on this topic, the main contribution of this paper can be ascribed to Knowledge Discovery, because we exploit semantically annotated trajectories for the discovery of movement patterns. However, our work also addresses the representation of trajectories and their semantics, therefore we compare it with the research in both areas.
In our approach, both trajectories and patterns are represented in an application domain ontology that extends a top-level ontology for representing moving objects. Differently from work on trajectory segmentation that infers the implicit semantics of episodes by processing the raw features of the trajectories or contextual knowledge, we adopt a reverse approach: taking spatio-temporal events with explicit semantics, we reconstruct the trajectories that describe the movements from one event to another.
In the test case scenario we propose, we start from Container Status Messages that encompass an explicit description of the activities that are undergoing on containers in a port, and from these labelled STOPs we reconstruct the container trajectories. The case of vessels is slightly different: we first aggregate container events to derive the implicit semantics of vessel events, and from them we build vessel trajectories as in the case of containers. However, we take into consideration the underlying geographical knowledge to distinguish among ports and other types of locations, that do not intervene in the patterns we discuss as examples.
Our approach has in common with [@Wannous2013; @Baglioni08; @BaglioniMRTW09; @YanQuery08; @BakshevSMVC11] the use of ontology for the representation of the domain and expert knowledge. The usage of DL axioms for automatic reasoning on moving object data is applied in particular by [@Wannous2013; @Baglioni08; @BaglioniMRTW09]. Specifically, similarly to Baglioni et al. [@Baglioni08; @BaglioniMRTW09], we focus on the discovery of patterns expressed as ontology axioms and on the retrieval of ontology instances that verify such patterns. However, even if the general approach is the same, with respect to [@Baglioni08; @BaglioniMRTW09] we go a step forward in terms of the complexity of domain knowledge and axioms. In the application scenario we have considered for testing, the design of the MCO includes multiple moving objects (i.e., containers and vessels), and the ontology axioms formalizing anomalous patterns involve different semantic trajectories for these objects. In particular, usually more than one vessel is used to accomplish a container shipment: in transshipment operations, containers are moved from one vessel to another, and continue for another leg of the trip. Transshipments can occur several times along a container trajectory. This implies that, to verify whether a container trajectory is anomalous, we have to compare it with several vessel trajectories.
Moreover, differently from [@Baglioni08; @BaglioniMRTW09], we translate the axioms into DL queries, and evaluate them under different implementation settings, considering combinations of DL query languages, APIs and reasoning engines. By contrast, Baglioni et al. tested their approach in [@BaglioniMRTW09] by importing the domain ontology in ORACLE and using OWLPrime to test the axioms. In our case, we also considered this implementation alternative, but we discovered that OWLPrime is too limited to express the complexity of the axiom conditions we have specified for the application case of maritime containers.
Our work has some similarities with [@DBLP:journals/corr/abs-1205-1796]: the authors have elaborated a meta-model to represent moving objects using a mapping ontology for locations. Despite this similarity, in extracting information from the instantiated model during the evaluation phase they seem to rely on a pure SQL-based approach, whereas we rely on semantic queries.
Description Logics (DL) {#sec:dls}
=======================
In this section we introduce the main features of DLs [@DBLP:conf/dlog/2003handbook], that are the foundational basis of our formalization. In DLs, the domain of interest is modeled by means of individuals, concepts, and roles, denoting objects of the domain, unary predicates, and binary predicates respectively. Concepts correspond to classes, which are sets of objects, while roles correspond to relations, i.e., binary relations on objects.
The basic syntactic building blocks of DLs are atomic concepts ($A$) and atomic roles ($R$). Complex concepts (denoted by $C$ or $D$) can be built from them inductively according to the syntax summarized in Table \[summary\].\
From a semantic point of view, concepts are interpreted as subsets of an abstract domain, while roles are interpreted as binary relations over such a domain. More precisely, an *interpretation* $(\Delta^{{\cal I}},\cdot^{{\cal I}})$ consists of a domain of interpretation $\Delta^{{\cal I}}$, and an interpretation function $\cdot^{{\cal I}}$ that assigns to each atomic concept $A$ a set $A^{{\cal I}}\subseteq\Delta^{{\cal I}}$ and to each atomic role $R$ a binary relation $R^{{\cal I}}\subseteq\Delta^{{\cal I}}\times\Delta^{{\cal I}}$.
\[summary\]
  **Description**           **Syntax**      **Semantics**
  ------------------------- --------------- --------------------------------------------------------------------------------------------------------------------
  universal concept         $\top$          $\Delta^{{\cal I}}$
  bottom concept            $\bot$          $\emptyset$
  atomic concept            $A$             $A^{{\cal I}}$
  concept negation          ${\neg}C$       $\Delta^{{\cal I}}\setminus C^{{\cal I}}$
  intersection              $C\sqcap D$     $C^{{\cal I}}\cap D^{{\cal I}}$
  union                     $C\sqcup D$     $C^{{\cal I}}\cup D^{{\cal I}}$
  existential restriction   $\exists R.C$   $\{x \in \Delta^{{\cal I}}~|~\exists y \in \Delta^{{\cal I}},(x,y)\in R^{{\cal I}}\wedge y\in C^{{\cal I}}\}$
  universal restriction     $\forall R.C$   $\{x \in \Delta^{{\cal I}}~|~\forall y \in \Delta^{{\cal I}},(x,y)\in R^{{\cal I}}\rightarrow y\in C^{{\cal I}}\}$
  transitive role           $R_{T}$         $(x,y)\in R_{T}^{{\cal I}}$ and $(y,z)\in R_{T}^{{\cal I}}$ imply $(x,z)\in R_{T}^{{\cal I}}$
  nominal                   $\{o\}$         $\{o^{{\cal I}}\}$
A *Knowledge Base* (KB) comprises two components: the [*TBox*]{} and the [*ABox*]{}. The TBox is a finite set of terminological axioms, which state how concepts are related to each other. They have two forms: $C\equiv D$ or $C \sqsubseteq D$, where $C, D$ are concepts. Axioms of the first kind, called *equalities*, state that $C^{{\cal I}} = D^{{\cal I}}$; those of the second kind, called *inclusions*, state that $C^{{\cal I}}$ is a subset of $D^{{\cal I}}$, for every interpretation ${{\cal I}}$. The ABox is a finite set of individual assertions of two types: $C(a)$ or $r(a,b)$, where $C$ is a [*concept*]{}, $r$ is a [*role*]{}, and $a,b$ are individuals. Assertions of the first type, called *concept assertions*, state that $a^{{\cal I}}\in C^{{\cal I}}$; those of the second type, called *role assertions*, state that $(a^{{\cal I}},b^{{\cal I}})\in r^{{\cal I}}$, for every interpretation ${{\cal I}}$.
The basic reasoning services in DLs are *satisfiability* and *subsumption*. A concept $C$ is satisfiable in a KB $K$ if $K$ admits a model in which the extension of $C$, i.e., the set of individuals that belong to $C$, is non-empty. A concept $C$ is subsumed by a concept $D$ in $K$ if $C^{{\cal I}}\subseteq D^{{\cal I}}$ for every model ${{\cal I}}$ of $K$. Subsumption can easily be reduced to satisfiability: $C$ is subsumed by $D$ in $K$ if and only if $C \sqcap {\neg}D$ is not satisfiable in $K$. It is therefore sufficient to consider concept satisfiability only.
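To make the table semantics concrete, the following toy sketch (our own illustration, not part of the formalization or of any DL reasoner) evaluates $\mathcal{ALC}$ concept extensions over a single finite interpretation and checks the subsumption-to-satisfiability reduction on it. Note that a real reasoner must consider all models of a KB, not one fixed interpretation; the domain, concept, and role names below are hypothetical.

```python
# Minimal evaluator for the ALC concept semantics of the table above, over one
# finite interpretation. Illustrative sketch only; not a DL reasoner.

DOMAIN = {"c1", "c2", "v1"}
CONCEPTS = {"Container": {"c1", "c2"}, "Vessel": {"v1"}}
ROLES = {"loadedOn": {("c1", "v1")}}

def ext(expr):
    """Extension of a concept expression, encoded as nested tuples."""
    op = expr[0]
    if op == "atom":
        return CONCEPTS[expr[1]]
    if op == "not":                       # ¬C
        return DOMAIN - ext(expr[1])
    if op == "and":                       # C ⊓ D
        return ext(expr[1]) & ext(expr[2])
    if op == "or":                        # C ⊔ D
        return ext(expr[1]) | ext(expr[2])
    if op == "exists":                    # ∃R.C
        r, c = ROLES[expr[1]], ext(expr[2])
        return {x for x in DOMAIN if any((x, y) in r and y in c for y in DOMAIN)}
    if op == "forall":                    # ∀R.C
        r, c = ROLES[expr[1]], ext(expr[2])
        return {x for x in DOMAIN if all((x, y) not in r or y in c for y in DOMAIN)}
    raise ValueError(op)

def subsumed(c, d):
    """C ⊑ D iff C ⊓ ¬D has an empty extension (in this one interpretation)."""
    return not ext(("and", c, ("not", d)))

# In this interpretation, everything loaded on a Vessel is a Container:
print(subsumed(("exists", "loadedOn", ("atom", "Vessel")), ("atom", "Container")))
```

Checking emptiness of $C \sqcap \neg D$ per interpretation is exactly the reduction stated above, restricted to the single model at hand.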
We adopt the DL $\mathcal{ALC}$ [@DBLP:conf/dlog/2003handbook] to represent and reason on the domain, extending its expressivity to represent containerised transportation. In particular, *nominals* and *transitive* roles are needed in this context: nominals to identify the locations involved in a suspicious pattern, and transitive roles to bind every container event to all the subsequent ones. The addition of these two features does not affect the complexity of the basic reasoning services, which, in the presence of an acyclic TBox[^3], remains PSpace-complete as in $\mathcal{ALC}$ [@DBLP:conf/dlog/2003handbook]. Although reasoning has a relatively high worst-case complexity, the pathological cases that lead to it rarely occur in practice [@DBLP:conf/dlog/2003handbook].
A Methodology for Trajectory Pattern Discovery {#sec:methodology}
==============================================
In this section we present the methodology we propose for the discovery of patterns and behaviours in moving object trajectories. Specifically, given a dataset of moving object trajectories, we want to retrieve the trajectories that follow a given pattern, i.e., that exhibit a certain movement behaviour. Our approach strongly relies on ontologies and on the DL formalism: we use an ontology to represent the moving object application domain, and DL axioms to specify the patterns.
In the following, we define the graphical formalism we use in the paper for describing the ontology design; then, using such formalism, we introduce a top-level ontology for modelling moving object trajectories, namely the Moving Object Ontology (MOO). Afterwards, we discuss how the MOO can be extended to formalize the semantics of a specific application domain, and explain how trajectory patterns can be formally defined to enable instance retrieval. Finally, we describe the implementation workflow we have developed for itinerary pattern discovery.
Ontology diagrams
-----------------
In the paper we present the ontology design through ontology diagrams describing the [*concepts*]{} and the [*roles*]{} between them, where concepts and roles have the semantics introduced in Section \[sec:dls\]. An example of ontology diagram is given in Fig. \[fig:toponto\], which illustrates the MOO design. We represent concepts as rectangles with rounded corners, and roles as directed arrows. For the sake of clarity, we do not report the concepts' structural properties, but describe them in the text whenever necessary. In the text, ontology names are emphasized (e.g., [*Moving Object*]{}); entity and concept names are used interchangeably where no ambiguity arises.
Concept [*generalizations*]{} are depicted as straight lines going from low-level to top-level concepts, similarly to the IS-A relation of object-oriented models. Starred labels ([label\*]{}) model one-to-many relationships. Underlined arrow labels represent roles that have been re-defined in sub-concepts; the corresponding domain and co-domain are restricted accordingly by means of ontology axioms.
Moving Object Ontology (MOO) {#subsec:moo}
----------------------------
{width="15cm"}
The fundamental entities of the MOO abstract the features that are common to different domains focusing on the movement of some kind of object, such as traffic analysis for route planning, pedestrian trajectory analysis, animal movement analysis, detection of shipping corridors for maritime surveillance, etc.
The concepts formalising these entities are depicted in Fig. \[fig:toponto\]: [*Moving Object*]{} (MO), [*MO Itinerary*]{}, [*Location*]{}, [*Time*]{}, and [*MO Event*]{}. [*MO*]{} formalises any class of objects that move, such as cars, persons, airplanes, buses, etc. [*MO Itinerary*]{} models the semantically enriched movement of an MO, defined as a [*sequence*]{} of [*MO Event*]{}s. Events are crucial in our modelling, because we rely on them to leverage the trajectory semantics. Events describe the activities accomplished by the MO, each occurring at a specific [*Time*]{} in a particular [*Location*]{}. For example, a container in a port is [*loaded*]{} on a cargo vessel; a car at a gas station is [*refuelling*]{}.
Event semantics can either be [*explicit*]{}, i.e., declared in the data, as is the case for containerized transportation, or [*implicit*]{} but nevertheless inferrable from other contextual information: for example, knowing that a person is in a restaurant at lunch time, we can likely infer that this person is eating. Event semantics may also help infer additional information on the object activity: for example, after a container has been loaded on a vessel, we can foresee that it will soon start travelling.
We can navigate the events of an itinerary in the order in which they occur, relying on their timestamps. Following the sequence, we can track the MO along its trajectory and through the activities it has performed during the itinerary. Event sequences are also modelled intensionally in the MOO through the [*transitive*]{} role [*hasNextEvent*]{}, which links each event to every subsequent event in the sequence.
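The timestamp-based navigation and the transitive [*hasNextEvent*]{} role described above can be sketched as follows (an illustrative sketch with hypothetical event identifiers; the actual implementation relies on the ontology, not on this code):

```python
# Order an itinerary's events by timestamp, then materialize the transitive
# role hasNextEvent as the set of all (earlier, later) event pairs.
from itertools import combinations

events = [
    ("e_load", "2011-03-01"),
    ("e_transship", "2011-03-09"),
    ("e_discharge", "2011-03-20"),
]

ordered = sorted(events, key=lambda e: e[1])   # navigate by timestamp
# Transitivity: every event is linked to ALL subsequent events, not just one.
has_next_event = {(a[0], b[0]) for a, b in combinations(ordered, 2)}

print(sorted(has_next_event))
```

Materializing the closure is what lets an axiom reach any later event with a single `hasNextEvent` step, mirroring the intensional definition in the MOO.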
Fig. \[fig:toponto\] also depicts the roles between concepts. For example, events are connected to [*MO*]{} by the role [*hasMO*]{}; to [*Location*]{}, which generalizes [*City*]{}, [*Port*]{}, [*Train Junction*]{}, etc., by the role [*hasLocation*]{}; and to [*Time*]{} by the role [*hasTime*]{}.
Domain Ontology and Patterns
----------------------------
To model the entities of the application domain of interest, the ontology concepts and roles of the MOO have to be extended. For example, in Fig. \[fig:toponto\] the concept [*Moving Object*]{} is extended to represent [*Car*]{}s, [*Person*]{}s, [*Airplane*]{}s, and [*Bus*]{}es. In the next section, we show how the MOO has been extended to model the domain of containerized transportation.
In our application scenario, we are interested in formalizing movement patterns and in retrieving the trajectories that comply with the behaviour such patterns express. Patterns may be specified directly in the domain ontology as axioms. An axiom defines, using the DL syntax, a new class of objects, whose ontology instances are those verifying the axiom conditions. Therefore, to retrieve the trajectory instances that verify the patterns, it is sufficient to check the pattern axioms against the ontology.
As an alternative, axioms can be transformed into explicit DL queries over the ontology instances. This solution enlarges the implementation possibilities, because different languages and APIs are available to express them; currently, the most used are the OWL-API [@Horridge2011-owlapi] and SPARQL-DL [@Sirin07sparqldl], which we have tested in the experimental evaluation in Section \[sec:exp\].
Pattern Discovery Workflow
--------------------------
The complete workflow for pattern discovery is illustrated in Fig. \[fig:process\]. Once the MOO has been extended to model the application domain (step 1) and the movement patterns have been defined as described above (step 2), we can proceed with the development of the pattern discovery tool. At step (3), the data have to be selected to extract the event sequences, and the event semantics must be made explicit by annotating the moving object trajectories.
{width="14cm"}
Maritime Container Ontology {#sec:mco}
===========================
In [@DBLP:conf/geos/VillaC11], we proposed the Maritime Container Ontology (MCO) to represent the domain of maritime containers.

In the remainder of the section, we describe the MCO design, which extends the MOO formalised above to define containers and container and vessel itineraries, leveraging the semantics of events. We do not report here the detailed design of shipments and shipment phases, which goes beyond the scope of the paper; we refer the interested reader to [@DBLP:conf/geos/VillaC11] for the details.
In the ontology diagrams in this section, we use the following convention for role inheritance: roles in [*italic*]{} are inherited from the MOO as they are, while roles whose name is underlined are inherited roles that have been specialized to refer to specific sub-concepts.
Containers and Shipments {#sec:container}
------------------------
{width="15cm"}
In the MCO every container is modelled by an instance of the concept [*Container*]{}, which extends [*Moving Object*]{} in the MOO (see Fig. \[fig:toponto\]). Each container has a unique identifier, corresponding to its ISO 6346 [@ISO6346] identification code, i.e., the BIC code [^4]. Every container belongs to a [*Carrier*]{}, i.e., a shipping company, or to a leasing company that leases the container to a carrier; the container is connected to it by the role [*belongsTo*]{} (see Fig. \[fig:container\]).
Each [*Shipment*]{} is handled by a [*Carrier*]{} to deliver a set of [*Goods*]{} and encompasses the dates when the order has been placed, shipped, and delivered to a [*Consignee*]{}. A shipment comprises at least one [*Container Shipment*]{}; each [*Container Shipment*]{} refers to a single container and has one [*Container Itinerary*]{}.
Container Itineraries and Events
--------------------------------
A [*Container Itinerary*]{} is defined by all the events occurring to a container to accomplish a shipment. These encompass the transport, which is mainly performed by sea, but also the operations to prepare and conclude the shipment. Therefore, a container itinerary goes beyond the mere trajectory of the container, and represents the complete history of the shipment performed using the container.
A [*Container Event*]{} describes any deed undertaken on a container, such as [*Loaded to vessel*]{} or [*Discharged at port*]{}. [*Container Event*]{} extends [*MO Event*]{} in the MOO and refers to the [*Time*]{} at which it occurs (e.g., 26th of November 2020) and to the [*Location*]{} where it took place: either a port in intra-customs transport, or a train station or a city in inland transportation. Each container event also refers to other information dimensions, including the container [*Loading Status*]{} (i.e., empty, full) and, for events referring to transportation, a [*Mean of Transport*]{}: in particular, [*Vessel*]{}s for [*Maritime Container Event*]{}s, the events occurring during maritime transportation.
{width="16.5cm"}
There is no standard for event descriptions, and each carrier adopts its own. Within the project, an effort towards the standardization of container events has been promoted, and the outcome has been formalized in the MCO: in Fig. \[fig:events\] we report eighteen events, classified under four classes of top-level events: [*Trip Start*]{}, [*Maritime/Transshipment Event*]{}, [*Trip End*]{}, and [*Other*]{}. Each event, as specified by the carrier, is mapped to an instance of one of the concepts in the figure. This mapping simplifies the representation of the application domain and makes it possible to abstract from the contextual knowledge of the carrier vocabulary when defining the axioms for anomalous patterns, as we will see in Section \[sec:axioms\]. Top-level events characterize the different phases of a shipment. In Fig. \[fig:container\], only [*Maritime/Transshipment Event*]{}s are shown, to focus on the main events occurring during the maritime part of a container itinerary, i.e., loading to and discharging from vessels. For such events, the [*Vessel*]{} the container has been loaded to or discharged from is also reported (roles [*hasDischargingVessel*]{} and [*hasLoadingVessel*]{}). In case a transshipment occurs in an intermediate port, two vessels are always involved and the two roles are filled accordingly. As we will see in Section \[sec:axioms\], transshipments from one vessel to another play an important role in defining suspicious patterns.
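The carrier-vocabulary mapping just described can be sketched as a simple lookup from raw event descriptions to the four MCO top-level classes. The sample raw strings below are illustrative, not an actual carrier vocabulary:

```python
# Normalize carrier-specific event descriptions to MCO top-level event classes.
# The left-hand strings are hypothetical examples of raw CSM descriptions.
EVENT_CLASS = {
    "LOADED TO VESSEL": "Maritime/Transshipment Event",
    "DISCHARGED AT PORT": "Maritime/Transshipment Event",
    "RELEASED TO SHIPPER FOR CARGO STUFFING": "Trip Start",
    "EMPTY RETURNED": "Trip End",
}

def classify(raw_description):
    """Map a raw event description to its MCO top-level class ('Other' if unmapped)."""
    return EVENT_CLASS.get(raw_description.strip().upper(), "Other")

print(classify("Loaded to vessel"))
```

Once normalized this way, axioms can refer to abstract concepts such as [*Transshipment Event*]{} without depending on each carrier's wording.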
Other events, such as [*Released to Shipper for Cargo Stuffing*]{} and [*Empty Returned*]{}, do not describe any container movement, but deeds performed to prepare the container for shipping at the source port or to complete the shipment at the port of destination. They may be helpful to confirm the presence of a container in a port at the beginning and at the end of a shipment, as well as to define the temporal period spent by the container in a port, helping characterise the itinerary with better accuracy.
Vessels Events and Itineraries {#subsec:vesselRoutes}
------------------------------
In the MCO, we focus in particular on cargo vessels, because most of the import-export of goods is performed by sea. Vessels in the ontology are uniquely identified through their name and, when available, the International Maritime Organization (IMO) number.
![[*Vessel Event*]{} and [*Vessel Itinerary*]{}[]{data-label="fig:vessel"}](Fig05_Vessel_restricted.png){width="8cm"}
We focus on [*Arrival*]{} and [*Departure*]{} events (see Fig. \[fig:vessel\]), which occur in [*Port*]{}s and are sufficient to define the vessel movement. A [*Vessel Itinerary*]{}, as above, models extensionally a sequence of events, which is also defined intensionally through the transitive role [*hasNextEvent*]{} inherited from [*Moving Object Event*]{}. As before, instances of [*Vessel Event*]{} model the STOPs of a [*Vessel Itinerary*]{} [@Bogorny2010; @Spaccapietra]. In particular, as described above, a [*Transshipment*]{} of a container involves two different vessels.
Suspicious Patterns {#sec:axioms}
===================
On top of the semantic model formalising the domain knowledge, we developed the axioms for the discovery of anomalous patterns. In particular, here we present two suspicious patterns: [*[[Loop]{}]{}*]{} and [*[[Unnecessary Transshipment]{}]{}*]{}. These patterns have been defined in collaboration with experts of the Customs Risk Intelligence Department, and they potentially suggest that some fraudulent activity has occurred, because they involve unnecessary operations that entail extra costs or delays for the shipper.
These patterns are defined in the MCO as DL axioms. Each axiom combines ontology concepts with logical operators, defining implicitly the class of objects describing the container itineraries following the corresponding suspicious pattern.
Both axioms crosscheck container and vessel itineraries. This is because cargo vessels transport thousands of containers during their trips, and usually call at more than one port per voyage. For logistic reasons, when a vessel arrives in a port, some containers are [*transshipped*]{} to reach the next port in their itinerary; at the same time, other containers are loaded onto the vessel, which will continue its trip. A container may be transshipped several times before reaching its destination; therefore, vessel routes do not coincide with maritime container itineraries, but only partially overlap with them. To discover anomalies, we have to crosscheck container itineraries with vessel trips, in order to discover the real trajectory followed by a container.
Suspicious patterns go beyond the simple patterns presented in similar approaches [@Baglioni08], in particular because they involve multiple itineraries and event classes: each axiom evaluates a container itinerary together with the itineraries of the vessels used for its shipment. This is necessary because the container itinerary is not completely specified on its own: to fully understand it, we have to take into account loading and discharging operations, intersecting the container trajectory with those of the vessels used for its transportation. Moreover, the semantics of the container STOPs [@Spaccapietra] is not inferred from the place classification, but derived from the event descriptions.
Loop
----
The pattern [[Loop]{}]{} is graphically depicted in Fig. \[fig:cycle\]. A container is loaded on $Vessel_1$ in port $P_1$ at time $t_1$, with destination $P_x$. At time $t_3$ $Vessel_1$ reaches the intermediate port $P_3$, where the container is transshipped on $Vessel_2$. Afterwards, $Vessel_1$ continues its itinerary, while $Vessel_2$ comes back to port $P_1$ before reaching $P_x$.
![Pattern [*Loop*]{}: (1) the container is loaded on $Vessel_1$ in port $P_1$; (2) the container is transshipped on $Vessel_2$ in port $P_3$; (3) the container is back in port $P_1$ before reaching its final destination[]{data-label="fig:cycle"}](Fig06_PatternLoop.png){width="8cm"}
Given the formalisation represented in Fig. \[fig:container\] and Fig. \[fig:vessel\], the axiom that formalises pattern [[Loop]{}]{} defines the class of container itineraries that involve a transshipment on a vessel that comes back to port $P_1$ before reaching port $P_X$, as depicted in Fig. \[fig:cycle\]. The corresponding DL specification is as follows:
[*(axiom [[Loop]{}]{})*]{}\
$$\begin{aligned}
&{\texttt{LoopP1\_P2}}\equiv&
{\texttt{MaritimeContainerItinerary}}\\
& &
\sqcap \exists{\texttt{hasCISourcePort}}.{\texttt{\{P1\}}}\sqcap\\
& &
\exists{\texttt{hasCIDestinationPort}}.{\texttt{\{PX\}}}\sqcap\\
& &
\exists{\texttt{hasContainerEvent}}.({\texttt{Transshipment\_Event}}\sqcap\\
& &
\exists{\texttt{hasLoadingVesselEvent}}.(\exists{\texttt{hasNextEvent}}\\
& &
.(\exists{\texttt{hasVPort}}.{\texttt{\{P1\}}}\sqcap\\
& &
\exists{\texttt{hasNextEvent}}.\exists{\texttt{hasVPort}}.{\texttt{\{PX\}}}))))\\\end{aligned}$$ [ $\Box$]{}
The core of the axiom is the concept [*Transshipment Event*]{}, which allows abstracting from the specific definitions of transshipment, avoiding dependence on the different ways the same events can be described; it is combined with the role [*hasLoadingVesselEvent*]{} (see Fig. \[fig:vessel\]), which links the container itinerary to the route of any vessel used for its transportation. The axiom [[Loop]{}]{} matches all the itineraries in which a loading vessel comes back to the port of origin of a container before reaching the shipment destination.
Note that it matches all loop patterns, regardless of the number of transshipments performed during the itinerary of the container. However, to prune false positives, we have to take into account two dates: the first is the container arrival, and the second is the arrival of the vessel that performs the loop. If they fall on the same day, or on very close days, we can be confident that the itinerary is suspicious; if they differ by months, or even years, we may instead be in the presence of a gap in the container or vessel event sequence.
$P_1$ and $P_X$ are two nominal concepts that denote two different ports. To process all the ports in a dataset, the implementation described in Section \[sec:exp\] processes the axiom iteratively on all possible pairs of locations.
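The port-sequence test that axiom [[Loop]{}]{} encodes can be paraphrased procedurally as follows. This is an illustrative sketch under our own assumptions (function name and data layout are ours, not part of the implementation of Section \[sec:exp\]):

```python
# Procedural paraphrase of axiom Loop: after the transshipment, the loading
# vessel calls at the container's source port P1 before its destination PX.
def has_loop(source_port, dest_port, vessel_ports_after_transshipment):
    """vessel_ports_after_transshipment: ports visited by the loading vessel
    after the transshipment event, in order (cf. the role hasNextEvent)."""
    ports = vessel_ports_after_transshipment
    if source_port not in ports or dest_port not in ports:
        return False
    # Suspicious iff the vessel reaches P1 strictly before PX.
    return ports.index(source_port) < ports.index(dest_port)

# Vessel_2 comes back to P1 before reaching PX: flagged as suspicious.
print(has_loop("P1", "PX", ["P3", "P1", "PX"]))
```

Iterating this check over all ordered pairs of ports mirrors the iterative evaluation of the axiom described above.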
We also propose a slightly different specification of the axiom to cover the case in which $P_1$ is not the starting port of the container itinerary, but one of the intermediate ports that the container reaches before arriving at the final destination. In this case, we have to test the axiom considering for $P_1$ all possible ports that come before $P_X$ in the trip. The corresponding DL specification is as follows:
[*(axiom [[Loop]{}]{} - intermediate ports)*]{}\
$$\begin{aligned}
&{\texttt{LoopP1\_P2}}\equiv&
{\texttt{MaritimeContainerItinerary}}\\
& &
\sqcap \exists{\texttt{hasCIEvent}}.(\exists{\texttt{hasLocation}}.{\texttt{\{P1\}}})\sqcap\\
& &
\exists{\texttt{hasCIDestinationPort}}.{\texttt{\{PX\}}}\sqcap\\
& &
\exists{\texttt{hasContainerEvent}}.({\texttt{Transshipment\_Event}}\sqcap\\
& &
\exists{\texttt{hasLoadingVesselEvent}}.(\exists{\texttt{hasNextEvent}}\\
& &
.(\exists{\texttt{hasVPort}}.{\texttt{\{P1\}}}\sqcap\\
& &
\exists{\texttt{hasNextEvent}}.\exists{\texttt{hasVPort}}.{\texttt{\{PX\}}}))))\\\end{aligned}$$ [ $\Box$]{}
Unnecessary Transshipment
-------------------------
Pattern [[Unnecessary Transshipment]{}]{} is depicted in Fig. \[fig:pattern2\]: a container, loaded at time $t_1$ on $Vessel_1$ in port $P_1$, is transshipped onto $Vessel_2$ in an intermediate port $P_3$ at time $t_3$; afterwards, both $Vessel_1$ and $Vessel_2$ arrive at port $P_4$, which is the container destination, so the transshipment was not necessary. Such a manipulation of container itineraries is often put in place to conceal the real origin of a shipment and take advantage of convenient duty agreements between the countries involved: indeed, thanks to such an unnecessary transshipment, a fraudulent shipper can easily manipulate the container documents, pretending that the shipment originated from the starting port of $Vessel_2$, i.e., port $P_2$, instead of $P_1$.
![Suspicious Pattern [*Unnecessary transshipment*]{}: (1) the container is loaded on $Vessel_1$ in port $P_1$; (2) the container is transshipped on $Vessel_2$ in port $P_3$; (3) the container arrives at port $P_4$; also $Vessel_1$ reaches the same port[]{data-label="fig:pattern2"}](Fig07_UnnecessaryTranssipment.png){width="8cm"}
Given the formalisation represented in Fig. \[fig:container\] and Fig. \[fig:vessel\], the DL axiom formalizing pattern [[Unnecessary Transshipment]{}]{} is as follows:
[*(axiom [[Unnecessary Transshipment]{}]{})*]{}\
$$\begin{aligned}
&{\texttt{Unnecess\_TransP}}\equiv&
{\texttt{MaritimeContainerItinerary}}\sqcap\\
& &
\exists{\texttt{hasCIDestinationPort}}.{\texttt{\{P\}}}\sqcap\\
& &
\exists{\texttt{hasContainerEvent}}.({\texttt{Transshipment\_Event}}\sqcap\\
& &
\exists{\texttt{hasDischargingVesselEvent}}.( \exists{\texttt{hasNextEvent}}\\
& &
.(\exists{\texttt{hasVPort}}.{\texttt{\{P\}}}))))\\\end{aligned}$$ [ $\Box$]{}
Also in this example, the main elements of the axiom are the concept [*Transshipment Event*]{} and the connection between container and vessel events: here, this connection is represented by the role [*hasDischargingVesselEvent*]{}, which allows passing from the description of the container itinerary to that of the vessel that brought the container to the transshipment port.
We have to point out that the instances matching this axiom have to be further elaborated, because the axiom matches all the vessels that pass through the container destination, i.e., port $P$ in the example, after the transshipment. As a simple strategy to prune the suspicious itineraries, one can evaluate the date of arrival of the first vessel at the container destination: if it falls on the same day as the container arrival, or on very close days, the transshipment was not necessary and the container itinerary can be labeled as anomalous.
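The date-proximity pruning step just described can be sketched as follows (an illustrative sketch; the threshold `max_gap_days` is an assumed parameter to be tuned with domain experts, not a value from our implementation):

```python
# Prune Unnecessary Transshipment candidates by comparing the container's
# arrival date at the destination with the first vessel's arrival date there.
from datetime import date

def unnecessary(container_arrival, vessel_arrival, max_gap_days=3):
    """Flag the transshipment as unnecessary only if the two arrivals are
    within max_gap_days of each other (an assumed, tunable threshold)."""
    return abs((vessel_arrival - container_arrival).days) <= max_gap_days

print(unnecessary(date(2011, 3, 20), date(2011, 3, 21)))   # arrivals one day apart
print(unnecessary(date(2011, 3, 20), date(2011, 9, 2)))    # months apart: likely a data gap
```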
Ontology Querying Tools: a Survey {#sec:semtools}
=================================
In this section we review the tools and technologies available to query our ontology. As discussed above, we can retrieve the trajectories that follow the patterns of interest by checking the DL axioms that formalize such patterns against the ontology, because axiom checking implicitly populates the classes encompassing the trajectory instances that verify the patterns. Different DL reasoners can be applied to check the axioms, the most common being Pellet [@pellet], FaCT++ [@fact], Hermit [@hermit], and RacerPro [@racer].
As an alternative, we can retrieve the trajectory instances by querying the ontology through an ontology Query Language (QL). This solution augments the expressivity at our disposal for pattern specification, and enables us to test alternative QLs and different QL implementations, possibly benefiting from improved performance.
[**QL**]{} [**KBL**]{} [**Expressiveness**]{}
------------------------------------- ------------- ----------------------------------------
SPARQL [@sparql] RDF, OWL subgraph matching, conjunctive queries
RQL [@Karvounarakis_rql] RDF subgraph matching
SeRQL [@SeRQL] RDF subgraph matching
RDQL [@rdql] RDF subgraph matching
ASK DIG [@dig] OWL DL atomic queries (TBox/RBox/ABox)
OWLink protocol [@owllink] OWL DL atomic queries (TBox/RBox/ABox)
OWL-QL (DQL) [@Fikes2004-owlql] OWL DL atomic queries (TBox/RBox/ABox)
OWLQ [@owlq] OWL DL atomic queries (TBox/RBox/ABox)
SAIQL [@Kubias07owlsaiql] OWL DL atomic queries (TBox/RBox/ABox)
nRQL [@Haarslev04nRQL] OWL conjunctive ABox queries
ONTOVQL [@FadhilH07-ontovql] OWL DL atomic queries (TBox/RBox/ABox)
SQWRL [@OConnorD08-sqwrl] OWL + SWRL DL atomic queries + SWRL rules
SPARQL-DL [@Sirin07sparqldl] OWL conjunctive TBox, RBox, ABox queries
SPARQL 1.1 [@sparql11] OWL conjunctive TBox, RBox, ABox queries
SPARQL-OWL [@Kollia2011-sparqlowl] OWL conjunctive TBox, RBox, ABox queries
Table \[tab:ql\] gives an overview of the existing ontology QLs. They can be broadly classified into three categories: RDF-based languages, which apply subgraph matching of RDF triples against the ontology graph but lack DL reasoning capabilities; DL-based languages, which directly support the DL semantics, usually in the form of atomic DL expressions; and mixed approaches, which combine DL expressivity with query conjunction.
The most used RDF-QL is SPARQL, the W3C recommendation for querying RDF graphs through subgraph matching of triples. DL-based languages enable TBox, RBox, and ABox queries to be run directly against OWL files. For some QLs, such as nRQL [@Haarslev04nRQL], the Racer DL-QL, a limited form of query conjunction is also supported. Other DL-based approaches augment the QL expressivity by providing graphical instruments to specify a query, like ONTOVQL [@FadhilH07-ontovql], or integrate support for rules (i.e., Horn clauses), like SQWRL [@OConnorD08-sqwrl], which takes rule antecedents as query specifications. Finally, the OWLink protocol [@owllink], which supersedes the ASK DIG interface [@dig] for interacting with OWL 2.0 ontologies, is a reference interface for DL reasoning and querying.
A big step forward towards improving language expressivity, while preserving decidability and performance, is given by recent proposals combining the two approaches above, specifically extending the SPARQL simple entailment, based on subgraph matching, with DL reasoning, in particular OWL semantics. The widest-scope proposal is a recent W3C Candidate Recommendation, SPARQL 1.1 [@sparql11]: it encompasses entailment regimes [@sparql11-entailment] for RDF, RDFS, RIF Core, D-entailment, and the OWL Direct and RDF-Based Semantics entailments. The SPARQL 1.1 specification relies on the work of different communities, including those working on SPARQL-OWL [@Kollia2011-sparqlowl] and SPARQL-DL [@Sirin07sparqldl]. SPARQL-OWL, in particular, has been implemented by extending the engine of the Hermit reasoner (a benchmark is provided, but the source code is not available). By contrast, a fully functional API for SPARQL-DL [@Sirin07sparqldl] is available: it extends the Pellet [@pellet] query engine, and is currently a very competitive solution for ontology querying, as we discuss in the experimental evaluation section. The SPARQL-DL API and other tools that either support an ontology QL or generically enable querying an ontology are reported in Table \[tab:owltools\].
[**Tool/API**]{} [**QL/Expressiveness**]{} [**Reasoner**]{}
---------------------------------------------------- ---------------------------------- ---------------------------------------------------------------------------------------
JENA [@jena] SPARQL OWL reasoners but only subgraph matching
KAON2 [@kaon2] SPARQL Integrated reasoner (OWL Lite, DL safe SWRL, FLOGIC) DIG ASK interface
KAON2 OWL Tools [@owltools] SPARQL-DL Lite OWL-API compliant
NEON Toolkit [@neon] SAIQL OWL-API compliant
Protégé-OWL API [@protegeowl] DL atomic TBox/RBox/ABox queries DIG ASK compliant
SQWRL-API [@Kollia2011-sparqlowl] SQWRL Jess Rule Engine, RacerPro
OWL2Query [@owl2query] SPARQL-DL$^{NOT}$ OWL-API v. 3 compliant
RacerPro APIs [@racer] nRQL RacerPro
OWL-API [@Horridge2011-owlapi] DL atomic TBox/RBox/ABox queries FaCT++, Hermit, Pellet, CEL [*(OWL-API v.3 compliant)*]{}, and RacerPro (via OWLLink)
OWLLink API [@owllink] DL atomic TBox/RBox/ABox queries RacerPro, OWL-API v.3 compliant reasoners
SPARQL-DL API [@Sirin07sparqldl] SPARQL-DL OWL-API v.3 compliant reasoners
ORACLE Database Semantic Technologies [@owlprime] RDF,RDFS++,OWLSIF,OWLPrime
Among the tools listed in the table, JENA [@jena] and KAON2 [@kaon2] are mainly designed for RDF knowledge bases: even if they can handle OWL ontologies, reasoning is performed as subgraph matching in JENA, while KAON2 implements the DIG ASK interface, limited to OWL-Lite for DL reasoning but partially extended towards SWRL and F-Logic. KAON2 OWL Tools partially supports SPARQL-DL, but apparently this project is no longer maintained.
Among the tools specifically designed for OWL ontologies, the OWLink API [@owllink], the API for the OWLink protocol, is the evolution of the DIG interface for OWL 2.0. The Protégé-OWL API [@protegeowl] is an API designed for plugin development, while SQWRL-API [@Kollia2011-sparqlowl] and OWL2Query [@owl2query] are Protégé plugins for ontology querying, integrating SWRL rules and SPARQL-DL$^{NOT}$ (SPARQL-DL with negation as failure), respectively.
NEON [@neon], the RacerPro APIs, and SQWRL-API adopt query languages specifically designed for these tools: SAIQL, nRQL, and SQWRL, respectively. Of these, the RacerPro API is the most used. However, the supported QL, nRQL, as mentioned above, enables only conjunctive ABox queries; moreover, only the 32-bit version of the reasoner is available, and the free license for research has some limitations.
The OWL-API [@Horridge2011-owlapi] is an open source API written in Java that is considered a reference interface for ontology manipulation. It is widely used and is implemented by several DL reasoners, including FaCT++, Hermit, Pellet, CEL (referred to in the table as OWL-API v.3 compliant reasoners), and RacerPro. It directly supports entailment checking for answering DL atomic queries, but it does not allow answering conjunctive or SPARQL-based queries.
By contrast, as mentioned above, these functionalities are supported by the SPARQL-DL API [@Sirin07sparqldl], which extends the OWL API to enable conjunctive DL query answering. Moreover, through the OWL API, querying can be performed with any OWL API compliant reasoner.
Recently, mainstream database vendors have also proposed products that combine the ability of databases to handle large amounts of data with the reasoning capabilities offered by ontologies. In its latest version 11g, ORACLE Database includes a module for Semantic Technologies that supports RDF and OWL files with three different vocabularies: RDFS++, which is an extension of RDFS; OWLSIF, OWL with the support of the IF semantics; and OWLPrime, an OWL subset that does not support cardinality property restrictions, set operators (union, intersection) and enumeration. OWLPrime is by far the language that provides the maximum expressivity among those offered by this product, and OWLPrime expressions can be integrated in SPARQL-like queries that can be specified directly against the database. Unfortunately, the lack of set expressions does not allow specifying DL axioms with conjunctions or disjunctions of atomic expressions, limiting the applicability of this type of product.
Experimental Evaluation {#sec:exp}
=======================
![Overview of the three-step experimental evaluation[]{data-label="fig:evaluation"}]{width="15cm"}
The experimental evaluation has been organized in three steps, as depicted in Fig. \[fig:evaluation\]. At step (1), we first select the data to process. We have chosen a sample dataset from the data collected by JRC as part of its container monitoring activity. The dataset includes 18 million [*Container Status Messages*]{} (CSM). A CSM is a semi-structured text that describes a shipping action undertaken by carrier companies on a container. Each CSM includes the position of the container, the operation carried out on it (which we formalize in the MCO as a container event), its loading status and the vessel used for its transportation. The initial dataset included CSMs referring to 50 thousand containers travelling worldwide over three years, from 2009 to 2012.
During the pre-processing phase in step (1), we segment CSM sequences to extract container itineraries, identifying container shipments and vessel trips. As a result of the segmentation phase, more than 290 thousand container itineraries and more than 43 thousand vessel trips have been identified. Since usually more than one vessel is used to accomplish a container shipment, and every vessel transports thousands of containers in a single trip, we need to map every part of a container itinerary to the corresponding vessel trip. This concludes the pre-processing phase.
We populate the MCO at step (2) with the itineraries and the related information. The MCO has been implemented in OWL-DL, the description logic sublanguage of the Web Ontology Language OWL [@owl], according to the design described in Section \[sec:mco\]. OWL is widely used for ontology definition, therefore many tools and libraries are available for ontology editing, population, visualization and querying (cf. Section \[sec:semtools\]). Moreover, it includes semantic features to enhance reasoning, in particular ontology axioms, which we use to express suspicious itinerary patterns. Among the available tools, we chose the Jena Java API [@jena] for populating the ontology.
To have a more meaningful evaluation of the approach, in particular in terms of performance scalability, we ran different tests using four ontologies of different sizes, randomly created starting from the initial dataset, containing 100589, 153816, 207356 and 260637 individuals, respectively. To gain insight into the complexity of the ontologies, consider the numbers of the other types of individuals, namely container and vessel itineraries, containers, vessels, and ports, as summarized in Table \[tab:dataset\]. Notice for instance that, while the number of containers increases more or less proportionally to the number of container itineraries, which is our reference dimension for the experimental evaluation, the increase in the number of vessels remains limited. This phenomenon is even more evident for the ports traversed by the itineraries. The fact that the number of ports remains bounded is an advantage for our application: in the evaluation of the axioms we have to scan iteratively all the ports the containers passed through, so the number of ports in the dataset can easily become a bottleneck for the application. Conversely, if an axiom has acceptable performance with a limited number of container itineraries, we can expect reasonable processing times even with a larger number of shipments, because the number of ports does not increase proportionally. We expect this consideration to apply in other application domains as well; for example, the locations crossed by itineraries do not increase proportionally when considering a bigger dataset of trajectories.
  Ontology   OWL individuals   Container itineraries   Containers   Vessels   Ports
  ---------- ----------------- ----------------------- ------------ --------- -------
  owl5K      100589            5000                    4763         841       565
  owl10K     153816            10000                   9203         960       593
  owl15K     207356            15000                   13264        1023      604
  owl20K     260637            20000                   17012        1078      618

  : Characteristics of the four test ontologies[]{data-label="tab:dataset"}
At step (3), we query the MCO with a set of DL queries that implement the anomalous itinerary axioms formalized in Section \[sec:axioms\]. We tested different ontology APIs, languages and reasoners: the OWL-API [@Horridge2011-owlapi] and SPARQL-DL [@Sirin07sparqldl] DL query languages, combined with the Pellet [@pellet], Hermit [@hermit] and FaCT++ [@fact] reasoners.
Data selection and pre-processing {#sec:datapreparation}
---------------------------------
  ---------------- ---------------------- ------------- --------------------- ------------------ ---------------- --------
  CSM identifier   Container identifier   Time          Event                 Location           Loading status   Vessel
  12345            ABCD1234567            27 May 2010   Received at Origin    Shangai (CN)       Empty            –
  12346            ABCD1234567            27 May 2010   Gate In               Shangai (CN)       Full             –
  12350            ABCD1234567            30 May 2010   Loaded/Ramped         Shangai (CN)       Full             Aurora
  12365            ABCD1234567            15 Jun 2010   Discharged/Deramped   Port Kelang (MY)   Full             –
  12366            ABCD1234567            17 Jun 2010   Loaded/Ramped         Port Kelang (MY)   Full             Dawn
  12381            ABCD1234567            03 Jul 2010   Discharged/Deramped   Antwerpen (BE)     Full             –
  12399            ABCD1234567            09 Jul 2010   Gate Out              Antwerpen (BE)     Full             –
  12455            ABCD1234567            16 Jul 2010   Final Destination     Antwerpen (BE)     Full             –
  12484            ABCD1234567            20 Aug 2010   Received at Origin    Antwerpen (BE)     Empty            –
  12545            ABCD1234567            23 Aug 2010   Gate In               Antwerpen (BE)     Full             –
  12555            ABCD1234567            24 Aug 2010   Loaded/Ramped         Antwerpen (BE)     Full             Sun
  ---------------- ---------------------- ------------- --------------------- ------------------ ---------------- --------

  : Example of event sequence for container ABCD1234567[]{data-label="tab:seq"}
For each container in the CSM dataset, we extracted the corresponding [*event sequence*]{}, which details the shipment history of a single container. An example of container sequence is reported in Table \[tab:seq\]. Each line in the table represents a CSM, which is composed of: a CSM identifier; an ISO 6346 container identifier[^5]; the date when the event occurred; a textual description of the event; the place, usually a port, where it took place; the loading status of the container (empty or full); and, depending on the event type, a vessel identifier.
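These fields map naturally onto a small record type. The following Python sketch is illustrative only (the authors' pipeline is implemented in Java and PL/SQL); the field names, the date format, and the use of a plain `-` for a missing vessel are assumptions based on Table \[tab:seq\]:

```python
from collections import namedtuple
from datetime import datetime

# Fields of a Container Status Message, following Table [tab:seq].
CSM = namedtuple("CSM", "csm_id container_id date event location loading vessel")

def parse_csm(fields):
    """Build a CSM from the seven raw columns; '-' marks a missing vessel."""
    csm_id, container_id, date, event, location, loading, vessel = fields
    return CSM(
        csm_id=int(csm_id),
        container_id=container_id,  # ISO 6346 identifier
        date=datetime.strptime(date, "%d %b %Y").date(),
        event=event,
        location=location,
        loading=loading,
        vessel=None if vessel == "-" else vessel,
    )
```

For example, `parse_csm(["12350", "ABCD1234567", "30 May 2010", "Loaded/Ramped", "Shangai (CN)", "Full", "Aurora"])` yields a record whose `vessel` field is `"Aurora"`.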
Each container sequence is then processed to extract the container and vessel itineraries, as described next.
### Reconstructing Container Itineraries
![UML Class diagram of the API for itinerary segmentation[]{data-label="fig:acid"}](ACIDClassDiagram.png){width="8cm"}
The itinerary segmentation is implemented in Java and leverages the semantics of container events, as defined in the ontology excerpt reported in Fig. \[fig:events\]. The class diagram of the API is reported in Fig. \[fig:acid\]. Specifically, we segment every container event sequence into different shipments.
Ideally, an itinerary is composed of the following phases, corresponding to the five main categories of events described in Section \[sec:container\]:
1. Begin of Trip;
2. Container Export;
3. an optional sequence of Container Transshipments;
4. Container Import;
5. End of Trip.
For instance, the sequence in Table \[tab:seq\] includes two itineraries for container ABCD1234567: the first starting at Shangai in China on the 27th of May and ending at Antwerpen in Belgium on the 16th of July; and the second, which is partial, starting at Antwerpen on the 20th of August. Note that we can have gaps in the event sequence, so the segmentation algorithm can produce partial itineraries, or merge different itineraries into a single one. To partially overcome this issue, the algorithm also takes into account events that do not describe a container movement but are actions occurring to prepare the container for the shipping at the source port or to complete it at the port of destination (e.g., released to shipper for cargo stuffing, empty returned). These events, complemented with the loading status of the container, help place a container in a specific port at the beginning and at the end of a shipment, and define more precisely the temporal period a container spends in a port.
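A minimal version of this segmentation logic can be sketched in Python (illustrative only; the actual Java implementation also exploits the loading status and the preparatory events discussed above). Here we assume, as in Table \[tab:seq\], that a Begin of Trip phase is signalled by a "Received at Origin" event and an End of Trip phase by "Final Destination"; unclosed itineraries are reported as partial:

```python
# Simplified segmentation of one container's event sequence into itineraries.
BEGIN, END = "Received at Origin", "Final Destination"

def segment(events):
    """events: time-ordered list of (date, event_name, port) tuples.
    Returns a list of itineraries as dicts; 'end' is None for partial ones."""
    itineraries, current = [], None
    for date, name, port in events:
        if name == BEGIN:                 # a new shipment starts here
            if current is not None:       # previous one never closed: partial
                itineraries.append(current)
            current = {"start": (date, port), "end": None, "events": []}
        if current is not None:
            current["events"].append((date, name, port))
            if name == END:               # shipment completed
                current["end"] = (date, port)
                itineraries.append(current)
                current = None
    if current is not None:               # trailing partial itinerary
        itineraries.append(current)
    return itineraries
```

Applied to the Table \[tab:seq\] sequence, this yields two itineraries, the second one partial, matching the example above.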
### Reconstructing Vessel Trips
Vessel itineraries are extracted from the same dataset of container sequences processed above. Indeed, vessel routes are implicitly defined by CSMs, which can also include the names of the vessels used for container transportation. Typically, a vessel transports many containers in a single trip between two ports, hence its movements can be inferred by considering the CSMs of different containers. In this way, we are likely to overcome the issue of incomplete container sequences.
For each vessel in the dataset, we aggregate container events with respect to their occurrence in each port at a specific time, obtaining the temporal interval during which the vessel stopped in each port. Ordering such interval-based vessel events, we obtain a sequence of events for the vessel, with the event dates and locations, from which we infer the event description, i.e., departure or arrival. Vessel itineraries are extracted from vessel sequences, considering them as made up of pairs of [*departure*]{} and [*arrival*]{} vessel events.
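The interval aggregation just described can be sketched as follows (a Python illustration, not the authors' implementation; one vessel's observations are assumed to be time-ordered ISO date strings gathered from the CSMs of the containers it carried):

```python
from itertools import groupby

def vessel_trips(observations):
    """observations: time-ordered (time, port) pairs for one vessel.
    Returns a list of trips (depart_port, depart_time, arrive_port,
    arrive_time), inferred by pairing consecutive port stops."""
    # Collapse consecutive observations in the same port into one stop
    # with an interval [first_seen, last_seen].
    stops = []
    for port, group in groupby(observations, key=lambda o: o[1]):
        times = [t for t, _ in group]
        stops.append((port, min(times), max(times)))
    # A trip is a departure from one stop followed by arrival at the next.
    return [(p1, last1, p2, first2)
            for (p1, _, last1), (p2, first2, _) in zip(stops, stops[1:])]
```

The end of a stop interval is read as a departure event and the start of the next stop interval as the matching arrival event, as in the text.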
### Binding Itineraries to Trips
Once container and vessel itineraries have been reconstructed, we proceed to link them relying on transshipment events. Transshipments play a fundamental role in both anomalous axioms described in Section \[sec:axioms\]; therefore, in order to detect the corresponding anomalous patterns, we need to set correctly the roles involved in the transshipment specification. These roles are not explicit in the dataset, but must be set explicitly in the ontology. Therefore, we connect every discharging container event with the arrival event of the corresponding vessel that occurs immediately [*before*]{} its discharge; similarly, every loading container event with the vessel departure that happens immediately [*after*]{} its loading. The results of this procedure, which has been implemented in ORACLE PL/SQL, are stored back in the database.
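The pairing rule itself can be sketched in Python (the authors implemented it in ORACLE PL/SQL; the event kinds and the treatment of same-day timestamps here are illustrative assumptions):

```python
import bisect

def bind_transshipment(container_events, vessel_events):
    """Pair each container (un)loading event with the matching vessel event.
    container_events: list of (time, kind), kind in {"discharge", "load"}.
    vessel_events: time-ordered list of (time, kind), kind in
    {"arrival", "departure"}, for the vessel named in the CSM.
    Returns {container_event: vessel_event}."""
    times = [t for t, _ in vessel_events]
    binding = {}
    for t, kind in container_events:
        if kind == "discharge":
            # the vessel arrival immediately before (or on the same day as)
            # the discharge
            i = bisect.bisect_right(times, t) - 1
            while i >= 0 and vessel_events[i][1] != "arrival":
                i -= 1
            if i >= 0:
                binding[(t, kind)] = vessel_events[i]
        else:
            # load: the vessel departure immediately after the loading
            i = bisect.bisect_left(times, t)
            while i < len(vessel_events) and vessel_events[i][1] != "departure":
                i += 1
            if i < len(vessel_events):
                binding[(t, kind)] = vessel_events[i]
    return binding
```

The binary search keeps the matching efficient even when a vessel has many recorded port calls.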
Ontology population {#subsec:Ontology population}
-------------------
![Classes for Knowledge Base Population[]{data-label="fig:population"}](MCOpopulation2_compressed.png){width="8.5cm"}
We use the Jena [@jena] framework to obtain four populated ontology files, described in Table \[tab:dataset\], that have to be queried to detect anomalous itineraries. To this end, we implemented an ad-hoc Java package, whose design is illustrated in Fig. \[fig:population\] and whose main classes describe the domain knowledge base for the MCO. For the sake of simplicity, we show in Fig. \[fig:population\] only the attributes of these classes. The population of the ontology is performed by the class [Population.java]{} (see Fig. \[fig:population\]) which, relying on Jena, builds the corresponding objects and inserts them into the ontology source file.
Detecting anomalous itineraries {#subsec:Detecting anomalous itineraires}
-------------------------------
We started testing the axioms using the Java OWL-API [@Horridge2011-owlapi] interface. Among the compatible reasoners, we tested FaCT++, HermiT, and Pellet; RacerPro was excluded because of its platform limitations and the restrictions of its free research license. After collecting their performance in terms of the time needed to retrieve the positive cases, we searched for other tools in order to get better results. We took into account the SPARQL-DL [@Sirin07sparqldl] engine in Pellet, and the SPARQL-DL implementation by Derivo.
All the tests have been run on a PC with a 64-bit Intel(R) Xeon(R) E5620 CPU with a 133 MHz clock, reserving 5 GB of RAM for the process.
In the following, for each suspicious pattern we show the steps we have followed in our experimentation, and the performance of our tests.
### Detecting Unnecessary Transshipments {#subsec:querying unnecesstrans}
(Fig. \[fig:unnPerf\]: [[Unnecessary Transshipment]{}]{} detection time in minutes, ln scale, versus number of container itineraries, for the Pellet & OWL-API, FaCT++ & OWL-API and Pellet & SPARQL-DL configurations; the plotted values are those reported in Table \[tab:unnPerf\].)
-------------------- --------- ---------- ---------- ----------
Reasoner/Interface *owl5K* *owl10K* *owl15K* *owl20K*
Pellet & OWL-API 180 647 1600 3005
FaCT++ & OWL-API 11 44 89 150
Pellet & SPARQL-DL 3 4 5 6
-------------------- --------- ---------- ---------- ----------
: Performance of [[Unnecessary Transshipment]{}]{} detection for different reasoners and interfaces[]{data-label="tab:unnPerf"}
Table \[tab:unnPerf\] and Fig. \[fig:unnPerf\] show the best-performing results of the experimental phase with the [[Unnecessary Transshipment]{}]{} axiom. We started our experimentation relying on OWL-API, developing a Java package called [itineraries.query]{} based on Matthew Horridge’s example code in [@Horridge2011-owlapi].
In the core class of this package, we extract from the database all the ports the containers passed through, and we test the axiom of Section \[sec:axioms\] against every port. To this end, the axiom has been rewritten in the Manchester syntax for OWL [@HorridgeP08]. We tested three reasoners: HermiT, Pellet and FaCT++. We found that FaCT++ is by far the best-performing reasoner with OWL-API. On the other hand, we stopped testing HermiT after realizing that, even with the smallest dataset, its computation took more than twice as long as Pellet’s.
However, the main problems with the pure OWL-API approach are the slowness of the computation and the need for a further mechanism to clean the retrieved itineraries by selecting those with compatible arrival dates (see Section \[sec:axioms\] for details); indeed, OWL-API does not allow us to extract this information.
As an alternative, we considered SPARQL-DL [@Sirin07sparqldl]: it is an expressive language for querying OWL-DL ontologies, and it allows us to extract the dates that are necessary to identify the truly suspicious itineraries. Moreover, Pellet is equipped with an engine that can speed up the performance with this tool. Table \[tab:unnPerf\] and Fig. \[fig:unnPerf\] show that Pellet with SPARQL-DL performs better than the pure OWL-API approach. After this test, we decided to try a generic implementation of SPARQL-DL, considering the one by Derivo. Since their SPARQL-DL query engine sits on top of the OWL-API, we tried to combine it with the best-performing reasoner under OWL-API, i.e., FaCT++ according to our tests. Unfortunately, in this test the performance was very poor and we stopped it when we realized that it would not terminate in a reasonable time. The code of the experiment is reported in the appendix.
Looking at the experimental results, we can see that the combination of SPARQL-DL and Pellet is by far the fastest: for example, in the case of 10000 itineraries, it improves the running time by more than 99% with respect to the OWL-API and Pellet approach. The other cases show improvements of the same order of magnitude. Moreover, we observe that SPARQL-DL enables comparing dates, so its use eliminates the need for a post-processing phase to eliminate false positives. Hence, it seems the most appropriate tool for analysing itineraries of this kind.
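The date-based cleaning that distinguishes the two approaches amounts to a simple comparison, mirroring the FILTER clause of the SPARQL-DL query in the appendix, FILTER (xsd:date(?vesStop) > xsd:date(?endCI)). A Python sketch (illustrative only; the triple layout of the candidates is an assumption):

```python
from datetime import date

def keep_suspicious(candidates):
    """candidates: (itinerary_id, itinerary_end, vessel_stop_at_dest)
    triples returned by the pattern query, with datetime.date values.
    Keeps only candidates whose vessel stop at the destination port
    occurs after the itinerary end, as in the SPARQL FILTER clause."""
    return [c for c in candidates if c[2] > c[1]]
```

With SPARQL-DL this filter runs inside the query engine; with the pure OWL-API approach it would have to run as a separate post-processing pass over the retrieved itineraries.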
### Detecting Loops {#subsec:querying cycle}
(Fig. \[fig:Loop2Graph\]: [[Loop]{}]{} detection time in minutes, ln scale, versus number of container itineraries, for OWL-API & Pellet and the three Pellet & SPARQL-DL variants reported in Table \[tab:loopPerf\].)
  -------------------------- --------- ---------- ---------- ----------
  Reasoner/Interface          *owl5K*   *owl10K*   *owl15K*   *owl20K*
  OWL-API & Pellet            1444      2555       3951       5315
  Pellet & SPARQL-DL \[1\]    39        512        588        658
  Pellet & SPARQL-DL \[2\]    3         5          8          10
  Pellet & SPARQL-DL \[3\]    5         7          10         40
  -------------------------- --------- ---------- ---------- ----------

  : Performance of Loop detection. For the rows [*Pellet & SPARQL-DL*]{}, we have cases: \[1\] without date filter; \[2\] with date filter; \[3\] with date filter and intermediate ports.[]{data-label="tab:loopPerf"}
Table \[tab:loopPerf\] and Fig. \[fig:Loop2Graph\] report the fastest performance obtained in the experimental phase with the [[Loop]{}]{} axiom. We obtained acceptable performance with OWL-API only when combined with Pellet: in the other cases (involving HermiT and FaCT++) we were obliged to stop the tests because of the slowness of the computation.
We also tested the SPARQL-DL version of the axiom ([*Pellet & SPARQL-DL(1)*]{} in the figure), finding an improvement in performance.
However, exploiting the ability offered by SPARQL-DL to compare dates, we were able to test another formalisation of the query: this version considers containers loaded on a ship that goes back to its port of departure and, only after that, are discharged. This formalisation improves the performance considerably ([*Pellet & SPARQL-DL(2)*]{} in Fig. \[fig:Loop2Graph\]) compared with the previous versions.
Exploiting the same ability, we also implemented the other version of the query, which matches itineraries where a container goes back to an intermediate port before reaching its final destination. The performance of this experiment is labelled [*Pellet & SPARQL-DL(3)*]{} in Table \[tab:loopPerf\] and Fig. \[fig:Loop2Graph\]. The code of the experiment is reported in the appendix. Looking at the experimental results, we can see that also in this case the joint use of SPARQL-DL and Pellet is by far the best in terms of time: for example, in the case of 10000 itineraries, Pellet & SPARQL-DL(1) improves the running time by almost 80% with respect to the OWL-API and Pellet approach. Moreover, the possibility of comparing dates enables us to rewrite the query in a different way, which can yield a further performance improvement, as happens with the Pellet & SPARQL-DL(2) query version. This improvement motivated us to implement and test the Pellet & SPARQL-DL(3) query version. From these tests, we can conclude that the combination of SPARQL-DL and Pellet is among the most suitable for implementing our methodology.
We remark that, while in the [[Unnecessary Transshipment]{}]{} experiment FaCT++ seemed to be the most promising reasoner to use with the OWL-API, in this case Pellet obtained better performance. Relying only on the OWL-API, it would be very difficult to choose the best reasoner for the application. However, the solution that combines Pellet and SPARQL-DL is efficient in both cases.
Discussion and Conclusions {#sec:conclusion}
==========================
In this paper, we have presented a semantic approach for pattern discovery in trajectories that, relying on ontologies, enhances moving object information with event semantics. Our methodology includes a top-level ontology for modelling moving object trajectories, which can be extended to formalize the semantics of a specific application domain. The domain ontology can be queried to search for trajectories following given patterns. These can be formalized as ontology axioms, or specified as DL queries using ontology query languages. We have validated our approach in a real-world scenario, evaluating different implementation solutions.
The main asset of this approach is the possibility to define concepts and properties by exploiting the ontology expressivity and its capability of abstracting the entities of the application domain. In particular, axioms formalizing patterns may be expressed in terms of high-level semantic concepts, abstracting from the specific modelling adopted to represent the domain. This is a remarkable feature in heterogeneous domains like the one we considered for testing, because it enabled us to refer to the standard event classes defined in the ontology instead of referring to the specific events defined by carrier companies using their own vocabulary.
Moreover, this approach enables the use of a DL reasoner to build an automatic system for the characterization of different itineraries according to the user’s needs. The approach is robust because the decidability of axiom evaluation is guaranteed by the DL formalism.
It is worth mentioning that, for application domains requiring more complex formalizations, we can further improve the expressivity of the representation language using formalisms such as OWL and SWRL [@swrl], enabling the use of variables and equality comparisons between instances. However, this entails weakening the decidability guarantee.
The use of an ontology to describe movement behaviour also has some drawbacks. In particular, scalability to large datasets is an open issue. In the case of maritime surveillance and security, the search for suspicious patterns may involve the analysis of several thousands of records; therefore we have to take scalability into consideration when choosing the approach to apply.
As discussed in Section \[sec:semtools\], reasoning engines specifically designed to handle big knowledge bases have recently been presented [@oracleSemantics]. However, even if these products are a potential solution to the scalability issue, currently they are not mature enough, because their expressivity is very limited and they lack fundamental DL operations (e.g., OWLPrime does not provide union and intersection [@oracleSemantics], which are necessary for axiom evaluation). Another way to address the scalability issue might be the development of pre-processing procedures that reduce the size of the dataset, providing the DL reasoner with a smaller knowledge base as input. The same approach has been adopted in [@Bogorny2010], where an input dataset of touristic trajectories is first pre-processed with a set of data mining procedures to discover a set of data-mining patterns; only after this step are such patterns loaded into the knowledge base to reason on them.
However, in the test scenario we considered, we showed that the combined use of Pellet and the SPARQL-DL API is efficient even with datasets of thousands of itineraries and instances, and we can obtain even better performance by applying some a priori filtering directly in the DL query specification.
We remark that at the moment our approach handles only complete itineraries: a possible extension of this work is to integrate data mining technologies for managing incomplete itineraries. Moreover, since a peculiarity of such technologies is the discovery of implicit semantics, we can rely on them to manage unexpected patterns.
As for future work, we plan to investigate the employment of the OWL and SWRL formalisms in order to increase the expressiveness of our approach. Moreover, we plan to study the development of pre-processing procedures to reduce the size of the initial dataset. We are currently developing a pre-processing module to handle container itineraries that also include non-explicit events, such as a container passing through a port without being handled. These can be retrieved by reasoning on vessel events, defined relying on the other containers travelling on the same vessel.
APPENDIX A {#sec:app .unnumbered}
==========
### Querying [[Unnecessary Transshipment]{}]{} {#querying-unnecessary-transshipment .unnumbered}
[**[[Unnecessary Transshipment]{}]{}in OWL-API**]{}\
[`Maritime_Container_Itinerary`]{} [`and`]{}\
[`hasCDestinationPort`]{} [`value`]{} [`P`]{} [`and`]{}\
[`hasContainerEvent`]{} [`some`]{} [`(Transhipment_Event`]{} [`and`]{}\
[`hasDischargingVesselEvent`]{} [`some`]{} [`(hasNextVesselEvent`]{} [`some`]{}\
[`(Event`]{} [`and`]{} [`hasVPort`]{} [`value`]{} [`P)))`]{}
[ $\Box$]{}
[**[[Unnecessary Transshipment]{}]{}in SPARQL-DL**]{}\
[`SELECT DISTINCT ?c ?endCI ?vesStop WHERE { `]{}\
[`?c a st:Container_itinerary . `]{}\
[`?c st:hasEndTime ?cd . `]{}\
[`?c st:hasCIDestinationPort st:port . `]{}\
[`?c st:hasContainerEvent ?t . `]{}\
[`?t rdf:type ?eventClass . `]{}\
[`?eventClass rdfs:subClassOf st:Transshipment_Event . `]{}\
[`?t st:hasDischargingVesselEvent ?v . `]{}\
[`?v st:hasNextVesselEvent ?v1 . `]{}\
[`?v1 st:hasLocation st:port . `]{}\
[`?v1 st:hasTimestamp ?vd . `]{}\
[`BIND( fn:substring(?cd,5,10) AS ?endCI ) .`]{}\
[`BIND( fn:substring(?vd,5,10) AS ?vesStop ) .`]{}\
[`FILTER (xsd:date(?vesStop) > xsd:date(?endCI)) .`]{}\
[`}`]{}
[ $\Box$]{}
### Querying [[Loop]{}]{} {#querying-loop .unnumbered}
[**[[Loop]{}]{}in OWL-API**]{}\
[`Maritime_Container_Itinerary`]{} [`and`]{} [`hasCSourcePort`]{} [`value`]{} [`P1`]{} [`and`]{}\
[`hasCDestinationPort`]{}\
[`value`]{} [`P2`]{} [`and`]{} [`hasContainerEvent`]{} [`some`]{}\
[`(Transhipment_Event`]{} [`and`]{} [`hasLoadingVesselEvent`]{} [`some`]{}\
[`(hasNextVesselEvent`]{} [`some`]{} [`(Event`]{} [`and`]{} [`hasVPort`]{} [`value`]{} [`P1`]{} [`and`]{}\
[`hasNextVesselEvent`]{} [`some`]{} [`(Event`]{} [`and`]{} [`hasVPort`]{} [`value`]{} [`P2))))`]{}
[ $\Box$]{}
[**[[Loop]{}]{}in SPARQL-DL**]{}\
[`SELECT DISTINCT ?c ?cd ?vd WHERE { `]{}\
[`?c a st:Container_itinerary . `]{}\
[`?c st:hasEndTime ?cd . `]{}\
[`?c st:hasCISourcePort st:port1 . `]{}\
[`?c st:hasCIDestinationPort st:port2 . `]{}\
[`?c st:hasContainerEvent ?t . `]{}\
[`?t rdf:type ?eventClass . `]{}\
[`?eventClass rdfs:subClassOf st:Transshipment_Event . `]{}\
[`?t st:hasLoadingVesselEvent ?v . `]{}\
[`?v st:hasNextVesselEvent ?v1 . `]{}\
[`?v1 st:hasLocation st:port1 . `]{}\
[`?v1 st:hasNextVesselEvent ?v2 . `]{}\
[`?v2 st:hasLocation st:port2 . `]{}\
[`?v2 st:hasTimestamp ?vd . `]{}\
[`}`]{}
[ $\Box$]{}
[**[[Loop]{}]{}in SPARQL-DL (alternative formalization)**]{}\
[`SELECT DISTINCT ?c ?endCI ?vesStop WHERE { `]{}\
[`?c a st:Container_itinerary . `]{}\
[`?c st:hasEndTime ?cd . `]{}\
[`?c st:hasCISourcePort st:port . `]{}\
[`?c st:hasContainerEvent ?t . `]{}\
[`?t rdf:type ?eventClass . `]{}\
[`?eventClass rdfs:subClassOf st:Transshipment_Event . `]{}\
[`?t st:hasLoadingVesselEvent ?v . `]{}\
[`?v st:hasNextVesselEvent ?v1 . `]{}\
[`?v1 st:hasLocation st:port . `]{}\
[`?v1 st:hasTimestamp ?vd . `]{}\
[`?v1 st:hasNextVesselEvent ?v2 . `]{}\
[`?t2 st:hasDischargingVesselEvent ?v2 . `]{}\
[`?t2 rdf:type ?eventClass2 . `]{}\
[`?eventClass2 rdfs:subClassOf st:Transshipment_Event . `]{}\
[`?c st:hasContainerEvent ?t2 . `]{}\
[`?t2 st:hasTimestamp ?disDate . `]{}\
[`BIND( fn:substring(?disDate,5,10) AS ?endvTimeDis) . `]{}\
[`BIND( fn:substring(?cd,5,10) AS ?endCI ) . `]{}\
[`BIND( fn:substring(?vd,5,10) AS ?vesStop ) . `]{}\
[`FILTER (xsd:date(?endCI) > xsd:date(?vesStop)) . `]{}\
[`FILTER (xsd:date(?endvTimeDis) > xsd:date(?vesStop)) . `]{}\
[`}`]{}
[ $\Box$]{}
[**[[Loop]{}]{}in SPARQL-DL (intermediate ports)**]{}\
[`SELECT DISTINCT ?c ?endCI ?vesStop WHERE { `]{}\
[`?c a st:Container_itinerary . `]{}\
[`?c st:hasEndTime ?cd . `]{}\
[`?c st:hasContainerEvent ?interMediate . `]{}\
[`?interMediate st:hasLocation st:port . `]{}\
[`?interMediate st:hasTimestamp ?interMediateTimeStamp . `]{}\
[`?c st:hasContainerEvent ?t . `]{}\
[`?t rdf:type ?eventClass . `]{}\
[`?eventClass rdfs:subClassOf st:Transshipment_Event . `]{}\
[`?t st:hasLoadingVesselEvent ?v . `]{}\
[`?v st:hasNextVesselEvent ?v1 . `]{}\
[`?v1 st:hasLocation st:port . `]{}\
[`?v1 st:hasTimestamp ?vd . `]{}\
[`?v1 st:hasNextVesselEvent ?v2 . `]{}\
[`?t2 st:hasDischargingVesselEvent ?v2 . `]{}\
[`?c st:hasContainerEvent ?t2 . `]{}\
[`?t2 rdf:type ?eventClass2 . `]{}\
[`?eventClass2 rdfs:subClassOf st:Transshipment_Event . `]{}\
[`?t2 st:hasTimestamp ?disDate . `]{}\
[`BIND( fn:substring(?disDate,5,10) AS ?endvTimeDis ) `]{}\
[`BIND( fn:substring(?interMediateTimeStamp,5,10) AS ?interMediateTime ). `]{}\
[`BIND( fn:substring(?cd,5,10) AS ?endCI ) `]{}\
[`BIND( fn:substring(?vd,5,10) AS ?vesStop ) `]{}\
[`FILTER (xsd:date(?vesStop) > xsd:date(?interMediateTime)) .`]{}\
[`FILTER (xsd:date(?endCI) > xsd:date(?vesStop)) .`]{}\
[`FILTER (xsd:date(?endvTimeDis) > xsd:date(?vesStop)) .`]{}\
[`}`]{}
[ $\Box$]{}
[^1]: This paper relies on the research presented in: P. Villa, E. Camossi, [*A Description Logic Approach to Discover Suspicious Itineraries from Maritime Container Trajectories*]{}, In Proc. of GEOS 2011, LNCS 6631, p. 182-199. Springer-Verlag 2011. This research contributes to European Commission JRC action 41004 [*Vessel and Container Surveillance*]{}.
[^2]: www.geonames.org
[^3]: a TBox is acyclic iff no concept name uses itself.
[^4]: BIC codes are assigned by the [*Bureau International des Containers et du Transport Intermodal*]{} (BIC).
[^5]: In the ISO 6346 identifier ABCU1234567, ABC identifies a carrier company and U is a container category identifier; 123456 is a serial identification number and 7 is a check digit.
---
abstract: 'Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric $r$. For Clifford gates with arbitrary small errors described by process matrices, $r$ was believed to reliably correspond to the mean, over all Cliffords, of the *average gate infidelity* (AGI) between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gateset. It depends on the *representations* used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from $r$ by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures ($r$), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.'
author:
- Timothy Proctor
- Kenneth Rudinger
- Kevin Young
- Mohan Sarovar
- 'Robin Blume-Kohout'
bibliography:
- '../../../../../Paper-Library/Bibliography.bib'
title: What randomized benchmarking actually measures
---
Randomized benchmarking (RB) [@emerson2005scalable; @emerson2007symmetrized; @dankert2009exact; @knill2008randomized; @magesan2011scalable; @magesan2012characterizing; @wallman2014randomized; @epstein2014investigating; @magesan2012efficient; @wallman2015estimating; @kimmel2014robust; @gambetta2012characterization; @alexander2016randomized; @helsen2017multiqubit; @fogarty2015nonexponential; @carignan2015characterizing; @cross2016scalable; @granade2015accelerated; @sheldon2016characterizing; @ball2016effect; @kelly2014optimal; @wallman2015robust] is a simple and efficient protocol for measuring an average error rate of a quantum information processor (QIP), and is among the most commonly used experimental methods for characterizing QIPs [@barends2014superconducting; @chen2016measuring; @barends2014rolling; @xia2015randomized; @muhonen2015quantifying; @corcoles2015demonstration; @laucht2015electrically; @chow2014implementing; @veldhorst2014addressable; @johnson2015demonstration; @corcoles2013process]. In its purest form, RB consists of: (1) performing many randomly chosen sequences of Clifford gates that ought to return the QIP to its initial state; (2) measuring at the end of each sequence to see whether the QIP “survived” (i.e., returned to its initial state); and (3) plotting the observed survival probabilities vs. sequence length and fitting this to an exponential decay curve. The decay rate of the survival probability is – up to a dimensionality constant, and neglecting any finite-sampling error – the “RB number” ($r$). RB experiments estimate $r$, which is used as a metric for judging the processor’s performance.
The $r$ that RB measures has a clear operational definition, but it is not clear how it relates to common metrics – i.e., what it is that RB measures. In QIP theory, the ideal “target” operations and the imperfect as-implemented operations are usually represented by *process matrices*, a.k.a. CPTP (completely positive, trace-preserving) maps. The generally accepted theory behind RB [@magesan2011scalable; @magesan2012characterizing; @wallman2014randomized; @epstein2014investigating] suggests that $r$ is approximately equal to the average, over all $n$-qubit Cliffords, of the *average gate infidelity* \[AGI, Eq. \] between the imperfect Cliffords and their ideal counterparts. We call this quantity the *average gateset infidelity* \[AGsI, Eq. \] and denote it by $\epsilon$. It has been widely believed that $r \approx \epsilon$ whenever the errors in the gates are small, and describable by process matrices [@magesan2011scalable; @magesan2012characterizing; @wallman2014randomized; @epstein2014investigating; @magesan2012efficient; @wallman2015estimating; @kimmel2014robust; @gambetta2012characterization; @alexander2016randomized; @helsen2017multiqubit; @fogarty2015nonexponential; @carignan2015characterizing; @cross2016scalable]. In this Letter, we show that $r$ and $\epsilon$ can differ by orders of magnitude (Fig. \[Fig:scaling\]). This happens because $\epsilon$ is not a well-defined property of a physical QIP. Instead, $\epsilon$ is a property of the *representation* used to describe the gates, and depends strongly on which of several equivalent and indistinguishable representations is used. We provide a new theory for the RB decay that *is* representation-independent, proves that the RB decay is always exponential when the noise is described by process matrices, and gives an efficient representation-independent approximate formula for $r$ with small error bars.
*(Figure 1: simulated RB decay for Example 1; the inset shows the scaling of $\hat{r}$ and $\epsilon$ with $\theta$. Referenced in the text as Fig. \[Fig:scaling\].)*
**Experimental RB:** The basic RB protocol (extensions exist [@magesan2012efficient; @kimmel2014robust; @gambetta2012characterization; @wallman2015estimating; @alexander2016randomized]) was summarized above. Complete details can be found in Refs. [@magesan2011scalable; @magesan2012characterizing; @wallman2014randomized; @epstein2014investigating], and in the appendix. As in most experiments [@barends2014superconducting; @chen2016measuring; @barends2014rolling; @xia2015randomized; @muhonen2015quantifying; @corcoles2015demonstration; @laucht2015electrically; @chow2014implementing; @veldhorst2014addressable], we consider benchmarking an implementation of the $n$-qubit Clifford gates with $n \geq 1$. The standard way to estimate $r$ from RB data is to fit the average of the sampled survival probabilities ($P_m$), for many sequence lengths $m$, to the model $P_m = A + (B + Cm)p^m$, where $A$, $B$, $C$, and $p$ are fit parameters [@magesan2011scalable; @magesan2012characterizing; @wallman2014randomized; @epstein2014investigating]. The estimate of $p$, denoted $\hat{p}$, gives an estimate of $r$ as $\hat{r} = (d - 1)(1 - \hat{p})/d$, where $d = 2^n$. It is common to fix $C = 0$, but Magesan *et al.* [@magesan2012characterizing; @magesan2011scalable] suggest that fitting $C$ may be necessary when the error varies from gate to gate.
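As a concrete sketch of this analysis step (illustrative only; `estimate_rb_number` is our hypothetical helper, not from the literature): for noiseless data on equally spaced lengths, the $C = 0$ model gives $P_{m+1} - P_m = B(p-1)p^m$, so the ratio of consecutive differences yields $p$ directly; real, noisy data would instead be fit by least squares.

```python
import numpy as np

def estimate_rb_number(lengths, probs, d=2):
    """Estimate r = (d-1)(1-p)/d from averaged survival probabilities.

    For noiseless data on equally spaced lengths, successive differences
    satisfy P_{m+1} - P_m = B(p - 1)p^m, so the ratio of consecutive
    differences equals p**step. (Real data would be fit by least squares
    to P_m = A + B p^m.)
    """
    lengths = np.asarray(lengths, dtype=float)
    probs = np.asarray(probs, dtype=float)
    diffs = np.diff(probs)
    step = lengths[1] - lengths[0]
    p = np.mean(diffs[1:] / diffs[:-1]) ** (1.0 / step)
    return (d - 1) * (1 - p) / d
```

On synthetic data $P_m = 0.5 + 0.5\,(0.98)^m$ this recovers $r = 0.01$ for a single qubit.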
**Theory of RB:** The average survival probabilities $P_m$ are unambiguously real and experimentally accessible. And $r$ is equally well-defined, as long as the $P_m$ decay exponentially with $m$. The motivation for further analysis – for a *theory* of RB – is primarily to answer two questions. First, under what circumstances does $P_m$ decay exponentially? Second, when it does, what is $r$? That is, to what property of the imperfect gates does $r$ correspond? Building such a theory requires specifying a model for the operations used in RB.
These operations comprise: (1) a set of gates; (2) a set of state preparations; and (3) a set of measurements, which together form a *physical gateset*. A model associates them with mathematical objects that can be used to compute $P_m$. If each operation is independent of all external contexts – e.g., time, external fields, ancillary qubits – then each gate can be represented by a *process matrix* $G_i$, each state preparation by a density operator $\rho_j$, and each measurement by a positive operator-valued measure (POVM) $\mathcal{M}_{k} = \{ E_{k,l} \}$. Probabilities of events are given by Born’s Rule: $\mathrm{Pr}(E_{k,l}\, | \,\rho_j, G_i) = {\mathrm{Tr}}\left[E_{k,l} G_i \rho_j\right]$. In this commonly used model for analyzing RB, an as-built processor with an imperfect physical gateset can be *represented* by some $\tilde{\mathcal{G}} = \{ \tilde{G}_i, \tilde{\rho}_j , \tilde{E}_{k,l} \}$, and an idealized perfect device by some $\mathcal{G} = \{ G_i, \rho_j , E_{k,l} \}$. Since $r$ is independent of the state preparation and measurement [@magesan2012characterizing; @magesan2011scalable], we will usually only need representations of the imperfect and ideal Cliffords, denoted $\tilde{\mathcal{C}} = \{\tilde{C}_i\}$ and $\mathcal{C} =\{C_i\}$, respectively.
RB theory is clear when the gateset has *gate-independent errors*, meaning that there is a process matrix $\Lambda$ such that each imperfect Clifford can be represented as $\tilde{C}_i = \Lambda C_i$. In this situation, $r$ is exactly equal to the *average gate infidelity* (AGI) between $\Lambda$ and the identity process matrix $\Id$ [@magesan2011scalable]. The AGI between process matrices $\tilde{G}$ and $G$ is simply $1 - \bar{F}$, where $$\bar{F}(\tilde{G},G) := \int d\psi \,\, \text{Tr} \left(\tilde{G} [|\psi \rangle \langle \psi |] G[|\psi \rangle \langle \psi | ] \right).
\label{eq:AGF}$$ But a general theory of RB needs to address the more likely case of gate-*dependent* errors, where $\tilde{C}_i = \Lambda_i C_i$. A starting point is the observation that, for gate-independent errors, every imperfect Clifford has the same AGI with its ideal counterpart: $\bar{F}(\tilde{C}_i, C_i) = \bar{F}(\Lambda, \Id)$. So, a plausible generalization of AGI to gate-dependent errors is its *average* over all Cliffords: $$\epsilon( \tilde{\mathcal{C}},\mathcal{C} ) := \text{avg}_{i}\left[ 1 - \bar{F}(\tilde{C}_i,C_i) \right],
\label{eq:AGsI}$$ a quantity we call the *average gateset infidelity* (AGsI).
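The state-average defining the AGI can be estimated directly by Monte Carlo over Haar-random pure states. A minimal sketch, assuming numpy (`avg_gate_infidelity` and `haar_state` are our illustrative helper names, with channels represented as plain functions on density matrices):

```python
import numpy as np

def haar_state(dim, rng):
    """Haar-random pure state as a normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def avg_gate_infidelity(channel, target, dim=2, samples=20000, seed=0):
    """Monte Carlo estimate of 1 - Fbar: average over Haar-random pure
    states of Tr(channel[|psi><psi|] target[|psi><psi|])."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        psi = haar_state(dim, rng)
        rho = np.outer(psi, psi.conj())
        total += np.real(np.trace(channel(rho) @ target(rho)))
    return 1.0 - total / samples
```

For a unitary $U$ compared to the identity on a qubit, this converges to the known closed form $1 - (|\text{Tr}\,U|^2 + 2)/6$.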
An extensive literature suggests or argues that $r \approx \epsilon$ [@magesan2011scalable; @magesan2012characterizing; @wallman2014randomized; @epstein2014investigating; @magesan2012efficient; @wallman2015estimating; @kimmel2014robust; @gambetta2012characterization; @alexander2016randomized; @helsen2017multiqubit; @fogarty2015nonexponential; @carignan2015characterizing; @cross2016scalable] for “weakly gate-dependent” errors [@magesan2012characterizing; @magesan2011scalable] – i.e., when all the error maps $\Lambda_i = \tilde{C}_iC_i^{-1}$ are close to their average. More precisely, when $\delta := \|\Lambda_i - \bar{\Lambda}\|_{1\to1}^{H} \ll 1$ for all $i$ [@magesan2012characterizing; @magesan2011scalable], where $\bar{\Lambda} := \text{avg}_i[\Lambda_i]$ is the *average error map*, and $\| \cdot \|_{1\to1}^{H} $ is the Hermitian 1-to-1 norm [@magesan2012characterizing]. Since this is true whenever the $\Lambda_i$ are all close to $\Id$, it holds for all small errors. However, $r$ and $\epsilon$ can actually differ by orders of magnitude, for simple and realistic noise models. Consider a simple 1-qubit example involving Cliffords compiled into two “primitive” gates.
**Example 1:** The ideal primitive gates are represented by $G_x = R(\sigma_x, \pi/2)$ and $G_y = R(\sigma_y, \pi/2)$, where $R(H, \theta)[\rho] := \exp(- i \theta H/2)\rho\exp(i \theta H/2)$. Any 1-qubit Clifford can be compiled into $G_x$ and $G_y$. The *imperfect* primitives are represented by $\tilde{G}_{x} = R(\sigma_z, \theta)G_x$ and $\tilde{G}_y = R(\sigma_z, \theta)G_y$ with $\theta \ll 1$, which corresponds to a small systematic detuning or timing error.
We simulated RB with Cliffords compiled into these imperfect gates and observed $r \ll \epsilon$. For $\theta = 0.1$, the theory predicts $\epsilon \approx 10^{-3}$, but we observed $\hat{r} \approx 10^{-5}$ (Fig. \[Fig:scaling\]). Varying $\theta$ (Fig. \[Fig:scaling\], inset) shows that $r \propto \theta^4$, while $\epsilon \propto \theta^2$. As the errors become small, the ratio $\epsilon/r$ diverges.
This example lies within the domain of standard RB theory – the errors are small and only weakly gate-dependent (as defined in Refs. [@magesan2012characterizing; @magesan2011scalable]) – and it does not contradict the technical results of Refs. [@magesan2011scalable; @magesan2012characterizing] that link $r$ to $\epsilon$. Those references include bounds on the difference between actual and predicted RB decay curves. These bounds, which we plot for Example 1 in Fig. \[fig:error-bounds\], are sufficiently loose that they do not significantly constrain $\epsilon/r$. A complete description of our simulation methodology is provided in the appendix.
*(Figure 2: the error bounds of Refs. [@magesan2011scalable; @magesan2012characterizing], plotted for Example 1. Referenced in the text as Fig. \[fig:error-bounds\].)*
**Understanding the discrepancy:** The discrepancy between $r$ and $\epsilon$ has a simple but subtle explanation: RB, like all experiments, probes properties of a *physical* QIP, not of a *model* for it. Although a physical QIP’s gates may be accurately represented by a fixed set of process matrices, that representation is not unique. The RB error rate $r$ is a property of the physical gates, and therefore representation-independent. But $\epsilon$, as conventionally defined, is not.
Two representations of a physical gateset are equivalent if they cannot be distinguished by any experiment. More precisely, representations $\mathcal{G}$ and $\mathcal{G}'$ are equivalent iff they predict the same probabilities for *every* quantum circuit. Equivalent representations are easy to construct. If $\mathcal{G} = \{G_i, \rho_j, E_{k,l}\}$ accurately represents a QIP, then so does $$\mathcal{G}(M) = \{MG_iM^{-1}, M (\rho_j), M^{-1}(E_{k,l})\},$$ where $M$ is any invertible linear map, which we call a *gauge transformation* [@blume2013robust; @merkel2013self; @greenbaum2015introduction; @blume2016certifying]. If $f$ is an observable property of the QIP that can be computed from a model $\mathcal{G}$, then $f(\mathcal{G})$ must be the same for all equivalent representations. So observable properties like $r$ must correspond to *gauge-invariant* functions: $f(\mathcal{G}) = f(\mathcal{G}(M))$ for all $M$.
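A quick numerical illustration of this invariance (toy numbers, assuming numpy; none of the matrices below come from the paper): gauge-transforming a randomly chosen "gateset" by any invertible $M$ leaves every circuit probability unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy gateset in a transfer-matrix representation: two gates (4x4
# matrices), a state (column vector), and a measurement effect (row vector).
G1, G2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
rho = rng.normal(size=(4, 1))
E = rng.normal(size=(1, 4))

# Gauge-transform by an arbitrary invertible M.
M = rng.normal(size=(4, 4))
Minv = np.linalg.inv(M)
G1g, G2g = M @ G1 @ Minv, M @ G2 @ Minv
rho_g, E_g = M @ rho, E @ Minv

# Both representations predict the same probability for any circuit,
# e.g. the circuit G1, G1, G2:
p = (E @ G2 @ G1 @ G1 @ rho).item()
p_g = (E_g @ G2g @ G1g @ G1g @ rho_g).item()
```

The transformation matrices telescope ($M^{-1}M = \Id$) inside every circuit, which is why no experiment can distinguish the two representations.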
The AGsI defined in Eq. (\[eq:AGsI\]) is not gauge-invariant. It depends on the representations for the physical and perfect gatesets. If $\tilde{\mathcal{C}}$ and $\mathcal{C}$ are representations for the imperfect and ideal Cliffords respectively, then $\tilde{\mathcal{C}}(M)$ and $\mathcal{C}(N)$ are equivalent representations, for arbitrary invertible $M$ and $N$. The AGsI has a continuum of values as $M$ and $N$ are varied, and this is still true if we (arbitrarily) fix either representation.
Transforming the perfect and imperfect Cliffords in the same way (i.e., $M = N$ above) leaves the AGsI unchanged. So, we can define a gauge-invariant AGsI by comparing $\tilde{\mathcal{C}}$ *not* to the usual fixed representation of the Cliffords $\mathcal{C}$, but to a $\mathcal{C}$-dependent representation of them, $\mathcal{C}_{\tilde{\mathcal{C}}}$, that satisfies $\mathcal{C}_{\tilde{\mathcal{C}}(M)} = \mathcal{C}_{\tilde{\mathcal{C}}}(M)$. For example, we could define the AGsI with respect to the representation of the perfect Cliffords that is “closest” to the process matrices representing the imperfect Cliffords. If we do so, the assertion that $r \approx \epsilon$ is not *wrong*, but *ambiguous*; it requires a unique definition for the “closest” representation of the Cliffords. We return to this at the end of the Letter.
As far as we can tell, $\epsilon$ has not been defined or calculated in a representation-independent way in the literature. It is generally defined by: (1) taking $\mathcal{C}$ as the automorphism group of the Pauli matrices; (2) taking the imperfect gateset to be $\tilde{\mathcal{C}} = \{\Lambda_i C_i\}$ where the $\Lambda_i$ describe the “relevant error process”; and (3) calculating the AGsI (Eq. \[eq:AGsI\]) between $\tilde{\mathcal{C}}$ and the already-defined matrices $\mathcal{C}$. This procedure, which we followed in our example above, is explicit in the RB simulations of Refs. [@epstein2014investigating; @carignan2015characterizing] and is the most natural reading of the foundational RB papers by Magesan *et al.* [@magesan2011scalable; @magesan2012characterizing].
**Example 2:** A perfect Clifford gateset $\tilde{\mathcal{C}} = \mathcal{C}$ has an AGsI to $\mathcal{C}$ of $\epsilon(\tilde{\mathcal{C}},\mathcal{C}) = 0$. But if $U$ is a unitary and $\mathcal{U}[\rho] := U\rho U^\dagger$, then $\tilde{\mathcal{C}}(\mathcal{U})$ is an equivalent representation of the gateset with generally non-zero AGsI.
Example 1 is actually very similar to Example 2. The imperfect primitive gates in Example 1, $\tilde{G}_x$ and $\tilde{G}_y$, are *almost* gauge-equivalent to their perfect counterparts. Some algebra shows that $\tilde{G}_{x/y}(\rho) = U_{x/y}\rho U_{x/y}^{\dagger}$ where $U_{x/y} = \exp( -i \phi (\hat{v}_{x/y} \cdot \vec{\sigma})/2)$, $\phi = \pi/2 + O(\theta^2)$, and $\hat{v}_x \cdot \hat{v}_y = 0 + O(\theta^2)$. So at $O(\theta)$ the $\tilde{G}_{x}$ and $\tilde{G}_{y}$ gates induce rotations by $\pi/2$ around orthogonal axes. Hence, there exists some $\mathcal{U}$ with $\mathcal{U}[\rho]=U\rho U^{\dagger}$ for unitary $U$ such that $\mathcal{U}\tilde{G}_{x/y}\mathcal{U}^{-1} = R(\hat{w}_{x/y} \cdot \vec{\sigma},\varphi_{x/y})G_{x/y}$ where $\varphi_{x/y} = O(\theta^2)$ and $\hat{w}_{x/y}$ are some unit vectors. In this representation, the Clifford error maps $\Lambda_i=\tilde{C}_iC_i^{-1}$ are unitary rotations by $O(\theta^2)$, which suggests an RB number of $r = O(\theta^4)$, as observed. Although the $O(\theta)$ detuning error is real and physical, its effect on *these* gates is, at $O(\theta)$, equivalent to a gauge transformation. So, in *all* circuits consisting of only these gates, it behaves like a coherent error with a rotation angle of $O(\theta^2)$.
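This near-equivalence is easy to verify numerically. A sketch under the conventions above (assuming numpy; `angle_axis` is our illustrative helper), extracting the rotation angle and axis of each imperfect primitive:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(H, theta):
    """R(H, theta) = exp(-i theta H / 2) for H in {X, Y, Z} (H^2 = 1)."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * H

def angle_axis(U):
    """Rotation angle and axis of an SU(2) matrix,
    U = cos(phi/2) 1 - i sin(phi/2) (n . sigma)."""
    phi = 2 * np.arccos(min(1.0, abs(np.trace(U)) / 2))
    s = np.sin(phi / 2)
    n = np.array([(1j * np.trace(P @ U) / (2 * s)).real for P in (X, Y, Z)])
    return phi, n / np.linalg.norm(n)

theta = 0.1
Gx_t = rot(Z, theta) @ rot(X, np.pi / 2)   # imperfect primitives of Example 1
Gy_t = rot(Z, theta) @ rot(Y, np.pi / 2)
phi_x, vx = angle_axis(Gx_t)
phi_y, vy = angle_axis(Gy_t)
```

Numerically, the excess angle $\phi - \pi/2$ shrinks fourfold when $\theta$ is halved, and the axes overlap only at $O(\theta^2)$, consistent with the algebra above.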
**New theories for the RB decay:** We would like to know what property of a physical gateset RB *is* measuring, and to have an accurate, efficient formula for $r(\{\tilde{C}_i\})$. To this end, we now present new theories for the RB decay that are representation-independent and highly accurate.
The average survival probability over all RB sequences of length $m$ is $$P_m = \frac{1}{|\mathcal{C}|^m}\sum_{{\boldsymbol{s}}} \text{Tr} (E \tilde{C}_{{\boldsymbol{s}}^{-1}} \tilde{C}_{s_m} \dots \tilde{C}_{s_{1}} (\rho )) ,
\label{Pmdef}$$ where $E$ and $\rho$ are the (imperfect) measurement and state preparation, $C_{{\boldsymbol{s}}^{-1}}$ is the Clifford that inverts the first $m$ Cliffords, ${\boldsymbol{s}} \in [1..|\mathcal{C}|]^m$, and $|\mathcal{C}|$ is the order of the Clifford group. The map $\mathcal{S}_m = \text{avg}_{{\boldsymbol{s}}}[ \tilde{C}_{{\boldsymbol{s}}^{-1}} \tilde{C}_{s_m} \dots \tilde{C}_{s_{1}}] $ can be written as $\mathcal{S}_m = | \mathcal{C}| \vec{v}^T \mathscr{R}^{m+1} \vec{v}$, where $\vec{v} = (\mathbb{1},\mathbb{0}, \dots, \mathbb{0} )^T$, $\mathbb{1}$ and $\mathbb{0}$ are the $n$-qubit identity and “zero” superoperators ($\mathbb{0}(\rho) = 0$) respectively, and $$\mathscr{R} = \frac{1}{|\mathcal{C}|}\begin{pmatrix}
\tilde{C}_{1 \to 1} & \tilde{C}_{2 \to 1} & \cdots & \tilde{C}_{|\mathcal{C}| \to 1} \\
\tilde{C}_{1 \to 2} & \tilde{C}_{2 \to 2} & \cdots & \tilde{C}_{|\mathcal{C}| \to 2} \\
\vdots & \vdots & \ddots & \vdots \\
\tilde{C}_{1 \to |\mathcal{C}|} & \tilde{C}_{2 \to |\mathcal{C}|} & \cdots & \tilde{C}_{|\mathcal{C}|\to |\mathcal{C}|}
\end{pmatrix},$$ where $C_{j\to k} = C_{j}^{-1}C_k$ and $\tilde{C}_{j \to k}$ is the corresponding imperfect Clifford. It follows that $P_m = |\mathcal{C}| \, \text{Tr}(E(\vec{v}^T \mathscr{R}^{m+1} \vec{v})(\rho))$, and so $$P_m = \sum_i \alpha_i \lambda_i^{m+1},$$ where $\{\lambda_i\}$ are the $4^n|\mathcal{C}|$ eigenvalues of $\mathscr{R}$, $n$ is the number of qubits, and $\{\alpha_i\}$ are constants depending on $\rho$, $E$, and the eigenvectors of $\mathscr{R}$.
This exact expression for the RB decay curve can be calculated efficiently in $m$ (unlike exhaustive averaging over $|\mathcal{C}|^{m-1}$ sequences). However, it is intractable for $n > 1$ qubits, and does not explain why decays with a functional form of $A + Bp^m$ are normally observed in practice. We therefore make a small approximation.
Because $\tilde{C}_{{\boldsymbol{s}}^{-1}} = (\bar{\Lambda} + \Delta_{{\boldsymbol{s}}^{-1}}) C_{s_1}^{-1} \dots C_{s_m}^{-1}$, where $\Delta_{i} = \Lambda_i - \bar{\Lambda}$, we can rewrite Eq. (\[Pmdef\]) as $$P_m = \frac{1}{|\mathcal{C}|^m} \sum_{{\boldsymbol{s}}} \text{Tr} (E \bar{\Lambda} C_{s_1}^{-1} \dots C_{s_m}^{-1} \tilde{C}_{s_m} \dots \tilde{C}_{s_{1}} (\rho)) +\tilde{\delta}_m.$$ Therefore $P_m = \text{Tr}(E \bar{\Lambda} [\mathscr{L}^m(\Id)](\rho )) + \tilde{\delta}_m$, where $$\mathscr{L}(\mathcal{E}) = \text{avg}_i[C_{i}^{-1} \mathcal{E} \tilde{C}_{i} ],$$ is a linear map on superoperators (a “superduperoperator” [@crooks2008quantum]; note that a similar matrix was constructed in Ref. [@chasseur2015complete] to analyze leakage in RB). Hence, $P_m = \sum_i \omega_i\gamma_i^m + \tilde{\delta}_m$, where $\{\gamma_i \}$ are the $16^n$ eigenvalues of $\mathscr{L}$, and $\{\omega_i\}$ depend on $\rho$, $E$, $\bar{\Lambda}$, and the eigenvectors of $\mathscr{L}$. The $\{\gamma_i\}$ are representation-independent. In the appendix we prove that $\tilde{\delta}_m$ satisfies $|\tilde{\delta}_m| \leq \delta_{\diamond} \equiv \frac{1}{2}\text{avg}_i \|\Lambda_i - \bar{\Lambda}\|_{\diamond}$. Because the $\Lambda_i$ are representation-dependent, the size of $\delta_{\diamond}$ depends on the representations $\tilde{\mathcal{C}}$ and $\mathcal{C}$. But since this bound holds for *any* CPTP representations, $|\tilde{\delta}_m| \leq \delta_{\diamond}^{\min}$ with $\delta_{\diamond}^{\min}$ the minimum of $\delta_{\diamond}$ over all CPTP representations of the gatesets.
If there exists a representation in which the errors are gate-independent ($\tilde{C}_i = \Lambda C_i$ for all $i$ and some $\Lambda$), then $\delta_{\diamond}^{\min} = 0$ and the RB decay is exactly described by $\mathscr{L}$. Because the Cliffords are a unitary 2-design [@dankert2009exact], $\mathscr{L}$ has only three distinct eigenvalues in this case: 1, $\gamma$, and 0 (0 has a degeneracy of $16^n - 2$). The RB decay is then exactly described by $P_m = \omega_0 + \omega_1\gamma^m$. This recovers the exact RB theory for gate-independent error maps [@magesan2012characterizing; @magesan2011scalable].
Small errors are a small perturbation away from the case of no error ($\gamma = 1$), and cause similarly small perturbations of the eigenvalues. Hence, for any small errors, $\gamma_0 = 1$ (as 1 is always an eigenvalue of $\mathscr{L}$), $\gamma_1$ satisfies $1 - \gamma_1 \ll 1$, and $|\gamma_{i}| \ll 1$ for all $i > 1$. As such, $$P_m = \omega_0 + \omega_1 \gamma^m + \delta_m,$$ where $\gamma = \gamma_1$, $|\delta_m| \leq \delta_{\diamond}^{\min} + \kappa_m$, and $\kappa_m = |\omega_{2}\gamma_2^m + \omega_{3}\gamma_3^m + \dots|$ is an exponentially decreasing function of $m$. Hence, for $m \gg 1$ the RB decay curve is well approximated by the functional form $P_m = A + Bp^m$. Therefore, the $p$ obtained from fitting RB data to $P_m = A + Bp^m$ is an estimate of $\gamma$, the second largest eigenvalue of $\mathscr{L}$. Similarly, as $r$ is given by $r=(d - 1)(1 - p)/d$, $r$ is approximately an estimate of $r_{\gamma} \equiv (d - 1)(1 - \gamma)/d$. That is, $r = r_{\gamma} + \delta_r$ with $\delta_r \ll 1$ a small correction factor. Fig. \[Fig:scaling\] demonstrates this for the gateset of Example 1.
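For a single qubit, $\gamma$ and $r_{\gamma}$ can be computed directly. The sketch below (assuming numpy; helper names are ours, and the Clifford compilation found by breadth-first search differs from the Epstein *et al.* table used in the appendix, so constants may differ) builds the 24 Cliffords from $G_x$ and $G_y$, forms the $16 \times 16$ matrix of $\mathscr{L}$ via $\text{vec}(A\mathcal{E}B) = (B^{T} \otimes A)\,\text{vec}(\mathcal{E})$, and reads off $\gamma$ as its second-largest eigenvalue:

```python
import itertools
import numpy as np

PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def ptm(U):
    """Pauli transfer matrix of the unitary channel rho -> U rho U^dag."""
    R = np.empty((4, 4))
    for j, k in itertools.product(range(4), repeat=2):
        R[j, k] = np.real(np.trace(PAULIS[j] @ U @ PAULIS[k] @ U.conj().T)) / 2
    return R

def rot(P, theta):
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * P

def clifford_words():
    """Breadth-first search over words in {x, y} (pi/2 rotations) until
    all 24 one-qubit Clifford PTMs are reached; returns their words."""
    gens = {'x': ptm(rot(PAULIS[1], np.pi / 2)),
            'y': ptm(rot(PAULIS[2], np.pi / 2))}
    key = lambda C: np.round(C).astype(int).tobytes()
    words = {key(np.eye(4)): ''}
    frontier = [('', np.eye(4))]
    while len(words) < 24:
        new = []
        for w, C in frontier:
            for g, G in gens.items():
                C2 = G @ C
                if key(C2) not in words:
                    words[key(C2)] = w + g
                    new.append((w + g, C2))
        frontier = new
    return list(words.values())

def compiled_cliffords(theta):
    """Perfect and imperfect (Example 1) Cliffords, as PTMs."""
    perf = {'x': ptm(rot(PAULIS[1], np.pi / 2)),
            'y': ptm(rot(PAULIS[2], np.pi / 2))}
    err = ptm(rot(PAULIS[3], theta))     # Z-detuning on each primitive
    imp = {g: err @ G for g, G in perf.items()}
    perfect, imperfect = [], []
    for w in clifford_words():
        P, Q = np.eye(4), np.eye(4)
        for g in w:
            P, Q = perf[g] @ P, imp[g] @ Q
        perfect.append(P)
        imperfect.append(Q)
    return perfect, imperfect

def r_gamma(perfect, imperfect, d=2):
    """(d-1)(1-gamma)/d, with gamma the second-largest eigenvalue of the
    matrix of L(E) = avg_i C_i^{-1} E Ctilde_i."""
    L = sum(np.kron(Ct.T, np.linalg.inv(C))
            for C, Ct in zip(perfect, imperfect)) / len(perfect)
    ev = sorted(np.abs(np.linalg.eigvals(L)), reverse=True)
    return (d - 1) * (1 - ev[1]) / d
```

With gate-independent depolarizing errors ($\tilde{C}_i = \mathcal{D}_p C_i$) this recovers $\gamma = p$ exactly, and for the Example 1 gateset it exhibits the quartic scaling $r_\gamma \propto \theta^4$, far below the $O(\theta^2)$ AGsI scale (up to a compilation-dependent constant).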
To our knowledge, this is the first proof that the RB decay curve is guaranteed to be exponential for small errors that can be described by CPTP maps – including gate-dependent errors. This indicates that the model $P_m = A + (B + Cm)p^m$ is not necessary. Fitting it should always yield $\hat{C} \approx 0$, so estimating $C$ is not likely to help quantify gate-dependence (cf. the suggestion in Refs. [@magesan2012characterizing; @magesan2011scalable]). Instead, our results show that significant non-exponential decay is a clear symptom of non-Markovianity (e.g., time dependence).
We now return to a question raised earlier: Are there natural representations of the perfect and imperfect gatesets in which $\epsilon = r$? “Natural” is important, because $\epsilon$ varies so widely over representations. An absurd answer would be to compute $r$ and then search over *all* representations of a gateset to find one in which $\epsilon = r$. The most obvious reasonable option is to arbitrarily fix a CPTP representation of the perfect gateset and to choose the representation of the imperfect gateset in which the gates are all CPTP and $\epsilon$ is minimal ($\epsilon$ can always be made large by choosing a “bad” representation – see Example 2). This defines a new and gauge-invariant AGsI $\epsilon_{\min} := \min_{M} [\epsilon(\tilde{\mathcal{C}}(M), \mathcal{C})]$, with the minimization restricted such that the gates in $\tilde{\mathcal{C}}(M)$ are CPTP. But $\epsilon_{\min}$ does *not* exactly correspond to $r$, as it can be strictly *less* than $r$ (see the appendix).
After the initial version of this Letter appeared, Wallman [@wallman2017randomized] published an independent analysis of RB. Based on a different representation of the $\mathscr{L}$ operator, Wallman’s theory also derives an exponential decay at the same rate $\gamma$ derived here, but proves a tighter error bound that decays exponentially with $m$, confirming that the RB decay is completely described by $\gamma$, and $\delta_r$ is negligible in $r = r_{\gamma} + \delta_r$. Wallman’s construction implies that there exists a representation of the imperfect gates for which $\epsilon = r$. To prove this, let $\mathscr{L}'(\mathcal{E}) = \text{avg}_i[\tilde{C}_{i} \mathcal{E} C_{i}^{-1}]$. $\mathscr{L}'$ has the same spectrum as $\mathscr{L}$. Wallman [@wallman2017randomized] gives an explicit construction of a superoperator $\mathcal{L}$ that satisfies $\mathscr{L}'(\mathcal{L}) = \mathcal{L}\mathcal{D}_{\gamma}$, where $\mathcal{D}_{\lambda}$ is a depolarizing channel ($\mathcal{D}_{\lambda} (\rho) = (1 - \lambda)\mathbb{1}/d + \lambda\rho$). Now, consider the particular representation of the imperfect Cliffords $\tilde{\mathcal{C}}(\mathcal{L}^{-1}) = \{\mathcal{L}^{-1} \tilde{C}_{i} \mathcal{L}\}$. Some simple algebra (see the appendix) shows that $ r_{\gamma} = \epsilon(\tilde{\mathcal{C}}(\mathcal{L}^{-1}), \mathcal{C})$. So there is an explicitly calculable representation of the gateset that makes $\epsilon = r$. However, the gates in this representation are not generally completely positive, which makes it hard to consider this gauge “natural” (non-CP gauge choices can even make $\epsilon < 0$).
**Conclusions:** It is surprisingly nontrivial to relate the RB error rate $r$ – a well-defined, representation-independent property of a physical QIP’s gates – to the process matrices describing those gates, and identify what property it corresponds to. The simple relationship for gate-*independent* errors, where $r$ equals the average gateset infidelity (AGsI, $\epsilon$) between imperfect and perfect Cliffords, obscures the complexity of the general case. AGsI can be orders of magnitude larger than $r$ unless the right representations are used. This has serious practical consequences, as shown by Example 1 and some of the results in Ref. [@epstein2014investigating], where $r \ll \epsilon$ for experimentally plausible error models.
Our analysis indicates that RB is even more stable and reliable than indicated by previous work [@epstein2014investigating; @magesan2011scalable; @magesan2012characterizing]; $P_m$ decays exponentially (without higher-order corrections of the form $mp^m$) for all small errors describable by process matrices, including coherent errors. We established this by introducing a new, accurate, theory for the RB decay curve that associates $r$ with a calculable, representation-independent property of the physical gateset. Subsequent results by Wallman [@wallman2017randomized] allow us to observe that this quantity *is* an AGsI, for at least one representation of the imperfect gates, but in this representation the gate process matrices are generally unphysical (not completely positive). Since current theories for many extended RB protocols, such as interleaved [@magesan2012efficient], dihedral [@carignan2015characterizing; @cross2016scalable], and unitarity [@wallman2014randomized] RB, rely on representation-dependent techniques, it is an interesting open question whether they can be reformulated in a representation-independent way as we did here with basic RB.
The RB protocol
===============
Here we provide a detailed review of the basic RB procedure, as used throughout the main text. The protocol was defined in Refs. [@magesan2012characterizing; @magesan2011scalable]; see also Refs. [@wallman2014randomized; @epstein2014investigating] for detailed descriptions of basic RB. As in the main text, we denote representations of the physical and ideal Cliffords by $\tilde{\mathcal{C}}=\{\tilde{C}_i\}$ and $\mathcal{C}=\{C_i\}$, respectively, for $i \in [1,2,\dots,|\mathcal{C}|]$. Furthermore, we assume that we can prepare our system in a state $\rho \approx \ket{\psi}\bra{\psi}$ for some pure state $\psi$, and that we can approximately measure whether the system is still in this state, which is represented by the measurement $\mathcal{M} = \{E, \Id-E\}$ where $E \approx \ket{\psi}\bra{\psi}$.
Given some finite set of positive integers $\mathbb{M}$, some $K : \mathbb{M} \to \mathbb{N}$, and some $R \in \mathbb{N}$ (how these quantities are chosen is discussed below), the basic RB protocol is the following:
1. For each $m \in \mathbb{M}$, pick $K(m)$ sequences of length $m+1$ uniformly sampled from the sub-set of all sequences ${\boldsymbol{s}} \in [1,\dots,|\mathcal{C}|]^{m+1}$ with $C_{s_{m+1}}C_{s_m}\dots C_{s_1} = \Id$.
2. For each sampled sequence ${\boldsymbol{s}}$, apply $\mathcal{S}_{{\boldsymbol{s}},m} =\tilde{C}_{s_{m+1}}\tilde{C}_{s_m}\dots \tilde{C}_{s_1}$ to the initial state $\rho$, and measure to see whether the state has “survived” (the measurement $\mathcal{M}$). Repeat this experiment $R$ times to estimate the *survival probability* $P_{m,{\boldsymbol{s}}} =\text{Tr}(E\mathcal{S}_{{\boldsymbol{s}},m} (\rho))$. We denote this estimate by $\hat{P}_{m,{\boldsymbol{s}}}$.
3. For each $m \in \mathbb{M}$, calculate $\hat{P}_m = \text{avg}_{{\boldsymbol{s}}}[ \hat{P}_{m,{\boldsymbol{s}}}]$, where this average is over the $K(m)$ sequences of length $m+1$ that were sampled. This is an estimate of the average survival probability over *all* possible RB sequences of length $m+1$, which we denote $P_m$.
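The steps above can be sketched as follows (illustrative Python assuming numpy; for self-containedness the "gates" are the 4-element group of Pauli transfer matrices rather than the 24 Cliffords, but the sampling-and-inversion logic of step 1 and the survival probability of step 2 are the same):

```python
import numpy as np

# Pauli transfer matrices of I, X, Y, Z -- a small closed gate group.
GROUP = [np.diag(v).astype(float) for v in
         ([1, 1, 1, 1], [1, 1, -1, -1], [1, -1, 1, -1], [1, -1, -1, 1])]

def sample_rb_sequence(gates, m, rng):
    """Step 1: sample s_1..s_m uniformly, then append the index of the
    group element that inverts their product."""
    seq = list(rng.integers(len(gates), size=m))
    prod = np.eye(gates[0].shape[0])
    for s in seq:
        prod = gates[s] @ prod
    inv = np.linalg.inv(prod)
    s_inv = min(range(len(gates)),
                key=lambda i: np.linalg.norm(gates[i] - inv))
    return seq + [s_inv]

def survival_probability(imperfect, seq, rho, E):
    """Step 2: P_{m,s} = Tr(E S_s(rho)), with rho and E as Pauli vectors."""
    state = rho.copy()
    for s in seq:
        state = imperfect[s] @ state
    return float(E @ state)

rng = np.random.default_rng(0)
rho = np.array([1.0, 0, 0, 1])    # |0><0| as a Pauli vector
E = np.array([0.5, 0, 0, 0.5])    # measurement effect for |0><0|
seq = sample_rb_sequence(GROUP, 5, rng)

# Step 3 would average such survival probabilities over K(m) sampled
# sequences per length m, then fit the decay versus m.
p = 0.95
noisy = [np.diag([1.0, p, p, p]) @ G for G in GROUP]
P = survival_probability(noisy, seq, rho, E)
```

For this toy group, perfect gates give a survival probability of exactly 1, and a gate-independent depolarizing error gives $P_m = (1 + p^{m+1})/2$, the expected $A + Bp^m$ form.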
The experimental estimates for $P_m$ are analyzed by fitting them to the model [@magesan2012characterizing; @magesan2011scalable] $$P_m = A+ (B + Cm)p^m ,$$ where $A$, $B$, $C$, and $p$ are fit parameters. The estimated RB number is obtained from the fit parameters via [@magesan2012characterizing; @magesan2011scalable] $$\hat{r} = \frac{d-1}{d} (1-\hat{p}),$$ where $d$ is the dimension of the total Hilbert space on which the benchmarking is performed. For the Clifford gates on $n$ qubits, $d=2^n$.
The standard fitting model fixes $C=0$ (this is also called the “0^th^ order” fitting model [@magesan2012characterizing]). However, Magesan *et al.* [@magesan2012characterizing; @magesan2011scalable] suggest that allowing $C\neq 0$ is more appropriate when the error maps might be “weakly gate-dependent” (the meaning of this is discussed in the main text), rather than perfectly gate-independent. This is called the “1^st^ order” fitting model. Note that the “1^st^ order” fitting function given here is different to that presented in Refs. [@magesan2012characterizing; @magesan2011scalable]. However, the function herein and the “1^st^ order” function given therein have the same functional form (i.e., they are both the sum of a constant term, a purely exponential $p^m$ term, and a $mp^m$ term), which is all that is relevant from the perspective of fitting.
It is clearly essential to make reasonable choices for $\mathbb{M}$ (the set of sequence lengths), $K$ (the function which defines the number of sequences sampled at each length) and $R$ (the number of repeats of each circuit) in order to obtain a good estimate of $r$. How to do this is a statistics problem which we are not concerned with in this Letter. See Refs. [@epstein2014investigating; @granade2015accelerated; @wallman2014randomized; @helsen2017multiqubit] for some work on this.
RB Simulations {#rb-simulations .unnumbered}
--------------
In this work, we are interested in the underlying property of the gateset which RB estimates. We are *not* interested in how well this property is estimated for physically reasonable choices for $R$, $K$, and $\mathbb{M}$ (see above for the meaning of these quantities). Hence, for all the RB simulations in this paper there is no sampling error on individual experiments (i.e., equivalent to $R \to \infty$), we used $K(m)=500$ for all $m$, and sequence lengths including at least $m \in \{1, 1+50, 1+100, \dots, 1+2000\}$ (in some simulations we used smaller step sizes between $m$ values). This is to minimize statistical contributions to any deviations of the RB number from the behavior predicted by RB theory. Larger values for the maximum sequence length and greater fixed $K(m)$ (or increasing $K(m)$ with $m$) do not appear to make a significant difference to the results, as is expected [@epstein2014investigating].
To fit the simulated RB data we use unweighted least squares minimization, and the fit is to the “1^st^ order” model (see above, or main text). This is because Refs. [@magesan2012characterizing; @magesan2011scalable] suggest that the “1^st^ order” model is more appropriate than the standard fitting model when there could be any gate-dependence to the error maps. However, for all simulations herein (where the gatesets do have gate-dependent error maps) we found empirically that the RB numbers obtained from the standard fitting model are always within error bars (see below) of the RB numbers obtained from fitting to the “1^st^ order” model. This is not surprising, because in the main text we show that the RB decay has a functional form that is well described by $P_m = A+Bp^m$ for any low-error gateset.
To calculate error bars for estimated RB numbers, the RB estimation protocol was simulated 50 times. The reported estimated RB number is taken to be the average of these estimated RB numbers, and the error on the estimated RB number is taken to be the standard deviation of these estimated RB numbers.
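As an illustration of the estimation just described, the following is a minimal sketch (our own code, not the authors') of fitting simulated decay data to the standard model $P_m = A + Bp^m$ and converting the fitted decay rate $p$ into an RB number; as noted above, for the gatesets considered here this agrees with the 1^st^ order fit to within error bars. The value $p=0.995$ is a hypothetical example.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_rb(ms, Pms, d=2):
    """Fit survival probabilities to P_m = A + B*p**m; return (r, p)."""
    model = lambda m, A, B, p: A + B * p**m
    (A, B, p), _ = curve_fit(model, ms, Pms, p0=[0.5, 0.5, 0.99],
                             bounds=([0, 0, 0], [1, 1, 1]))
    r = (d - 1) * (1 - p) / d  # RB number from the fitted decay rate p
    return r, p

# Noiseless synthetic decay (the R -> infinity limit used in the simulations),
# with the sequence lengths quoted above and a hypothetical p = 0.995.
ms = np.arange(1, 2002, 50)
Pms = 0.5 + 0.5 * 0.995**ms
r, p = fit_rb(ms, Pms)
# Repeating the whole protocol 50 times and taking the mean and standard
# deviation of the resulting r values gives the reported estimate and error bar.
```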
In all simulations, state preparation and measurement (SPAM) are taken to be ideal, with preparation and projection onto the $+1$ eigenstate of $\sigma_z$. Although this is not physically realistic, we have chosen this for conceptual simplicity, as our results are independent of whether or not the SPAM is perfect. Imperfect SPAM affects the asymptotic RB survival probability and the $m=0$ intercept, but not $r$ [@magesan2012characterizing; @magesan2011scalable].
In the main text we presented RB decays for Cliffords compiled from imperfect implementations of the $G_x$ and $G_y$ gates. The general behavior of the RB decay for the particular gateset we considered (see main text, Example 1) does not depend upon the details of the compilation table. The particular Clifford compilation table we used is that given in Epstein *et al.* [@epstein2014investigating] (see Table I therein), where we have further decomposed the three non-trivial rotations around the $\sigma_x$ and $\sigma_y$ axes in the obvious way. That is, an $n \pi/2$ rotation around either axis is compiled via $n$ sequential $\pi/2$ rotations, for $n=1,2,3$. We have implemented the identity Clifford gate as “skip to the next gate” (the alternatives being to compile the identity Clifford into $G_x$ and $G_y$ gates, or to also include a, possibly imperfect, “idle” gate).
In the main text we compare simulated RB decay curves to the decay curves predicted by the “1^st^ order” theory given in [@magesan2012characterizing; @magesan2011scalable], which is the most accurate RB theory given therein. The “1^st^ order” theory decay curve is simply a function of the gateset $\{\tilde{C}_i\}$, and the SPAM operations $\rho$ and $E$. We calculate it directly from the formulas given in Ref. [@magesan2012characterizing].
Additional examples of $r \ll \epsilon$ {#additional-examples-of-r-ll-epsilon .unnumbered}
---------------------------------------
In the main text (see Example 1) we considered the behavior of RB for 1-qubit Cliffords compiled into imperfect gates that are represented by $G_x = R(\sigma_x,\pi/2)$ and $G_y = R(\sigma_y,\pi/2)$, where $$R(H,\theta)[\rho] := \exp(- i \theta H/2) \rho\exp(i \theta H/2).$$ We considered the particular imperfect primitives $\tilde{G}_{x} = R(\sigma_z,\theta)G_x$ and $\tilde{G}_y = R(\sigma_z,\theta)G_y$ with $\theta \ll 1$. Here we show that the discrepancy between $r$ and $\epsilon$ extends to more general unitary and stochastic errors. Consider the two imperfect primitives $$\begin{aligned}
\tilde{G}_{x} &= \mathcal{D}_{\lambda} R(\hat{v}_x \cdot \vec{\sigma},\theta_x)G_x, \label{eq:X-gen}
\tilde{G}_{y} &= \mathcal{D}_{\lambda} R(\hat{v}_y \cdot \vec{\sigma},\theta_y) G_y, \label{eq:Y-gen}\end{aligned}$$ for some $\theta_x$, $\theta_y$, $\lambda$, and unit vectors $\hat{v}_x , \hat{v}_y$, which are not necessarily along the $x$ and $y$ axes, where $ \mathcal{D}_{\lambda}(\rho) = (1 - \lambda) \mathds{1}/d + \lambda \rho$ is a depolarizing channel. These gates have both coherent and stochastic errors. There is a range of values for these parameters for which $\epsilon \gg r$. An example is given in Figure \[fig:example-z-error-decay-2\]. However, note that we do not have $\epsilon \gg r$ for all values of these parameters.
The gauge-minimized AGsI can be smaller than $r$ {#the-gauge-minimized-agsi-can-be-smaller-than-r .unnumbered}
------------------------------------------------
Consider an imperfect gateset that may be represented by the CPTP maps $\tilde{\mathcal{C}}_a = \{\tilde{C}_{i,a} \}$ where $$\tilde{C}_{i,a} = \mathcal{D}_{\lambda} C_i ,$$ with $\mathcal{C} = \{C_i\}$ the standard representation of the 1-qubit Clifford gates (i.e., the automorphism group of the Pauli matrices), and $\mathcal{D}_{\lambda}(\rho) = (1 - \lambda) \mathds{1}/d + \lambda \rho$ with $1 \geq \lambda \geq -1/3$, which is a uniform depolarization channel. For this gateset, $r = 1 - \bar{F}(\mathcal{D}_{\lambda},\mathds{1}) = \epsilon(\tilde{\mathcal{C}}_{a},\mathcal{C})$, as the error maps are gate independent (see main text, or Refs. [@magesan2011scalable; @magesan2012characterizing]). Now, for $\alpha > 0$, define the invertible linear map $M_{\alpha}$ by $$\begin{aligned}
M_{\alpha}(\mathds{1}) &= \mathds{1},\hspace{0.01cm} &M_{\alpha}(\sigma_x) &= \sigma_x, \\
M_{\alpha}(\sigma_y) &= \alpha \sigma_y ,\hspace{0.01cm} &M_{\alpha}(\sigma_z) &= \sigma_z.\end{aligned}$$ In terms of this map, define the gateset representation $\tilde{\mathcal{C}}_b = \{\tilde{C}_{i,b} \}$ where $$\tilde{C}_{i,b} =M_{\alpha} \tilde{C}_{i,a}M_{\alpha}^{-1}.$$ By construction, $\tilde{\mathcal{C}}_b$ is gauge-equivalent to $\tilde{\mathcal{C}}_a$. So both $\tilde{\mathcal{C}}_a$ and $\tilde{\mathcal{C}}_b$ can represent the same physical gateset. As such, these representations are associated with the same $r$, and we know that $r = \epsilon(\tilde{\mathcal{C}}_a,\mathcal{C})$ from above. We now show that for any non-trivial depolarizing channel (i.e., $\lambda <1$) there exists a range of values for $\alpha$ such that
1. $ \epsilon(\tilde{\mathcal{C}}_b,\mathcal{C}) < \epsilon(\tilde{\mathcal{C}}_a,\mathcal{C})= r$.
2. All of the gates in $\tilde{\mathcal{C}}_b$ are CPTP.
This then implies that, when minimizing over all CPTP representations of this physical gateset, the minimal AGsI to the target gates is smaller than $r$. That is, $\epsilon_{\min} < r$ in this case, where $\epsilon_{\min}$ is defined in the main text.
Using the same notation as for the AGsI of a gateset, denote the AGI of a gate representation $\tilde{G}$ to a target $G$ by $\epsilon(\tilde{G},G) \equiv 1 - \bar{F} (\tilde{G},G)$. Using the relations from Refs. [@dugas2016efficiently; @wallman2015estimating] (e.g., see Eq. (10) in Ref. [@dugas2016efficiently]), the AGI of a trace-preserving map $\tilde{G}$ to a target $G$ may be written as $$\epsilon(\tilde{G},G)= \frac{d^2- \text{Tr}(\Lambda)}{d(d+1)},
\label{Eq:AGI-as-trace}$$ where $d$ is the dimension of the Hilbert space, and $\Lambda = \tilde{G}G^{-1}$. The depolarizing channel $\mathcal{D}_{\lambda}$ commutes with $M_{\alpha}$, and so the AGI of $\tilde{C}_{i,b}$ to the target Clifford $C_{i}$ is $$\epsilon(\tilde{C}_{i,b},C_{i})= \frac{1}{6}(4- \text{Tr}(\Lambda_{i,b})),
\label{Eq:AGI-as-trace2}$$ where $$\Lambda_{i,b} \equiv \tilde{C}_{i,b} C^{-1}_{i} = \mathcal{D}_{\lambda} M_{\alpha}C_{i} M_{\alpha}^{-1}C^{-1}_i.$$
For any $i$ for which the Clifford $C_i$ maps $\sigma_y \to \sigma_y$ up to phase, $\Lambda_{i,b} = \mathcal{D}_{\lambda}$, and hence $\tilde{C}_{i,a}$ and $\tilde{C}_{i,b}$ have the same AGI to $C_i$. From the definition of $M_{\alpha}$ and because Cliffords map Pauli operators to Pauli operators, for any $i$ for which the Clifford gate $C_i$ does *not* map $\sigma_y \to \sigma_y$, up to phase, it follows that $$\begin{aligned}
\Lambda_{i,b}(\mathds{1}) &= \mathds{1}, \hspace{0.1cm} &\Lambda_{i,b}(\sigma_y) &= \lambda \alpha \sigma_y, \label{Eq:error-maps-i1}\\
\Lambda_{i,b}(\sigma_l) &= \frac{\lambda}{\alpha}\sigma_l,\hspace{0.1cm} & \Lambda_{i,b}(\sigma_m)& = \lambda \sigma_m,
\label{Eq:error-maps-i}\end{aligned}$$ where $l$ and $m$ are some ordering of $x$ and $z$ (the exact labelling depends on $i$, but is irrelevant here). Therefore, from Eq. (\[Eq:AGI-as-trace2\]) the AGI of $\tilde{C}_{i,b}$ to $C_{i}$ for any such $i$ is $$\epsilon(\tilde{C}_{i,b},C_{i})= \frac{1}{6}\left(3 - \lambda \frac{\alpha^2 + \alpha + 1}{\alpha}\right).$$
For any $\alpha >0$ with $\alpha \neq 1$, this AGI is smaller than $\epsilon(\tilde{C}_{i,a},C_{i})$ (which is given by taking $\alpha=1$ in this equation). Therefore, for all $\alpha > 0$ except $\alpha = 1$, the AGsI of $\tilde{\mathcal{C}}_{b}$ to $\mathcal{C}$ (which is simply the average of the AGIs of the gates in the gateset) is smaller than the AGsI of $\tilde{\mathcal{C}}_{a}$ to $\mathcal{C}$. That is, $\epsilon(\tilde{\mathcal{C}}_b,\mathcal{C}) < \epsilon(\tilde{\mathcal{C}}_a,\mathcal{C})$. Note that for general $\alpha$ it is possible that $\epsilon(\tilde{\mathcal{C}}_b,\mathcal{C}) <0$; this is because the gates in $\tilde{\mathcal{C}}_{b}$ are not all completely positive for all values of $\alpha$.
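This comparison can be checked numerically in the Pauli transfer matrix (PTM) representation. The sketch below is our own illustration: we pick as $C_i$ the $\pi/2$ rotation about $\sigma_z$ (a Clifford which does not preserve $\sigma_y$), with hypothetical values $\lambda = 0.99$ and $\alpha = 1.1$.

```python
import numpy as np

def agi(Lam, d=2):
    """AGI of a TP map with error map Lam (as a PTM), per Eq. (AGI-as-trace)."""
    return (d**2 - np.trace(Lam)) / (d * (d + 1))

lam, alpha = 0.99, 1.1                  # depolarizing strength, gauge parameter
D = np.diag([1.0, lam, lam, lam])       # depolarizing channel, basis (I, X, Y, Z)
M = np.diag([1.0, 1.0, alpha, 1.0])     # gauge map M_alpha
# PTM of the pi/2 rotation about sigma_z: X -> Y, Y -> -X, Z -> Z
C = np.array([[1, 0, 0, 0],
              [0, 0, -1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

Ca = D @ C                               # representation a of this Clifford
Cb = M @ Ca @ np.linalg.inv(M)           # gauge-equivalent representation b
Lam_a = Ca @ np.linalg.inv(C)            # error maps Lambda = C~ C^{-1}
Lam_b = Cb @ np.linalg.inv(C)

eps_a = agi(Lam_a)                       # equals r = (1 - lam)/2 for d = 2
eps_b = agi(Lam_b)                       # strictly smaller for alpha != 1
```

Here `Lam_b` reproduces the diagonal error map of Eqs. (\[Eq:error-maps-i1\] – \[Eq:error-maps-i\]), and `eps_b` matches the closed-form AGI derived above.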
We now confirm that for any $\lambda <1$ there are values of $\alpha \neq 1$ such that $\tilde{\mathcal{C}}_{b}$ is CPTP. All of the gates in $\tilde{\mathcal{C}}_{b}$ are obviously TP. The map $\tilde{C}_{i,b}$ is CP if the error map $\Lambda_{i,b}$ is CP. A map is CP if all of the eigenvalues of the Choi matrix are non-negative [@skowronek2012choi; @choi1975completely], where the Choi matrix $\chi$ for a map $G$ is defined by $$\chi(G) = \sum_{i,j=1}^d B_{ij} \otimes G(B_{ij}),$$ with $B_{ij}$ the $d\times d$ matrix with 1 in the $ij$-th entry and 0s elsewhere (the standard basis for matrices). For those gates for which the error map is simply $\mathcal{D}_{\lambda}$, the error map is clearly CP, as it is a depolarizing channel. Hence, we need only consider those gates for which the error maps are given by Eqs. (\[Eq:error-maps-i1\] – \[Eq:error-maps-i\]). For any such error map, the eigenvalues of its Choi matrix, denoted $\xi_{i}$ with $i=0,1,2,3$, are given by $$\xi_{j+2k} = (-1)^j\lambda\alpha^2 +(1+(-1)^{k+j}\lambda)\alpha + (-1)^{k} \lambda.$$ For any non-identity depolarizing channel (so $\lambda < 1$) there exists an $\alpha \neq 1$ such that all of these eigenvalues are positive (and hence all the gates are CP), which may be easily confirmed numerically. As such, there exist values of $\alpha$ for any non-trivial $\lambda$ such that (1) $ \epsilon(\tilde{\mathcal{C}}_b,\mathcal{C}) < \epsilon(\tilde{\mathcal{C}}_a,\mathcal{C}) = r$, and (2) all of the gates in $\tilde{\mathcal{C}}_b$ are CPTP. In turn, this implies that $\epsilon_{\min} < r$ for at least some gatesets.
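A minimal numerical confirmation of this claim, using the eigenvalue formula above with a hypothetical $\lambda = 0.9$ and a scan of gauge parameters $\alpha \neq 1$:

```python
import numpy as np

def choi_eigs(lam, alpha):
    """Choi-matrix eigenvalues xi_{j+2k} of the error map, per the formula above."""
    return np.array([(-1)**j * lam * alpha**2
                     + (1 + (-1)**(k + j) * lam) * alpha
                     + (-1)**k * lam
                     for k in (0, 1) for j in (0, 1)])

lam = 0.9
# Scan gauge parameters near, but not equal to, 1.
alphas = np.linspace(1.001, 1.2, 200)
cp_ok = [a for a in alphas if np.all(choi_eigs(lam, a) > 0)]
# cp_ok is non-empty: some alpha != 1 keeps every gate CP, as claimed.
```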
Error bounds for the approximate RB theory {#error-bounds-for-the-approximate-rb-theory .unnumbered}
------------------------------------------
In the main text, we started from the exact average survival probability over all RB sequences of length $m$, which is given by $$P_m = \frac{1}{|\mathcal{C}|^m}\sum_{{\boldsymbol{s}}} \text{Tr} (E \tilde{C}_{{\boldsymbol{s}}^{-1}} \tilde{C}_{s_m} \dots \tilde{C}_{s_{1}} (\rho )) ,$$ and by noting that $\tilde{C}_{{\boldsymbol{s}}^{-1}} = (\bar{\Lambda} + \Delta_{{\boldsymbol{s}}^{-1}}) C_{s_1}^{-1} \dots C_{s_m}^{-1} $, where $\Delta_{i} = \Lambda_i - \bar{\Lambda}$, this was rewritten as $$P_m = \frac{1}{|\mathcal{C}|^m}\sum_{{\boldsymbol{s}}} \text{Tr} (E \bar{\Lambda} C_{s_1}^{-1} \dots C_{s_m}^{-1} \tilde{C}_{s_m} \dots \tilde{C}_{s_{1}} (\rho)) +\tilde{\delta}_m.$$ Here $\tilde{\delta}_m$ is the correction required so that this equality holds, and is given explicitly by $$\tilde{\delta}_m = \frac{1}{|\mathcal{C}|^m}\sum_{{\boldsymbol{s}}} \text{Tr} (E \Delta_{{\boldsymbol{s}}^{-1}} C_{s_1}^{-1} \dots C_{s_m}^{-1} \tilde{C}_{s_m} \dots \tilde{C}_{s_{1}} (\rho)).
\label{tildedeltam}$$ We now prove that $$|\tilde{\delta}_m| \leq \delta_{\diamond} \equiv \frac{1}{2} \text{avg}_i \| \Lambda_i - \bar{\Lambda}\|_{\diamond},
\label{eq-to-prove}$$ for all $m$, as claimed in the main text, where $ \| \cdot \|_{\diamond}$ is the diamond norm.
We begin by defining the diamond norm. Let $L(\mathcal{H})$ denote the space of linear operators on some Hilbert space $\mathcal{H}$. For $\rho \in L(\mathcal{H})$, define $\|\rho\|_1 := \text{Tr}(\sqrt{\rho^{\dagger}\rho})$. For a superoperator $\mathcal{A} : L(\mathcal{H}) \to L(\mathcal{H}) $, the diamond norm is defined by [@watrous2005notes; @lidar2013quantum; @aharonov1998quantum] $$\| \mathcal{A} \|_{\diamond} := \sup_{\rho} \| [\mathcal{A} \otimes \mathds{1}](\rho) \|_1 ,$$ where $\mathds{1}$ is the identity superoperator on $L(\mathcal{H})$, and the supremum is over all $\rho \in L(\mathcal{H} \otimes \mathcal{H})$ with $\| \rho \|_1 = 1$. The diamond norm has the following properties: For any CPTP maps $\mathcal{A}$ and $\mathcal{B}$ on $L(\mathcal{H})$, for any linear maps $\mathcal{A}'$ and $\mathcal{B}'$ on $L(\mathcal{H})$, and any density operator $\rho$ and measurement effect $E$ on $\mathcal{H}$, we have $$\begin{aligned}
\| \mathcal{A} \|_{\diamond} &= 1,\label{d1} \\
\| \mathcal{A}'\mathcal{B}' \|_{\diamond} &\leq \|\mathcal{A}'\|_{\diamond}\|\mathcal{B}'\|_{\diamond},\label{d2} \\
2|\text{Tr}(E [\mathcal{A} -\mathcal{B}](\rho)) |&\leq \| \mathcal{A}-\mathcal{B} \|_{\diamond}.\label{d3}\end{aligned}$$
The first of these properties follows easily from the definition of the diamond norm (see also Ref. [@aharonov1998quantum]). The second property is proven in Ref. [@aharonov1998quantum]. The final property can be proven in the following way: for any $\mathcal{A}$, $\mathcal{B}$, $\rho$, and $E$ as above, then $$\begin{aligned}
\| \mathcal{A} -\mathcal{B} \|_{\diamond} &\geq \| [\mathcal{A} -\mathcal{B}](\rho)\|_1, \vphantom{\max_T} \\
&= 2 \max_{T \leq \mathds{1}} \left[ \text{Tr}(T [\mathcal{A} -\mathcal{B}](\rho) ) \right],\label{max-Tr} \\
&\geq 2 | \text{Tr}(E [\mathcal{A} -\mathcal{B}](\rho) )| \vphantom{\max_T},\end{aligned}$$ where the maximization in Eq. (\[max-Tr\]) is over all positive operators $T$ satisfying $T\leq \mathds{1}$, and this equality follows from the relation $ \| \sigma - \sigma' \|_1 = 2 \max_{T \leq \mathds{1}} \left[ \text{Tr}(T (\sigma - \sigma')) \right]$, for any density operators $\sigma$ and $\sigma'$, which is given in Refs. [@gilchrist2005distance; @nielsen2010quantum].
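The relation used in the last step — that the trace distance is attained by taking $T$ to be the projector onto the positive part of $\sigma - \sigma'$ — can be checked numerically; the two qubit states below are arbitrary choices of ours:

```python
import numpy as np

def trace_norm(A):
    """Trace norm of a Hermitian matrix via its eigenvalues."""
    return np.sum(np.abs(np.linalg.eigvalsh(A)))

sigma  = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
sigmap = np.array([[0.4, -0.1j], [0.1j, 0.6]], dtype=complex)

diff = sigma - sigmap                          # Hermitian, traceless
w, V = np.linalg.eigh(diff)
P_plus = V[:, w > 0] @ V[:, w > 0].conj().T    # projector onto positive part
lhs = trace_norm(diff)
rhs = 2 * np.real(np.trace(P_plus @ diff))     # optimal T <= 1 is P_plus
```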
We are now ready to prove Eq. . Using the properties of the diamond norm given above, we have that:
$$\begin{aligned}
|\tilde{\delta}_m| & \leq \frac{1}{|\mathcal{C}|^m}\sum_{{\boldsymbol{s}}} |\text{Tr} (E (\Lambda_{{\boldsymbol{s}}^{-1}} - \bar{\Lambda}) C_{s_1}^{-1} \dots C_{s_m}^{-1} \tilde{C}_{s_m} \dots \tilde{C}_{s_{1}} (\rho))|,\label{ne1} \\
&\leq \frac{1}{2|\mathcal{C}|^m}\sum_{{\boldsymbol{s}}} \| (\Lambda_{{\boldsymbol{s}}^{-1}} - \bar{\Lambda}) C_{s_1}^{-1} \dots C_{s_m}^{-1} \tilde{C}_{s_m} \dots \tilde{C}_{s_{1}}\|_{\diamond}, \label{ne2}\\
&\leq \frac{1}{2|\mathcal{C}|^m}\sum_{{\boldsymbol{s}}} \| (\Lambda_{{\boldsymbol{s}}^{-1}}- \bar{\Lambda}) \|_{\diamond}\| C_{s_1}^{-1} \|_{\diamond} \dots \| C_{s_m}^{-1} \|_{\diamond}\|\tilde{C}_{s_m} \|_{\diamond} \dots \| \tilde{C}_{s_{1}}\|_{\diamond}, \label{ne3}\\
&\leq \frac{1}{2|\mathcal{C}|^m}\sum_{{\boldsymbol{s}}} \| (\Lambda_{{\boldsymbol{s}}^{-1}} - \bar{\Lambda}) \|_{\diamond}, \label{ne4}\\
&= \frac{1}{2|\mathcal{C}|}\sum_{i=1}^{|\mathcal{C}|} \| (\Lambda_i - \bar{\Lambda}) \|_{\diamond}, \label{ne5}\\
& = \delta_{\diamond} \label{ne6}.\end{aligned}$$
Eq. (\[ne1\]) follows from Eq. (\[tildedeltam\]) and the triangle inequality; Eq. (\[ne2\]) follows from Eq. (\[d3\]); Eq. (\[ne3\]) follows from Eq. (\[d2\]); Eq. (\[ne4\]) follows from Eq. (\[d1\]); Eq. (\[ne5\]) follows by noting that there are $|\mathcal{C}|^{m}$ terms being summed over in Eq. (\[ne4\]), and $1/|\mathcal{C}|$ of them are associated with each possible inversion Clifford; Eq. (\[ne6\]) simply follows from the definition of $\delta_{\diamond}$. This concludes our proof that $|\tilde{\delta}_m| \leq \delta_{\diamond} $.
Relating $r$ to an AGsI {#relating-r-to-an-agsi .unnumbered}
-----------------------
Following the main text and Wallman [@wallman2017randomized], let $\mathscr{L}'(\mathcal{E}) = \text{avg}_i[ \tilde{C}_{i} \mathcal{E} C_{i}^{-1}]$, and observe that $\mathscr{L}'$ has the same spectrum as $\mathscr{L}$. Wallman [@wallman2017randomized] gives an explicit construction, in terms of the imperfect Clifford superoperators $\tilde{\mathcal{C}}$, of a superoperator $\mathcal{L}$ that satisfies $\mathscr{L}'(\mathcal{L}) = \mathcal{L}\mathcal{D}_{\gamma}$. Now, consider the particular gauge representation of the imperfect Cliffords given by $\tilde{\mathcal{C}}(\mathcal{L}^{-1}) = \{\mathcal{L}^{-1} \tilde{C}_{i} \mathcal{L}\}$. As long as $\mathcal{L}$ is invertible, we have $\bar{\Lambda}_{\mathcal{L}} = \mathcal{D}_{\gamma}$ where $\bar{\Lambda}_{\mathcal{L}} \equiv \text{avg}_i[ \mathcal{L}^{-1} \tilde{C}_{i} \mathcal{L} C_{i}^{-1}]$. Taking the AGI to the identity of both sides of this equation yields $$\epsilon(\bar{\Lambda}_{\mathcal{L}},\mathds{1}) = \frac{(d-1)(1-\gamma)}{d} \equiv r_{\gamma}.$$
Hence, as we have already argued in the main text that $r = r_{\gamma}+ \delta_r$ with $\delta_r$ negligible, we have that $r = \epsilon(\bar{\Lambda}_{\mathcal{L}},\mathds{1}) + \delta_r$. Now, $\bar{\Lambda}_{\mathcal{L}}$ is simply the average error map calculated in a particular gauge (conjugation of the $\tilde{C}_i$ by $\mathcal{L}^{-1}$ is a gauge transformation). In particular, it then immediately follows that $r =\epsilon(\tilde{\mathcal{C}}(\mathcal{L}^{-1}), \mathcal{C}) +\delta_r$. Hence, $r \approx \epsilon(\tilde{\mathcal{C}}(\mathcal{L}^{-1}), \mathcal{C})$ with the approximation error negligible.
---
abstract: 'A three-dimensional hydrodynamical model for a micro random walker is combined with the idea of the chemotactic signaling network of E. coli. Diffusion exponents, orientational correlation functions, and their dependence on the geometrical and dynamical parameters of the system are analyzed numerically. Because of the chemotactic memory, the walker shows superdiffusive displacements in all directions, with the largest diffusion exponent for the direction along the food gradient. Mean square displacements and orientational correlation functions show that the chemotactic memory washes out all the signatures due to the geometrical asymmetry of the walker, and the statistical properties are asymmetric only with respect to the direction of the food gradient. For different values of the memory time, the chemotactic index (CI) is also calculated.'
author:
- 'H. Mohammady, B. Esckandariun and A. Najafi'
date:
-
-
title: Hydrodynamical random walker with chemotactic memory
---
Introduction
============
Random walk as a general mathematical tool can describe a large class of biophysical systems, such as the motion of Brownian colloidal particles and the motion of biological self-propelled microorganisms [@RWrev; @bergbook; @reinforcedRW; @statchemo; @biasedswiming; @vafabakhsh]. Passive colloidal particles randomly change their directions in response to thermal forces, but microorganisms sense directions and overcome the thermal randomness to reach a predefined target point. Microorganisms do not have a complex intelligence in the form of a brain that can process the complicated signals from different senses. Chemotaxis is a mechanism that microscopic organisms use to detect the right track and navigate toward their targets [@berg; @adler; @naturereview; @kaup]. The phenomenon of chemotaxis has inspired extensive research, both due to its direct biological relevance [@bray; @BergEC; @frankchem] and because of the practical need to design artificial nanorobots that can sense direction [@mit; @molmot]. As the swimming of bacteria takes place in aqueous media, the inertialess conditions of the microscopic world result in a peculiar fluid dynamics problem [@Purcell]. On the other hand, at the micrometer scale, fluctuations are a non-negligible part of the physics, and any kind of modeling should take into account the effects due to randomness. A mathematical description of chemotaxis in terms of a random walk requires knowledge of the jumping displacements and rotations and their transition probabilities.
So far, theoretical studies on random walk modeling of microorganisms have not considered the hydrodynamical details and the mechanism of chemotactic memory in a unified model. In this article, we aim to construct a model that takes into account the details of hydrodynamical displacements, the mechanism of chemotactic memory, and the physics of fluctuations. This model exhibits detailed features of the motion and provides a coupling between the geometrical parameters of the walker and the conformational rates. The geometrical parameters contribute through the hydrodynamic part, and the dynamical parameters enter through the chemotactic memory mechanism.
The structure of this article is as follows: In section II we introduce the hydrodynamical details of the system, and the chemical mechanism for the memory of the swimmer is studied in section III. Finally, the statistical results based on numerical investigations are presented in section IV.
![ (a) A sphere with radius $R$ models the body of a bacterium, and a moving small sphere, resembling a flagellum, provides the driving force. The hydrodynamical calculations are done in a co-moving frame of reference. (b) A set of jumps is chosen as CW rotations. The rate of these CW rotations is determined by the chemotactic response function, while the rates for the other jumps are given randomly. (c) A simplified picture of the intracellular chemical network in E. coli. Two important processes, phosphorylation and methylation, take place inside the cell. Phosphorylated CheY-P enzymes, produced by CheA (enzymes connected to receptors), are responsible for changing the rotational direction of the flagella and subsequently forcing the bacterium to tumble. The methylation level of the receptors on the one hand, and the concentration of food on the other, change the activity of the receptors and enhance the phosphorylation process. (d) Subsequent runs and tumbles lead the bacterium to find the source of food. []{data-label="fig1"}](fig1.eps){width="0.95\columnwidth"}
Hydrodynamical Model
====================
Our goal in this article is to combine the idea of chemical memory with a hydrodynamical model of a walker. Let us now introduce the hydrodynamical details of our system. Inspired by a bacterium, Fig. \[fig1\](a) shows the body of our walker, which is modeled by a sphere of radius $R$. The driving force is modeled by a mobile small sphere with radius $a$ ($a\ll R$). These two spheres are connected by an arm with negligible diameter. This model resembles the geometry of a bacterium with a single tail. As shown in this figure, and in a reference frame connected to the large sphere, jumps of this small sphere between the $4$ vertices of a pyramid constitute all internal discontinuous jumps of the walker. The apex of this pyramid is a point a distance $L$ from the large sphere and is chosen as state $(1)$. The other $3$ states are located on the base of this pyramid. The apex angle is $2\varphi$ and the apex sides are $\varepsilon$. The base of this pyramid is an equilateral triangle with sides $2\varepsilon\sin\varphi$. The angle $\varphi$ may resemble the amplitude of the flagellar undulations. The hydrodynamic question that we need to address here is the differential change of the position and orientation of the system for an internal jump. For a fluid with viscosity $\eta$ and at micron scales, the inertialess Stokes equation $\eta\nabla^2{\bf u}-\nabla P=0$, written for the fluid velocity ${\bf u}$ and pressure $P$, describes the dynamics. The incompressibility condition, $\nabla\cdot{\bf u}=0$, should also be considered. A prescribed motion corresponding to a jump of the small sphere enters the dynamics through the boundary conditions. Solving the Stokes equation with the corresponding boundary conditions yields the dynamical properties of the large sphere during an internal jump. Calculations similar to the details presented elsewhere reveal the following results [@molhun].
To summarize the hydrodynamical results, let us denote the relative speed of the small sphere with respect to the large sphere in each jump by $v$. The differential displacement and rotation of the large sphere in the laboratory frame for a jump from state $(i)$ to state $(j)$ then read: $$\Delta\vec{r}^{H}_{ij}={\cal R}( \hat{n})\delta\vec{x}_{ij},~~~\Delta\hat{n}^{H}_{ij}={\cal R}( \hat{n})\delta\vec{\omega}_{ij}\times\hat{n},$$ where $\hat{n}$ represents the director vector of the walker and ${\cal R}(\hat{n})$ is an appropriate rotation matrix that transforms the comoving frame of reference to the laboratory frame. The comoving frame is shown in Fig. \[fig1\](a). The differential rotations and displacements in the comoving frame are given by: $$\vec{\omega}_{12}=\left (
\begin{matrix}
\frac{-2\alpha}{\sqrt{3}}\sin\varphi\nonumber\\
0\nonumber\\
-2\alpha\sin\varphi\nonumber\\
\end{matrix}\right),~~~
\vec{\omega}_{24}=\left (
\begin{matrix}
3\alpha\nonumber\\
0\nonumber\\
\alpha\nonumber\\
\end{matrix}\right).~~~$$ $$\delta\vec{x}_{12}=\left (
\begin{matrix}
-\delta\sin\varphi\nonumber\\
\delta '\nonumber\\
\frac{\delta}{\sqrt{3}}\sin\varphi\nonumber\\
\end{matrix}\right),~~~
\delta\vec{x}_{24}=\left (
\begin{matrix}
\frac{\delta}{2}\nonumber\\
0\nonumber\\
\frac{-\sqrt{3}\delta}{2}\nonumber\\
\end{matrix}\right),~~~$$ where the parameters are defined as: $$\begin{aligned}
\alpha&=\frac{3}{8}~(\frac{\varepsilon}{R})~(\frac{a}{R})(1+\frac{L}{R}),~~~\delta=(\frac{\varepsilon}{R}) ~(\frac{a}{R})~(1-\frac{3}{4}\frac{R}{L}),\nonumber\\
\delta '&=(\frac{\varepsilon}{R}) ~(\frac{a}{R})~(1-\frac{3}{2}\frac{R}{L})~\sqrt{1-\frac{4}{3}\sin ^{2}\varphi}.\nonumber\end{aligned}$$ Please note that the results are presented for a walker with $\varepsilon\ll L$. The differential changes are given only for two jumps: a jump starting from the apex and a jump in the base face of the pyramid. Other jumps can be obtained from these two special jumps by applying the appropriate rotation matrices. Symmetry requires that $\vec{\omega}_{ij}=-\vec{\omega}_{ji}$ and $\delta\vec{x}_{ij}=-\delta\vec{x}_{ji} $. A hydrodynamic time scale can be defined as $\tau^H=\varepsilon/v$. This is the time for jumps that start from the apex of the pyramid; the time for the other jumps is given by $2\tau^H\sin\varphi$.
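As a sketch, the dimensionless amplitudes $\alpha$, $\delta$ and $\delta'$ (and, e.g., $\vec{\omega}_{12}$) can be evaluated directly for the parameter values used in the simulations below ($a=0.2R$, $L=6.1R$, $\varepsilon=0.6R$, $\varphi=\pi/6$):

```python
import numpy as np

def jump_params(a, L, eps, phi, R=1.0):
    """Dimensionless rotation/displacement amplitudes for one internal jump."""
    alpha = (3 / 8) * (eps / R) * (a / R) * (1 + L / R)
    delta = (eps / R) * (a / R) * (1 - (3 / 4) * (R / L))
    delta_p = (eps / R) * (a / R) * (1 - (3 / 2) * (R / L)) \
              * np.sqrt(1 - (4 / 3) * np.sin(phi)**2)
    return alpha, delta, delta_p

# Parameter set used in the simulations of the paper (lengths in units of R).
alpha, delta, delta_p = jump_params(a=0.2, L=6.1, eps=0.6, phi=np.pi / 6)
# Differential rotation for the 1 -> 2 jump, in the comoving frame.
omega_12 = np.array([-2 * alpha / np.sqrt(3) * np.sin(np.pi / 6), 0.0,
                     -2 * alpha * np.sin(np.pi / 6)])
```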
In the next section we first introduce the chemotactic memory of a bacterium and then show how we can combine the idea of chemotactic memory with the hydrodynamical swimmer introduced above.
Chemotactic memory
==================
To have a plan for the internal jumps, we use the chemotactic strategy that bacteria use to navigate. Among different microorganisms, the chemical network responsible for chemotactic signaling is well understood and studied in E. coli [@ecoli1; @ecoli2]. The running state of this bacterium is due to the CCW (counterclockwise) rotation of its flagella, and changing the flagellar rotational state to CW (clockwise) results in a tumble. Chemical signals inside the cell control the frequency of these running and tumbling states. A very simplified picture of the different proteins involved in the chemical signal transduction pathway is shown in Fig. \[fig1\](c). Two important processes, phosphorylation and methylation, take place inside the cell. Phosphorylated CheY-P enzymes, produced by CheA (enzymes connected to receptors), are responsible for enhancing the above-mentioned frequency. The methylation level of the receptors on the one hand, and the concentration of food on the other, change the activity of the receptors and enhance the phosphorylation process [@vladimirov1; @vladimirov2; @vladimirov3]. The methylation process provides a kind of chemical memory for the cell and allows the organism to compare the current local value of the food concentration with its value in the past. Depending on the parameters, a bacterium with this sort of memory will have a chance to find a way to reach a point with the maximum value of food concentration, see Fig. \[fig1\](d).
Now we want to combine the idea of chemotactic memory with the details of the hydrodynamic jumps of the walker. Our modeling is based on stochastic jumps. Let us denote the transition probability for a jump from state $(i)$ to state $(j)$ by $P_{ij}$. In the case with $P_{ij}=1/3$, we will have a random walker that has no chance to sense the direction of the gradient. What we want to consider is a sort of intelligent walker that can dynamically change its jumping probabilities. In comparison with E. coli, we first define a set of jumps that corresponds to a CW rotation. We define all the following jumps as CW jumps [@cwrotation]: $$CW ~\text{jumps}:~~~1\rightarrow 2,~~2\rightarrow 3,~~3\rightarrow 4,~~4\rightarrow 2.$$ In Fig. \[fig1\](b), all these CW jumps are shown. After defining CW rotations, we assume that the probability for any CW jump is given by a response $S(\vec{r},t)$ from a chemotactic memory, and the other jumps are determined randomly, so that: $$P_{ij}= \left\{
\begin{array}{lr}
S(\vec{r},t) \text{,} ~~~~~~~~~~ \text{CW}~ \text{jumps} \\
\frac{1-S(\vec{r},t) }{2} \text{,} ~~~~~~~~~\text{otherwise}. \\
\end{array} \right.
\label{e-6}$$ Here $\vec{r}$ is the position of the walker at time $t$. Similar to the chemotaxis signaling network of E. coli, we assume that the signal $S$ is connected to the source $c$ (the local concentration of food) through an intermediate dimensionless memory function $m$ as: $S(\vec{r},t)=\xi/(1+\exp[m(\vec{r},t)-v_0c(\vec{r})])$ [@vladimirov1; @vladimirov2]. The dynamics of the memory function is given by: $\dot{m}(\vec{r},t)=(\tau^{H}/\tau_{ch})(S(\vec{r},t)-\xi/3)$, where the dimensionless time scale of the adaptation is given by $\tau_{ch}/\tau^{H}$ and $v_{0}$ is a constant that has the dimension of volume. For a uniform concentration profile, this system reaches a steady state with $S^{*}(\vec{r},t)=\xi/3$. Here $\xi$ is a parameter in the interval $]0,1]$, and it shows how anisotropic the internal jumps of the walker are in the absence of any food gradient. Throughout this paper we will choose $\xi=0.95$. For a nonuniform concentration profile, there is no static steady-state solution, and the system evolves in time by continuously adjusting its relative position and orientation with respect to the concentration profile.
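A minimal sketch of the signal and memory dynamics just defined (the uniform food level $v_0c = 5$ and the Euler step are our own choices), showing adaptation to the steady state $S^{*}=\xi/3$:

```python
import numpy as np

xi = 0.95
tauH, tauch = 0.02, 2.0            # jump time and memory time (tauch = 100 tauH)

def S(m, v0c):
    """Chemotactic signal for memory m and local dimensionless food level v0*c."""
    return xi / (1 + np.exp(m - v0c))

# Euler integration of dm/dt = (tauH/tauch) * (S - xi/3) at uniform food v0c = 5.
m, v0c, dt = 0.0, 5.0, 0.1
for _ in range(100000):
    m += dt * (tauH / tauch) * (S(m, v0c) - xi / 3)
# m adapts until the signal relaxes to xi/3, i.e. m -> v0c + ln(2).
```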
The statistical properties of this swimmer will be studied in details in the next section.
![Two different trajectories of the walker are shown. Part (a) corresponds to a walker moving in a uniform concentration and part (b) to a walker in a linear concentration gradient. The other numerical parameters are: $ a=0.2R$, $L=6.1R$, $\varepsilon=0.6R$, $\varphi=\pi /6 $, $\tau^H=0.02$, $ \tau_{ch}=100\tau^H$.[]{data-label="fig2"}](fig2.eps){width="0.9\columnwidth"}
![The chemotactic index in terms of the memory time, for two walkers with different geometrical values. The positive outcome of the chemotactic mechanism is not sensitive to this geometrical parameter.[]{data-label="fig3"}](fig3.eps){width="0.95\columnwidth"}
Results
=======
Fig. \[fig2\](a) shows a typical trajectory of the walker in a uniform concentration profile of food molecules. It represents the trajectory of a random walker. To study the effect of a nonuniform concentration we choose a linear gradient in the $x$ direction. Throughout this paper, for nonuniform concentration, we choose a linear gradient with $v_0c(\vec{r})=100 x/R$. A typical trajectory of the walker moving in this concentration field is shown in Fig. \[fig2\](b). As one can see, the subsequent tumbles bias the trajectory toward the place with the larger concentration of food molecules. In the literature on chemotaxis, the chemotactic index $CI$ is an important quantity that shows how accurate a direction-sensing mechanism is. It is defined as the ratio of the walking displacement along the concentration gradient to the total length of the walking trajectory. Depending on the dynamical variables of the system, $CI$ belongs to the interval $[-1,1]$. In Fig. \[fig3\], $CI$ is plotted in terms of the dimensionless memory time $\tau_{ch}/\tau^H$. Here $\tau_{ch}$ is a parameter that comes from the chemical dynamics and $\tau^{H}$ is a geometrical parameter. For large memory times ($\tau_{ch}\geq\tau^H$), the chemotactic index is positive. This is a signature that the chemotactic mechanism has a positive outcome and the walker can successfully reach the target, the place with more food. As the time scale for a single jump is given by $\tau^{H}$, this shows that $\tau_{ch}$ plays the role of a memory time. The memory time should be greater than the individual jumping time, and this is the only condition required to have a successful gradient-sensing walker. The $CI$ is calculated for two different values of the apex angle. It is seen that the outcome of the searching mechanism is not very sensitive to this angle. Now we can study the statistical properties of the system for $\tau_{ch}\geq\tau^H$.
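The definition of $CI$ can be sketched directly; the biased random-walk trajectory below is hypothetical and stands in for a simulated walker:

```python
import numpy as np

def chemotactic_index(traj):
    """CI = net displacement along the gradient (x) / total path length."""
    steps = np.diff(traj, axis=0)
    path_length = np.sum(np.linalg.norm(steps, axis=1))
    return (traj[-1, 0] - traj[0, 0]) / path_length

# Hypothetical trajectory: a random walk with a drift toward +x.
rng = np.random.default_rng(0)
steps = rng.normal(size=(1000, 3)) + np.array([0.3, 0.0, 0.0])
traj = np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])
ci = chemotactic_index(traj)   # in [-1, 1]; positive for a successful searcher
```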
To gain a better understanding of the role of fluctuations, we repeat the simulations for an ensemble of walkers and study the average statistical properties of the system. Mean displacement (MD), mean square displacement (MSD) and correlation functions are the statistical quantities that we consider. To quantify the MSD results, we define the diffusion exponents as: $$\langle x^2\rangle\sim t^{\nu_\parallel},~~~
\langle y^2\rangle=\langle z^2\rangle\sim t^{\nu_\perp},$$ where $\nu_{\parallel}$ and $\nu_{\perp}$ are the diffusion exponents along the gradient and along a direction perpendicular to the gradient, respectively. For a symmetric and normal random walker we have $\nu_\parallel=\nu_\perp=1$. MD for a random walker moving in uniform and nonuniform concentration profiles is presented in Fig. \[fig4\](a). As expected, for the uniform concentration the characteristics of a random walker are recovered, whereas for a nonuniform concentration with a gradient in the $x$ direction the walker is biased toward the positive $x$ direction. For the nonuniform concentration, the MD in the perpendicular directions ($\langle y\rangle$, $\langle z\rangle$) vanishes, but it is nonzero in the direction of the gradient ($\langle x\rangle$).
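The diffusion exponents defined above are in practice estimated as log-log slopes of the MSD curves; a minimal sketch (our own, shown here with synthetic MSD data):

```python
import numpy as np

def diffusion_exponent(t, msd):
    """Estimate nu in <x^2> ~ t^nu as the log-log slope of MSD vs time."""
    slope, _intercept = np.polyfit(np.log(t), np.log(msd), 1)
    return slope

t = np.linspace(1.0, 100.0, 200)
nu_normal = diffusion_exponent(t, 2.0 * t)        # MSD = 2 D t  ->  nu = 1
nu_super  = diffusion_exponent(t, 0.5 * t**1.6)   # superdiffusive -> nu = 1.6
print(round(nu_normal, 3), round(nu_super, 3))    # 1.0 1.6
```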
The MSD as a function of time, on a logarithmic scale, shows a nonlinear crossover from short-time to long-time behavior. Figures \[fig4\](b) and (c) show the short- and long-time MSD results for this walker. This crossover is a result of the hydrodynamical anisotropy of the system; note that our system, a spherical body with an attached tail, is anisotropic. A similar crossover was recently observed for a diffusing object with boomerang geometry [@boomerang]. The short-time behavior for a walker moving in a linear concentration corresponds to $\nu_\parallel=1.8$ and $\nu_\perp=1.7$. The long-time behavior of the walker moving in a uniform concentration shows $\nu_\parallel=\nu_\perp=1.0$, which characterizes a normal random walker (Fig. \[fig4\](c)). For a walker moving in a nonuniform concentration, the exponents are $\nu_\parallel=1.6$ and $\nu_\perp=1.5$: the walker performs superdiffusion in all directions, with an asymmetry in the direction parallel to the concentration gradient. To gain more insight into the role of the geometry, we have studied the orientational correlation function $\langle\theta_{x}(0)\theta_{x}(t)\rangle$, where $\theta_x(t)$ is the angle that the director of the walker makes with the $x$ axis (parallel to the concentration gradient). Fig. \[fig5\] shows the features of the correlation function for walkers moving in uniform and nonuniform concentrations. The correlation time, the decay time of the correlation, is sensitive to the apex angle: for larger apex angles the correlation time is larger. This graph shows that the crossover time is essentially the time over which the orientational correlation washes out.
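A sketch of how such an orientational autocorrelation can be estimated from a time series of angles (our own illustration; for fully decorrelated angles uniform on $[0,\pi]$ the average at nonzero lag tends to $\langle\theta\rangle^2=\pi^2/4$, the dashed plateau of Fig. \[fig5\]):

```python
import numpy as np

def orientation_autocorrelation(theta_x, max_lag):
    """Time-averaged <theta_x(0) theta_x(t)> for lags 0 .. max_lag-1."""
    n = len(theta_x)
    return np.array([np.mean(theta_x[:n - lag] * theta_x[lag:])
                     for lag in range(max_lag)])

# For independent angles uniform on [0, pi], lag 0 gives <theta^2> = pi^2/3
# while every nonzero lag gives <theta>^2 = pi^2/4 (fully uncorrelated).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, np.pi, 200_000)
corr = orientation_autocorrelation(theta, 5)
```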
![Orientational correlation function plotted as a function of time. Here $\theta_x(t)$ is the angle that the director vector of the walker makes with the $x$ axis of the laboratory frame. The correlation time is sensitive to the geometrical variable (here $\varphi$) of the walker. For a larger apex angle, the correlation function for a nonuniform gradient is larger than the corresponding value in a uniform concentration. The dashed line shows the completely uncorrelated state with correlation $\pi^2/4$. The numerical parameters are as in Fig. \[fig2\].[]{data-label="fig5"}](fig5.eps){width="0.9\columnwidth"}
In conclusion, we have studied the statistical properties of a model hydrodynamical walker moving in a gradient field of food. The nontrivial coupling of geometrical and dynamical parameters reveals interesting statistical properties of the walker. The memory time introduced by the chemotactic mechanism makes the walker superdiffusive. A crossover from short-time to long-time behavior of the MSD is observed, and the crossover time is the orientational correlation time. The hydrodynamic interaction between different walkers has been shown to have interesting features [@coherentcoupling]. As an extension of this work, we are considering the role of hydrodynamic couplings in the physics of chemotaxis.
A. N. acknowledges the Abdus Salam International Centre for Theoretical Physics for hospitality during the final stage of this work.
[99]{}
E. A. Codling, M. J. Plank and S. Benhamou, J. R. Soc. Interface, [**5**]{}, 813 (2008).
H. C. Berg, [*Random walks in biology*]{} (Princeton University Press, Princeton, 1983).
H. G. Othmer and A. Stevens, J. Appl. Math. [**57**]{} 1044 (1997).
P. S. Lovely and F. W. Dahlquist, J. Theor. Biol. [**50**]{} 477 (1975).
N. A. Hill and D. -P. Häder, J. Theor. Biol. [**186**]{} 503 (1997).
J. R. Howse [*et al.*]{}, Phys. Rev. Lett. [**99**]{} 048102 (2007).
H. C. Berg, Ann. Rev. Biophys. Bioeng. [**4**]{} 119 (1975).
J. Adler, Science [**153**]{}(3737) 708 (1966).
P. J. M. Van Haastert and P. N. Devreotes, Nature Reviews Mol. Cell Biol. [**5**]{}, 629 (2004).
U. B. Kaupp, N. D. Kashikar, and I. Weyand, Ann. Rev. Physio. [**70**]{}, 93 (2008).
D. Bray, [*Cell Movements: From Molecules to Motility*]{} 2nd ed. (Garland, New York, 2001).
H.C. Berg, [*E. coli in Motion*]{} (Springer-Verlag, New York, 2004).
B.M. Friedrich and F. Jülicher, New J. Phys. [**10**]{}, 123025 (2008); B.M. Friedrich and F. Jülicher, Proc. Natl. Acad. Sci. (USA) [**104**]{}, 13256 (2007).
P. Dittrich, J. Ziegler, W. Banzhaf, Artificial life, [**7**]{}, 225 (2001).
E.R. Kay, D.A. Leigh, and F. Zerbetto, Angew. Chem., Int. Ed. [**46**]{}, 72 (2007).
E.M. Purcell, Am. J. Phys. [**45**]{}, 3 (1977).
A. Bren and M. Eisenbach, J. Bacteriol. [**182**]{}, 6865 (2000).
J. J. Falke, R. B. Bass, S. L. Butler, S. A. Chervitz and M. A. Danielson, Annu. Rev. Cell Dev. Biol. [**13**]{}, 457 (1997).
N. Vladimirov, L. Løvdok, D. Lebiedz, and V. Sourjik, PLoS Computational Biology [**4**]{}(12), e1000242 (2008); N. Vladimirov, D. Lebiedz, and V. Sourjik, PLoS Computational Biology [**6**]{}, e1000717 (2010); N. Vladimirov and V. Sourjik, Biol. Chem. [**390**]{}, 1097 (2009).
A. Najafi and R. Zargar Phys. Rev. E [**81**]{}, 067301 (2010); A. Najafi, Phys. Rev. E [**83**]{}, 060902(R) (2011).
R. G. Endres and N. S. Wingreen, PNAS [**105**]{}(41), 15749 (2008).
A. Chakrabarty, A. Konya, F. Wang, J. V. Selinger, K. Sun, and Q. H. Wei, Phys. Rev. Lett. [**111**]{}, 160603 (2013).
Please note that this definition for the CW rotations is not unique. Other choices can work well.
A. Najafi and R. Golestanian Euro. Phys. Lett., [**90**]{}, 68003 (2010).
---
abstract: 'Recent advancements in statistical learning and computational ability have enabled autonomous vehicle technology to develop at a much faster rate and become widely adopted. While many of the architectures previously introduced are capable of operating under highly dynamic environments, they are often constrained to smaller-scale deployments and require constant maintenance due to the scalability cost associated with high-definition (HD) maps. HD maps provide critical information for self-driving cars to drive safely. However, traditional approaches for creating HD maps involve tedious manual labeling. As an attempt to tackle this problem, we fuse 2D image semantic segmentation with pre-built point cloud maps collected from a relatively inexpensive 16 channel LiDAR sensor to construct a local probabilistic semantic map in bird’s eye view that encodes static landmarks such as roads, sidewalks, crosswalks, and lanes in the driving environment. Experiments on data collected in an urban environment show that this model can be extended for automatically incorporating road features into HD maps, with potential future work directions.'
author:
- '$^{*}$David Paz$^{1}$, $^{*}$Hengyuan Zhang$^{1}$, $^{*}$Qinru Li$^{1}$, $^{*}$Hao Xiang$^{1}$, Henrik Christensen$^{1}$[^1] [^2]'
bibliography:
- 'citation.bib'
title: '**Probabilistic Semantic Mapping for Urban Autonomous Driving Applications** '
---
INTRODUCTION
============
High-definition (HD) maps provide useful information for autonomous vehicles to understand the static parts of the scene. Due to the nature of the information encoded in HD maps–such as centimeter-level definitions for road networks, traffic signs, crosswalks, stop signs, traffic lights and even speed limits–many of these maps become outdated during construction or road network changes. Given these fast-changing environments, manually annotated HD maps become obsolete and may cause vehicles to perform inadequate reference path tracking actions, leading to unsafe scenarios. In the process of HD map generation, extracting semantics and attributes from data accounts for most of the work [@Jiao18MLHDMap]. A model that automates this process could improve HD map generation, reduce labor cost, and increase driving safety.
Retrieval of centimeter-level semantic labels of the scene is a non-trivial task. Prior work such as [@Douillard11Classification; @Sengupta12DenseVisual] adopted Conditional Random Fields (CRF) to assign semantic labels. The advancement of deep learning provides promising results for retrieving semantic information from images. State-of-the-art semantic segmentation algorithms such as [@Long_2015_CVPR_FCNNet; @Zhao_2017_PSPNet; @Chen_2018_DeeplabV3Plus] generate pixel-level semantic labels with greater accuracy. Researchers have also explored methods to create semantic maps of the environment; examples are given in [@Maturana18RealtimeSemantic; @mattyus2017deeproadmapper; @homayounfar2019dagmapper]. Multi-sensor fusion is used to improve the robustness of these algorithms. However, these approaches either use aerial imagery to extract road information or do not explicitly map the lane and crosswalk information required for HD maps. A detailed semantic map for urban autonomous vehicle applications is thus still of interest.
Our work is focused on leveraging dense point maps built from a 16 channel LiDAR and state of the art semantically labeled images from deep neural networks, trained only on a publicly available dataset, to automatically generate dense probabilistic semantic maps in urban driving environments that provide robust labels for roads, lanes, crosswalks, and sidewalks. The comparison with a real HD map that has been tested in our autonomous vehicle for campus mail delivery tasks shows that the proposed model can identify semantic features in the road and localize them accurately in 3D space.

RELATED WORK
============
**Semantic Segmentation:** Semantic segmentation is the task of assigning each observed data point (e.g. pixel or voxel) to a class label that carries semantic meaning. Research in this field has made tremendous progress as deep learning emerges and large-scale datasets such as Cityscapes [@Cordts_2016_CVPR_cityscapes], CamVid [@BrostowFC:PRL2008_Camvid], and Mapillary [@MVD2017_Mapillary_Vistas] become available. As HD maps require centimeter-level labeling accuracy for each object, building such maps can significantly benefit from the pixel-level information provided by semantic segmentation algorithms.
In 2D semantic segmentation, predominant works [@Long_2015_CVPR_FCNNet; @Zhao_2017_PSPNet; @chen2017rethinking_deeplabv3] leverage pyramid-like encoder-decoder architectures to capture both global and local information in the images. Trained on the large-scale datasets aforementioned, these network architectures can easily detect objects on the road even when these objects differ only in texture, such as color. In 3D semantic segmentation, work has also been done by projecting 3D LiDAR point clouds into 2D image space and then feeding them to a classic CNN network to classify each point [@Wu_2018_squeezeseg; @heinzler2020cnn_rangenet; @wang2018pointseg]. While the results of these works seem promising, due to the nature of LiDAR sensors, these methods cannot distinguish objects that differ only in texture such as color. Researchers have also proposed an alternative approach of segmentation on voxelized point clouds [@Tchapmi_2017_segcloud]. However, such 3D convolution is computationally expensive and usually requires dense raw point cloud measurements (i.e., 32/64 channel LiDARs), making real-time operation difficult.
**Semantic Mapping:** Semantic mapping has a rich meaning across the literature [@KOSTAVELIS201586]. We adopt the definition closest to our context, given in [@Wolf08SemanticMapping]: the process of building maps that represent not only an occupancy metric but also other properties of the environment. Specifically, in the driving scenario, we focus on driveable surfaces and road marks.
Sengupta et al. [@Sengupta12DenseVisual] proposed a CRF-based method for dense semantic mapping. They use an associative hierarchical CRF for semantic segmentation and a pairwise CRF for mapping; the pairwise potential minimization enforces output smoothness. In [@Sengupta13Stereo], a stereo pair is used to provide robust depth estimation; however, they do not explicitly map the lane and crosswalk information required for HD maps.
Maturana et al. [@Maturana18RealtimeSemantic] fuse semantic images from a camera with LiDAR point clouds, but they use real-time raw point clouds from a 64 channel LiDAR, which provides much denser real-time information, while we use a relatively inexpensive 16 channel LiDAR, a trade-off between resolution and cost. Additionally, their focus is on off-road terrain, in contrast to our focus on the urban driving scenario, which requires special treatment of lanes and crosswalks.
**Probabilistic Map**: Probabilistic maps have been successfully applied to localization [@Levinson10ProbMap] [@Shao2017HIGHACCURACYVL] and pedestrian motion prediction [@Wu18ProbMapPedes]. Probabilistic maps can capture the inherent distribution information in a discrete space while filtering out noise. In this work, we also demonstrate a successful attempt to apply this technique to semantic map generation, leveraging the prior information in the LiDAR intensity channel to produce more stable local maps.
METHOD
======
By fusing a local point cloud map with semantic images via geometric transformations, we propose a probabilistic map that accounts for the distribution of labels assigned to each grid cell. As shown in Figure \[fig:pipeline\], the overall architecture consists of semantic segmentation, point cloud semantic association, semantic mapping, and map transformation.
Semantic Segmentation
---------------------
We use the DeepLabV3Plus [@Chen_2018_DeeplabV3Plus] network architecture to extract the semantic segmentation from 2D images. Multi-level spatial pyramid CNN layers are used to capture both the global and local features of the scene. A skip connection is made from the encoder to pass low-level image features to the decoder. A lightweight ResNeXt50 [@Xie_2017_ResNeXt] pretrained on ImageNet [@ImageNet2015] is used as our feature-extraction backbone. Compared with other popular backbones such as ResNet101 [@He_2016_resnet], it achieves the same performance in terms of mean intersection over union (mIoU) with far fewer parameters and faster inference times. We also adopt depthwise separable convolutions, inspired by [@Chen_2018_DeeplabV3Plus; @Chollet_2017_Xception], in our spatial pyramid CNN layers and decoder layers to further improve the inference time while preserving the same performance.
Our semantic segmentation network is trained on the Mapillary Vistas dataset [@MVD2017_Mapillary_Vistas]. This dataset contains street-level scenes targeted at autonomous vehicle scenarios and provides a large number of pixel-level semantically segmented images with 66 different kinds of labels. We reduce the labels to 19 classes by removing labels that are not essential in our driving environment (e.g. snow) and merging labels with similar semantic meanings (e.g. zebra lines and crosswalks). The rationale behind this step is that we do not want the network to classify objects that are unlikely to appear in the test environment, or whose details are beyond our interest. The details of label merging can be found in Section \[sec:exp\_semantic\]. In the end, 19 classes of semantic labels from the Mapillary dataset are used to train our network, as presented in Table \[table:training\_labels\]; each label's associated color is also included in the table.
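The merging step amounts to a lookup-table remap of each segmentation mask; a minimal sketch (the class ids below are illustrative, not the actual Mapillary ids):

```python
import numpy as np

NUM_SRC_CLASSES = 66   # original Mapillary Vistas label count
# Identity by default; merged source classes point at their target id.
lut = np.arange(NUM_SRC_CLASSES, dtype=np.int64)
lut[10] = 7   # e.g. merge source class 10 ("terrain") into 7 ("vegetation")
lut[11] = 7   # illustrative ids only

def remap_mask(mask, lut):
    """Vectorized per-pixel relabeling of a segmentation mask."""
    return lut[mask]

mask = np.array([[10, 3],
                 [11, 7]])
print(remap_mask(mask, lut))   # [[7 3] [7 7]]
```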
[Table \[table:training\_labels\]: the 19 training semantic labels, e.g. lane marking (white), together with each label's associated color.]
Point Cloud Semantic Association
--------------------------------
Given a semantic image, estimating the relative depth of the semantic pixel data helps reconstruct the 3D scene with semantic labels. This information, however, is usually not available. Depth estimation from multi-view geometry requires salient features, which is error-prone on the road or when the lighting conditions vary strongly. Even with the LiDAR scan that we acquire in real time, the sparse resolution of a 16 channel LiDAR makes it challenging to infer the underlying geometry. Instead, our method extracts small regions of a dense point cloud map and projects them into the semantically segmented image to retrieve depth information. Since building such a dense point map only requires driving through the area once, this process is far less expensive than human labeling.
The transformation from the local point map to the localizer (Velodyne LiDAR) ${}_{vlp}\mathbf{T}_{pm}$ is given by precise centimeter-level localization. We also calibrate the camera with respect to the LiDAR using a non-iterative solution to the Perspective-n-Point (PnP) problem [@epnp] to estimate their relative transformation ${}_{cam}\mathbf{T}_{vlp}$. Thus the semantic information for a point $\mathbf{X}_{pm}$ can be retrieved from the label of its projection in image coordinates, $\mathbf{x}_{img}$.
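Under a pinhole camera model with intrinsics $K$, the projection chain described above can be sketched as follows (a simplified illustration with our own function name; a real pipeline would also discard points behind the camera and outside the image):

```python
import numpy as np

def project_to_image(X_pm, T_vlp_pm, T_cam_vlp, K):
    """Project N homogeneous map-frame points (N x 4) into pixel
    coordinates via x_img ~ K [I|0] T_cam_vlp T_vlp_pm X_pm."""
    X_cam = (T_cam_vlp @ T_vlp_pm @ X_pm.T)[:3]   # 3 x N in camera frame
    x = K @ X_cam                                  # homogeneous pixels
    return (x[:2] / x[2]).T                        # N x 2 pixel coordinates

K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 480.0],
              [  0.0,   0.0,   1.0]])
I4 = np.eye(4)
pt = np.array([[0.0, 0.0, 5.0, 1.0]])    # a point on the optical axis
print(project_to_image(pt, I4, I4, K))   # [[640. 480.]]
```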
Semantic Mapping
----------------
While the point cloud with semantic labels provides a 3D reconstruction of the scene, these labels are also subject to noise and small semantic label fluctuations. To address this, a local probabilistic map is constructed and updated using the semantic point cloud.
The local map is a bird's eye view in the body frame (rear axle) of the ego vehicle. We build a local map $\mathbf{M}_i$ at frame $i$, with the origin defined by pose $\mathbf{P}_i$, and update it using the semantic point cloud. Only when the difference between our new pose $\mathbf{P}_j$ and old pose $\mathbf{P}_i$ exceeds a threshold $\mathbf{\Delta}$ do we construct a new map and transform the old map to it.
The projection onto the bird's eye view is then obtained simply by taking the $x$ and $y$ components of each point. Discretization by $d$ is applied, and we obtain the corresponding discrete map positions $\mathbf{x}_{map}$.
We maintain a probabilistic map over a set of semantic labels to capture the latent distribution of semantic points. For each semantic label, we maintain one channel per cell, representing the relative probability of that label. Originally, for each cell we simply incremented the log-odds according to the number of points of each label that project into the cell. However, since the quality of the semantic segmentation may degrade at long distances, the map can become overconfident in wrong labels accumulated while the cell was far away. To address this problem, a decay process is applied when we transform the map.
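A minimal sketch of the per-cell log-odds bookkeeping described above (our own illustration; the actual update weights and decay factor used in the paper are not specified here):

```python
import numpy as np

N_LABELS, H, W = 19, 200, 200
log_odds = np.zeros((N_LABELS, H, W))   # one channel per semantic label

def add_observation(log_odds, label, i, j, weight=1.0):
    """Increment the log-odds of `label` at cell (i, j); the weight can be
    scaled by the point count and (as described below) by LiDAR intensity."""
    log_odds[label, i, j] += weight

def decay(log_odds, gamma=0.9):
    """Shrink all log-odds toward zero on each map transformation so stale
    long-range observations do not make the map overconfident."""
    log_odds *= gamma

add_observation(log_odds, label=3, i=10, j=10, weight=2.0)
decay(log_odds)
best = int(np.argmax(log_odds[:, 10, 10]))   # most likely label of the cell
print(best)   # 3
```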
![Intensity of the point cloud[]{data-label="fig:intensity_crosswalk"}](images/intensity_crosswalk.png)
In addition, we utilize the prior knowledge of correspondences between the LiDAR intensity data and semantic labels to further augment the probabilistic map.
As Figure \[fig:intensity\_crosswalk\] shows, the zebra lines and side lanes have higher intensity due to their surface reflectivity, suggesting a higher chance that the region belongs to a specific label. We therefore associate the log-odds with the intensity of that area.
Probabilistic Map Transformation
--------------------------------
For each frame, we update the probabilistic map with semantic point cloud data, but we do not construct an entirely new local map every frame. Since we only maintain a local map, and the old and new maps usually have a large overlap, this transformation can be simplified to a homography, which speeds up the process.
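The map transformation can be sketched as applying a homography to cell coordinates (our own illustration, shown for the simplest case of a pure translation between nearby local-map origins):

```python
import numpy as np

def warp_cells(H, cells):
    """Apply a 3x3 homography to N x 2 map-cell coordinates."""
    P = np.c_[cells, np.ones(len(cells))].T   # 3 x N homogeneous
    Q = H @ P
    return (Q[:2] / Q[2]).T

# A pure in-plane shift of the local-map origin by (3, -2) cells:
H = np.array([[1.0, 0.0,  3.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
print(warp_cells(H, np.array([[0.0, 0.0], [1.0, 1.0]])))
# [[ 3. -2.]  [ 4. -1.]]
```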
EXPERIMENTS
===========
![Vehicle Sensor Configuration[]{data-label="fig:av"}](images/av.png)
Our experimental data was collected by one of our experimental autonomous cars. The car is equipped with a 16 channel LiDAR and six cameras: two on the front, one on each side, and two on the back, as shown in Figure \[fig:av\]. Data from the front cameras, the LiDAR, and the vehicle position are recorded by driving through multiple areas of the UC San Diego campus. The camera data is streamed at around 13 Hz and the LiDAR scans at around 10 Hz. We drive through the campus to collect data for urban driving scenarios, including challenging cases such as uphill and downhill segments, intersections, and construction zones.
Semantic segmentation {#sec:exp_semantic}
---------------------
**Training Dataset**: The Mapillary Vistas dataset is split into 18,000 training images and 2,000 validation images. We merge terrain into vegetation, different types of riders into the human category, traffic-sign-back and traffic-sign-front into traffic-sign, bridge into building, and different kinds of crosswalks into a single crosswalk class. The training dataset is augmented by random horizontal flips with 0.5 probability, random resizes with scale ranging from 0.5 to 2, and random crops. The images are also normalized to a distribution with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
Although the network has never been trained on images from the UC San Diego campus, we do not observe severe degradation in our semantic map. The distribution shift between the training and testing domains is alleviated by the similarity of the Mapillary dataset to our driving scenarios, as well as by the intense data augmentation in the training process.
**Hyperparameters**: We use a batch size of 16 with synchronized batch normalization [@Zhao_2017_PSPNet] to train our network for 200 epochs on eight 2080Ti GPUs with input image sizes of 640x640. The output stride of the network is 8. We use an SGD optimizer and employ a polynomial learning-rate policy [@Chen_2018_DeeplabV3Plus; @Zhu_2019_nvidia_label_relaxation] where the learning rate is $base\_lr \times (1-\frac{epoch}{max\_epoch})^{power}$ with a base learning rate of 0.005 and power $=0.9$. The momentum and weight decay are set to 0.9 and $4\times10^{-5}$, respectively.
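The schedule can be written out directly (a per-epoch evaluation is assumed here; whether the rate is stepped per epoch or per iteration is not stated above):

```python
def poly_lr(epoch, max_epoch=200, base_lr=0.005, power=0.9):
    """Polynomial policy: lr = base_lr * (1 - epoch/max_epoch)**power."""
    return base_lr * (1.0 - epoch / max_epoch) ** power

print(poly_lr(0), poly_lr(100) < poly_lr(0), poly_lr(200))
# 0.005 True 0.0
```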
**Metric**: Mean intersection over union (mIoU) is used to evaluate the performance of the network. The ResNeXt50-based network achieves 68.32% mIoU whereas ResNet101 achieves 70.02%. Although the performance decreases slightly with the ResNeXt50 network, it has fewer parameters and slightly faster inference times than the ResNet101 network and is thus preferred for deployment in our autonomous vehicle, where GPU memory may be more limited. In our case, the size of the network is reduced from 367MB to 210MB (a 42.78% reduction) while the inference time remains approximately 0.2s.
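For reference, mIoU can be computed from a confusion matrix as follows (our own sketch, not the evaluation code used for the numbers above):

```python
import numpy as np

def mean_iou(conf):
    """mIoU from a (C, C) confusion matrix: entry (i, j) counts pixels of
    true class i predicted as class j."""
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    valid = union > 0               # ignore classes absent from both
    return float((inter[valid] / union[valid]).mean())

conf = np.array([[3, 1],
                 [0, 4]])
# class 0: 3 / (3 + 1) = 0.75 ; class 1: 4 / (4 + 1) = 0.8 ; mIoU = 0.775
print(mean_iou(conf))
```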
![A comparison between semantic mapping without intensity fusion (top image) and with intensity fused into the probability update (bottom image); fusing intensity achieves slightly better results.[]{data-label="fig:fuse_intensity"}](images/without_intensity/Kazam_screenshot_00014.png)
![A comparison between semantic mapping without intensity fusion (top image) and with intensity fused into the probability update (bottom image); fusing intensity achieves slightly better results.[]{data-label="fig:fuse_intensity"}](images/with_intensity/Kazam_screenshot_00017.png)
Semantic Mapping
-----------------
### Fuse Intensity
In Figure \[fig:fuse\_intensity\], we compare the results obtained by incorporating the LiDAR intensity channel into the probabilistic map. The intuition is that, due to poor lighting conditions, semantic segmentation often fails to capture the true label. After adding the intensity constraint, we achieve slightly better results, shown in the bottom figure: the map contains very clear zebra lines and side lanes.
![A sequence of local maps shows the automatic correction ability of the probabilistic map.[]{data-label="fig:correct"}](images/sequence/correction.pdf)
### Probabilistic map fusion
With the probabilistic modeling approach, the map becomes increasingly accurate as more information becomes available. This can be seen in Figure \[fig:correct\]: the crosswalk measurements are initially sparse, but they are corrected as the vehicle drives closer.
Comparison to Sparse LiDAR Scan
-------------------------------
As previously noted, one possible alternative for extracting the depth information is to use the point cloud data generated by the LiDAR in real time. By following a similar approach, we project the point cloud onto the semantic image frame, and then build the semantic point cloud and semantic local map.
Figure \[fig:points\_raw\_sparse\] shows that this approach gives relatively accurate semantic correspondences for the points. This is reasonable since we directly calibrate our camera with respect to the LiDAR, so localization error does not influence the semantic retrieval step. However, for the 16 channel LiDAR that we use, the scans are too sparse to construct a semantic map in real time, since the point cloud resolution increases only when we are close enough; this becomes worse when the car drives faster. A prebuilt dense point cloud map therefore allows us to construct semantic maps over longer ranges.
![Semantic mapping using a real-time LiDAR scan. The bottom image shows that the map becomes even sparser when the car drives faster.[]{data-label="fig:points_raw_sparse"}](images/points_raw_crosswalk.png)
![Semantic mapping using a real-time LiDAR scan. The bottom image shows that the map becomes even sparser when the car drives faster.[]{data-label="fig:points_raw_sparse"}](images/points_raw_sparse.png)
Comparison to Planar Assumption
-------------------------------
Another method we explored assumes a flat ground plane and back-projects the semantic images onto it. Since this is a plane-to-plane mapping, a homography $\mathbf{H}$ is computed in a way similar to the approach described above for transforming the probabilistic map.
![Using the planar assumption to generate dense semantic maps. The reconstructed local map looks smooth when the planar assumption holds, as shown in the top image. However, surface distortion is observed when this assumption fails, as shown in the bottom image.[]{data-label="fig:planar assumption"}](images/planar_crosswalk.png)
![Using the planar assumption to generate dense semantic maps. The reconstructed local map looks smooth when the planar assumption holds, as shown in the top image. However, surface distortion is observed when this assumption fails, as shown in the bottom image.[]{data-label="fig:planar assumption"}](images/planar_distort.png)
The top image in Figure \[fig:planar assumption\] shows that when the ground is flat, the image back-projection approach provides dense information. This is because we utilize the original 1920$\times$1440 image, which provides full coverage of the overlap region in the local map; this is two orders of magnitude denser than the dense point map approach, where there are typically on the order of 10K points. However, the planar assumption often fails at road intersections and on steep inclines in urban driving scenarios. The bottom image in Figure \[fig:planar assumption\] gives a typical case where the vehicle is going downhill and distortion is observed as a result.
![Comparison with a manually labeled HD map: the white box is the labeled crosswalk and the pink area corresponds to the semantic point cloud projection.[]{data-label="fig:vector_map_zoom"}](images/vector_map_zoom.pdf)
Comparison to HD Map
--------------------
An HD map was originally created for autonomous mail delivery tasks on the UC San Diego campus [@fsr19:avl]. This HD map contains manually annotated road information such as crosswalks, stop lines, sidewalks, and center-of-lane definitions, and has been tested in realistic environments. It is therefore a good reference for evaluating our automatically generated semantic map. Compared with the HD map, our model helps localize crosswalks, as shown in Figure \[fig:vector\_map\_zoom\]. A larger map built from the local maps is shown in Figure \[fig:stitched\_map\] and compared with the point cloud map in Figure \[fig:stitched\_with\_pcd\].
![A larger map composed of multiple local semantic maps; the magnified inset highlights the crosswalk localizations.[]{data-label="fig:stitched_map"}](images/Stitched_zoom2.pdf)
![The semantic map displayed on top of the point cloud map[]{data-label="fig:stitched_with_pcd"}](images/Stitched_overlay2.pdf)
CONCLUSION
==========
Our comparisons to manually annotated maps indicate that, by fusing the rich information from semantic labels on image frames with dense point cloud maps, this work introduces an effective statistical method for identifying road features and localizing them in 3D space, which can be applied to automating HD map annotation for crosswalks, lane markings, driveable surfaces and sidewalks. These features can be incorporated to generate HD maps independently of predefined HD map formats, with the additional extension of center lane identification, which is often used by path tracking algorithms.
Future work involves the full automation of road network annotations, accounting for road network junctions and forks, possibly by leveraging graphical methods. While a combination of the proposed techniques can potentially address the scalability drawbacks of HD maps, it also opens new areas of research in high-level dynamic planning. Currently, many autonomous driving architectures require dense point cloud maps for localization, which incur scalability and maintenance costs in much the same way that HD maps do. By dynamically estimating driveable surfaces, traffic lanes, lane markings and other road features, the need for centimeter-level localization could be removed, as long as immediate actions can be extracted from a high-level planner. In future work, we plan to explore solutions for fully automating the HD mapping process while exploring the idea of dynamic planning without a detailed dense point cloud map.
[^1]: $^{*}$These members contributed equally to this publication.
[^2]: $^{1}$Contextual Robotics Institute, University of California, San Diego, 9500 Gilman Dr, La Jolla, CA 92093
---
abstract: 'We remark on the Garnier system in two variables.'
author:
- Yusuke Sasano
title: 'Remark on the Garnier system in two variables'
---
Summary
=======
In this note, we consider the following question:
Why do we need Okamoto-Kimura’s algebraic transformation of degree 2 for the Garnier system in two variables?
Here, the Garnier system in two variables is equivalent to the rational Hamiltonian system given by (see [@oka; @10]) $$\begin{aligned}
\label{1}
\begin{split}
dq_1&=\frac{\partial H_1}{\partial p_1}dt+\frac{\partial H_2}{\partial p_1}ds, \quad dp_1=-\frac{\partial H_1}{\partial q_1}dt-\frac{\partial H_2}{\partial q_1}ds,\\
dq_2&=\frac{\partial H_1}{\partial p_2}dt+\frac{\partial H_2}{\partial p_2}ds, \quad dp_2=-\frac{\partial H_1}{\partial q_2}dt-\frac{\partial H_2}{\partial q_2}ds,\\
H_1 &=-\frac{q_1(q_1-1)(q_1-t)(q_1-s)(q_2-t)}{(q_1-q_2)t(t-1)(t-s)}\{p_1^2+\frac{\kappa}{q_1(q_1-1)}\\
&-\left(\frac{\theta_1-1}{q_1-t}+\frac{\theta_2}{q_1-s}+\frac{\kappa_0}{q_1}+\frac{\kappa_1}{q_1-1} \right)p_1\}\\
&-\frac{q_2(q_2-1)(q_2-t)(q_2-s)(q_1-t)}{(q_2-q_1)t(t-1)(t-s)}\{p_2^2+\frac{\kappa}{q_2(q_2-1)}\\
&-\left(\frac{\theta_1-1}{q_2-t}+\frac{\theta_2}{q_2-s}+\frac{\kappa_0}{q_2}+\frac{\kappa_1}{q_2-1} \right)p_2\},\\
H_2&=\pi(H_1),
\end{split}\end{aligned}$$ where the transformation $\pi$ is explicitly given by $$\begin{aligned}
\begin{split}
\pi:&(q_1,p_1,q_2,p_2,t,s;\kappa_0,\kappa_1,\theta_1,\theta_2,\kappa) \rightarrow(q_2,p_2,q_1,p_1,s,t;\kappa_0,\kappa_1,\theta_2,\theta_1,\kappa).
\end{split}\end{aligned}$$ Here, $q_1,p_1,q_2$ and $p_2$ are canonical variables and $\kappa_0,\kappa_1,\theta_1,\theta_2$ and $\kappa$ are constant parameters satisfying the relation $$\kappa=\frac{1}{4}[(\kappa_0+\kappa_1+\theta_1+\theta_2-1)^2-\kappa_{\infty}^2].$$
For the system , we calculate its symmetries. We show that each Bäcklund transformation is a coupled Bäcklund transformation of the Painlevé VI system.
We see that the system is invariant under the following transformations, with the notation $(*)=(q_1,p_1,q_2,p_2,t,s;\kappa_0,\kappa_1,\kappa_{\infty},\theta_1,\theta_2)$: $$\begin{aligned}
\begin{split}
s_0: (*) \rightarrow &(\frac{1}{q_1},-\left(q_1p_1-\frac{\kappa_0+\kappa_1-\kappa_{\infty}+\theta_1+\theta_2-1}{2} \right)q_1,\\
&\frac{1}{q_2},-\left(q_2p_2-\frac{\kappa_0+\kappa_1-\kappa_{\infty}+\theta_1+\theta_2-1}{2} \right)q_2,\frac{1}{t},\frac{1}{s};\\
&-\kappa_{\infty},\kappa_1,\kappa_0,\theta_1,\theta_2),\\
s_1: (*) \rightarrow &\left(q_1,p_1-\frac{\kappa_0}{q_1},q_2,p_2-\frac{\kappa_0}{q_2},t,s;-\kappa_0,\kappa_1,\kappa_{\infty},\theta_1,\theta_2 \right),\\
s_2: (*) \rightarrow &\left(q_1,p_1-\frac{\kappa_1}{q_1-1},q_2,p_2-\frac{\kappa_1}{q_2-1},t,s;\kappa_0,-\kappa_1,\kappa_{\infty},\theta_1,\theta_2 \right),\\
s_3: (*) \rightarrow &\left(q_1,p_1,q_2,p_2,t,s;\kappa_0,\kappa_1,-\kappa_{\infty},\theta_1,\theta_2 \right),\\
s_4: (*) \rightarrow &\left(q_1,p_1-\frac{\theta_1}{q_1-t},q_2,p_2-\frac{\theta_1}{q_2-t},t,s;\kappa_0,\kappa_1,\kappa_{\infty},-\theta_1,\theta_2 \right),\\
s_5: (*) \rightarrow &\left(q_1,p_1-\frac{\theta_2}{q_1-s},q_2,p_2-\frac{\theta_2}{q_2-s},t,s;\kappa_0,\kappa_1,\kappa_{\infty},\theta_1,-\theta_2 \right),\\
{\sigma}_1: (*) \rightarrow &(1-q_1,-p_1,1-q_2,-p_2,1-t,1-s;\kappa_1,\kappa_0,\kappa_{\infty},\theta_1,\theta_2),\\
{\sigma}_2: (*) \rightarrow &(q_2,p_2,q_1,p_1,s,t;\kappa_0,\kappa_1,\kappa_{\infty},\theta_2,\theta_1),\\
{\sigma}_3: (*) \rightarrow &\left(\frac{s-q_1}{s-1},-(s-1)p_1,\frac{s-q_2}{s-1},-(s-1)p_2,\frac{s-t}{s-1},\frac{s}{s-1};\theta_2,\kappa_1,\kappa_{\infty},\theta_1,\kappa_0 \right),\\
{\sigma}_4: (*) \rightarrow &(\frac{1}{q_1},-\left(q_1p_1-\frac{\kappa_0+\kappa_1+\kappa_{\infty}+\theta_1+\theta_2-1}{2} \right)q_1,\\
&\frac{1}{q_2},-\left(q_2p_2-\frac{\kappa_0+\kappa_1+\kappa_{\infty}+\theta_1+\theta_2-1}{2} \right)q_2,\frac{1}{t},\frac{1}{s};\\
&\kappa_{\infty},\kappa_1,\kappa_0,\theta_1,\theta_2).
\end{split}
\end{aligned}$$ The group $\langle{\sigma}_1,{\sigma}_2,{\sigma}_3,{\sigma}_4\rangle$ is isomorphic to the symmetric group of degree five.
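As a sanity check on this claim, one can track only how ${\sigma}_1,\ldots,{\sigma}_4$ permute the five parameters $(\kappa_0,\kappa_1,\kappa_{\infty},\theta_1,\theta_2)$ (ignoring their action on the coordinates): each generator acts as a transposition, and the group they generate already has order $5!=120$. A minimal SymPy sketch, with the index convention chosen here for illustration:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Parameters indexed 0..4 as (kappa_0, kappa_1, kappa_inf, theta_1, theta_2);
# each generator acts on them as a transposition, read off from the formulas above.
s1 = Permutation([1, 0, 2, 3, 4])  # sigma_1: kappa_0 <-> kappa_1
s2 = Permutation([0, 1, 2, 4, 3])  # sigma_2: theta_1 <-> theta_2
s3 = Permutation([4, 1, 2, 3, 0])  # sigma_3: kappa_0 <-> theta_2
s4 = Permutation([2, 1, 0, 3, 4])  # sigma_4: kappa_0 <-> kappa_inf

G = PermutationGroup([s1, s2, s3, s4])
assert G.degree == 5 and G.order() == 120  # |S_5| = 5! = 120
```

Since the four transpositions connect all five parameters, they generate the full symmetric group on them.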
By resolving an accessible singularity of the system , we transform the system into a polynomial Hamiltonian system.
We see that the birational and symplectic transformation $\varphi_1$: $$\label{symp}
\left\{
\begin{aligned}
Q_1=&\frac{1}{q_1-q_2},\\
P_1=&-\left((q_1-q_2)p_1-\frac{\kappa_0+\kappa_1-\kappa_{\infty}+\theta_1+\theta_2-1}{2}\right)(q_1-q_2),\\
Q_2=&q_2,\\
P_2=&p_2+p_1
\end{aligned}
\right.$$ takes the system into a polynomial Hamiltonian system.
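One can check directly that $\varphi_1$ is symplectic, i.e. that it preserves the canonical two-form $dq_1\wedge dp_1+dq_2\wedge dp_2$. A minimal SymPy sketch (writing $c$ as shorthand for the constant $\kappa_0+\kappa_1-\kappa_{\infty}+\theta_1+\theta_2-1$) verifies that the Jacobian $J$ of the transformation satisfies $J\Omega J^{T}=\Omega$:

```python
import sympy as sp

q1, p1, q2, p2, c = sp.symbols('q1 p1 q2 p2 c')

# The transformation phi_1, with c = kappa_0 + kappa_1 - kappa_inf + theta_1 + theta_2 - 1
Q1 = 1/(q1 - q2)
P1 = -((q1 - q2)*p1 - c/2)*(q1 - q2)
Q2 = q2
P2 = p2 + p1

old = [q1, p1, q2, p2]
J = sp.Matrix([[sp.diff(f, v) for v in old] for f in (Q1, P1, Q2, P2)])

# Poisson matrix for the ordering (q1, p1, q2, p2)
Omega = sp.Matrix([[0, 1, 0, 0],
                   [-1, 0, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, -1, 0]])

# J Omega J^T = Omega expresses preservation of the canonical Poisson brackets
assert (J * Omega * J.T - Omega).applyfunc(sp.simplify) == sp.zeros(4, 4)
```

The entries of $J\Omega J^{T}$ are the Poisson brackets $\{Q_i,P_j\}$, etc., so the assertion confirms that the new variables are again canonical.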
We remark that the polynomial Hamiltonian system obtained by becomes again a polynomial Hamiltonian system in each of the coordinate charts $r_i \ (i=0,1,\ldots,6)$: $$\begin{aligned}
&r_0:x_0=q_1, \ y_0=p_1, \ z_0=-(q_2p_2-\kappa_0)p_2, \ w_0=\frac{1}{p_2}, \\
&r_1:x_1=q_1, \ y_1=p_1, \ z_1=-((q_2-1)p_2-\kappa_1)p_2, \ w_1=\frac{1}{p_2}, \\
&r_2:x_2=(q_1q_2+1)q_2, \ y_2=\frac{p_1}{q_2^2}, \ z_2=\frac{1}{q_2}, \ w_2=-\left(q_2p_2-2\left(q_1q_2+\frac{1}{2} \right)\frac{p_1}{q_2} \right)q_2, \\
&r_3:x_3=-(q_1p_1+\kappa_{\infty})p_1, \ y_3=\frac{1}{p_1}, \ z_3=q_2, \ w_3=p_2,\\
&r_4:x_4=q_1, \ y_4=p_1, \ z_4=-((q_2-t)p_2-\theta_1)p_2, \ w_4=\frac{1}{p_2},\\
&r_5:x_5=q_1, \ y_5=p_1, \ z_5=-((q_2-s)p_2-\theta_2)p_2, \ w_5=\frac{1}{p_2},\\
&r_6:x_6=q_1, \ y_6=p_1+\frac{p_2}{q_1^2}, \ z_6=q_2+\frac{1}{q_1}, \ w_6=p_2.\end{aligned}$$ Here, for notational convenience, we have renamed $Q_i,P_i$ to $q_i,p_i$ (which are not the same as the previous $q_i,p_i$).
We see that the system obtained by is invariant under the following transformations, with the notation $(*)=(q_1,p_1,q_2,p_2,t,s;\kappa_0,\kappa_1,\kappa_{\infty},\theta_1,\theta_2)$: $$\begin{aligned}
\begin{split}
\varphi_1 \circ s_1 \circ s_0 \circ {\varphi_1}^{-1}: (*) \rightarrow &(-(q_1q_2+1)q_2,-\frac{p_1}{q_2^2},\frac{1}{q_2},-\left(q_2p_2-2\left(q_1q_2+\frac{1}{2} \right)\frac{p_1}{q_2} \right)q_2,\frac{1}{t},\frac{1}{s};\\
&-\kappa_{\infty},\kappa_1,-\kappa_0,\theta_1,\theta_2),\\
\varphi_1 \circ s_1 \circ {\varphi_1}^{-1}: (*) \rightarrow &\left(q_1,p_1-\frac{\kappa_0 q_2}{q_1 q_2+1},q_2,p_2-\frac{\kappa_0 (2 q_1 q_2+1)}{q_2(q_1 q_2+1)},t,s;-\kappa_0,\kappa_1,\kappa_{\infty},\theta_1,\theta_2 \right),\\
\varphi_1 \circ {\sigma}_2 \circ {\varphi_1}^{-1}: (*) \rightarrow &(-q_1,-\left(p_1+\frac{p_2}{q_1^2} \right),q_2+\frac{1}{q_1},p_2,s,t;\kappa_0,\kappa_1,\kappa_{\infty},\theta_2,\theta_1).
\end{split}
\end{aligned}$$ On the other hand, the system obtained by is not invariant under the following transformation associated with holomorphy condition $r_0$: $$\begin{aligned}
\begin{split}
S_0: (*) \rightarrow &\left(q_1,p_1,q_2,p_2-\frac{\kappa_0}{q_2},t,s;-\kappa_0,\kappa_1,\kappa_{\infty},\theta_1,\theta_2 \right).
\end{split}
\end{aligned}$$ This may explain why, in [@Oka], H. Kimura and K. Okamoto considered an algebraic transformation of degree 2 for the Garnier system in two variables.
[99]{}
H. Kimura, [*Uniform foliation associated with the Hamiltonian system ${\mathcal H}_{n}$*]{}, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) [**20**]{} (1993), no. 1, 1–60.
H. Kimura and K. Okamoto, [*On the polynomial Hamiltonian structure of the Garnier systems*]{}, J. Math. Pures Appl. [**63**]{} (1984), 129–146.
P. Painlevé, [*Mémoire sur les équations différentielles dont l’intégrale générale est uniforme*]{}, Bull. Société Mathématique de France. [**28**]{} (1900), 201–261.
P. Painlevé, [*Sur les équations différentielles du second ordre et d’ordre supérieur dont l’intégrale est uniforme*]{}, Acta Math. [**25**]{} (1902), 1–85.
B. Gambier, [*Sur les équations différentielles du second ordre et du premier degré dont l’intégrale générale est à points critiques fixes*]{}, Acta Math. [**33**]{} (1910), 1–55.
C. M. Cosgrove, [*All binomial-type Painlevé equations of the second order and degree three or higher*]{}, Studies in Applied Mathematics. [**90**]{} (1993), 119-187.
F. Bureau, [*Integration of some nonlinear systems of ordinary differential equations*]{}, Annali di Matematica. [**94**]{} (1972), 345–359.
J. Chazy, [*Sur les équations différentielles dont l’intégrale générale est uniforme et admet des singularités essentielles mobiles*]{}, Comptes Rendus de l’Académie des Sciences, Paris. [**149**]{} (1909), 563–565.
J. Chazy, [*Sur les équations différentielles dont l’intégrale générale posséde une coupure essentielle mobile* ]{}, Comptes Rendus de l’Académie des Sciences, Paris. [**150**]{} (1910), 456–458.
J. Chazy, [*Sur les équations différentielles du trousiéme ordre et d’ordre supérieur dont l’intégrale a ses points critiques fixes*]{}, Acta Math. [**34**]{} (1911), 317–385.
T. Suzuki, [*Affine Weyl group symmetry of the Garnier system*]{}, Funkcial. Ekvac. [**48**]{} (2005), 203–230.
K. Okamoto, [*Isomonodromic deformations and Painlevé equations, and the Garnier system*]{}, J. Fac. Sci. Univ. Tokyo, Sect. IA Math., [**33**]{} (1986), 575–618.
[**$n=3$ Differential calculus and gauge theory on a reduced quantum plane.**]{}\
[M. EL BAZ[^1], A. EL HASSOUNI [^2], Y. HASSOUNI[^3]\
and E.H. ZAKKARI[^4].]{}\
Laboratory of Theoretical Physics\
PO BOX 1014, University Mohammed V\
Rabat, Morocco.
**Abstract:** We discuss the algebra of $N\times N$ matrices as a reduced quantum plane. A $3$-nilpotent deformed differential calculus involving a complex parameter $q$ is constructed. The two cases, $q$ a $3^{rd}$ and an $N^{th}$ root of unity, are completely treated. As an application, a gauge field theory for the particular cases $n=2$ and $n=3$ is established.
**Keywords:** reduced quantum plane, non-commutative differential calculus n=3, gauge theory.
Introduction:
=============
An adequate way of generalizing the ordinary exterior differential calculus arises from graded differential algebras $\left[ 1-3\right] $. These generalizations are not universal as far as we know, and many techniques have been used to introduce differential calculi corresponding to non-commutative calculi. The latter involve a complex parameter satisfying conditions that allow one to obtain a consistent generalized differential calculus, usually called a $q$-differential calculus. In ref $\left[ 1-3\right] $, it is seen as a graded $q$-differential algebra which is the sum of $k$-graded subspaces, where $k\in \left\{ 0,1,\, 2...m-1\right\} .$ The relevant differential operator is an endomorphism $d$ satisfying $d^m=0$ and the $q$-Leibniz rule:
$$d(AB)=(dA)B+qAd(B).$$
The most important property of this calculus is that it contains not only the first differentials $dx^i,$ $i=1...n,$ but also the higher-order differentials $d^jx^i,\ j=1....m-1$.
On the other hand, differential calculi $(d^2=0)$ on noncommutative spaces were also studied by different authors; see for example $[4-9].$ The common property of these calculi is their covariance under some quantum symmetry group.
In this paper, we construct a covariant differential calculus with $d^3=0$ on the algebra $M$ of $3\times 3$ matrices considered as a quantum plane. We will show that our differential calculus is covariant under an algebra of transformations with a quantum group structure. The complex deformation parameter $q$ (a $3^{rd}$ root of unity) will play an important role in constructing the differential calculus that we introduce. As is the case in the literature on deformed differential calculus $\left[ 4,5\right] $, this case requires a non-trivial study. As an application, we treat the gauge field theory.
The paper is organized as follows:
We start in section $2$ by defining the algebra of $N\times N$ matrices as a reduced quantum plane, where the deformation parameter $q$ is an $N$-th root of unity. We also give a matrix realization in the case $N=3$. In section $3$ we construct the covariant differential calculus $d^3=0$ on the two-dimensional reduced quantum plane, as in ref $\left[ 1-3\right] $. The new objects $d^2x$ and $d^2y$ appearing in this construction are seen as analogues of the differential elements $dx$ and $dy$ in the ordinary differential calculus. In section $4,$ we generalize this result by considering a complex deformation parameter $q$ that is an $N^{th}$ root of unity.
In section $5$, we study the application of this new differential calculus $(N=3)$ to the gauge field theory on $M_3(C).$ We recall in section $6$ the differential calculus $d^2=0$ $\left[ 6-9\right] $, and we apply it to the gauge theory on $M_3(C)$.
Preliminaries: the algebra of $N\times N$ matrices as a reduced quantum plane
=============================================================================
The associative algebra of $N\times N$ matrices is generated by two elements $x$ and $y$ $[10]$ satisfying the relations:
$$xy=qyx$$
and
$$x^N=y^N=1,$$
where $1$ is the unit matrix and $q$ $(q\neq 1)$ is a complex $N^{th}$ root of unity.
In the case $N=3$, an explicit matrix realization of the generators $x$ and $y$ $\left[ 6,11\right] $ is given by:
$$x=\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & q^{-1} & 0 \\
0 & 0 & q^{-2}
\end{array}
\right)$$
$$y=\left(
\begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{array}
\right) ,$$
and $q$ satisfies the relation:
$$1+q+q^{2}=0.$$
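The defining relations of this realization are easy to confirm numerically. The following sketch (NumPy, not from the paper) takes $q=e^{2\pi i/3}$ and verifies $xy=qyx$, $x^3=y^3=1$ and $1+q+q^2=0$:

```python
import numpy as np

q = np.exp(2j * np.pi / 3)  # primitive cube root of unity

x = np.diag([1, q**-1, q**-2])
y = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)

assert np.allclose(x @ y, q * (y @ x))                       # xy = q yx
assert np.allclose(np.linalg.matrix_power(x, 3), np.eye(3))  # x^3 = 1
assert np.allclose(np.linalg.matrix_power(y, 3), np.eye(3))  # y^3 = 1
assert abs(1 + q + q**2) < 1e-12                             # 1 + q + q^2 = 0
```

This is the standard Weyl pair of a diagonal "clock" matrix and a cyclic "shift" matrix.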
The associative algebra, denoted by $C_q\left[ x,y\right] :=C_q,$ of formal power series defined over the two-dimensional quantum plane is generated by $x$ and $y$ with the single quadratic relation $xy=qyx$. It is clear that $C_1\left[ x,y\right] $ coincides with the algebra of polynomials in the commuting variables $x,$ $y.$
Note that if the generators $x,$ $y$ do not satisfy any additional relations, then $C_q$ is infinite dimensional. In the case of the algebra $M_3(C)$ of $3\times 3$ matrices over the complex numbers, the generators $x,$ $y$ satisfy the above quadratic relation and the cubic ones $x^3=y^3=1$. Thus it is generated by the following set: {$1,$ $x,$ $y,$ $x^2,$ $y^2,$ $xy,$ $x^2y,$ $xy^2,$ $x^2y^2$}. In this case, the algebra $M_3(C)$ appears as the quotient of the associative algebra $C_q^0$ by the bilateral ideal generated by $x^3-1=0$ and $y^3-1=0$, where $C_q^0$ is the unital extension of $C_q.$ That is, in the sense of $\left[ 6,11\right] $, the $3\times 3$ matrices over $C$ are seen as a reduced quantum plane. We note that functions of $x$ and $y$ are seen as formal power series of maximum degree $3$; this property will be extremely useful in what follows. In fact, the set of those functions is an associative algebra that is used to introduce a gauge field theory on the reduced quantum plane. This idea will be developed in sections $5$ and $7$.
Differential calculus with nilpotency $n=3$ on reduced quantum plane, case $q^3=1$
==================================================================================
The aim of this section is to construct a covariant $n=3$ nilpotent differential calculus by mixing two approaches; namely, we adapt to the reduced quantum plane an idea originally proposed by Kerner $[1-3]$, and we use Coquereaux’s techniques $[6]$ to ensure covariance. We denote by $\Omega $ the differential algebra generated by $x$, $y$, $dx$, $dy$, $d^2x$ and $d^2y$, where the ”2-forms” $d^2x$ and $d^2y$ are the second differentials of the basic variables $x$ and $y$.
Let us introduce the differential operator $d$ that satisfies the following conditions:
Nilpotency,
$$d^3=0.$$
Leibniz rule,
$$d(uv)=d(u)v+q^nud(v),$$
where $u$ is a form of degree $n$ and $q$ is a $3^{rd}$ root of unity.
By applying the Leibniz rule to a $1$-form, we obtain:
$$d(f(x)\, dx)=(df(x))\, dx+f(x)\, d^2x,$$
where $f(x)$ is a $0$-form in the algebra $\Omega $. The notion of covariance is necessary for the consistency of every differential calculus. The set of transformations leaving our differential calculus covariant is $F\subset Fun(SL_q(2,C))$ and the covariance is described by the left coaction. We start by explaining this coaction $\left[ 12\right] .$
The left coaction of the group $F$ on the reduced quantum plane is the linear transformation of coordinates given by:
$$\left(
\begin{array}{c}
x_{1} \\
y_{1}
\end{array}
\right) =\left(
\begin{array}{c}
a\, b \\ c\, d
\end{array}
\right) \otimes \left(
\begin{array}{c}
x \\
y
\end{array}
\right) .$$
We introduce also the line vectors with coordinate functions:
$$\left( x^1,\, y^1\right) =\left( x,\, y\right) \otimes \left(
\begin{array}{c}
a\, b \\ c\, d
\end{array}
\right) ,\ \ \$$
where the matrix elements $a,$ $b,$ $c$ and $d$ do not commute with each other. We require that the quantities $x_1,$ $y_1,$ $x^1,$ $y^1$ obtained in the above relations satisfy the same relations as $x$ and $y.$ The two constraints $x_1y_1=qy_1x_1$ and $x^1y^1=qy^1x^1$ lead to the relations:
$$ac=qca\ \ \ \ \ \ \ bd=qdb$$
$$ab=qba\ \ \ \ \ \ \ cd=qdc$$
$$bc=cb\ \ \ \ \ \ \ ad-da=(q-q^{-1})bc.$$
The algebra generated by $a,\ b,\ c,$ and $d$ is usually denoted $Fun(GL_q(2,C)).$ The $q$-determinant $D=da-q^{-1}bc$ is in the center of $Fun(GL_q(2,C)).$ If we set it equal to $1,$ we define the algebra $Fun(SL_q(2,C)).$ Assuming that the supplementary conditions $x^3=1$ and $y^3=1$ are also verified by the coordinates $x_1$, $y_1$ (and $x^1$, $y^1$), i.e. $(x_1)^3=(y_1)^3=1$ and $(x^1)^3=(y^1)^3=1$, implies $a^3=1,$ $b^3=0,$ $c^3=0,$ $d^3=1$. These new cubic relations on $Fun(SL_q(2,C))$ yield a new algebra that we denote $F$. It is also a Hopf algebra. Indeed, it has a coalgebra structure (coproduct) which is compatible with the algebra structure (product); this defines a bialgebra structure. An antipode and a co-unit are also defined. For further details on such structures on $F$, see, for example, $[6]$.
The mixture of Kerner’s idea and Coquereaux’s techniques allows us to construct the left covariant differential algebra $\Omega =\{x,$ $y,$ $dx,$ $dy,$ $d^2x,$ $d^2y\}$; see the appendix. The commutation relations between the generators of $\Omega $ are as follows:
$$x\, dx=q^{2}dx\, x$$
$$x\, dy=qdy\, x+(q^2-1)dx\, y$$
$$y\, dx=qdx\, y$$
$$y\, dy=q^2dy\, y$$
$$dy\, dx=q^2dx\, dy$$
$$x\, d^2x=q^2d^2x\, x$$
$$y\, d^2x=qd^2x\, y$$
$$y\, d^2y=q^2d^2y\, y$$
$$x\, d^2y=qd^2y\, x+(q^2-1)d^2x\, y$$
$$dx\, d^2y=d^2y\, dx+q(1-q)d^2x\, dy$$
$$dy\, d^2x=d^2x\, dy$$
$$dx\, d^2x=qd^2x\, dx$$
$$dy\, d^2y=qd^2y\, dy$$
$$d^{2}y\, d^{2}x=q^{2}d^{2}x\, d^{2}y.$$
In the standard way, we define the partial derivatives in the directions $x$ and $y$ through:
$$d=\frac \partial {\partial x}dx+\frac \partial {\partial y}dy=\partial
_xdx+\partial _ydy.$$
Consistency conditions as in $[9]$ yield:
$$\partial _{x}\, \partial _{y}=q\partial _{y}\, \partial _{x}$$
$$\partial _{x}\, x=1+q^{2}x\, \partial _{x}+(q^{2}-1)y\, \partial _{y}$$
$$\partial _{x}\, y=qy\, \partial _{x}$$
$$\partial _{y}\, y=1+q^{2}y\, \partial _{y}$$
$$(dx)^{3}=(dy)^{3}=0.$$
The last equality $eq(29)$ can be related to the nilpotency relation encountered in the description of fractional statistics. More precisely, we recover the description of physical systems that generalize fermions. In a forthcoming paper $\left[ 13\right] $, we will reintroduce these systems using this new differential calculus, by establishing an adequate correspondence between our differential calculus and some deformed Heisenberg algebras, as is done in $\left[ 14\right] $ for the particular case $(d^2=0)$.
Now, we generalize our differential calculus by considering the case $q^N=1$.
Differential calculus on a reduced quantum plane, case $q^N=1$
==============================================================
A two-dimensional reduced quantum plane is an associative algebra generated by $x$ and $y$ with the relations $(1)$ and $(2).$ One can always define a differential operator ”$d$” satisfying $d^3=0,$ $(d^2\neq 0),$ and the Leibniz rule:
$$d(uv)=d(u)v+(j)^nud(v),$$
$u\in \Omega ^n$ and $v\in \Omega ^m$, where $\Omega ^n$ and $\Omega ^m$ are the spaces of $n$ and $m$ forms on reduced quantum plane respectively.
Note that, in contrast to $eq(7),$ one has to distinguish between the deformation parameter $q$ and the parameter $j$, $j^3=1$, $(j\neq 1)$, in $eq(30)$.
Following the same method of section $3,$ we get the covariant differential calculus:
$$x\, dx=j^{2}dx\, x$$
$$x\, dy=-\frac{jq}{1+q^2}dy\, x+\frac{j^2q^2-1}{1+q^2}dx\, y$$
$$y\, dx=\frac{j^2-q^2}{1+q^2}dy\, x-\frac{jq}{1+q^2}dx\, y$$
$$y\, dy=j^2dy\, y$$
$$dx\, dy=qdy\, dx$$
$$x\, d^{2}x=j^{2}d^{2}x\, x$$
$$x\, d^{2}y=-\frac{jq}{1+q^{2}}d^{2}y\, x+\frac{j^{2}q^{2}-1}{%
1+q^{2}}d^{2}x\, y$$
$$y\, d^2x=\frac{j^2-q^2}{1+q^2}d^2y\, x-\frac{jq}{1+q^2}d^2x\, y$$
$$y\, d^2y=j^2d^2y\, y$$
$$dx\, d^{2}x=jd^{2}x\, dx$$
$$dx\, d^{2}y=-\frac{q}{1+q^{2}}d^{2}y\, dx+\frac{jq^{2}-j^{2}}{%
1+q^{2}}d^{2}x\, dy$$
$$dy\, d^{2}x=\frac{j-j^{2}q^{2}}{1+q^{2}}d^{2}y\, dx-\frac{q}{1+q^{2}}d^{2}x\, dy$$
$$dy\, d^{2}y=jd^{2}y\, dy$$
$$d^2x\, d^2y=qd^2y\, d^2x.$$
We recover the differential calculus obtained in section $3$ if $q=j$. As an application of this new differential calculus $d^3=0$ on the reduced quantum plane, we construct in the section below a gauge field theory on $M_3(C).$
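This reduction can be checked numerically; for instance, the coefficients in eqs (32)-(33) collapse to those of eqs (10)-(11) when $q=j$. A minimal sketch in plain Python (`cmath`), offered as an illustration:

```python
import cmath

j = cmath.exp(2j * cmath.pi / 3)  # j^3 = 1, j != 1, so 1 + j + j^2 = 0
q = j                             # setting q = j should recover section 3
tol = 1e-12

# x dy = -jq/(1+q^2) dy x + (j^2 q^2 - 1)/(1+q^2) dx y  ->  q dy x + (q^2 - 1) dx y
assert abs(-j*q/(1 + q**2) - q) < tol
assert abs((j**2 * q**2 - 1)/(1 + q**2) - (q**2 - 1)) < tol
# y dx = (j^2 - q^2)/(1+q^2) dy x - jq/(1+q^2) dx y  ->  q dx y  (no dy x term)
assert abs((j**2 - q**2)/(1 + q**2)) < tol
```

The cancellations rely on $1+q^2=-q$ when $q$ is a primitive cube root of unity.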
Gauge theory on $M_3(C)$ as a reduced quantum plane with $d^3=0$
================================================================
In this section, we use the $n=3$ differential calculus constructed in section $3$ to establish a gauge theory on the reduced quantum plane.
As in the ordinary case, the covariant differential is defined by:
$$D\Phi (x,y)=d\Phi (x,y)+A(x,y)\Phi (x,y),$$
where the field $\Phi (x,y)$ is a function on $M_3(C)$ and the gauge field $A(x,y)$ is a $1$-form valued in the associative algebra of functions on the reduced quantum plane $M_3(C)$.
We have assumed that the algebra of functions on $M_3(C)$ is a bimodule over the differential algebra $\Omega .$
As usual, the covariant differential $D$ must satisfy:
$$DU^{-1}\Phi (x,y)=U^{-1}D\Phi (x,y),$$
where $U$ is an endomorphism defined on $Fun[M_3(C)]$.
This leads to the following gauge field transformation:
$$A(x,y)\rightarrow U^{-1}A(x,y)U+U^{-1}dU.$$
In general, the 1-form gauge field $A(x,y)$ can be written as:
$$A(x,y)=A_{x}(x,y)dx+A_{y}(x,y)dy.$$
The $n=3$ differential calculus allows us to define the curvature $R$ as follows $\left[ 2,15\right] $:
$$D^{3}\Phi (x,y)=R\Phi (x,y).$$
Direct computations show that $R$ is a ”three-form” given by:
$$\begin{aligned}
R&=& d^2A(x,y)+dA^2(x,y)+A(x,y)dA(x,y)+A^3(x,y) \\
&=& d^2A(x,y)+(dA(x,y))A(x,y)+(1+q)A(x,y)dA(x,y)+A^3(x,y) \\
&=& d^{2}A(x,y)+(dA(x,y))A(x,y)-q^{2}A(x,y)dA(x,y)+A^{3}(x,y).\end{aligned}$$
One has to express the curvature written above in terms of $3$-forms constructed from the basic generators $dx,$ $dy,$ $d^2x$ and $d^2y$ of the differential algebra $\Omega .$ Since we are dealing with a non-commutative space (the reduced quantum plane), this task is not straightforward. In fact, the non-commutativity prevents us from rearranging the different terms in $eq(52)$ adequately. To overcome this technical difficulty we require that the components of the gauge field, $A_x(x,y)$ and $A_y(x,y)$, are expressed as formal power series in the space coordinates $x$ and $y$ $\left[ 16-19\right] $. The condition $eq(2)$ of section $2$ $(N=3)$ is extremely useful, in the sense that it makes the power series finite rather than infinite:
$$A_{x}(x,y)=a_{mn}x^{m}y^{n};\, m,n=0,1,2$$
$$A_y(x,y)=b_{kl}x^ky^l;\, k,l=0,1,2.$$
Using the formulae $(1,31-44,52-54),$ and after technical computations, the desired expression of the curvature arises as:
$R=\Big[ R_{xxy}+qR_{yxx}+q^2R_{xyx}+$
$(1-q)\Big\{ \partial _yA_x(x,y)+q\partial _xA_y(x,y)+\partial
_yA_Y(x,y)((1-q)f_2(y)-f_1(x,y))$
$+qf_4(x,y)f_0(x,y)-q^2f_6(x,y))+A_y(x,y)(f_5(x,y)+$
$A_y(x,y)A_y(q^2x,y)((1-q)f_2(y)-f_1(x,y))A_y(x,y)A_x(q^2x,y)f_4(x,y)+$
$qA_x(x,y)A_y(qx,q^2y)f_0(x,y)+q^2A_y(x,y)f_4(x,y)A_y(x,y)+$
$A_y(x,y)f_3(x,y)A_y(q^2x,qy) \Big\} \Big] dxdxdy$
$+\Big[ R_{yyx}+qR_{yxy}+q^2R_{xyy}+(1-q) \Big\{ -q^2\partial
_yA_y(x,y)f_0(x,y)-q^2A_y(x,y)f_7(x,y)-$
$A_y(x,y)A_y(q^2x,qy)f_8(x,y)\Big\} \Big] %
dydydx+qF_{xy}^qd^2xdy+F_{yx}^{q^2}d^2ydx,$
where:
$R_{xxy}=\partial _x\partial _xA_y(x,y)+\partial
_xA_x(x,y)A_y(q^2x,qy)-q^2A_x(x,y)\partial _xA_y(qx,q^2y)+$
$$A_x(x,y)A_x(qx,q^2y)A_y(q^2x,qy)$$
$R_{yxx}=\partial _y\partial _xA_x(x,y)+\partial
_yA_x(x,y)A_x(x,y)-q^2A_y(x,y)\partial _xA_x(qx,q^2y)+$
$$A_y(x,y)A_x(q^2x,qy)A_x(x,y)$$
$R_{xyx}=\partial _x\partial _yA_x(x,y)+\partial
_xA_y(x,y)A_x(x,y)-q^2A_x(x,y)\partial _yA(qx,q^2y)+$
$$A_x(x,y)A_y(qx,q^2y)A_x(x,y)$$
$R_{yyx}=\partial _y\partial _yA_x(x,y)+\partial
_xA_y(x,y)A(x,y)-q^2A_x(x,y)\partial _yA_x(qx,q^2y)+$
$$A_y(x,y)A_y(q^2x,qy)A_x(qx,q^2y)$$
$R_{yxy}=\partial _y\partial _xA_y(x,y)+\partial
_yA_x(x,y)A_x(x,y)-q^2A_y(x,y)\partial _xA_y(qx,q^2y)+$
$$A_y(x,y)A_x(q^2x,qy)A_y(x,y)$$
$R_{xyy}=\partial _x\partial _yA_y(x,y)+\partial
_xA_y(x,y)A_y(x,y)-q^2A_x(x,y)\partial _yA_y(qx,q^2y)+$
$$A_x(x,y)A_y(qx,q^2y)A_y(x,y)$$
$$\begin{array}{ccl}
F_{xy}^q &=&\partial _xA_y(x,y)-q\partial _yA_x(x,y)+
A_x(x,y)A_y(qx,q^2y)-qA_y(x,y)A_x(q^2x,qy) \cr
F_{yx}^{q^2}&=&\partial _yA_x(x,y)-q^2\partial _xA_y(x,y)+
A_y(x,y)A_x(q^2x,qy)-q^2A(x,y)A(qx,q^2y) \cr
\end{array}$$
$$\begin{aligned}
f_0(x,y) & =& -b_{11}y^2-qb_{10}y+q^2b_{22}x+b_{20}xy+qb_{21}xy^2-b_{21}
\nonumber \\
f_1(x,y)&=& -a_{11}y^2-a_{10}y+a_{22}x+a_{20}xy+a_{21}xy^2-a_{12} \nonumber
\\
f_2(x,y)&=& -b_{20}y^2-q^2b_{21}y-q^2b_{22}y \nonumber \\
f_3(x,y) & =& -q^2a_{11}y^2-a_{10}y+q^2a_{22}x+qa_{20}xy+a_{21}xy^2-qa_{12}
\nonumber \\
f_4(x,y) & =& -q^2b_{11}y^2-b_{10}y+q^2b_{22}x+qb_{20}xy+b_{21}xy^2-qb_{12}
\\
f_5(x,y) &=& -qb_{21}y^2-b_{20}y-qb_{22} \nonumber \\
f_6(x,y) &=& +qa_{12}y^2+-a_{11}y+qa_{21}xy-qa_{22}xy^2 \nonumber \\
f_7(x,y) &=& +qb_{21}y^2-b_{11}y+qb_{21}xy^2-qb_{22}xy^2 \nonumber \\
f_8(x,y) &=& -b_{11}y^2-b_{10}y+b_{22}x-b_{20}xy+b_{21}xy^2-b_{12}.
\nonumber\end{aligned}$$
The expressions of the curvature components $eq(55)$ and the deformed field strength $eqs(56)$ are formally the same as those obtained by Kerner $[2,15].$ The functions $f_i(x,y)$, $i=0,...,8$, can be interpreted as a direct consequence of the non-commutativity of the space.
The covariant $n=3$ differential calculus constructed in sections $3$ and $4$, respectively for $q$ a $3^{rd}$ and an $N^{th}$ root of unity, can be seen as a generalization of the case $n=2$. However, one cannot see $d^2=0$ as a certain limit of the $d^3=0$ case. In the next section, we recall the differential calculus $d^2=0$.
Differential calculus with nilpotency $n=2$ on a reduced quantum plane.
========================================================================
We recall that the exterior differential $"d"$ on the reduced quantum plane satisfies the usual properties $[6-9]$, namely:
i/ linearity,
ii/ nilpotency,
$$d^2=0.$$
iii/ Leibniz rule,
$$d(uv)=d(u)v+(-1)^nud(v),$$
where
$$u\in \Omega ^n,\ v\in \Omega ^m,\ \hbox{and}\ d(x)=dx,\ d(y)=dy,\ d1=0.$$
The deformed differential calculus satisfies:
$$xdx=q^2dxx$$
$$xdy=qdyx+(q^2-1)dxy$$
$$ydx=qdxy$$
$$ydy=q^2dyy$$
$$dydx=-q^2dxdy$$
$$(dx)^2=(dy)^2=0.$$
So, the differential algebra $\Omega $ is generated by $x,$ $y,$ $dx$ and $dy$: $\Omega =\{x,y,dx,dy\}.$
Using the standard realization of the differential $"d"$:
$$d=\frac \partial {\partial x}dx+\frac \partial {\partial y}dy=\partial
_xdx+\partial _ydy,$$
one can prove that:
$$\partial _xx=1+q^2x\partial _x+(q^2-1)y\partial _y$$
$$\partial _yx=qx\partial _x$$
$$\partial _xy=qy\partial _x$$
$$\partial _{y}y=1+q^{2}y\partial _{y}.$$
We apply this covariant differential calculus to study the related gauge field theory on $M_3(C).$
Gauge field theory on $M_3(C)$ as a reduced quantum plane with $d^2=0$
======================================================================
Similarly, the covariant differential is defined as in section $5$:
$$D\Phi (x,y)=d\Phi (x,y)+A(x,y)\Phi (x,y).$$
The expression of the curvature is:
$$D^{2}\Phi (x,y)=(dA(x,y)+A(x,y)A(x,y))\Phi (x,y)=R\Phi (x,y).$$
The differential realization of $"d"$, $eqs(67-71)$, allows us to rewrite the expression of the curvature $R$:
$$R=(\partial _xA_y(x,y)-q\partial _yA_x(x,y))dxdy+A_x(x,y)dxA_y(x,y)dy.$$
Using the differential calculus $eqs(61-71)$ on the reduced quantum plane and the expressions of $A_x(x,y)$, $A_y(x,y)$ $eqs(53,54)$ as formal power series, it is easy to establish:
$$R=[\partial _xA_y(x,y)-q\partial
_yA_x(x,y)+A_x(x,y)A_y(qx,q^2y)-qAy(x,y)A_x(q^2x,qy)+$$
$$(1-q)A_y(x,y)\{-qb_{12}-b_{10}y+q^2b_{22}x-q^2b_{11}y^2+qb_{20}xy+b_{21}xy^2%
\}]dxdy,$$
This permits us to define $$\begin{aligned}
F_{xy}^q &=&\partial _xA_y(x,y)-q\partial _yA_x(x,y)+%
A_x(x,y)A_y(qx,q^2y)-qAy(x,y)A_x(q^2x,qy) \nonumber \\
&=&-q\{\partial _yA_x(x,y)-q^2\partial _xA_y(x,y)+%
A_x(q^2x,qy)A_y(x,y)-q^2A_x(x,y)A_y(q^2x,qy)\} \nonumber \\
&=&-qF_{yx}^{q^2},\end{aligned}$$ which is the $q$-deformed antisymmetric field strength.
The comparison of the two expressions of the curvature ($d^3=0$, section $5$, and $d^2=0$) will be given in the following section.
Discussions and concluding remarks
==================================
In this paper, we have constructed an $n=3$ nilpotent differential calculus on the reduced quantum plane by mixing Kerner’s idea and Coquereaux’s techniques. The notion of covariance for this differential calculus is also given, and we have shown that there is a quantum group structure behind this covariance. As an application, we have constructed a gauge theory based on this calculus.
In the case $n=3,$ the expressions for the curvature contain additional terms, $eqs(55,57)$, compared with $eq(75).$ These terms can be interpreted as a generic consequence of the extension of the differential calculus $d^2=0$ to the higher order $d^3=0.$
We can also compare our results with those of Kerner et al. $[2,15].$ In fact, $eqs(55,56)$ are formally the same as in $[2,15]$; they differ only by the appearance of the deformation parameter $q$. However, there is no analogue of $eq(57)$ in $[2,15]$; it is a direct consequence of the noncommutativity of the space considered here.
In a forthcoming paper, we shall treat in a mathematical way the correspondence between this calculus and the Heisenberg algebra. This correspondence is based on the Bargmann-Fock representation and will give a new oscillator algebra. To study the minimization of the uncertainty principle in this case, we will try to find the eigenvectors of the annihilation operator in order to construct the corresponding Klauder coherent states $\left[ 13\right] $.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors Y. Hassouni and E. H. Zakkari wish to thank the Abdus Salam International Centre for Theoretical Physics, where a great part of this work was done, for financial and scientific support. This work was done within the framework of the Associateship and Federation Arrangement Schemes of the Abdus Salam ICTP.
Appendix: {#appendix .unnumbered}
=========
We start by writing a priori $xdx,$ $xdy,$ $ydx$ and $ydy$ in terms of $dxx,$ $dyx,$ $dxy$ and $dyy$, i.e.:
$$xdx=a_1dxx+b_1dyx+c_1dxy+d_1dyy$$
$$xdy=a_2dxx+b_2dyx+c_2dxy+d_2dyy$$
$$ydx=a_3dxx+b_3dyx+c_3dxy+d_3dyy$$
$$ydy=a_4dxx+b_4dyx+c_4dxy+d_4dyy .$$
Differentiating the commutation relation $xy=qyx$ and replacing $xdx$ and $xdy$ by their expressions in the formulae above permits us to fix three unknown coefficients. In fact, we have nine independent parameters.
The left coaction of $F$ on a quantum plane is defined by:
$$x_{1}=a\otimes x+b\otimes y$$
$$y_1=c\otimes x+d\otimes y.$$
Hence
$$dx_{1}=a\otimes dx+b\otimes dy$$
$$dy_{1}=c\otimes dx+d\otimes dy.$$
We impose that the relations between $x_1,$ $y_1$ and $dx_1,$ $dy_1$ be the same as the relations between $x,$ $y$ and $dx,$ $dy;$ these conditions yield:
$$a_{2}=a_{3}=a_{4}=b_{1}=b_{4}=c_{1}=c_{4}=d_{1}=d_{2}=d_{3}=0 \quad \hbox{and} \quad d_{1}=a_{4},$$
So, the unknown coefficients $b_2,$ $b_3,$ $c_2,$ and $c_3$ can be expressed in terms of the single unknown coefficient $a_1$. Indeed:
$$b_{2}=\frac{q(1+a_{1})}{1+q^{2}} \qquad c_{2}=\frac{a_{1}q^{2}-1}{1+q^{2}}$$
$$b_{3}=\frac{a_{1}-q^{2}}{1+q^{2}} \qquad c_{3}=\frac{q(1+a_{1})}{1+q^{2}}.$$
Differentiating the relations $(74-77)$ and noticing that $dxdx$, $d^2x$, $dydy$ and $d^2y$ are independent, we find $a_1=q^2$. The left covariant differential calculus on the reduced quantum plane is hence constructed, $eqs(9-22).$
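As a consistency check, substituting $a_1=q^2$ into the expressions above reproduces the coefficients appearing in the commutation relations of section $3$: $b_2=q$ and $c_2=q^2-1$ (cf. $x\, dy=qdy\, x+(q^2-1)dx\, y$), and $b_3=0$, $c_3=q$ (cf. $y\, dx=qdx\, y$). A SymPy sketch, offered as an illustration:

```python
import sympy as sp

q, a1 = sp.symbols('q a1')

b2 = q*(1 + a1)/(1 + q**2)
c2 = (a1*q**2 - 1)/(1 + q**2)
b3 = (a1 - q**2)/(1 + q**2)
c3 = q*(1 + a1)/(1 + q**2)

# Substitute a1 = q^2 and compare with the section-3 coefficients
vals = [sp.cancel(e.subs(a1, q**2)) for e in (b2, c2, b3, c3)]
targets = [q, q**2 - 1, sp.Integer(0), q]
assert all(sp.simplify(v - t) == 0 for v, t in zip(vals, targets))
```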
[^1]: [[email protected]]{}
[^2]: [[email protected]]{}
[^3]: [[email protected]]{}
[^4]: [[email protected]]{}
---
abstract: 'In this work, we have studied the chemical and magnetic interactions of Fe$_n$ ($n\le 6$) clusters with a divacancy site in a graphene sheet by ab-initio density functional calculations. Our results show significant chemical interactions between the clusters and graphene. As a result, a complex distribution of magnetic moments appears on the distorted Fe clusters in the presence of graphene, resulting in lower average magnetic moments compared to the free clusters. The presence of a cluster also prevents the formation of the 5-8-5 ringed structure known to form in a graphene sheet with a divacancy defect. The clusters induce electronic states primarily of $d$-character near the Fermi level.'
author:
- 'Bhalchandra S. Pujari[^1]'
- 'Dilip G. Kanhere'
- Biplab Sanyal
bibliography:
- 'biblio.bib'
title: 'Interaction of iron clusters (Fe$_n$; $n\le 6$) with a divacancy in graphene'
---
Introduction
============
Graphene has been a subject of immense investigation since its discovery in 2004 [@novoselov], as it has great potential for future electronics [@geim; @netormp]. It is a one-atom-thick planar sheet of sp$^2$-bonded carbon atoms densely packed in a honeycomb crystal lattice. Graphene is the basic structural element of several carbon allotropes, including graphite, carbon nanotubes and fullerenes, as well as the recently synthesized graphane [@elias01302009]. Measurements have shown that graphene has a breaking strength 200 times greater than steel, making it the strongest material ever tested [@c:lee].
Despite its strength, graphene is prone to impurities and defects like any other material. The nature and types of defects in graphene have been discussed by Castro Neto [*et al*]{} in their extensive review. [@netormp] Among the defects and disorders seen in graphene, ripples and topological defects are intrinsic, while cracks, vacancies, charged impurities, atomic adsorption etc. are extrinsic. In particular, graphene is prone to the formation of vacancy defects, and divacancy defects are quite likely to form as strong reactive centers. It has been shown theoretically [@lust] how different defect structures can be engineered in graphene. It is known that such defects can affect the electronic structure and hence the transport properties of graphene [@coleman; @jafri; @carva].
One of the important points in graphene research is to explore the possibility of spin-dependent transport in graphene. As the mobility of graphene is extraordinarily high, electronic transport with selective spin is an interesting topic of study. The relevant question is how to make graphene magnetic. One idea is to deposit magnetic adatoms on graphene and study the range of spin polarization in the host lattice arising from the exchange coupling between the adatoms. One can make use of defects for trapping the adatoms. It is known that the chemisorption energies at vacancy sites are very high [@biplabprb09], so it is possible to trap magnetic adatoms or clusters at various defect sites and hope to obtain an effective spin polarization. In an interesting theoretical work [@nieminen] based on density functional theory (DFT), the magnetism of different transition metal adatoms on single and divacancy centers has been studied. In a work based on DFT, Wang [*et al*]{} [@wang] have studied the interaction between a single adatom and graphene containing a Stone-Wales defect. They observed a reduction in the local magnetic moment on the iron atom and a substantial modulation of electronic states near the Fermi level. Instead of a single adatom, a flux of adatoms may generate magnetic nanoclusters of various sizes trapped in defect sites. As already noted, divacancies form quite frequently on a graphene sheet, and hence we investigate the interaction of small Fe clusters with a divacancy. A comparison of geometric and magnetic structure between adsorbed and free Fe clusters will also be made.
Computational details
=====================
All the calculations have been performed on monolayer graphene with a divacancy using the plane-wave based density functional code VASP. [@vasp] The generalized gradient approximation as given by Perdew, Burke and Ernzerhof [@PBE; @PBEerr] has been used for the exchange-correlation potential. The energy and the Hellmann-Feynman force thresholds are kept at 10$^{-5}$ eV and 10$^{-3}$ eV/[Å]{} respectively. For geometry optimization, a 4 $\times$ 4 $\times$ 1 Monkhorst-Pack $k$-grid is used. Total energies and electronic structures are calculated for the optimized structures on an 11 $\times$ 11 $\times$ 1 Monkhorst-Pack $k$-grid.
The supercell is generated by repeating the primitive cell six times in the $a$ and $b$ directions of the cell and then removing two neighbouring carbon atoms to create the divacancy. Such a large cell, containing 70 atoms, is required to avoid interaction between the clusters and their periodic images. Interactions in the vertical direction are avoided by taking a vacuum of more than 15 [Å]{}.
To determine the ground state geometries, several initial structures of iron clusters are placed on the divacancy and fully optimized. We wish to point out that special care has to be taken to achieve the correct magnetic moment by starting the optimization procedure with several possible initial guesses of magnetic states.
Results and discussion
======================
Geometries
----------
We begin our discussion by presenting the ground state geometries of the systems studied. Our aim is to find out how the geometries of the clusters evolve on the divacancy. \[fig:allgeom\] depicts all the geometries of clusters on a divacancy, from Fe$_1$ to Fe$_6$. Also shown in the insets are the geometries of the free clusters.
{width="14cm"}
As reported earlier, [@boukhvalov:09] the single iron atom diffuses almost into the interior of the vacancy, as seen in \[fig:allgeom\](a). The four Fe-C bonds are identical, with a bond length of 1.97 Å. It is interesting to note that, not only for a single atom but in general, the underlying vacancy does not undergo any significant structural rearrangement. This is in contrast with the vacancy in the absence of iron atoms, where the carbon atoms are known to move close to each other to form in-plane $\sigma$ bonds, resulting in a 5-8-5 ringed structure [@coleman]. This indicates that even a single iron atom can be used to maintain the structure of the underlying lattice despite the presence of the vacancy.
As the size of the cluster increases, a trend becomes discernible. In all cases we note that one of the iron atoms sits at the center of the vacancy. In other words, the structures evolve by gradual addition of atoms onto the geometry of the single-iron system.
The iron dimer has also been studied earlier [@boukhvalov:09] and our results are in agreement with theirs. As seen from \[fig:allgeom\](b), the geometry is obtained by adding an atom to the optimized geometry of Fe$_1$. The dimer bond length is 2.20 [Å]{}, which is about 10% longer than the bond length found in the free cluster (2.02 [Å]{}). We recall that a single iron atom sits just out of plane while forming four bonds with the nearest neighbour carbon atoms, thus preventing the reconstruction into the 5-8-5 ring structure. After one atom is accommodated in this lattice, there is no more room for additional Fe atoms. Hence the first atom remains near the lattice plane and the additional Fe atoms sit out of the plane, with the first atom as an apex.
It is worth mentioning that this system has a few isomers. The closest one is the geometry where the dimer lies parallel to the graphene plane and neither iron atom goes inside the vacancy. The energy difference between this structure and the ground state is $\sim$ 0.3 eV. This isomer is important because all larger clusters ($n >2$) have at least one isomer whose basic unit is a dimer parallel to the graphene plane. The second isomer is obtained by changing the orientation of the dimer with respect to the graphene plane; in this case the energy difference is about 0.4 eV.
It is interesting to point out that, in the ground state, one of the atoms stays outside the vacancy, in contrast with a nitrogen dimer [@biplabprb09] placed on a divacancy center. In that case, both nitrogen atoms of the dimer become part of the graphene lattice by occupying the vacant carbon sites, thereby completely healing the topological disorder.
The clusters evolve systematically as we go on adding atoms to the Fe dimer. Fe$_3$ forms a triangle which is tilted with respect to the vertical axis, and no two sides are identical. The bond lengths are 2.22 [Å]{}, 2.11 [Å]{} and 3.05 [Å]{}. The tilt occurs due to the optimization of the interactions with the underlying carbon atoms. Also, as the nearest carbon neighbours are not the same, the sides of the triangle are not identical.
When we add the fourth iron atom to the system, the triangle aligns vertically and the fourth atom attaches from the side to form a distorted prism. Clearly the distortion is brought about by the carbon lattice. As can be inferred from the figure, the addition of the fifth atom is a rather straightforward change from the prism to a pyramidal structure, with the added atom on top. The sixth atom distorts this pyramid and forms the complex structure shown in \[fig:allgeom\]. Although distorted, the four-atom structure can still be seen near the graphene plane. As we shall see later, the prism formed by four atoms is a very stable unit, and we believe that in larger clusters this prism may serve as the building block.
Clearly the geometries of iron clusters on graphene are remarkably different from those studied in free space [@yu]. As noted above, the dimer bond length is slightly elongated in the presence of divacant graphene. The trimer in free space is reported to be an isosceles triangle [@yu; @gustev], while the trimer on the vacancy is a distorted triangle with bond lengths differing substantially from those in free space. Fe$_4$ also forms a prism in free space, with bond lengths ranging from 2.22 to 2.41 [Å]{}; in our case, however, the bond lengths vary substantially, from 1.6 [Å]{} to 2.6 [Å]{}. The change in the trigonal bipyramid of isolated Fe$_5$ is seen in the vertical four-atom plane, where the bond lengths are increased with respect to those in free space. Fe$_6$ undergoes a substantial change from an octahedron to the more complex structure seen in \[fig:allgeom\](f).
Energetics and Magnetic structure
---------------------------------
Iron in the body-centered cubic structure is known to be ferromagnetic in its bulk phase as well as in cluster form. It is thus interesting to examine the nature of magnetism in the systems studied, and we discuss the energetics alongside it.
![Binding energies and total magnetic moments as a function of cluster size (Fe$_n$).[]{data-label="fig:BeMu"}](figure2.eps){width="10cm"}
Before we proceed, we define the binding energy $\Delta_E$ of the iron cluster in the presence of the divacancy as: $$\Delta_E = (E_{GD} + E_{Fe}) -E_{GD+Fe}
\label{eq:be}$$
where $E_{GD+Fe}$ is the total energy of the combined system ([*i.e.*]{} the cluster on graphene with a divacancy), $E_{GD}$ is the total energy of the graphene with a divacancy, and $E_{Fe}$ is that of the isolated iron cluster. A higher binding energy indicates a more stable system.
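For concreteness, \[eq:be\] amounts to a one-line computation from the three total energies. The sketch below is illustrative only; the numbers used in its test are hypothetical placeholders, not our VASP results:

```python
def binding_energy(e_gd, e_fe, e_gd_fe):
    """Binding energy (eV) of an Fe_n cluster on divacant graphene:
        Delta_E = (E_GD + E_Fe) - E_{GD+Fe}

    e_gd    -- total energy of graphene with a divacancy
    e_fe    -- total energy of the isolated Fe_n cluster
    e_gd_fe -- total energy of the combined system
    A larger Delta_E means a more stable adsorbed system.
    """
    return (e_gd + e_fe) - e_gd_fe
```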
\[fig:BeMu\] shows the binding energies and total magnetic moments for all the systems. Two observations can be made immediately. First, the binding energy slowly increases from the one- to the four-atom cluster and then slowly decreases. Second, the total magnetic moment steadily increases, except for the five-atom cluster. From the binding energies, it is thus not surprising that the four-atom prism stays almost intact in the larger clusters. It also indicates that Fe$_4$ is the most stable structure on the divacancy center in graphene, even though there is no indication of Fe$_4$ being especially stable in free space.
As seen from \[fig:BeMu\], all the systems studied are effectively magnetic in nature. Our analysis of local magnetic moments has revealed that in all cases none of the carbon atoms carries any significant local moment: almost all the contribution to the total magnetic moment comes from the iron atoms. Unlike for the isolated clusters, it is inappropriate to quote a magnetic moment per atom, as the atoms in the cluster do not have exclusively iron neighbours.
![Local magnetic moment on individual iron atoms for six clusters.[]{data-label="fig:local"}](figure3.eps){width="8cm"}
From \[fig:BeMu\] it is clear that, up to the Fe$_4$ cluster, the magnetic moment increases with the number of atoms, as the atoms are ferromagnetically coupled to each other. However, as the cluster size increases further, the hybridization of Fe $d$ orbitals inside the cluster results in a lowering of the total magnetic moment. To ascertain this, we also plot the local magnetic moments on the individual atoms. \[fig:local\] shows the local moments for all the clusters. It can be deduced from the figure that as the cluster grows in size, the variation of the local moments becomes non-monotonous (\[fig:BeMu\]). Except for Fe$_5$, all the clusters studied show aligned local moments on the Fe atoms, though they vary in magnitude. In the Fe$_5$ cluster, one of the iron atoms has its moment flipped with respect to the others, so the average moment per iron atom in this cluster is smaller than in the others.
Naturally, in the presence of iron clusters the density of states (DOS) of the graphene lattice undergoes a substantial change. Particularly interesting is the fact that the contributions from the iron atoms occur directly at the Fermi level. \[fig:fe1dos\](a) shows the total DOS in the presence of a single iron atom, while \[fig:fe1dos\](b) shows the local DOS (LDOS) on the iron atom, resolved into its angular momentum components. The single-atom case is the simplest of all the cases studied, but the features seen here are similar for all the clusters; we therefore present only the DOS for a single iron atom.
![ (a) Total DOS and (b) the local DOS of an iron atom on the divacancy. Only the $d$-components of LDOS are shown. The upper and lower parts of the graph depict the spin-up and spin-down components respectively. Iron clearly induces electronic states on the Fermi level which is marked by a vertical line.[]{data-label="fig:fe1dos"}](figure4a.eps "fig:"){width="8cm"} ![ (a) Total DOS and (b) the local DOS of an iron atom on the divacancy. Only the $d$-components of LDOS are shown. The upper and lower parts of the graph depict the spin-up and spin-down components respectively. Iron clearly induces electronic states on the Fermi level which is marked by a vertical line.[]{data-label="fig:fe1dos"}](figure4b.eps "fig:"){width="8cm"}\
[(a) (b)]{}
Our angular-momentum-resolved analysis of the LDOS on the iron atom (\[fig:fe1dos\](b)) shows that the contributions from components other than $d$ are negligible. Also, the characteristics of the electronic structure are very different from those of pure graphene. The presence of midgap states of $p$-orbital character in the presence of a divacancy was discussed before [@coleman; @jafri]; here, these states are mainly of $d$ character, so the transport properties are expected to be different. As seen from the figure, the contribution from the $d$ orbitals is concentrated in an energy interval of about 6 eV below the Fermi energy. More interestingly, the contributions from the spin-up and spin-down channels of the iron atom are substantially different. The contribution near the Fermi level arising from the orbital with $d_{z^{2}-r^{2}}$ character increases with cluster size, maintaining the difference between the spin channels. Such features are of particular interest for spintronics applications, where spin-dependent transport properties are central.
![Charge density isosurface of a state near the Fermi level. The system consists of a single iron atom on which a $d$-like state is clearly seen and $p$-states on nearby carbon atoms are heavily distorted. Isosurface is shown at one tenth of its maximum value. \[fig:fermi\]](figure5.eps){width="8cm"}
Thus it is clear that the iron atoms induce a substantial number of states at the Fermi level, and these are mainly $d$-like. This becomes further evident from \[fig:fermi\], which shows the charge density isosurface for a state close to the Fermi energy for a single iron atom. This feature is general and is present in all the clusters. The state represents a complex of $d$ states on the central iron atom and heavily distorted $p$ orbitals on the neighbouring carbon atoms. Interestingly, only the nearest carbon atoms contribute to the charge density. It is the interaction of these four carbon atoms with the iron cluster which is responsible for the strong binding.
Conclusions
===========
We have systematically investigated the geometries and electronic structures of Fe$_n$ ($n \le 6$) clusters on a graphene sheet with a divacancy using DFT. When a divacancy is created in graphene, the lattice undergoes a structural rearrangement to form a 5-8-5 ringed structure. The presence of an iron cluster, however, prevents the formation of the 5-8-5 ring and maintains the pristine hexagonal structure. The geometries of the iron clusters also undergo significant changes due to the interaction between their $d$ orbitals and the $p$ orbitals of the neighbouring carbon atoms. The individual atoms in an iron cluster possess different magnetic moments, and in general the atom closest to the carbon matrix has the lowest moment; as the cluster size increases, more complex patterns of magnetic moments emerge. The iron clusters in free space have an average moment of $\sim$ 3 $\mu_B$, whereas in the presence of divacant graphene the average moment is reduced to $\sim$ 2.5 $\mu_B$. The iron clusters also contribute importantly to the DOS, as most of the $d$ states of iron lie within $\pm$ 3 eV around the Fermi level. Our spin-polarized calculations also reveal that the contributions of the up and down channels are not identical, making the system a candidate spintronics material.
B.S.P. would like to acknowledge CSIR, Govt. of India (No: 9/137(0458)/2008-EMR-I). B.S. gratefully acknowledges VR/SIDA funded Asian-Swedish Research Link Program, Carl Tryggers Foundation, Göran Gustafssons Foundation and STINT for financial support. We are grateful to HPC2N and UPPMAX under Swedish National Infrastructure for Computing (SNIC) for providing computing facility. Some of the figures are generated by using VMD software [@vmd].
[^1]: Present Address: National Institute of Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Alberta, T6G 2M9, Canada.
---
abstract: 'Reinforcement Learning (RL) technologies are powerful for learning how to interact with environments and have been successfully applied to a variety of important applications. Q-learning is one of the most popular methods in RL; it uses the temporal difference method to update the Q-function and can asymptotically learn the optimal Q-function. Transfer learning aims to utilize knowledge learned from source tasks to help new tasks. For supervised learning, it has been shown that transfer learning has the potential to significantly improve the sample complexity of new tasks. Considering that data collection in RL is both time and cost consuming and that Q-learning converges slowly compared to supervised learning, different kinds of transfer RL algorithms have been designed. However, most of them are heuristic, with no theoretical guarantee of the convergence rate. Therefore, it is important to clearly understand when and how transfer learning will help RL methods and to provide theoretical guarantees for the improvement of the sample complexity. In this paper, we propose to transfer the Q-function learned in the source task as the target of the Q-learning update in the new task when certain safe conditions are satisfied. We call this new transfer Q-learning method *target transfer Q-learning*. The safe conditions are necessary to avoid harm to the new task brought by the transferred target and thus ensure the convergence of the algorithm. We study the convergence rate of target transfer Q-learning. We prove that if the two tasks are similar with respect to their MDPs, the optimal Q-functions of the two tasks are similar, which means the error of the transferred target Q-function in the new task is small. Moreover, the convergence rate analysis shows that target transfer Q-learning converges faster than Q-learning if the error of the transferred target Q-function is smaller than that of the current Q-function in the new task. Based on our theoretical results and the relationship between the Q error and the Bellman error, we design the safe condition as the Bellman error of the transferred target Q-function being less than that of the current Q-function. Our experiments are consistent with our theoretical findings and verify the effectiveness of our proposed target transfer Q-learning method.'
author:
- |
    Yue Wang$^{\dagger}$[^1], Qi Meng$^{\ddagger}$, Wei Cheng$^{\ddagger}$, Yuting Liu$^{\dagger}$, Zhi-Ming Ma$^{\dagger}$$^{\S}$, Tie-Yan Liu$^{\ddagger}$\
$ ^{\dagger} $School of Science, Beijing Jiaotong University, Beijing, China {11271012, ytliu}@bjtu.edu.cn\
$ ^{\ddagger} $Microsoft Research, Beijing, China {meq, wche,Tie-Yan.Liu}@microsoft.com\
$ ^{\S} $ Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China [email protected]\
bibliography:
- 'accrl.bib'
title: ' Target Transfer Q-Learning and Its Convergence Analysis '
---
Introduction
============
Reinforcement Learning (RL) [@sutton1998reinforcement] technologies are powerful for learning how to interact with environments and have been successfully applied to a variety of important applications, such as robotics, computer games and so on [@kober2013reinforcement; @mnih2015human; @silver2016mastering; @bahdanau2016actor].
Q-learning [@watkins1989learning] is one of the most popular RL algorithms; it uses the temporal difference method to update the Q-function. To be specific, Q-learning maps the current Q-function to a new Q-function using the Bellman operator and uses the difference between these two Q-functions to update the Q-function. Since the Bellman operator is a contraction mapping, Q-learning converges to the optimal Q-function [@jaakkola1994convergence]. Compared to supervised learning algorithms, Q-learning converges much more slowly due to the interactions with the environment. At the same time, data collection is both time and cost consuming in RL. Thus, it is crucial to utilize all available information to reduce the sample complexity of Q-learning.
Transfer learning aims to improve the learning performance on a new task by utilizing knowledge or models learned from source tasks. Transfer learning has a long history in supervised learning [@li2009transfer; @pan2010survey; @oquab2014learning]. Recently, by leveraging experience from supervised transfer learning, researchers have developed different kinds of transfer learning methods for RL, which can be categorized into three classes: (1) *instance transfer*, in which old data is reused in the new task [@sunmola2006model; @zhan2015online]; (2) *representation transfer*, such as reward shaping and basis function extraction [@konidaris2006autonomous; @Barreto2017SuccessorFF]; (3) *parameter transfer* [@song2016measuring], in which the parameters of the source task are partially merged into the model of the new task. While supervised learning is a pure optimization problem, reinforcement learning is a more complex control problem. To the best of our knowledge, most of the existing transfer reinforcement learning algorithms are heuristic, with no theoretical guarantee of the convergence rate [@bone2008survey; @Taylor2009TransferLF; @lazaric2012transfer]. As mentioned by [@spector2018sample], transfer learning methods may fail to work or even harm the new task, and in the absence of theory we do not know why. Therefore, it is very important to clearly understand when and how transfer learning helps reinforcement learning save sample complexity.
In this paper, we design a novel transfer learning method for Q-learning in RL with a theoretical guarantee. Different from the existing transfer RL algorithms, we propose to transfer the Q-function learned in the source task as the temporal difference update target of the new task when certain safe conditions are satisfied. We call this new transfer Q-learning method *target transfer Q-learning*. The intuitive motivation is that when the two RL tasks are similar to each other, their optimal Q-functions will be similar, which means the transferred target is better (its error is smaller than that of the current Q-function). Combined with the fact that a better target Q-function in Q-learning helps to accelerate convergence, we may expect the *target transfer Q-learning* method to outperform Q-learning. The safe conditions are necessary to avoid harm to the new task and thus ensure the convergence of the algorithm. We prove that target transfer Q-learning has a theoretical guarantee on the convergence rate. Furthermore, if the two MDPs, and thus the optimal Q-functions, of the source and new RL tasks are similar, target transfer Q-learning converges faster than Q-learning. To be specific, we prove that the error of target transfer Q-learning consists of two parts: the initialization error and the sampling error. Both errors increase with the product of the discount factor $\gamma$ and the *relative Q-function error ratio* $\beta$ (*error ratio* for simplicity), which measures the relative error of the target Q-function compared with the current Q-function in the new task. We call $\gamma \beta$ the discounted relative Q-function error ratio (*discounted error ratio* for simplicity). The smaller the discounted error ratio is, the faster the convergence is; and if the discounted error ratio is larger than 1, convergence is no longer guaranteed.
If the two RL tasks are similar, the learned Q-function from the source task will be closer to the optimal Q-function than the current Q-function in the new task. Thus, the discounted error ratio $\gamma \beta$ will be small (especially in the early stage) when we transfer the learned Q-function from the source task as the target of the new task. Please note that traditional Q-learning is a special case of target transfer Q-learning with constant discounted error ratio $\gamma$. Therefore, our convergence analysis for target transfer Q-learning helps us design the safe condition: we transfer the target only if doing so leads to a discounted error ratio $\gamma \beta$ smaller than $1$. We call this the *error ratio* safe condition. Specifically, in the early stage of training, the Q-function in the new task is not fully trained, and the learned Q-function from the source task is a better choice with a smaller error ratio. As the Q-function in the new task is updated, the error ratio of the transferred target becomes larger. When the discounted error ratio approaches or exceeds $1$, the safe condition is no longer satisfied, and we stop transferring the target to avoid the harm brought by transfer learning. Following the standard practice in Q-learning, we estimate the error of a Q-function w.r.t. the optimal Q-function by its Bellman error.
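For a small tabular MDP, the Bellman errors used in this safe condition can be compared directly. The sketch below is our own illustrative notation (the function names and the flat `(S, A)`/`(S, A, S)` array layout are assumptions for the example, not code from the paper); it computes the max-norm optimal-Bellman error $\Vert T^*Q - Q \Vert_\infty$ and uses it to decide whether transferring the target is safe:

```python
import numpy as np

def bellman_error(Q, P, R, gamma):
    """Max-norm optimal-Bellman error ||T*Q - Q||_inf for a tabular MDP.

    Q: (S, A) Q-function, P: (S, A, S) transition probabilities,
    R: (S, A) reward matrix, gamma: discount factor.
    """
    v = Q.max(axis=1)          # V(s') = max_a' Q(s', a')
    TQ = R + gamma * (P @ v)   # (T*Q)(s,a) = R(s,a) + gamma * sum_s' P(s,a,s') V(s')
    return np.abs(TQ - Q).max()

def safe_to_transfer(Q_new, Q_source, P, R, gamma):
    """Safe-condition sketch: keep the transferred target only while the
    source Q-function has a smaller Bellman error (in the new-task MDP)
    than the current Q-function of the new task."""
    return bellman_error(Q_source, P, R, gamma) < bellman_error(Q_new, P, R, gamma)
```

Note that both Bellman errors are evaluated in the *new* task's MDP $(P, R, \gamma)$, which is the quantity the error ratio condition compares.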
Our experiments on synthetic MDPs fully support our convergence analysis and verify the effectiveness of our proposed target transfer Q-learning with the error ratio safe condition.
Related Work
============
This section briefly outlines related work on transfer learning in reinforcement learning.
Transfer learning in RL [@Taylor2009TransferLF; @lazaric2012transfer] aims to improve learning in a new MDP task by borrowing knowledge from a related but different, previously learned MDP task. In [@Laroche2017TransferRL], the authors propose instance transfer in the Transfer Reinforcement Learning with Shared Dynamics (TRLSD) setting, in which only the reward function differs between MDPs. In [@gupta2017learning], the authors propose representation transfer and learn an invariant feature space. The papers [@Karimpanal2018SelfOrganizingMA; @song2016measuring] propose parameter transfer to guide exploration or to initialize the Q-function of the new task directly. In [@al2017continuous], the authors propose a meta-learning method for transfer learning in RL. All these works are evaluated empirically, with no theoretical analysis of the convergence rate. Only a few works provide convergence analysis: in [@Barreto2017SuccessorFF], the authors use representation transfer but consider only the TRLSD setting, and [@zhan2015online] proposes an instance-transfer method with a theoretical analysis of asymptotic convergence but no finite-sample performance guarantee.
Q Learning Background
=====================
Consider a reinforcement learning problem with Markov decision process (MDP) $M \triangleq (\mathcal{S},\mathcal{A},P,r,\gamma) $, where $ \mathcal{S} $ is the state space, $ \mathcal{A} $ is the action space, $ P= \{P_{s,s'}^a; s,s'\in \mathcal{S}, a\in\mathcal{A}\} $ is the transition matrix, with $ P_{s,s'}^a $ the transition probability from state $ s $ to state $ s' $ after taking action $ a $; $ r=\{r(s,a); s\in \mathcal{S},a\in\mathcal{A}\}$ is the reward function, with $r(s,a)$ the reward received at state $s$ after taking action $a$; and $ 0<\gamma<1 $ is the discount factor. A policy $ \pi: \mathcal{A}\times\mathcal{S}\to [0,1]$ gives the probability of taking each action at each state. The value function for policy $ \pi $ is defined as $ V^{\pi}(s)\triangleq E\left[ \sum_{t=0}^{\infty}\gamma^t r(s_t,a_t)|s_0 = s,\pi \right] $. The action value function for policy $ \pi $, also called the Q-function, is defined as: $$Q^{\pi}(s,a)\triangleq E\left[ \sum_{t=0}^{\infty}\gamma^t r(s_t,a_t)|s_0 = s,a_0 =a,\pi \right].$$ Without loss of generality, we assume that all rewards lie between 0 and 1. The optimal policy is denoted $ \pi^* $, with value function $ V^*_M(s) $ and Q-function $ Q^*_M(s,a) $.
As is well known, the Q-function in RL satisfies the following Bellman equation: $$Q^\pi(s,a) = r(s ,a )+\gamma{\mathop{\mathbb{E}}}_{\substack{\tilde{a}\sim \pi(a|s)\\s'\sim P(s'|s,a)}}\left[ Q^\pi(s', \tilde{a}) \right].$$ Denoting the right-hand side (RHS) of the equation as $ T^\pi Q^\pi(s,a) $, the operator $ T^\pi $ is called the Bellman operator for policy $\pi$. Similarly, consider the optimal Bellman equation: $$Q^*(s,a) = r(s,a) + \gamma{\mathop{\mathbb{E}}}_{\substack{s'\sim P(s'|s,a)}}\left[ \max_{\tilde{a}} Q^*(s', \tilde{a}) \right].$$ The RHS of this equation is denoted $ T^* Q^*(s,a) $, and $ T^* $ is called the optimal Bellman operator. It can be proved that the optimal Bellman operator is a contraction mapping on the space of Q-functions, so by the contraction mapping theorem it has a unique fixed point, which is the optimal Q-function. The Q-learning algorithm is designed based on this theory. Watkins introduced the Q-learning algorithm to estimate the value of state-action pairs in discounted MDPs [@watkins1989learning]: $$\begin{aligned}
&Q_{t+1}(s,a)\\
\small& = (1-\alpha_t)Q_t(s,a) + \alpha_t \left(r_t(s,a) + \gamma \max_{\tilde{a}}Q_t(s' ,\tilde{a}) \right)
\end{aligned}$$ We introduce the max norm error to measure the quality of a Q-function: $${ \mathbf{MNE}}(Q ) = \max_{s,a}\vert Q(s,a) - Q^*(s,a) \vert .$$
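As a concrete tabular illustration of these definitions (our own sketch, not code from the paper; the MDP here is randomly generated, and we use the expected backup over $s'$ for clarity), synchronous Q-learning and the max norm error can be written as:

```python
import numpy as np

def optimal_q(P, r, gamma, iters=2000):
    """Q* by value iteration; P has shape (S, A, S), r has shape (S, A)."""
    Q = np.zeros(r.shape)
    for _ in range(iters):
        Q = r + gamma * P @ Q.max(axis=1)   # optimal Bellman backup T*Q
    return Q

def max_norm_error(Q, Q_star):
    """MNE(Q) = max_{s,a} |Q(s,a) - Q*(s,a)|."""
    return np.abs(Q - Q_star).max()

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.random((S, A, S)); P /= P.sum(axis=-1, keepdims=True)
r = rng.random((S, A))
Q_star = optimal_q(P, r, gamma)

# Synchronous Q-learning with learning rate alpha_t = 1/(t+1).
Q = np.zeros((S, A))
for t in range(5000):
    alpha = 1.0 / (t + 1)
    Q = (1 - alpha) * Q + alpha * (r + gamma * P @ Q.max(axis=1))
err = max_norm_error(Q, Q_star)   # shrinks as t grows
```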
Target Transfer Q-Learning
===========================
First, we formalize the transfer learning in RL problem. Second, we propose our new transfer Q-learning method, Target Transfer Q-Learning (TTQL), and introduce the intuition behind it.
Transfer Learning in RL [@Taylor2009TransferLF; @lazaric2012transfer] aims to improve learning in new MDP tasks by borrowing knowledge from related but different, previously learned MDP tasks.
According to the definition of MDPs, $M \triangleq (\mathcal{S},\mathcal{A},P,r,\gamma) $, we consider the situation in which two MDPs differ in transition probability $ P $, reward function $ r $, and discount factor $ \gamma $. Assume there are two MDPs: a source MDP $ M_1 = (\mathcal{S},\mathcal{A},P_1,r_1,\gamma_1) $ and a new MDP $ M_2 = (\mathcal{S},\mathcal{A},P_2,r_2,\gamma_2) $, with corresponding optimal Q-functions $ Q^*_1 $ and $ Q^*_2 $. Let $ M_1 $ be the source domain, for which we have already learned $ Q^*_1 $. The goal of transfer in RL considered in this work is to use the information of $ M_1 $ and $ Q^*_1 $ to improve the learning speed in $ M_2 $.
To solve the problem above, we propose the TTQL method. TTQL uses the Q-function learned from the source task as the target Q-function in the new task when a safe condition is satisfied. The safe condition ensures that the transferred target is used only if it can help to accelerate training; otherwise we replace it with the current Q-function in the new MDP's learning progress. We describe TTQL in Algorithm 1.
\[alg\]
**Input:** initial Q-function $ Q_1 $, learned source-task Q-function $ Q^*_{source} $, total steps $ n $, learning rate $ \alpha_t = \frac{1}{t+1} $.
**For** $ t = 1, \ldots, n $:
  flag = `safe-condition`($Q^*_{source} , Q_{t}(\cdot,\cdot) $)
  **If** flag: $ Q_{target} = Q^*_{source} $; **else**: $ Q_{target} = Q_{t} $
  $ Q_{t+1}(s,a) = ( 1 - \alpha_t)Q_{t}(s,a) + \alpha_t \left(r(s,a) + \gamma \max_{\tilde{a}} Q_{target} (s',\tilde{a})\right) $
**Return** $ Q_{n+1} $
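A minimal synchronous sketch of this loop in Python (our own code, not the paper's; `safe_condition` is a caller-supplied predicate, and the update uses the expected backup over $s'$ for clarity) might be:

```python
import numpy as np

def ttql(r, P, gamma, q_source, n_steps, safe_condition):
    """Synchronous Target Transfer Q-Learning sketch.
    r: (S, A) rewards, P: (S, A, S) transitions, q_source: (S, A) source Q*."""
    S, A = r.shape
    Q = np.zeros((S, A))
    for t in range(1, n_steps + 1):
        alpha = 1.0 / (t + 1)
        # Use the transferred target only while the safe condition holds.
        Q_target = q_source if safe_condition(q_source, Q) else Q
        Q = (1 - alpha) * Q + alpha * (r + gamma * P @ Q_target.max(axis=1))
    return Q
```

With a perfect source Q-function, transfer should beat plain Q-learning (which corresponds to a safe condition that always fails) over the same number of steps.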
The intuitive motivation is that when the two RL tasks are similar to each other, their optimal Q-functions will be similar. Thus the transferred target is better (its error is smaller than that of the current Q-function), and a better target helps to accelerate convergence.
We define the distance between two MDPs as $ \Delta(M_1 , M_2) $: $$\Delta(M_1 , M_2) = \max_{s,a}|Q_1^*(s,a) - Q_2^*(s,a)|.$$ The following Proposition \[propq\*diff\] relates the distance between two MDPs to the components of the two MDPs.
\[propq\*diff\] Assume two MDPs, $ M_1 = (\mathcal{S},\mathcal{A},P_1,r_1,\gamma_1) $ and $ M_2 = (\mathcal{S},\mathcal{A},P_2,r_2,\gamma_2) $, Let the corresponding optimal Q-functions be $ Q^*_1 $ and $ Q^*_2 $, then we have $$\begin{aligned}
&\Delta(M_1 , M_2) = \Vert Q_1^* -Q_2^* \Vert_\infty \le \tilde{\Delta}(M_1 , M_2) {\addtocounter{equation}{1}\tag{\theequation}}\\
& \triangleq \frac{\Vert r_1 - r_2 \Vert_\infty}{1 - \gamma'} + \frac{\gamma''\Vert r' \Vert_\infty}{(1 - \gamma'')^2} \Vert P_1 - P_2 \Vert_\infty +\frac{\vert \gamma_1 - \gamma_2 \vert }{(1 - \gamma_1)(1-\gamma_2)}\Vert r'' \Vert_\infty.\\
\end{aligned}$$ for all $ (\gamma', \gamma'', r', r'') \in \Omega $, where $ \Omega $ is the set of available combinations of $ (\gamma_1 , \gamma_2 , r_1 , r_2) $.
Without loss of generality, we assume $ \gamma_1 \le \gamma_2 $, $ \Vert r_2 \Vert_\infty \le \Vert r_1 \Vert_\infty $, we will show that other cases can be proved similarly. We define the following auxiliary MDPs: $ \hat{M_3} = (\mathcal{S},\mathcal{A},P_1,r_2,\gamma_1) $, $ \hat{M_4} = (\mathcal{S},\mathcal{A},P_2,r_2,\gamma_1) $, and let the corresponding optimal Q-functions be $ Q^*_3 $ and $ Q^*_4 $. We have $$\begin{aligned}
&\Vert Q_1^* -Q_2^* \Vert_\infty {\addtocounter{equation}{1}\tag{\theequation}}\\
& = \Vert Q_1^* - Q^*_3 + Q^*_3 - Q^*_4 + Q^*_4 - Q_2^* \Vert_\infty \\
&\le \Vert Q_1^* - Q^*_3 \Vert_\infty + \Vert Q^*_3 - Q^*_4\Vert_\infty + \Vert Q^*_4 - Q_2^* \Vert_\infty
\end{aligned}$$ Notice that in each term, the two MDPs differ in only one component. Using the results of [@csaji2008value], we have $ \Vert Q_1^* - Q^*_3 \Vert_\infty \le \frac{\Vert r_1 - r_2 \Vert_\infty}{1 - \gamma_1} $, $ \Vert Q^*_3 - Q^*_4\Vert_\infty \le \frac{\gamma_1\Vert r_2 \Vert_\infty}{(1 - \gamma_1)^2} \Vert P_1 - P_2 \Vert_\infty $, and $\Vert Q^*_4 - Q_2^* \Vert_\infty \le \frac{\vert \gamma_1 - \gamma_2 \vert }{(1 - \gamma_1)(1-\gamma_2)}\Vert r_2 \Vert_\infty $. Combining the above upper bounds and setting $ \gamma'=\gamma_1, \gamma''=\gamma_1, r'=r_2, r''=r_2 $, we obtain inequality (1).
In the other situations, we can construct auxiliary MDPs as above and use a similar procedure. After traversing all available combinations of $ (\gamma_1 , \gamma_2 , r_1 , r_2) $, we prove Proposition \[propq\*diff\].
By Proposition \[propq\*diff\], we conclude that if two RL tasks are similar, in the sense that the components of the two MDPs are similar, the Q-function learned in the source task will be close to the optimal Q-function in the new task.
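As a quick numerical sanity check (our own illustration, not from the paper), one can compare the exact distance $\Delta(M_1,M_2)$ with a conservative instantiation of the upper bound $\tilde\Delta(M_1,M_2)$ on random MDPs:

```python
import numpy as np

def optimal_q(P, r, gamma, iters=4000):
    Q = np.zeros(r.shape)
    for _ in range(iters):
        Q = r + gamma * P @ Q.max(axis=1)
    return Q

rng = np.random.default_rng(0)
S, A = 6, 3
P1 = rng.random((S, A, S)); P1 /= P1.sum(-1, keepdims=True)
P2 = rng.random((S, A, S)); P2 /= P2.sum(-1, keepdims=True)
r1, r2 = rng.random((S, A)), rng.random((S, A))
g1, g2 = 0.8, 0.85

delta = np.abs(optimal_q(P1, r1, g1) - optimal_q(P2, r2, g2)).max()

# Conservative instantiation: take the larger discount and the larger reward
# scale, so the value dominates every admissible combination in Omega.
g, r_max = max(g1, g2), max(r1.max(), r2.max())
p_diff = np.abs(P1 - P2).sum(axis=-1).max()   # inf-norm over (s, a) rows
bound = (np.abs(r1 - r2).max() / (1 - g)
         + g * r_max / (1 - g) ** 2 * p_diff
         + abs(g1 - g2) / ((1 - g1) * (1 - g2)) * r_max)
assert delta <= bound
```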
A natural question is when transferring the target comes with a performance guarantee. Here, we need safe conditions, which are necessary to avoid harming the new task and thus to ensure convergence of the algorithm. Heuristically, the safe condition should be related to the distance between the two MDPs and the quality of the current Q-function. The concrete form of the safe condition needs further investigation through quantified theoretical analysis, and we present these results in the following section.
Convergence Rate of TTQL
========================
In this section, we present the convergence rate of Target Transfer Q-Learning (TTQL) and discuss the key factors that influence convergence. Theorem \[q’\] analyzes the convergence of target transfer Q-learning. Theorems \[thmsumw2\] and \[alpha\] analyze two key factors of the convergence rate. Theorem \[thmkey\] combines them to give the overall convergence rate of TTQL.
First of all, Theorem \[q’\] analyzes the convergence rate of the target transfer update $$Q_{t+1}(s,a) = ( 1 - \frac{1}{n})Q_{t}(s,a) + \frac{1}{n} \left(r(s,a) + \gamma \max_{\tilde{a}} Q_{target} (s',\tilde{a})\right)$$
For simplicity, we denote $ E_n = { \mathbf{MNE}}(Q_n) $. We denote the error ratio $ \beta_n = \frac{{ \mathbf{MNE}}(Q_{target})}{E_n} $, and write $ \beta $ when we do not specify the learning step $ n $.
\[q’\]
We denote $w_k( \beta _{n-k:n} ) = \frac{\prod_{i=n-k}^{n-1} (i+\gamma\beta_i)}{\prod_{i=n-k}^{n } i}$ and $ \alpha_n = \frac{\prod_{i=1}^{n-1} (i+\gamma\beta_i )}{\prod_{i=2}^{n } i} $. If $0\le\beta_n \le 1$, then with probability $ 1-\delta $ we have $$\begin{aligned}
E_n \le \underbrace{\alpha_n E_1}_{\text{initialization error}} + \underbrace{ \sqrt{\frac{\ln1/\delta\sum_{k=0}^{n-1}w_k^2(\beta_{n-k:n}) }{2 }}}_{\text{sampling error}}.
\end{aligned}$$
Before showing the proof of Theorem \[q’\], we first introduce a modified Hoeffding inequality lemma, which bounds the distance between a weighted sum of bounded random variables and its expectation.
\[lemhoeff\] Let $ x_1,\ldots,x_n $ be independent random variables with $ a< x_i <b$ almost surely, and let $S_n = \sum_{i=1}^{n}w_ix_i $. Then with probability at least $ 1-\delta $ we have $$\begin{aligned}
S_n - E[S_n] \le \sqrt{\frac{1}{2} \log\frac{1}{\delta}\sum_{k=1}^{n}w_k^2(b-a)^2}. \label{invershoeff}
\end{aligned}$$
We first prove the inequality $ \mathbb{P}\left( S_n - E[S_n] \ge \epsilon \right) \le \exp\left( -\frac{2\epsilon^2}{\sum_{k=1}^{n}w_k^2(b-a)^2} \right) \label{hoeff} $.
For $ s,\epsilon \ge 0 $, Markov’s inequality and the independence of $ x_i $ implies $$\begin{aligned}
& \mathbb{P}\left(S_{n}-\mathrm{E}\left [S_n \right ]\geq \epsilon \right)\\
&= \mathbb{P} \left (e^{s(S_n-\mathrm{E}\left [S_n \right ])} \geq e^{s\epsilon} \right)\\
&\leq e^{-s\epsilon} \mathrm{E} \left [e^{s(S_{n}-\mathrm{E}\left [S_n \right ])} \right ]\\
&=e^{-s\epsilon} \mathrm{E} \left [e^{s( \sum_{i=1}^{n}w_ix_i-\mathrm{E}\left [ \sum_{i=1}^{n}w_ix_i \right ])} \right ]\\
&= e^{-s\epsilon} \prod_{i=1}^{n}\mathrm{E} \left [e^{sw_i(x_i-\mathrm{E}\left [x_{i}\right])} \right ]\\
&\leq e^{-s\epsilon} \prod_{i=1}^{n} e^{\frac{s^2 w_i^2(b -a )^2}{8} } \\
&= \exp\left(-s\epsilon+\tfrac{1}{8} s^2 (b -a)^{2}\sum_{i=1}^{n}w_i^2\right).
\end{aligned}$$ Now we consider the minimum of the right hand side of the last inequality as a function of $ s $, and denote $$g(s)=-s\epsilon+\tfrac{1}{8} s^2 (b -a)^{2}\sum_{i=1}^{n}w_i^2$$ Note that g is a quadratic function and achieves its minimum at $ s=\frac{4\epsilon}{(b-a)^2\sum_{i=1}^{n}w_i^2} $, Thus we get $$\begin{aligned}
\mathbb{P}\left(S_{n}-\mathrm{E}\left [S_n \right ]\geq \epsilon \right) \le exp\left( -\frac{2\epsilon^2}{\sum_{k=1}^{n}w_k^2(b-a)^2} \right)
\end{aligned}$$ We obtain the second part of Lemma \[lemhoeff\] by inverting this inequality, i.e., setting the right-hand side equal to $ \delta $ and solving for $ \epsilon $.
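A small Monte-Carlo check of this weighted bound (our own illustration; the weights here are arbitrary) can be run as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, delta = 50, 20000, 0.05
w = 1.0 / np.arange(1, n + 1)          # arbitrary weights w_i
x = rng.random((trials, n))            # i.i.d. bounded in (0, 1), so b - a = 1
S = x @ w
deviation = S - w.sum() * 0.5          # E[S_n] = sum_i w_i * E[x_i]
bound = np.sqrt(0.5 * np.log(1 / delta) * (w ** 2).sum())
# The bound should be exceeded with frequency at most delta.
assert (deviation > bound).mean() <= delta
```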
Our analysis is derived under the following synchronous generalized Q-learning setting. Compared with traditional synchronous Q-learning [^2], we replace the target Q-function with an independent Q-function $ Q'(s,a) $ rather than the current one $ Q_n(s,a) $. $$\begin{aligned}
&\forall s,a~:~ Q_0(s,a)=q(s,a)\\
&\forall s,a~:~ Q_n(s,a) = \\
&~~~~~~ ( \frac{n-1}{n})Q_{n-1}(s,a) + \frac{1}{n} \left(r(s,a) + \gamma \max_{\tilde{a}} Q'_{n-1} (s',\tilde{a})\right) {\addtocounter{equation}{1}\tag{\theequation}}\end{aligned}$$ Let $ Q_n'(s,a) $ satisfy the following condition: $$\begin{aligned}
0 \le \frac{ \max_{s,a}\left(Q_n'(s,a) - Q^*(s,a) \right)}{\max_{s,a}\left(Q_{n}(s,a) - Q^*(s,a) \right)} \le 1 \label{proofbeta}
\end{aligned}$$ Note that if we set $ Q'_n(s,a) = Q^*_{source} $, we can verify that $0 \le \beta_n \le 1 $ according to inequality \[proofbeta\]. First, we decompose the update rule: $$\begin{aligned}
& Q_n(s,a) \\
& =\frac{n-1}{n}Q_{n-1}(s,a) + \frac{1}{n}\left[r(s,a) + \gamma\max_{\tilde{a}}Q'_{n-1}(s',\tilde{a})\right] \\
&= \frac{n-1}{n}Q_{n-1}(s,a) + \frac{1}{n}\left[r(s,a) + \gamma\max_{\tilde{a}}Q^*(s',\tilde{a}) \right. \\
& ~~~~~~~~~~~~~~~ \left.+\gamma\max_{\tilde{a}}Q'_{n-1}(s',\tilde{a}) - \gamma\max_{\tilde{a}}Q^*(s',\tilde{a}) \right]
\end{aligned}$$ If we denote $\epsilon_n(s,a) = Q_n(s,a) - Q^*(s,a) $, $ x(s') = \gamma\max_{\tilde{a}}Q^*(s',\tilde{a}) $ and recall the definition of $ \beta_n $ we can have $$\begin{aligned}
& \epsilon_n(s,a) \\
\le & \frac{n-1}{n}\epsilon_{n-1}(s,a)+\frac{1}{n}\left[x(s') - \mathbb{E}_{s'}x(s')\right] + \frac{1}{n}\gamma\beta_n\epsilon_{n-1}(s',\tilde{a}) \\
\le & \frac{n-1}{n}\epsilon_{n-1}(s,a) +\frac{1}{n}\left[x(s') - \mathbb{E}_{s'}x(s')\right] + \frac{1}{n}\gamma\beta_nE_{n-1}
\end{aligned}$$ The last step holds because $ \epsilon_n(s,a) \le E_n$ for all $ s,a $. Taking the maximum over $ (s,a) $ on both sides of the inequality and using the recursion for $ E $, we have $$\begin{aligned}
& E_n \le \frac{n-1+\gamma\beta_n}{n}E_{n-1} + \frac{1}{n}\left[x(s') - \mathbb{E}_{s'}x(s')\right] \\
&\le \frac{\prod\limits_{i=1}^{n-1} (i+\gamma\beta_i)}{\prod\limits_{i=2}^{n } i}E_1 + \sum_{k=1}^{n-1}\frac{\prod\limits_{i=n-k}^{n-1} (i+\gamma\beta_i)}{\prod\limits_{i=n-k}^{n } i}[x(s'_k) - {\mathop{\mathbb{E}}}_{\substack{s'}}x(s')]\\
&= \alpha_n E_1 + \sum_{k=1}^{n-1}w_k(\beta)[x(s'_k) - \mathbb{E}_{s'}x(s')]\\
\end{aligned}$$ According to Lemma \[lemhoeff\](weighted Hoeffding inequality), with probability 1-$ \delta $, we have $$\begin{aligned}
E_n\le&\alpha_nE_1 + \sqrt{\frac{\ln1/\delta\sum_{k=0}^{n-1}w_k^2(\beta_{n-k:n}) }{2 }}
\end{aligned}$$
The convergence result reveals how the error ratio $ \beta $ influences the convergence rate. In short, if we can find a better target Q-function, learning converges much faster.
We can see from Theorem \[q’\] that two key factors influence the convergence rate: the initialization error $\alpha_n E_1 $ and the sampling error $ \sqrt{\frac{\ln1/\delta\sum_{k=0}^{n-1}w_k^2(\beta_{n-k:n}) }{2 }}$. To make this clear, we analyze the orders of these two terms in Theorems \[thmsumw2\] and \[alpha\] respectively.
\[thmsumw2\] Denote $w_k(\beta_{n-k:n}) = \frac{\prod_{i=n-k}^{n-1} (i+\gamma\beta_i )}{\prod_{i=n-k}^{n } i}$, and $ \beta_i \le \beta^* \text{ for } \forall i \le n $, we have $$\begin{aligned}
\sum_{k=0}^{n-1}\left(w_k(\beta_{n-k,n})\right)^2 \le
\left\{
\begin{array}{lr}
\frac{e^{ 2\gamma\beta^* }}{n^{2-2\gamma\beta^*}}\left(\frac{n^{1-2\gamma\beta^*}}{1-2\gamma\beta^*} - \frac{1}{1-2\gamma\beta^*}+1\right), \\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \gamma\beta^*\not=0.5 \\
\frac{(n-2)^{2\gamma\beta^*}}{n^2} e^{2\gamma\beta^*}(1 + \ln(n)),\\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \gamma\beta^*=0.5
\end{array}
\right..
\end{aligned}$$
Based on the results of Theorem 2, we can get the following corollary directly.
The order of $ \sum_{k=0}^{n-1}\left(w_k(\beta_{n-k:n})\right)^2 $ is:\
$ \mathcal{O}(\frac{1}{n}) $, if $ \gamma\beta^* < 0.5 $;\
$ \mathcal{O}(\frac{1}{n^{2-2\gamma\beta^*}}) $, if $ 0.5<\gamma\beta^* <1 $;\
$ \mathcal{O}(\frac{1}{n^{2-2\gamma\beta^*}}\ln(n)) $, if $ \gamma\beta^* = 0.5 $.\
A sufficient condition for $ \lim_{n\to \infty}\sum_{k=0}^{n-1}\left(w_k(\beta^*)\right)^2 = 0 $ is $ \gamma\beta^* <1 $.\
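These orders can be checked numerically (our own sketch; we take a constant $\beta_i=\beta^*$ and compute each $w_k$ exactly in log space):

```python
import numpy as np

def sum_w_squared(n, gamma_beta):
    """sum_{k=0}^{n-1} w_k^2 with w_k = prod_{i=n-k}^{n-1}(i + gamma_beta) / prod_{i=n-k}^{n} i."""
    total = 0.0
    for k in range(n):
        num = np.log(np.arange(n - k, n) + gamma_beta).sum()
        den = np.log(np.arange(n - k, n + 1)).sum()
        total += np.exp(2 * (num - den))
    return total

# For gamma * beta* < 0.5 the sum behaves like O(1/n): doubling n roughly halves it.
s1, s2 = sum_w_squared(1000, 0.3), sum_w_squared(2000, 0.3)
assert s2 < 0.8 * s1
```

With $\gamma\beta^*=0$ every $w_k$ equals $1/n$, so the sum is exactly $1/n$, which gives a simple exactness check.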
Before showing the proof of Theorem 2, we first introduce a Lemma which will be used.
\[sumint\] If $ a < b $, $ \sum_{i=a}^{b}\frac{1}{i}\le \frac{1}{a} + \ln(b) - \ln(a) $.
$$\begin{aligned}
\sum_{i=a}^{b}\frac{1}{i}&\le \frac{1}{a} + \sum_{i=a+1}^{b}\frac{1}{i} \le \frac{1}{a} + \sum_{i=a+1}^{b}\int_{k=i-1}^{i}\frac{1}{k}dk \\
&\le \frac{1}{a} + \int_{k=a}^{b}\frac{1}{k}dk \le \frac{1}{a} + \ln(b) - \ln(a)
\end{aligned}$$
$$\begin{aligned}
&\sum_{k=0}^{n-1}\left(w_k(\beta_{n-k:n})\right)^2 \le \sum_{k=0}^{n-1}\left( \frac{\prod_{i=n-k}^{n-1} (i+\gamma\beta^*)}{\prod_{i=n-k}^{n } i}\right)^2 \\
&\underbrace{=}_{(a)}\sum_{k=0}^{n-1}\exp \left\lbrace 2\left[ \sum_{i=n-k}^{n-1}\ln(i+\gamma\beta^*) -\sum_{i=n-k}^{n}\ln i \right] \right\rbrace \\
&\underbrace{=}_{({b})}\frac{1}{n^2}\sum_{k=0}^{n-1}\exp \left\lbrace 2 \sum_{i=n-k}^{n-1}\left[ \ln(i+\gamma\beta^*) - \ln i \right] \right\rbrace \\
&\underbrace{\le}_{(c)} \frac{1}{n^2}\sum_{k=0}^{n-1}\exp \left\lbrace 2 \sum_{i=n-k}^{n-1}\frac{ \gamma\beta^*}{i} \right\rbrace \\
&\underbrace{\le}_{(d)} \frac{1}{n^2}\sum_{k=0}^{n-1}\exp \left\lbrace 2\gamma\beta^* \left[\ln(n-2) - \ln(n-k) + 1 \right] \right\rbrace \\
&= \frac{(n-2)^{2\gamma\beta^*}}{n^2} e^{2\gamma\beta^*}\sum_{k=0}^{n-1}\frac{1}{(n-k)^{2\gamma\beta^*}} \\
& = \frac{(n-2)^{2\gamma\beta^*}}{n^2} e^{2\gamma\beta^*}\sum_{t=1}^{n }\frac{1}{ t ^{2\gamma\beta^*}} {\addtocounter{equation}{1}\tag{\theequation}}\label{sumw2}
\end{aligned}$$ In (a) we rewrite the product into a sum of logarithms. In (b) we pull the $ \ln n $ term out of the sum so that the index $ i $ runs from $ n-k $ to $ n-1 $. (c) follows from the concavity of the $ \ln $ function. (d) follows from the relation between sums and integrals in Lemma \[sumint\]. The last two lines simply rearrange and simplify the terms.
If $ \gamma\beta^* = 0.5 $, then $ 2\gamma\beta^* =1 $ and, bounding $ \sum_{t=1}^{n} \frac{1}{t} \le 1 + \ln(n) $ in (\[sumw2\]), $$\begin{aligned}
\sum_{k=0}^{n-1}\left(w_k(\beta_{n-k:n})\right)^2 \le \frac{(n-2)^{2\gamma\beta^*}}{n^2} e^{2\gamma\beta^*}(1 + \ln(n))
\end{aligned}$$ If $ \gamma\beta^* \not = 0.5 $, $$\begin{aligned}
\small&\sum_{k=0}^{n-1}\left(w_k(\beta^*)\right)^2 \le \underbrace{\frac{1}{n^{2-2\gamma\beta^*}}}_{(e)} \underbrace{e^{2\gamma\beta^*}}_{(f)}\left(\underbrace{\frac{n^{1-2\gamma\beta^*}}{1-2\gamma\beta^*}}_{(g)} - \underbrace{\frac{1}{1-2\gamma\beta^*}+1}_{(h)}\right) \label{key} \normalsize
\end{aligned}$$ Note that term (f) is a constant.\
If $ \gamma\beta^* < 0.5 $, term (g) dominates the order, and $ \sum_{k=0}^{n-1}\left(w_k(\beta_{n-k:n})\right)^2 $ is $ \mathcal{O}(\frac{1}{n}) $.\
If $ \gamma\beta^* > 0.5 $, term (h) dominates the order, and $ \sum_{k=0}^{n-1}\left(w_k(\beta_{n-k:n})\right)^2 $ is $ \mathcal{O}(\frac{1}{n^{2-2\gamma\beta^*}}) $.\
If $ \gamma\beta^* = 0.5 $, $ \sum_{k=0}^{n-1}\left(w_k(\beta_{n-k:n})\right)^2 $ is $ \mathcal{O}(\frac{1}{n^{2-2\gamma\beta^*}}\ln(n)) $.\
In all cases, (\[key\]) converges to 0 as $ n \to \infty $.
Theorem \[thmsumw2\] thus shows that if $ \gamma\beta^* <1 $, $ \sum_{k=0}^{n-1}w_k^2 $ converges to 0, and the convergence rate is highly dependent on $ \gamma\beta^* $. The next theorem gives an upper bound on the coefficient $ \alpha_n $ of the initialization error.
\[alpha\] Denote $ \alpha_n = \frac{\prod_{i=1}^{n-1} (i+\gamma\beta_i )}{\prod_{i=2}^{n } i} $, and $ \beta_i \le \beta^* \text{ for } \forall i \le n $, we can bound $ \alpha_n $ as: $$\begin{aligned}
\alpha_n \le \frac{(n-1)^{\gamma\beta^*}}{n}(1+\gamma\beta^*)e^{(0.5-\ln2)\gamma\beta^*} = \frac{C^1_{\gamma,\beta^*}}{n^{1-\gamma\beta^*}}.
\end{aligned}$$ where $ C^1_{\gamma,\beta^*} =(1+\gamma\beta^*)e^{(0.5-\ln2)\gamma\beta^*} $ is a constant.
$$\begin{aligned}
&\alpha_n \le \frac{\prod_{i=1}^{n-1} (i+\gamma\beta^* )}{\prod_{i=2}^{n } i} \\
&= \exp\left\{ \sum_{i=1}^{n-1}\ln(i+\gamma\beta^*) - \sum_{i=2}^{n}\ln i \right\}\\
&=(1+\gamma\beta^*)\exp\left\{ \sum_{i=2}^{n-1} \left( \ln(i+\gamma\beta^*) - \ln i \right) - \ln n\right\}\\
& \le (1+\gamma\beta^*)\exp\left\{ \sum_{i=2}^{n-1} \left( \frac{\gamma\beta^*}{i} \right) - \ln n\right\}\\
&\le (1+\gamma\beta^*)\exp\left\{ \gamma\beta^*(0.5 + \ln(n-1) - \ln2 )- \ln n\right\} \\
& \le \frac{(n-1)^{\gamma\beta^*}}{n}(1+\gamma\beta^*)e^{(0.5-\ln2)\gamma\beta^*}
\end{aligned}$$ In the second equation we rewrite the product into a sum of logarithms; the third equation rearranges the terms. The first inequality follows from the concavity of the $ \ln $ function, and the second follows from the relation between sums and integrals (Lemma \[sumint\]).
Theorem \[alpha\] shows that if $ \gamma\beta^* <1 $, then $ \alpha_n $ converges to 0 at rate $ \mathcal{O}(\frac{1}{n^{1-\gamma\beta^*}}) $.
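Similarly, the exact $\alpha_n$ can be compared against the bound of Theorem \[alpha\] (our own numerical check):

```python
import numpy as np

def alpha_n(n, gb):
    """alpha_n = prod_{i=1}^{n-1}(i + gb) / prod_{i=2}^{n} i, computed in log space."""
    i = np.arange(1, n)
    return np.exp(np.log(i + gb).sum() - np.log(np.arange(2, n + 1)).sum())

gb = 0.5   # gamma * beta*
C = (1 + gb) * np.exp((0.5 - np.log(2)) * gb)   # the constant C^1 of Theorem 3
for n in (100, 1000, 10000):
    assert alpha_n(n, gb) <= C / n ** (1 - gb)  # O(n^{gb - 1}) decay
```

With $\gamma\beta^*=0$, $\alpha_n$ is exactly $1/n$, giving a simple exactness check.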
Combining Theorem \[q’\], \[thmsumw2\] and \[alpha\], we have the following Theorem:\
\[thmkey\] The TTQL will converge if we set the **safe condition** as $$\hat{\beta}_n = \frac{\Delta(M_1, M_2)}{E_n} \le 1.$$ And the convergence rate is: $$\begin{aligned}
E_n \le
\left\{
\begin{array}{lr}
\mathcal{O}(\frac{1}{n^{1-\gamma\beta}}E_1 + \sqrt{\frac{1}{n}} ) , &if ~ \gamma\beta < 0.5 \\
\mathcal{O}(\frac{1}{n^{1-\gamma\beta}}E_1 + \frac{1}{n^{1-\gamma\beta}}\sqrt{\ln n}) , & if~ \gamma\beta = 0.5 \\
\mathcal{O}(\frac{1}{n^{1-\gamma\beta}}E_1 + \frac{1}{n^{1-\gamma\beta}}) , &if ~ 0.5 < \gamma\beta < 1\\
\end{array}
\right..
\end{aligned}$$
Note that if the safe condition is satisfied, we set $ Q_{target} = Q_{source}^* $ and $ \beta_n = \hat{\beta}_n \le 1 $; otherwise we set $ Q_{target} = Q_{n} $ and $ \beta_n = 1 $. In both cases $ \gamma\beta_n \le \gamma < 1 $, so the rates above apply.
We would like to make the following discussion:
**(1) The distance between the two MDPs influences the convergence rate.** According to Proposition \[propq\*diff\], if two MDPs have similar components ($ P $, $ r $, $\gamma$), their optimal Q-functions will be close. The discounted error ratio $ \gamma\beta_n $ will then be relatively small, and the convergence rate will be improved.
**(2) Q-learning is a special case.** Note that traditional Q-learning is a special case of target transfer Q-learning with $ Q_{target} = Q_{n-1} $. In this case the error ratio is the constant $\beta_n = 1$, and our results reduce to the previous ones [@szepesvari1998asymptotic]. This shows that if $ \beta < 1 $ in TTQL, then TTQL converges faster than traditional Q-learning.
**(3) TTQL does converge under the safe condition.** As shown in Theorem \[thmkey\], the TTQL method converges, and the convergence rate changes with the discounted error ratio $ \gamma\beta $: a smaller $ \gamma\beta $ leads to faster convergence. Intuitively, a smaller $ \beta $ means that $ Q' $ provides more information about the optimal Q-function. Besides, the discount factor $ \gamma $ can be viewed as the “horizon” of the infinite-horizon MDP: a smaller $ \gamma $ means that the expected long-term return is less influenced by future information and the immediate reward is assigned more weight.
**(4) The safe condition is necessary.** As mentioned above, the safe condition is defined as $ \hat{\beta}_n \le 1 $. If the safe condition is satisfied, we set $ Q_{target} = Q_{source}^* $ and $ \gamma\beta_n = \gamma\hat{\beta}_n \le \gamma < 1 $. If it is not satisfied, we set $ Q_{target} = Q_{n} $ and $ \gamma\beta_n = \gamma < 1 $. So with the safe condition, TTQL converges in every situation. At the beginning of training on the new task, the error of the current Q-function is large, so $ \beta_n = \hat{\beta}_n $ is relatively small and transfer is greatly helpful. The speedup diminishes as the error of the current Q-function becomes smaller. Finally, when $ \beta $ is equal to or larger than one, we must remove the transferred target (which amounts to setting $ \beta=1 $) to avoid the harm brought by transfer.
Discussion for Error Ratio Safe Condition
=========================================
So far we have concluded that TTQL converges, but it needs the safe condition to guarantee convergence. In this section, we discuss the safe condition.
We initially proposed the safe condition as whatever guarantees the convergence of the algorithm; heuristically, it is related to the distance between the two MDPs and the quality of the current value function. By Theorem \[q’\], the safe condition is $ \hat{\beta}_n \le 1 $, which we call the error ratio safe condition. In the transfer learning in RL setting, it means that the distance between the two MDPs needs to be smaller than the error of the current Q-function. In a real algorithm, it is impossible to calculate the error of the current Q-function $ { \mathbf{MNE}}(Q_n) $ and the distance between the two MDPs precisely. However, it is easy to calculate the Bellman error $ { \mathbf{MNBE}}(Q(s,a)) = \max_{s,a} \left\vert Q(s,a) - (r(s,a) + \gamma E_{s'}\max_{\tilde{a}}(Q(s' ,\tilde{a})))\right\vert$. We can prove that these two metrics satisfy $${ \mathbf{MNE}}(Q) \le \frac{{ \mathbf{MNBE}}(Q)}{1-\gamma}.$$ Following the standard practice in Q-learning, we therefore estimate the error of a Q-function with respect to the optimal Q-function, and hence the error ratio, by its Bellman error.
\[sc2\]
**Input:** learned $Q_1^*$, current Q-function $ Q_n $. Set flag = True if the estimated error ratio satisfies $ \hat{\beta}_n \le 1 $, and flag = False otherwise. **Return** flag.
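In the tabular case these quantities are easy to compute; the following sketch (ours, on a random MDP; the error-ratio estimate via Bellman errors is our reading of the condition above) also checks the bound $ { \mathbf{MNE}}(Q) \le { \mathbf{MNBE}}(Q)/(1-\gamma) $:

```python
import numpy as np

def mnbe(Q, r, P, gamma):
    """Max-norm Bellman error: max_{s,a} |Q - (r + gamma * E_{s'} max_a' Q)|."""
    return np.abs(Q - (r + gamma * P @ Q.max(axis=1))).max()

def safe_condition(q_source, q_current, r, P, gamma):
    # Estimate the error ratio via Bellman errors (proxy for MNE(Q_source)/MNE(Q_n)).
    return mnbe(q_source, r, P, gamma) <= mnbe(q_current, r, P, gamma)

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)
r = rng.random((S, A))
Q_star = np.zeros((S, A))
for _ in range(4000):
    Q_star = r + gamma * P @ Q_star.max(axis=1)

Q = Q_star + rng.normal(size=(S, A))       # a perturbed Q-function
mne = np.abs(Q - Q_star).max()
assert mne <= mnbe(Q, r, P, gamma) / (1 - gamma) + 1e-8
```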
\[Proof of the relation between $ { \mathbf{MNE}}$ and $ { \mathbf{MNBE}}$\]
Denote $ \mathcal{B}Q(s,a) = r(s,a) + \gamma\mathbb{E}_{s'}\max_{\tilde{a}}Q(s',\tilde{a}) $ as the optimal Bellman operator, so that $ \mathcal{B}Q^* = Q^* $. $$\begin{aligned}
&{ \mathbf{MNE}}(Q) \\
\le &\Vert Q(s,a) - \mathcal{B}Q(s,a)\Vert_\infty + \Vert \mathcal{B}Q(s,a) - Q^*(s,a)\Vert_\infty \\
\le &{ \mathbf{MNBE}}(Q) +
\Vert \gamma\mathbb{E}_{s'}\max_{\tilde{a}}Q(s',\tilde{a}) - \gamma\mathbb{E}_{s'}\max_{\tilde{a}}Q^*(s',\tilde{a})\Vert \\
\le &{ \mathbf{MNBE}}(Q) + \gamma { \mathbf{MNE}}(Q)
\end{aligned}$$ Rearranging, we conclude that $${ \mathbf{MNE}}(Q) \le \frac{{ \mathbf{MNBE}}(Q)}{1-\gamma} .$$
Experiment
==========
In this section, we report simulation experiments that support our convergence analysis and verify the effectiveness of the proposed target transfer Q-learning with the error ratio safe condition.
We consider a general MDP setting. We construct random MDPs by generating the transition probability $ P(s'|s,a) $, reward function $ r(s,a) $ and discount factor $ \gamma $, fixing the state and action space sizes at 50.
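A random MDP of this kind can be generated as follows (our own sketch; the paper does not give code, and the sampling scheme is an assumption):

```python
import numpy as np

def random_mdp(n_states=50, n_actions=50, gamma=0.9, seed=0):
    """Random tabular MDP: normalized random rows for P, uniform rewards in [0, 1]."""
    rng = np.random.default_rng(seed)
    P = rng.random((n_states, n_actions, n_states))
    P /= P.sum(axis=-1, keepdims=True)     # normalize into transition probabilities
    r = rng.random((n_states, n_actions))
    return P, r, gamma

P, r, gamma = random_mdp()
```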
First, we generate 9 different MDPs ($ M_{11}\sim M_{33} $) as source tasks and then generate the new MDP $ M_0 $. Let $ M_{11},M_{12},M_{13}$ differ from $ M_0 $ in $ \gamma $, with the distance from $ M_{1\cdot} $ to $ M_0 $ increasing as $ M_{11} < M_{12} < M_{13} $. Similarly, MDPs $ M_{21},M_{22},M_{23} $ differ from $ M_0 $ in $ r $, and MDPs $ M_{31},M_{32},M_{33} $ differ from $ M_0 $ in $ P $. Then we run our algorithm to transfer the Q-functions learned on these 9 source MDPs to the new MDP $ M_0 $. The results are shown in Figures 1a, 1b and 1c. Note that the dashed line $ Q $ is the Q-learning algorithm without transfer, and the solid lines with various markers are the TTQL algorithm.
Second, we design three MDPs $ M_{4}, M_{5}, M_{6} $ as source-task MDPs, with increasing distance to the target. Then we use TTQL to transfer the Q-functions learned from them to the new MDP $ M_0 $, with and without the safe condition. The results are shown in Figures 1d, 1e and 1f. Note that $ W-SC $ means the experiment is run with the safe condition and $ WO-SC $ means without it.
We have the following observations. (1) TTQL outperforms Q-learning in all experiments. (2) Running TTQL on more similar MDPs leads to faster convergence. Note that the curves in Figure \[fig3\] are close to each other; this is because the infinity norm of $ P_1 - P_2 $ is small when the scale of $ P $ is small, which is consistent with Proposition \[propq\*diff\]. (3) The safe condition is necessary to ensure the convergence of the algorithm in all situations. All these observations are consistent with our theoretical findings.
Conclusion
==========
In this paper, we proposed a new transfer learning method for RL, *target transfer Q-learning* (TTQL). The method transfers the Q-function learned in the source task to the target of Q-learning in the new task when the safe condition is satisfied. We prove that TTQL converges under the safe condition and that its convergence rate is faster than that of Q-learning when the two MDPs are not far away from each other. The theoretical analysis helps to design the safe condition, which is key to guaranteeing the convergence of TTQL. As far as we know, it is the first transfer learning algorithm in reinforcement learning with a guaranteed convergence rate. In the future, we will apply TTQL to more complex tasks and study the convergence rate of TTQL with complex function approximation such as neural networks.
[^1]: This work was done when the first author was visiting Microsoft Research Asia.
[^2]: It is the same as the commonly used setting or more general([@pmlr-v70-asadi17a], [@even2003learning], [@azar2013speedy] [@haarnoja2017reinforcement]).
---
abstract: 'We propose a simple preconditioning technique that, if incorporated into algorithms for computing functions of triangular matrices, can make them more efficient. Basically, such a technique consists in a similarity transformation that reduces the departure from normality of a triangular matrix, thus decreasing its norm and in general its function condition number. It can easily be extended to non triangular matrices, provided that it is combined with algorithms involving a prior Schur decomposition. Special attention is devoted to particular algorithms like the inverse scaling and squaring to the matrix logarithm and the scaling and squaring to the matrix exponential. The advantages of our proposal are supported by theoretical results and illustrated with numerical experiments.'
author:
- |
\
\
\
---
*keywords*: Triangular matrices, Schur decomposition, matrix functions, preconditioning, condition number, matrix exponential, matrix logarithm, matrix square roots, matrix inverse cosine.
MSC Subject Classification: 15A16, 65F35, 65F60.
Introduction
============
Given a square complex matrix $A \in \mathbb{C}^{n \times n}$ and a scalar valued function $f$ defined on the spectrum of $A$ [@Higham Def.1.1], the notation $f(A)$ means that [*“$f$ is a primary function of $A$”*]{} in the usual sense, as considered in [@Higham] and [@Horn Ch.6]. We refer the reader to those books for background on matrix functions. If a prior Schur decomposition $A=UTU^*$ ($U$ unitary and $T$ upper triangular) has been computed, the problem of computing $f(A)$ is reduced to that of computing $f(T)$, because $f(A)=U\,f(T)\,U^*$.
In this paper, we begin by stating, in Section \[pre\], a preconditioning technique that can be easily incorporated into algorithms for computing functions of triangular matrices. This technique can be used, in particular to:
- Increase the efficiency of algorithms, especially those involving scaling and squaring (e.g., matrix exponential) or inverse scaling and squaring techniques (matrix logarithm and matrix $p$th roots);
- Reduce the norm of both $T$ and $f(T)$, and, in general, of the relative condition number of $f$ at $T$, thus making the problem of approximating $f(T)$ better conditioned.
In Section \[sec-properties\], we prove that the preconditioning technique reduces the norms of the original matrix $T$ and of $f(T)$. Other properties related to Fréchet derivatives and condition numbers are investigated. In Section \[error\], an error analysis of the technique is provided. It will be shown that it is numerically stable. Two experiments regarding the inverse scaling and squaring to the matrix logarithm and an algorithm for computing the inverse cosine of a matrix are presented in Section \[experiments\]. In Section \[conclusions\], some conclusions are drawn.
Unless otherwise stated, $\|.\|$, $\|.\|_F$ and $\|.\|_2$ will denote, respectively, a general subordinate matrix norm, the Frobenius norm and the 2-norm (also known as spectral norm). For a given $A \in \mathbb{C}^{n \times n}$, $|A|$ stands for the matrix of the absolute values of the entries of $A$.
The preconditioning technique {#pre}
=============================
Let $T\in\mathbb{C}^{n\times n}$ be an upper triangular matrix and let $\alpha$ be a positive scalar. Without loss of generality, let us assume throughout the paper that $\alpha>1$. Consider the diagonal matrix $$\label{S}
S=\diag(1,\alpha,\ldots,\alpha^{n-1})$$ and let us denote $\widetilde{T}:=S\,T\,S^{-1}$.
Given a complex valued function $f$ defined on the spectrum of $T$, the following steps describe, at a glance, the proposed preconditioning technique for computing $f(T)$:
1. Choose a suitable scalar $\alpha>1$;
2. Compute $f(\widetilde{T})$ using a certain algorithm;
3. Recover $f(T)=S^{-1}\, f(\widetilde{T})\, S$.
A discussion on the choice of $\alpha$ will be provided at the end of Section \[error\]. Step 3 is based on the fact that primary matrix functions preserve similarity transformations (see [@Higham Thm. 1.13]). Note that the similarity transformation in Step 3 can magnify the absolute error of the computed approximation to $f(\widetilde{T})$, thus resulting in a larger error in the approximation obtained to $f(T)$. This issue will be discussed in Section \[error\], where we can see that such errors are not larger than the ones resulting from the application of the algorithms without preconditioning.
To gain insight into the effects of the left multiplication of $T$ by $S$ and of the right multiplication by $S^{-1}$, we write $$\label{N1N2}
T=D+N_1+\cdots+N_{n-1},$$ where $D$ is a diagonal matrix formed by the diagonal of $T$ and zeros elsewhere, $N_1$ is formed by the first superdiagonal of $T$ and zeros elsewhere and so on, up to $N_{n-1}$. Then $$\widetilde{T}=STS^{-1}=D+ N_1/\alpha+\cdots+ N_{n-1}/\alpha^{n-1},$$ which means that the proposed technique just involves multiplications/divisions of certain entries of the matrix by the positive scalar $\alpha$.
For instance, if $T=[t_{ij}]$ ($i,j=1,2,3$) is a $3\times 3$ upper triangular matrix ($t_{ij}=0$ for $i>j$), we have $$D=\left[\begin{array}{ccc}
t_{11} & 0 & 0\\
0 & t_{22} & 0\\
0 & 0 & t_{33}
\end{array}\right], \quad
N_1=\left[\begin{array}{ccc}
0 & t_{12} & 0\\
0 & 0 & t_{23} \\
0 & 0 &0
\end{array}\right], \quad
N_2=\left[\begin{array}{ccc}
0 & 0 & t_{13}\\
0 & 0 & 0\\
0 & 0 & 0
\end{array}\right].$$ Hence, $$\widetilde{T}=D+ N_1/\alpha+N_2/\alpha^2=
\left[\begin{array}{ccc}
t_{11} & t_{12}/\alpha & t_{13}/\alpha^2\\
0 & t_{22} & t_{23}/\alpha \\
0 & 0 &t_{33}
\end{array}\right].$$
To illustrate the benefits of the proposed preconditioning technique, let us consider the problem of computing the exponential of the matrix $$\label{T}
T=\left[\begin{array}{cc}
1 & 10^6 \\
0 & -1 \\
\end{array}\right]$$ by the classical scaling and squaring method, as described in [@Moler]. Before using Taylor or Padé approximants, $T$ has to be scaled by $2^{k_0}$ so that the condition $\|T\|_2/2^{k_0}<1$ holds. We easily find that the smallest $k_0$ verifying that condition is $k_0=21$, thus making it necessary to carry out at least 21 squarings for evaluating $e^T$. In contrast, if we apply our preconditioning technique with $\alpha=10^{6}$, it is enough to take $k_0=1$, meaning that the computation of $e^T$ involves only one squaring and the multiplication of the $(1,2)$ entry of $\widetilde{F}=e^{\widetilde{T}}$ by $\alpha=10^6$, which is a very inexpensive procedure.
In recent decades, the scaling and squaring method has been subject to significant improvements; see, in particular, [@Mohy09; @Higham05]. The function `expm` of the recent versions of MATLAB implements the method proposed in [@Mohy09], where a sophisticated technique is used to find a suitable number of squarings. Such a technique is based on the magnitude of $\|T^k\|^{1/k}$ instead of $\|T\|$, which may lead to a considerable reduction in the number of squarings. For instance, if we compute $e^T$, where $T$ is the matrix given in (\[T\]), by [@Mohy09 Alg. 5.1], no squaring is required. Note that this does not represent a failure of our proposal, because the preconditioning technique described above can be combined easily with any method for approximating the matrix exponential (or any other primary matrix function), in particular, with the new scaling and squaring method of [@Mohy09]. For instance, the computation of the exponential of the matrix in (\[T1\]) involves $4$ squarings if computed directly by `expm` (which implements [@Mohy09 Alg. 5.1]) and no squarings if preconditioned. The main reason is that for $\alpha>1$, we have $\|\widetilde{T}\|\leq \|T\|$ and consequently, $$\|\widetilde{T}^k\|^{1/k} \leq \|T^k\|^{1/k},$$ for any positive integer $k$.
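The power-norm inequality above can be checked numerically. A small sketch (Python/NumPy; our illustration, not the paper's code) works in the Frobenius norm, for which the inequality follows entrywise from $\widetilde{T}^k = S\,T^k S^{-1}$ since every superdiagonal of $\widetilde{T}^k$ is a scaled-down copy of the corresponding superdiagonal of $T^k$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
T = np.triu(rng.standard_normal((n, n)))
T += np.triu(1e4 * rng.standard_normal((n, n)), 1)  # large nilpotent part
alpha = 1e4
s = alpha ** np.arange(n)
T_tilde = (s[:, None] / s[None, :]) * T             # S T S^{-1}

# Check ||T~^k||^(1/k) <= ||T^k||^(1/k) in the Frobenius norm
ok = all(
    np.linalg.norm(np.linalg.matrix_power(T_tilde, k), 'fro') ** (1.0 / k)
    <= np.linalg.norm(np.linalg.matrix_power(T, k), 'fro') ** (1.0 / k) * (1 + 1e-12)
    for k in range(1, 8)
)
print(ok)
```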
Similarly, our technique can be used to reduce the number of square roots in the inverse scaling and squaring method for the matrix logarithm. The function `logm` implements the method provided in [@Mohy12 Alg. 5.2], where, as for the matrix exponential, the estimation of the number of square roots is based on the magnitude of $\|T^k\|^{1/k}$ instead of $\|T\|$. To illustrate the gain that preconditioning a matrix may bring, let us consider $$\label{T1}
T=\left[\begin{array}{cccc}
\mathtt{3.2346e-001} & \mathtt{3.0000e+004} & \mathtt{3.0000e+004} & \mathtt{3.0000e+004}\\
0 & \mathtt{3.0089e-001} & \mathtt{3.0000e+004} & \mathtt{3.0000e+004}\\
0 & 0 & \mathtt{3.2210e-001} & \mathtt{3.0000e+004}\\
0 & 0 & 0 & \mathtt{3.0744e-001}
\end{array}\right]$$ (see [@Mohy12 p.C163]). Directly computing $\log(T)$ by `logm` involves the computation of $16$ square roots, while a preconditioning of $T$ with $\alpha=\|N\|$, where $N$ denotes the nilpotent part of $T$, requires only $4$ square roots, without any sacrifice in the accuracy of the computed logarithm (see the results for $T_2$ in Table \[tab1\]).
We finish this section by noticing that preconditioning the original matrix may also bring benefits if combined with the MATLAB function `funm`, which implements the Schur-Parlett method of [@Davies] for computing several matrix functions, and with methods for computing special functions [@Cardoso; @Garrappa].
Properties {#sec-properties}
==========
To understand the potential of the proposed preconditioning technique, we show, in this section, that, for any $\alpha>1$, it reduces the Frobenius norms of both $T$ and $f(T)$. We also provide insight into why, in all the examples we have tested, the norm of the Fréchet derivative $L_f(T)$ is reduced as well, thus making the problems better conditioned.
\[property1\] Let us assume that: $T\in\mathbb{C}^{n\times n}$ is an upper triangular matrix, $\alpha>1$, $f$ is a complex valued function defined on the spectrum of $T$, and $S$ is defined as in (\[S\]). Denoting $\widetilde{T}=STS^{-1}$, the following inequalities hold:
1. $\|\widetilde{T}\|_F\leq \|T\|_F$;
2. $\|f(\widetilde{T})\|_F\leq \|f(T)\|_F$.
<!-- -->
1. Let us write $T=D+N_1+\cdots+N_{n-1}$, where $N_j,\ j=1,\ldots,n-1$, are defined as in (\[N1N2\]). Then $$\label{nT2}
\|T\|_F^2=\|D\|_F^2+ \|N_1\|_F^2+\cdots+\|N_{n-1}\|_F^2.$$ Since $$\begin{aligned}
\widetilde{T} &=& STS^{-1}\\
&=& D+SN_1S^{-1}+\cdots+SN_{n-1}S^{-1}\\
&=& D+N_1/\alpha+\cdots+N_{n-1}/\alpha^{n-1},
\end{aligned}$$ we have $$\label{n-tildeT2}
\|\widetilde{T}\|_F^2=\|D\|_F^2+ \frac{1}{\alpha^2}\|N_1\|_F^2+\cdots+\frac{1}{\alpha^{2(n-1)}}\|N_{n-1}\|_F^2.$$ From (\[nT2\]) and (\[n-tildeT2\]), $$\label{T-Ttilde}
\|T\|_F^2=\|\widetilde{T}\|_F^2+ \left(1-\frac{1}{\alpha^2}\right)\|N_1\|_F^2+\cdots+\left(1-\frac{1}{\alpha^{2(n-1)}}\right)\|N_{n-1}\|_F^2.$$ Since $\alpha>1$, the inequality $\|\widetilde{T}\|_F \leq \|T\|_F$ follows.
2. Let us write $f(T)=f(D)+F_1+\cdots+F_{n-1},$ where $F_1$ is formed by the first superdiagonal of $f(T)$ and zeros elsewhere and so on, up to $F_{n-1}$. Then $$\label{nfT2}
\|f(T)\|_F^2=\|f(D)\|_F^2+ \|F_1\|_F^2+\cdots+\|F_{n-1}\|_F^2,$$ and $$\label{n-ftildeT2}
\|f(\widetilde{T})\|_F^2=\|f(D)\|_F^2+ \frac{1}{\alpha^2}\|F_1\|_F^2+\cdots+\frac{1}{\alpha^{2(n-1)}}\|F_{n-1}\|_F^2.$$ From (\[nfT2\]) and (\[n-ftildeT2\]), $$\label{T-fTtilde}
\|f(T)\|_F^2=\|f(\widetilde{T})\|_F^2+ \left(1-\frac{1}{\alpha^2}\right)\|F_1\|_F^2+\cdots+\left(1-\frac{1}{\alpha^{2(n-1)}}\right)\|F_{n-1}\|_F^2.$$ Since $\alpha>1$, the result follows.
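The identity (\[T-Ttilde\]) established in the proof can be verified numerically. The following sketch (Python/NumPy; our illustration, not part of the paper) extracts the superdiagonal parts $N_j$ and checks that $\|T\|_F^2$ equals $\|\widetilde{T}\|_F^2$ plus the weighted sum of the $\|N_j\|_F^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
T = np.triu(rng.standard_normal((n, n)))
alpha = 3.0
s = alpha ** np.arange(n)
T_tilde = (s[:, None] / s[None, :]) * T  # S T S^{-1}

lhs = np.linalg.norm(T, 'fro') ** 2
rhs = np.linalg.norm(T_tilde, 'fro') ** 2
for j in range(1, n):
    # N_j: the j-th superdiagonal of T embedded in an n x n matrix
    Nj = np.diag(np.diag(T, j), j)
    rhs += (1.0 - alpha ** (-2 * j)) * np.linalg.norm(Nj, 'fro') ** 2
print(np.isclose(lhs, rhs))
```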
Using (\[T-Ttilde\]), we can evaluate the difference between the squares of the norms $\|\widetilde{T}\|_F$ and $\|T\|_F$. Likewise, the difference between the squares of $\|f(\widetilde{T})\|_F$ and $\|f(T)\|_F$ can be found from (\[T-fTtilde\]). In particular, if $T$ is nonsingular, we have $\|\widetilde{T}^{-1}\|_F\leq \|T^{-1}\|_F$, which shows that $\kappa(\widetilde{T}) \leq \kappa(T)$, where $\kappa(X)=\|X\|\|X^{-1}\|$ denotes the standard condition number of $X$ with respect to matrix inversion. This issue has also motivated us to investigate the absolute and relative condition numbers of $f$ at $\widetilde{T}$, which are commonly defined through Fréchet derivatives ([@Higham Thm. 3.1]).
Given a map $f:\mathbb{C}^{n\times n}\rightarrow\mathbb{C}^{n\times n}$, the Fréchet derivative of $f$ at $A\in\mathbb{C}^{n\times n}$ in the direction of $E\in\mathbb{C}^{n\times n}$ is a linear operator $L_f(A)$ that maps the “direction matrix” $E$ to $L_f(A,E)$ such that $$\lim_{E\rightarrow 0}\frac{\|f(A+E)-f(A)-L_f(A,E)\|}{\|E\|}=0.$$ The Fréchet derivative of the matrix function $f$ may not exist at $A$, but if it does it is unique and coincides with the directional (or Gâteaux) derivative of $f$ at $A$ in the direction $E$. Hence, the existence of the Fréchet derivative guarantees that, for any $E\in\mathbb{C}^{n\times n}$, $$L_f(A,E)=\lim_{t\rightarrow 0}\frac{f(A+tE)-f(A)}{t}.$$ Any consistent matrix norm $\|.\|$ on $\mathbb{C}^{n\times n}$ induces the operator norm $\ \|L_f(A)\|:=$ $\max_{\|E\|=1}\,\|L_f(A,E)\|.$ Here we use the same notation to denote both the matrix norm and the induced operator norm.
Since $L_f(A,E)$ is linear in $E$, it is often important to consider the following vectorized form of the Fréchet derivative: $$\label{frechet-vec}
\vec\left(L_{f}(A,E)\right)=K_{f}(A)\,\vec(E),$$ where $\vec(.)$ stands for the operator that stacks the columns of a matrix into a long vector of size $n^2\times 1$, and $K_{f}(A)\in \mathbb{C}^{n^2\times n^2}.$
For more information on the Fréchet derivative and its properties see, for instance, [@Bhatia97 Ch. X] and [@Higham Ch. 3].
\[lemma1\] For $i,j=1,\ldots,n$, let $E_{ij}:=e_ie_j^T$, where $e_k$ is the $n\times 1$ vector with $1$ in the $k$-th position and zeros elsewhere. Using the same notation of Proposition \[property1\], the following equality holds: $$\label{equal-frechet}
L_f(\widetilde{T},E_{ij})=\alpha^{j-i}\,S\, L_f(T,E_{ij})\,S^{-1}.$$
Through a simple calculation, we can see that, for any $i,j=1,\ldots,n$, $S^{-1}E_{ij}S=\alpha^{j-i} E_{ij}.$ Now, by the linearity and similarity properties of Fréchet derivatives, one arrives at: $$\begin{aligned}
L_f(\widetilde{T},E_{ij})&=& L_f(STS^{-1},E_{ij})\\
&=& S\, L_f(T,S^{-1}E_{ij}S)\,S^{-1}\\
&=& \alpha^{j-i}\,S\, L_f(T,E_{ij})\,S^{-1}.
\end{aligned}$$
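Equality (\[equal-frechet\]) can be observed numerically for the matrix exponential, whose Fréchet derivative SciPy exposes as `scipy.linalg.expm_frechet`. The sketch below (our illustration; zero-based indices, so the pair below corresponds to $(i,j)=(1,3)$ in the lemma's notation) checks the lemma for one random upper triangular $T$:

```python
import numpy as np
from scipy.linalg import expm_frechet

rng = np.random.default_rng(2)
n = 4
T = np.triu(rng.standard_normal((n, n)))
alpha = 10.0
s = alpha ** np.arange(n)
S = np.diag(s)
Sinv = np.diag(1.0 / s)
T_tilde = S @ T @ Sinv

i, j = 0, 2                      # zero-based, i.e., (i,j) = (1,3) in the lemma
E = np.zeros((n, n))
E[i, j] = 1.0                    # direction matrix E_ij = e_i e_j^T

_, L_tilde = expm_frechet(T_tilde, E)   # L_exp(T~, E_ij)
_, L = expm_frechet(T, E)               # L_exp(T, E_ij)
print(np.allclose(L_tilde, alpha ** (j - i) * S @ L @ Sinv))
```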
\[property2\] Let $A,E\in\mathbb{C}^{n\times n}$ and let us consider the function $g(A,E):=\phi(A)\,E\,\psi(A)$, where $\phi$ and $\psi$ are primary matrix functions, so that $g$ depends on both $A$ and $E$. Assuming that $\alpha$, $S$, $T$ and $\widetilde{T}$ are as in Proposition \[property1\] and that, for any $i,j=1,\ldots,n$, $E_{ij}$ is as in Lemma \[lemma1\], the following inequality holds: $$\label{inequal-frechet}
\|g(\widetilde{T},E_{ij})\|_F\leq \|g(T,E_{ij})\|_F.$$
By virtue of Lemma \[lemma1\], (\[inequal-frechet\]) is trivial if $i\geq j$. Hence, from now on, we assume that $i<j$. Once more, using Lemma \[lemma1\], the properties of primary matrix functions ensure that $$\begin{aligned}
g(\widetilde{T},E_{ij})&=& \phi(\widetilde{T})\,E_{ij}\,\psi(\widetilde{T}) \\
&=& \phi(STS^{-1})\,E_{ij}\,\psi(STS^{-1}) \\
&=& S\phi(T)S^{-1}\,E_{ij}\,S\psi(T)S^{-1} \\
&=& \alpha^{j-i}\,S\phi(T)\,E_{ij}\,\psi(T)S^{-1}.
\end{aligned}$$ Let us denote $G:=g(T,E_{ij})=\phi(T)\,E_{ij}\,\psi(T)$. Since the matrices $\phi(T)$, $E_{ij}$ and $\psi(T)$ are upper triangular (recall that $i<j$), so is $G$. Write $$G=G_0+G_1+\cdots+G_{j-i-1}+G_{j-i}+G_{j-i+1}+\cdots+G_{n-1},$$ where $G_0$ is a diagonal matrix formed by the diagonal of $G$ and zeros elsewhere, $G_1$ is formed by the first superdiagonal of $G$ and zeros elsewhere and so on, up to $G_{n-1}$. After a few calculations, it can be shown that $G_0=\,G_1=\ldots=G_{j-i-1}=0$. More precisely, denoting by $g_{ij}$ the $(i,j)$ entry of $G$, we have $g_{rs}=0$, for any $r>i$ or $s<j$. The remaining entries of $G$ may or may not be zero. Hence, $$\begin{aligned}
g(\widetilde{T},E_{ij})&=& \alpha^{j-i}\,SGS^{-1}\\
&=& \alpha^{j-i}\,S\left(G_{j-i}+G_{j-i+1}+\cdots+G_{n-1}\right)S^{-1}\\
&=& \alpha^{j-i}\,\left(\frac{G_{j-i}}{\alpha^{j-i}}+\frac{G_{j-i+1}}{\alpha^{j-i+1}}+\cdots+\frac{G_{n-1}}{\alpha^{n-1}}\right)\\
&=& \left(G_{j-i}+\frac{G_{j-i+1}}{\alpha}+\cdots+\frac{G_{n-1}}{\alpha^{n-1-j+i}}\right),\end{aligned}$$ showing that (\[inequal-frechet\]) is valid.
The Fréchet derivatives of the most used primary matrix functions are sums or integrals of functions like $g(A,E)$ in Proposition \[property2\]. For instance, the Fréchet derivatives of the matrix exponential and matrix logarithm allow, respectively, the integral representations $$\label{frechet-exp}
L_{\exp}(A,E)=\int_{0}^1 e^{A(1-t)}Ee^{At}\ dt$$ and $$\label{frechet-log}
L_{\log}(A,E)=\int_0^1\, \left(t(A-I)+I\right)^{-1}\,E\, \left(t(A-I)+I\right)^{-1}\ dt$$ (see [@Higham]). More generally, a function that can be represented by a Taylor series expansion $$f(A)=\sum_{k=0}^\infty a_kA^k,$$ has a Fréchet derivative of the form [@Kenney] $$\label{frechet-general}
L_f(A,E)=\sum_{k=0}^\infty a_k \sum_{j=0}^{k-1} A^{j}E A^{k-1-j},$$ which involves sums of functions like $g(A,E)$. Hence, the procedure used in the proof of inequality (\[inequal-frechet\]) can be easily extended to several Fréchet derivatives, including, in particular, (\[frechet-exp\]), (\[frechet-log\]) and (\[frechet-general\]), thus showing that, for those functions, $$\label{inequal-frechet-2}
\|L_f(\widetilde{T},E_{ij})\|_F\leq \|L_f(T,E_{ij})\|_F,$$ for any $i,j=1,\ldots,n$.
A well-known tool for understanding how $f(A)$ changes when $A$ is subjected to first-order perturbations is the absolute condition number of $f$ at $A$, whose value can be evaluated using the Fréchet derivative: $$\label{cond-abs}
\cond\,\hspace*{-0.5ex}_\mathrm{abs}(f,A)=\|L_f(A)\|.$$ The relative condition number of $f$ at $A$ can be evaluated by the formula $$\label{cond-rel}
\cond\,\hspace*{-0.5ex}_\mathrm{rel}(f,A)=\|L_f(A)\|\frac{\|A\|}{\|f(A)\|}$$ (see [@Higham Sec. 3.1]).
Let us recall how to evaluate $\|L_f(A)\|$. Once we know $L_f(A,E_{ij})$, for a given pair $(i,j)$, with $i,j\in\{1,\ldots,n\}$, the equality (\[frechet-vec\]) enables us to find the $((j-1)n+i)$-th column of $K_f(A)$. Repeating the process for all $i,j=1,\ldots,n$, we can find all the entries of $K_f(A)$. Since, with respect to the Frobenius norm, $$\label{equ-norm}
\|L_f(A)\|_F=\|K_f(A)\|_2$$ (see [@Higham Sec. 3.4]), the absolute and relative conditions numbers follow easily.
We now compare the condition numbers corresponding to $\widetilde{T}$ and $T$, by analysing the values of $\|K_f(\widetilde{T})\|_2$ and $\|K_f(T)\|_2$. To simplify the exposition, let us denote $\widetilde{K}:=K_f(\widetilde{T})=[\widetilde{k}_{pq}]$ and $K:=K_f(T)=[k_{pq}]$ ($p,q=1,\ldots,n^2$). In view of (\[inequal-frechet-2\]), the Frobenius norm of the $p$-th column of $\widetilde{K}$ is smaller than or equal to that of the $p$-th column of $K$, for any $p=1,2,\ldots,n^2$. This means that $$\|\widetilde{K}e_p\|_F \leq \|Ke_p\|_F,$$ where $e_p$ denotes the $n^2\times 1$ vector with one in the $p$-th component and zeros elsewhere. Moreover, $\diag(\widetilde{K})=\diag(K)$ and, by applying the $\vec$ operator to both sides of (\[equal-frechet\]), we can observe that $\widetilde{k}_{pq}$ and $k_{pq}$ have the same signs for all $p,q=1,2,\ldots,n^2$. If $\widetilde{K}$ (or $K$) has nonnegative entries, the properties of the spectral norm ensure that $$\label{ineq-positive}
\|\widetilde{K}\|_2 \leq \|K\|_2,$$ and, consequently, $$\label{ineq-abs}
\cond\,\hspace*{-0.5ex}_\mathrm{abs}(f,\widetilde{T}) \leq \cond\,\hspace*{-0.5ex}_\mathrm{abs}(f,T).$$ However, because the spectral norm is not absolute (that is, in general, $\|A\|_2\neq \|\ |A|\ \|_2$; see, for instance, [@Mathias] and [@Horn13 Sec. 5.6]), when $\widetilde{K}$ (or $K$) has both positive and negative entries, it is not fully guaranteed that (\[ineq-abs\]) holds. Nevertheless, we believe that (\[ineq-abs\]) fails only in a few exceptional cases: in all the tests we have carried out, we have not encountered a single example for which it does not hold. The same can be said about the inequality $
\cond\,\hspace*{-0.5ex}_\mathrm{rel}(f,\widetilde{T}) \leq \cond\,\hspace*{-0.5ex}_\mathrm{rel}(f,T),
$ which our tests suggest holds in general.
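The column-by-column construction of $K_f(A)$ described above is easy to program. The sketch below (Python/SciPy, using the exponential, whose Fréchet derivative is available as `scipy.linalg.expm_frechet`; the helper name `cond_abs_exp` is ours) compares the absolute condition numbers of $\exp$ at $T$ and at $\widetilde{T}$ for one random instance. This is consistent with inequality (\[ineq-abs\]), though of course a single test proves nothing in general:

```python
import numpy as np
from scipy.linalg import expm_frechet

def cond_abs_exp(A):
    """Absolute condition number of the matrix exponential in the
    Frobenius norm, via ||L_f(A)||_F = ||K_f(A)||_2."""
    n = A.shape[0]
    K = np.zeros((n * n, n * n))
    for j in range(n):
        for i in range(n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            _, L = expm_frechet(A, E)
            K[:, j * n + i] = L.flatten(order='F')  # vec(L_f(A, E_ij))
    return np.linalg.norm(K, 2)  # spectral norm of K_f(A)

rng = np.random.default_rng(3)
n = 4
T = np.triu(rng.standard_normal((n, n)))
T += np.triu(50.0 * rng.standard_normal((n, n)), 1)
alpha = 50.0
s = alpha ** np.arange(n)
T_tilde = (s[:, None] / s[None, :]) * T

ca_T, ca_Tt = cond_abs_exp(T), cond_abs_exp(T_tilde)
print(ca_Tt <= ca_T)
```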
Error Analysis {#error}
==============
Let us denote $\widetilde{F}=f(\widetilde{T})=[\widetilde{f}_{ij}]$ and $F=f(T)=[f_{ij}]$, where $i,j=1,\ldots,n$. We assume that $\widetilde{X}\approx \widetilde{F}$ is the approximation arising in the computation of $f(\widetilde{T})$ using a certain algorithm and that $X=S^{-1}\widetilde{X}S$ is the approximation to $f(T)$ that results from the multiplication of $\widetilde{X}$ by $S^{-1}$ on the left-hand side and by $S$ on the right-hand side. Recall that $S$ is defined by (\[S\]), where $\alpha>1$. The entries of $\widetilde{X}$ (resp., $X$) are denoted by $\widetilde{x}_{ij}$ (resp., $x_{ij}$).
We will show below that the absolute errors arising in the computation of $\widetilde{X}$ are magnified by the back transformation, but the resulting errors in $X$ are of the same order of magnitude as those coming from applying the algorithms directly. To get more insight, we analyse the componentwise absolute errors.
Let $\widetilde{\mathcal{E}}:=\widetilde{F}-\widetilde{X}=[\widetilde{\varepsilon}_{ij}]$ and $\mathcal{E}:=F-X=[\varepsilon_{ij}].$ We have $$\begin{aligned}
\mathcal{E} &=& F-X\\
&=& S^{-1}\widetilde{F}S- S^{-1}\widetilde{X}S\\
&=& S^{-1}(\widetilde{F}-\widetilde{X})S \\
&=& S^{-1}\widetilde{\mathcal{E}}S \\
&=& \left[
\begin{array}{ccccc}
\widetilde{\varepsilon}_{11}&\alpha\,\widetilde{\varepsilon}_{12}&\alpha^2\, \widetilde{\varepsilon}_{13}&\cdots&\alpha^{n-1}\,\widetilde{\varepsilon}_{1n}\\
0&\widetilde{\varepsilon}_{22}&\alpha\,\widetilde{\varepsilon}_{23}&\ddots&\vdots\\
\vdots&\ddots&\ddots&\ddots&\alpha^2\,\widetilde{\varepsilon}_{n-2,n}\\
&&&&\alpha \,\widetilde{\varepsilon}_{n-1,n}\\
0&\cdots&&0&\widetilde{\varepsilon}_{nn}
\end{array}
\right],\end{aligned}$$ where we can see that some entries of $\widetilde{\mathcal{E}}$ are magnified by powers of $\alpha$, with the errors affecting the top-right entries of $X$ being much larger, especially when $\alpha$ is large. Hence, we may be misled into thinking that the proposed preconditioning technique would introduce large errors in the computation process. It turns out that the errors arising when one uses preconditioning are not larger than the ones occurring in direct methods (that is, methods without preconditioning). To understand why, let us consider the typical situation where a Padé approximant of order $(m,k)$, $r_{mk}$, is used to approximate $f(T)$. From the definition of Padé approximant, we have $$f(z)-r_{mk}(z)=O(z^{m+k+1})$$ (see, for instance, [@Higham Sec. 4.4.2]). Hence, in a matrix scenario, there exist coefficients $c_j\in\mathbb{C}$ such that $$\begin{aligned}
\mathcal{E} &=& f(T)-r_{mk}(T)\\
&=& \sum_{j=m+k+1}^\infty c_jT^j\\
&=& S^{-1}\,\left(\sum_{j=m+k+1}^\infty c_j\widetilde{T}^j\right)\,S \\
&=& S^{-1}\, \widetilde{\mathcal{E}} \,S,\end{aligned}$$ meaning that the truncation error with and without preconditioning is the same (we are not taking into account other types of errors, like roundoff errors). What we have is that for a large $\alpha$, the entries in the top-right of $\widetilde{\mathcal{E}}$ are much smaller than the corresponding ones in $\mathcal{E}$. These observations still remain valid for Taylor approximants (they are particular cases of Padé approximants) and for the Parlett recurrence (the details are not included here).
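Since any polynomial $p$ satisfies $p(T)=S^{-1}p(\widetilde{T})S$ exactly, the truncation error with and without preconditioning coincides up to roundoff. A small check (Python/NumPy/SciPy; the degree-$m$ Taylor approximant stands in for $r_{mk}$, and the helper `exp_taylor` is ours) illustrates this for the exponential:

```python
import numpy as np
from scipy.linalg import expm

def exp_taylor(A, m):
    """Degree-m Taylor approximant of the matrix exponential."""
    P = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, m + 1):
        term = term @ A / k
        P += term
    return P

rng = np.random.default_rng(4)
n = 4
T = 0.5 * np.triu(rng.standard_normal((n, n)))
alpha = 8.0
s = alpha ** np.arange(n)
T_tilde = (s[:, None] / s[None, :]) * T

m = 6
E_direct = expm(T) - exp_taylor(T, m)
# preconditioned: approximate f(T~), then transform back with S^{-1} (.) S
E_precond = expm(T) - (s[None, :] / s[:, None]) * exp_taylor(T_tilde, m)
print(np.allclose(E_direct, E_precond, rtol=1e-6, atol=1e-10))
```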
The theoretical results stated in the previous section require the condition $\alpha>1$ to guarantee a reduction in the norm and a smaller condition number. There is still the question of how to find a suitable $\alpha$. In the case of the matrix exponential, it is convenient to choose $\alpha$ so that $\|\widetilde{T}\|\approx 0$, while for the matrix logarithm an appropriate $\alpha$ must lead to $\|\widetilde{T}\|\approx 1$. This suggests that finding an optimal $\alpha$ for a given function $f$ and a given matrix $T$ may be difficult. Instead, we propose the following heuristic, which works very well in practice.
Assume that $T=D+N$, where $D$ and $N$ are, respectively, the diagonal and nilpotent parts of $T$. According to the experiments that will be shown in Section \[experiments\], significant savings may occur if $\|D\|$ is small when compared with $\|N\|$, that is, if the quotient $\|D\|/\|N\|$ is small. Hence, $\alpha=\|N\|$ appears to be a very reasonable choice for making the preconditioning reliable (see the results above for matrices (\[T\]) and (\[T1\])), in particular for matrices having norm $\|T\|>1$. In addition, provided that $\alpha=\|N\|>1$, inequality (\[ineq-abs\]) shows that algorithms for computing $f(\widetilde{T})$ are less sensitive to errors (including roundoff) than algorithms for $f(T)$.
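The heuristic is one line of code. In the sketch below (Python/NumPy; the clamp to $1$ when $\|N\|_F\leq 1$ is our addition, reflecting the requirement $\alpha>1$, and is not prescribed by the paper), it is applied to the matrix (\[T\]):

```python
import numpy as np

def heuristic_alpha(T):
    """alpha = ||N||_F, with N the nilpotent (strictly upper) part of T.
    Returns 1.0 (i.e., no preconditioning) when ||N||_F <= 1."""
    N = np.triu(T, 1)
    a = np.linalg.norm(N, 'fro')
    return a if a > 1.0 else 1.0

T = np.array([[1.0, 1e6],
              [0.0, -1.0]])
alpha = heuristic_alpha(T)              # here alpha = 1e6
s = alpha ** np.arange(T.shape[0])
T_tilde = (s[:, None] / s[None, :]) * T
print(np.linalg.norm(T_tilde, 'fro') < np.linalg.norm(T, 'fro'))
```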
Another issue that must be taken into account is that the power $\alpha^{n-1}$ may overflow for very large values of $\alpha$ and $n$. In these cases, another strategy for choosing $\alpha$ is recommended. Another possibility is to extend the proposed preconditioning technique to block triangular matrices. This is a topic that needs further research.
Based on the facts above and bearing in mind that the proposed preconditioning technique just involves divisions and multiplications of entries of a matrix by a positive scalar, we can claim that it is a numerically stable procedure provided that $\alpha$ is suitably chosen.
Numerical experiments {#experiments}
=====================
Before presenting the numerical experiments, we give some practical clues on how to implement the proposed preconditioning technique. We assume that an algorithm (let us call it [*Algorithm 1*]{}) for computing $f(T)$, with $T$ triangular, and a suitable $\alpha$ are available. Consider the MATLAB code displayed in Figure \[fig1\]. Our preconditioning technique can be implemented in MATLAB using the following steps:
1. `T1=multiply_by_alpha(T,1/alpha)`;
2. Run Algorithm 1 to compute `F1`, an approximation to $f(\mathtt{T1})$;
3. Recover $F=f(T)$ from `F=multiply_by_alpha(F1,alpha)`.
<!-- -->
function T1 = multiply_by_alpha(T,alpha)
% MULTIPLY_BY_ALPHA  Scale the superdiagonals of an upper triangular T:
% entry (k,s) is multiplied by alpha^(s-k), i.e., T1 = S^(-1)*T*S with
% S = diag(1,alpha,...,alpha^(n-1)). Cost: O(n^2) scalar multiplications.
n = length(T);
T1 = T;
for k = 1:n
    for s = k+1:n
        T1(k,s) = T1(k,s)*alpha^(s-k);
    end
end
end
Most of the effective algorithms for matrix functions involve $O(n^3)$ operations, while this preconditioning technique only involves $O(n^2)$. Thus, if the choice of $\alpha$ is such that some $O(n^3)$ operations are saved (e.g., squarings, square roots, matrix products,...), such a preconditioning technique can contribute towards a reduction in the computational cost.
The two experiments we present below were carried out in MATLAB R2019b (with unit roundoff $u\approx 1.1\times 10^{-16}$).
[*Experiment 1.*]{} In this first experiment, we have calculated the logarithm of the following triangular matrices, with no negative real eigenvalues (MATLAB style is used), by the MATLAB function `logm`, without and with preconditioning:
- $T_1$ is the matrix $\mathtt{exp(a)*[1\ b;0\ 1]}$, with $a=0.1$ and $b=10^6$; its exact logarithm is $\mathtt{[a\ b;0\ a]}$;
- $T_2$ is the matrix in (\[T1\]);
- $T_3$ has been obtained by $\mathtt{[\sim,T3]=schur(gallery('frank',8),'complex')}$;
- $T_4$ has been obtained by $\mathtt{[\sim,T4]=schur(gallery('dramadah',11),'complex')}$;
- $T_5$ has come from $\mathtt{[\sim,T5]=schur(gallery('frank',13),'complex')}$;
- $T_6$ to $T_{10}$ are randomly generated matrices with orders ranging from $9$ to $15$, with small entries on the diagonal and large entries on the superdiagonals.
The results are displayed in Table \[tab1\]. We have used the following notation:
- $\widetilde{T}_i=ST_iS^{-1}$, where $S$ is defined in (\[S\]) with $\alpha=\|N_i\|_F$, $N_i$ being the nilpotent part of $T_i$;
- $s_i$ and $\widetilde{s}_i$ denote the number of square roots required by the inverse scaling and squaring method, without and with preconditioning, respectively;
- $e_i$ and $\widetilde{e}_i$ are the relative errors of the computed approximations, that is, $$\begin{aligned}
e_i&=&\|\log(T_i)-\mathrm{logm}(T_i)\|_F/\|\log(T_i)\|_F\\
\widetilde{e}_i&=&\|\log(T_i)-S\,\mathrm{logm}(\widetilde{T}_i)\,S^{-1}\|_F/\|\log(T_i)\|_F;
\end{aligned}$$
- $\kappa_{\log}(T_i)$ and $\kappa_{\log}(\widetilde{T}_i)$ are estimates of the relative condition numbers obtained with the function `logm_cond` available in [@mftoolbox].
Except for the matrix $T_1$, whose exact logarithm is known, we have taken as the exact $\log(T_i)$ ($i=2,\ldots,10$) the result of evaluating the logarithm at 200 decimal digit precision using the Symbolic Math Toolbox and rounding the result to double precision.
$T_i$ $s_i$ $\widetilde{s}_i$ $e_i$ $\widetilde{e}_i$ $\|T_i\|_F$ $\|\widetilde{T}_i\|_F$ $\kappa_{\log}(T_i)$ $\kappa_{\log}(\widetilde{T}_i)$ $\|D_i\|_F/\|N_i\|_F$
---------- ------- ------------------- ------------------- ------------------------------- ------------- ------------------------- ---------------------- ---------------------------------- -----------------------
$T_1$ 5 0 9.8e-23 9.8e-23 1.1e+06 1.9e+00 3.3e+11 2.9e+00 1.4e-06
$T_2$ 16 4 2.1e-16 2.1e-16 7.3e+04 9.9e-01 8.8e+19 5.4e+00 8.5e-06
$T_3$ 6 4 5.3e-16 6.8e-16 4.7e+00 4.0e+00 8.7e+01 3.0e+01 1.1e+00
$T_4$ 5 4 2.8e-16 7.7e-16 7.1e+00 6.3e+00 6.4e+02 9.4e+01 1.7e+00
$T_5$ 11 4 8.7e-16 5.7e-16 6.2e+01 4.6e+01 2.9e+10 6.6e+02 1.1e+00
$T_6$ 11 6 1.2e-15 2.1e-16 2.0e+01 1.6e+00 1.9e+11 2.0e+02 7.0e-02
$T_7$ 11 5 4.4e-16 4.9e-16 3.2e+01 1.5e+00 9.8e+10 2.5e+01 4.7e-02
$T_8$ 19 6 9.1e-16 6.6e-16 5.0e+01 1.1e+00 3.7e+21 3.0e+02 2.1e-02
$T_9$ 10 6 1.8e-15 5.4e-16 1.2e+01 1.5e+00 1.5e+09 7.5e+01 1.2e-01
$T_{10}$ 13 5 7.2e-16 2.7e-16 3.1e+01 1.5e+00 1.4e+13 4.3e+01 4.2e-02
: Results for the computation of the logarithm of $10$ upper triangular matrices using the improved inverse scaling and squaring method of [@Mohy12] without and with preconditioning.
\[tab1\]
[*Experiment 2.*]{} In this experiment, we have implemented Algorithm 5.2 in [@Aprahamian16] for computing the inverse cosine of a matrix, $\acos(T)$, with and without preconditioning. MATLAB codes are available at <https://github.com/higham/matrix-inv-trig-hyp> and the algorithm is called by `acosm`. This algorithm involves a prior Schur decomposition, thus being well suited to be combined with preconditioning. We have considered ten matrices from MATLAB’s gallery with no eigenvalues in the set $\{-1,\,1\}$ and sizes ranging from $30$ to $50$. The left plot of Figure \[figure1\] compares the relative errors $$\begin{aligned}
e_i&=&\|\acos(T_i)-\mathrm{acosm}(T_i)\|_F/\|\acos(T_i)\|_F,\\
\widetilde{e}_i&=&\|\acos(T_i)-S\,\mathrm{acosm}(\widetilde{T}_i)\,S^{-1}\|_F/\|\acos(T_i)\|_F,\end{aligned}$$ where the exact $\acos(T_i)$ results from evaluating the inverse cosine at 200 decimal digit precision using the Symbolic Math Toolbox and rounding the result to double precision. The right plot displays the number of square roots in the inverse scaling and squaring procedure required by [@Aprahamian16 Alg. 5.2] without preconditioning, $s_i$, and with preconditioning, $\widetilde{s}_i$.
![Left: Relative errors of the approximations obtained by `acosm` with preconditioning ($\tilde{e}_i$) and without preconditioning ($e_i$). Right: Number of square roots required by `acosm` with preconditioning ($\tilde{s}_i$) and without preconditioning ($s_i$).[]{data-label="figure1"}](fig1.eps){width="17cm"}
A careful analysis of the results displayed in Table \[tab1\] shows that a combination of the proposed preconditioning technique with `logm` and an appropriate choice of $\alpha$ brings many benefits, namely:
- A significant reduction in the number of square roots required, especially when the norm of $D_i$ is small in comparison with the norm of $N_i$ (see the second, third and last columns);
- A stabilization or reduction of the magnitude of relative errors (columns 4 and 5);
- A decrease in the relative condition number of the matrix logarithm (columns 8 and 9).
Similar observations hold for the results of Figure \[figure1\], namely: lower relative errors and fewer square roots when the preconditioning technique is incorporated in [@Aprahamian16 Alg. 5.2].
Conclusions
===========
We have proposed an inexpensive preconditioning technique aimed at improving algorithms for evaluating functions of triangular matrices, in terms of computational cost and accuracy. It is particularly well suited to be combined with algorithms involving a prior Schur decomposition. Such a technique involves a scalar $\alpha$ that needs to be carefully chosen. We have presented a practical strategy for finding such a scalar, which has given good results in experiments involving the matrix exponential, the matrix logarithm and the matrix inverse cosine.
[xx]{} A. H. Al-Mohy, N. J. Higham, A New Scaling and Squaring Algorithm for the Matrix Exponential, SIAM J. Matrix Anal. Appl., 31, 970–989 (2009).
A. H. Al-Mohy and N. J. Higham, Improved inverse scaling and squaring for the matrix logarithm, SIAM J. Sci. Comput., 34, 153–169 (2012).
M. Aprahamian and N. J. Higham, Matrix Inverse Trigonometric and Inverse Hyperbolic Functions: Theory and Algorithms, SIAM J. Matrix Anal. Appl., 37, 1453–1477 (2016).
R. Bhatia, [*Matrix Analysis*]{}, Springer-Verlag, New York (1997).
J. R. Cardoso, A. Sadeghi, Computation of matrix gamma function, BIT Numer. Math., 59, 343–370 (2019).
P. A. Davies and N. J. Higham, A Schur-Parlett algorithm for computing matrix functions, SIAM J. Matrix Anal. Appl., 25, 464–485 (2003).
R. Mathias, The spectral norm of a nonnegative matrix, Linear Alg. Appl., 139, 269–284 (1990).
C. Moler and C. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later, SIAM Rev., 45, 3–49 (2003).
R. Garrappa, M. Popolizio, Computing the Matrix Mittag-Leffler Function with Applications to Fractional Calculus, J. Sci. Comput., 77, 129–-153 (2018).
N. J. Higham, The scaling and squaring method for the matrix exponential revisited, SIAM J. Matrix Anal. Appl., 26, 1179–1193 (2005).
N. J. Higham, [*Functions of Matrices: Theory and Computation*]{}, SIAM, Philadelphia, PA, USA (2008).

N. J. Higham, [*The Matrix Function Toolbox*]{}, http://www.maths.manchester.ac.uk/$\thicksim$higham/mftoolbox/.
R. A. Horn, C. R. Johnson, [*Topics in Matrix Analysis*]{}, Cambridge Univ. Press, Cambridge, UK, Paperback Edition (1994).
R. A. Horn, C. R. Johnson, *Matrix Analysis*, 2nd Ed., Cambridge University Press (2013).
C. S. Kenney, A. J. Laub, Condition estimates for matrix functions, SIAM J. Matrix Anal. Appl., 10, 191–209 (1989).
The study of reactions induced by neutrinos on nuclei is at present an active field of research. A detailed knowledge of the reaction cross sections is of interest in several domains, ranging from high energy physics to astrophysics [@ORLAND]. For example, they are necessary in the interpretation of current experiments on neutrinos as well as in the evaluation of possible new detectors for future experiments. The importance of neutrino-nuclei reactions in astrophysical processes, such as the r-process nucleosynthesis, is also being studied closely [@qian; @Bor]. In particular, $\nu-Pb$ reactions have attracted much interest recently. Lead has been used as a shielding material in the recent experiments on neutrino oscillations performed by the LSND collaboration [@LSND1; @LSND2], so that estimates of the $\nu-Pb$ reaction cross sections are necessary for the evaluation of backgrounds in these experiments; also projects on lead-based detectors [@LeadPer], such as OMNIS [@Cline; @OMNIS] and LAND [@LAND], are being studied for the purpose of detecting supernova neutrinos. These detectors might provide information on neutrino properties, such as oscillations in matter [@Hax] or the mass [@Boyd1], by measuring the time delay and/or spreading in the neutrino signal [@OMNIS; @LAND], as well as help in testing supernova models. From the practical point of view, lead-based detectors seem to present several of the characteristics required to be supernova observatories, namely high sensitivity to neutrinos of all flavors, simplicity, and reliability with inexpensive materials [@LAND]. Large cross sections for neutrinos in the supernova energy range are also an important condition, since they determine the possible rates and therefore the maximum observable distance. Indeed, $\nu$-nucleus reaction cross sections increase strongly with the charge of the nucleus. 
For example, if the neutrinos come from the Decay-At-Rest (DAR) of $\mu^+$, the flux-averaged charged-current (CC) cross section for the reaction $\nu_e + {}_{Z}X_{N} \rightarrow {}_{Z+1}X'_{N-1} + e^-$ goes from about $14 \times 10^{-42}$ cm$^2$ for $^{12}C$ [@LSNDve; @Allen; @KAR_ve_Ngs], to $2.56 \times 10^{-40}$ cm$^2$ in $^{56}Fe$ [@Lan00], and is estimated to be $3.62 \times 10^{-39}$ cm$^2$ in $^{208}Pb$ [@Lan00]. Besides these practical features, which are essential in choosing the nucleus used to detect neutrinos, another important consideration is the spectroscopic properties, which may suggest attractive signals of supernova neutrino oscillations. In [@Hax], for example, it has been shown that the measurement of events where two neutrons are emitted by $^{208}Bi$ excited in the reaction $\nu_e + ^{208}Pb \rightarrow ^{208}Bi + e^-$ is both flavor-specific and very sensitive to the mean energy of the $\nu_e$. If $\nu_{\mu},\nu_{\tau} \rightarrow \nu_e$ oscillations take place, the hotter $\nu_e$ would increase the number of two-neutron events by a factor of forty [@Hax]. Another possible signal has been proposed in [@Lan00], namely that the energy distribution of the neutrons emitted in the same CC reaction should have a peak at low energy, more or less pronounced according to whether the oscillations occur or not. This peak would come from the excitation of a peak at around 8 MeV in the Gamow-Teller strength distribution. (One should note, however, that this peak has never been observed experimentally.) Both the estimate of the CC $\nu-Pb$ reaction cross section in [@Hax] and the microscopic calculations of [@Lan00] show that a possible oscillation signal relies strongly on the knowledge of the spectral properties of $^{208}Bi$. In fact, the CC reaction cross section induced by $\nu_e$ scales almost as the square of the electron energy and is particularly sensitive to the detailed structure of the excitation spectrum, as was already pointed out for the case of $^{12}C$ [@Vol]. 
It is then important to get the cross sections directly from experiment and/or to obtain different theoretical estimates, in order to know the theoretical uncertainties and how they affect the reaction cross sections. This is crucial when the impinging neutrino energy increases, because not only do the allowed Gamow-Teller (GT) and Isobaric Analogue State (IAS) transitions contribute significantly to these cross sections, but so do forbidden transitions of first, second, and third order (which are not well known experimentally).
In this paper, we present new theoretical results for the CC $\nu_e + ^{208}Pb \rightarrow ^{208}Bi + e^-$ reaction cross section. Our calculations, as opposed to those of [@Lan00], are performed in a self-consistent charge-exchange Random-Phase-Approximation (RPA) with effective Skyrme forces. In contrast to all previously published calculations, we present non flux-averaged cross sections, obtained for both low-energy $\nu_e$ and high-energy $\nu_{\mu}$. These reaction cross sections, given as a function of neutrino energy, span a large energy range. They can be convoluted with different neutrino fluxes in various contexts, for example for future experiments with astronomical neutrinos which are at present under study, for very recent terrestrial experiments such as the LSND ones [@Lan00] to estimate the background, or in r-process nucleosynthesis.
We will emphasize the importance of the contribution of forbidden transitions and how it evolves as a function of neutrino energy. This contribution is often not taken into account in present r-process nucleosynthesis calculations, so that the neutrino-nucleus cross sections are underestimated (in [@surman] only the importance of first-forbidden transitions in neutron-rich nuclei was emphasized).
We will compare our results with presently available calculations [@Hax; @Lan00]. With this aim, we will present two different flux-averaged cross sections, where the neutrino fluxes are given either by the DAR of $\mu^+$ and the Decay-In-Flight (DIF) of $\pi^+$, or by a Fermi-Dirac spectrum for a supernova explosion. Finally, we will discuss our results in relation to the suggested possible oscillation signals that would use the spectroscopic properties of $^{208}Bi$.
The general expression for the differential cross section as a function of the incident neutrino energy $E_{\nu}$ for the reaction $\nu_{l} + ^{208}Pb \rightarrow l + ^{208}Bi$ ($l=e,~\mu$) is [@Kubo] $$\sigma(E_{\nu})={G^{2} \over {2 \pi}}\cos^{2}\theta_C\sum_{f}p_lE_l
\int_{-1}^{1}d(\cos\theta)M_{\beta},
\label{e:1}$$ where $G\cos\theta_C$ is the weak coupling constant, $\theta$ is the angle between the directions of the incident neutrino and the outgoing lepton, $E_l=E_{\nu}-E_{fi}$ ($p_{l}$) is the outgoing lepton energy (momentum), $E_{fi}$ being the energy transferred to the nucleus, and $M_{\beta}$ are the nuclear Gamow-Teller and Fermi type transition probabilities [@Kubo].
In a nucleus as heavy as $Pb$ the distortion of the outgoing lepton wavefunction due to the Coulomb field of the daughter nucleus becomes large and affects the integrated cross section considerably. In our treatment of this effect we follow ref.[@Engel], where it is found that the “Effective Momentum Approximation” (EMA) works well for high-energy neutrinos. This approximation consists in using an effective momentum $p_l^{eff}=\sqrt{E^2_{eff}-m^2}$, where $E_{eff}=E-V_C(0)$ ($V_C(0)$ is the Coulomb potential at the origin), in calculating the angle-integrated cross section and multiplying eq.(\[e:1\]) by $(p_l^{eff}/p_l)^2$. It is also shown there that the Modified EMA (MEMA) works better than the EMA for $\nu_{\mu}$ of both low and high energies. In this approximation eq.(\[e:1\]) is multiplied by $p_l^{eff}E_{eff}/p_lE_l$. We therefore use this method in all our calculations of the $({\nu}_{\mu},{\mu}^-)$ cross sections. In the case of the $(\nu_{e},e^-)$ process, the situation is somewhat more complicated. The Fermi function works only for very low energies, namely $E_{e} \le 10~MeV$ (where $p_eR \ll 1$, $R$ being the nuclear radius), whereas the EMA seems to be a good approximation for most energies of the outgoing electrons [@Engel]. As in ref.[@Lan00], for $\nu_e$ [@Lan01] we treat Coulomb corrections by interpolating between the Fermi function at low electron energies and the EMA at high lepton energies.
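As a concrete illustration of these multiplicative corrections, the short Python sketch below evaluates the EMA and MEMA factors for an outgoing lepton. It is purely schematic: the value $V_C(0) \approx -25~MeV$ is an assumed, order-of-magnitude number for a lead-region daughter nucleus, not a value taken from the text.

```python
import math

M_E = 0.511      # electron mass [MeV]
M_MU = 105.66    # muon mass [MeV]
V_C0 = -25.0     # assumed Coulomb potential at the origin [MeV] (illustrative)

def momentum(E, m):
    """Relativistic momentum for total energy E and mass m (units MeV, c = 1)."""
    return math.sqrt(E * E - m * m)

def ema_factor(E, m, Vc=V_C0):
    """EMA: evaluate at p_eff = sqrt(E_eff^2 - m^2) and scale sigma by (p_eff/p)^2."""
    E_eff = E - Vc
    return (momentum(E_eff, m) / momentum(E, m)) ** 2

def mema_factor(E, m, Vc=V_C0):
    """Modified EMA: scale sigma by p_eff * E_eff / (p * E) instead."""
    E_eff = E - Vc
    return momentum(E_eff, m) * E_eff / (momentum(E, m) * E)

# The Coulomb attraction (V_C(0) < 0) enhances the cross section for
# outgoing negative leptons; both factors exceed 1 here.
print(ema_factor(30.0, M_E), mema_factor(150.0, M_MU))
```

For ultrarelativistic leptons ($E \gg m$) the two prescriptions coincide, since then $p \simeq E$ and $p^{eff} \simeq E_{eff}$.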
To get flux-averaged cross sections it is necessary to convolute (\[e:1\]) by the neutrino flux $f(E_{\nu})$, that is $$\label{e:2}
\langle \sigma \rangle_f = \int_{E_0}^{\infty} dE_{\nu} \sigma(E_{\nu}) f(E_{\nu}),$$ $E_{0}$ being the threshold energy. The choice of $f(E_{\nu})$ depends on the neutrino source and can be taken for example equal to the supernova neutrino energy spectrum given by transport codes or the neutrino fluxes produced by a beam dump.
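A minimal numerical sketch of the convolution (2), with an invented cross-section shape and an invented spectrum used purely for illustration (neither is a real flux or a result from the text):

```python
import numpy as np

def flux_average(E, sigma, flux):
    """<sigma>_f = integral dE sigma(E) f(E), with f normalized to unit flux (Eq. 2)."""
    f = flux / np.trapz(flux, E)        # enforce unit normalization of the flux
    return np.trapz(sigma * f, E)

# Toy inputs: a cross section rising roughly as E^2 above a 3 MeV threshold,
# and a schematic spectrum peaked near 25 MeV.
E = np.linspace(3.0, 60.0, 500)         # neutrino energies [MeV]
sigma = (E - 3.0) ** 2                  # arbitrary units
flux = E ** 2 * np.exp(-E / 12.0)       # unnormalized spectrum
print(flux_average(E, sigma, flux))
```

The same routine applies unchanged whether $f(E_{\nu})$ comes from a supernova transport code or from a beam-dump flux, as discussed above.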
The nuclear structure model used to evaluate the transition probabilities $M_{\beta}$ in (\[e:1\]) is the charge-exchange Random-Phase-Approximation (RPA). The details of the approach can be found in [@Col94]. The calculations we present have been obtained in a self-consistent approach: the HF single-particle energies and wavefunctions, as well as the residual particle-hole interaction, are derived from the same effective forces, namely the SIII [@Bei75] and SGII [@Gia81] Skyrme forces. We have found that the model configuration space used is large enough for the Ikeda and Fermi sum rules to be satisfied, as well as the non-energy-weighted and energy-weighted sum rules for the forbidden transitions [@pnRPA]. The GT strength distribution we have obtained is peaked at $19.2~MeV$, in agreement with the experimental value. This main peak exhausts about $60 \%$ of the Ikeda sum rule. The IAS lies at $18.4~MeV$, and this value again compares well with the experimental finding ($18.8~MeV$). Apart from these two resonances and the spin-dipole, the experimental knowledge about states of higher multipolarity is rather poor. The recent experiment of Ref. [@Zegers] shows that isovector monopole strength exists in $^{208}Bi$ between $30$ and $45~MeV$, and in the present calculation we find some strength in the same energy region.
In figs.1 and 2 we show the non flux-averaged $^{208}Pb(\nu_{e},e^-)^{208}Bi$ and $^{208}Pb(\nu_{\mu},\mu^-)^{208}Bi$ inclusive cross sections as a function of the neutrino energy, computed on a mesh of energies, namely $\Delta E=2.5~MeV$ for $E_{\nu_{e}}$ and $\Delta E=5.0~MeV$ for $E_{\nu_{\mu}}$. The dashed line in fig.1 shows the cross section obtained when only the Fermi function is used to include the Coulomb corrections. The results shown have been obtained with the SIII force, but we have found that the SGII force gives quite similar results. All the multipolarities with $J \le 6$ are included. We have checked that the contribution coming from $J = 7$ is small. (Note that, for higher multipolarities, a mean-field description, neglecting the particle-hole residual interaction, can be used to evaluate the transition probabilities (\[e:1\]).) In the calculations presented here, the axial-vector coupling constant has been taken equal to $1.26$. Note that the use of an effective $g_a$ to take into account the problem of the “missing” GT strength would reduce the reaction cross section by $10-15 \%$, as already discussed in [@Vol].
Figure 3 shows the contribution of the different multipolarities to the total cross section (fig.1), for the impinging neutrino energies $E_{\nu_e}=15, 30, 50~MeV$, which are characteristic average energies for supernova neutrinos. When $E_{\nu_e}=15~MeV$ (fig.3, top), $\sigma_{\nu_e}$ is dominated by the allowed Gamow-Teller ($J^{\pi}=1^+$) transition. As the neutrino energy increases (fig.3, middle), the allowed IAS and various forbidden transitions start to contribute significantly. Finally, when $E_{\nu_e}=50~MeV$ (fig.3, bottom), the GT and IAS transitions no longer dominate at all, the cross section being spread over many multipolarities. These results suggest that r-process nucleosynthesis calculations such as [@Bor], which include neutrino-nucleus reactions, should take forbidden transitions into account. This may be even more important if ${\nu}_{\tau},{\nu}_{\mu} \rightarrow {\nu_e}$ oscillations occur, because in this case the electron neutrinos may have a higher average energy than usually expected from current supernova models.
Let us now come to the comparison with other available calculations. Table 1 shows our flux-averaged cross sections, in comparison with those of refs.[@Hax; @Lan00]. The low-energy neutrino flux is given by a Fermi-Dirac spectrum [@Hax; @Lan00] $$\label{e:3}
f(E_{\nu})={1 \over {c(\alpha)T^3}} {E_{\nu}^2 \over {\exp\left[
(E_{\nu}/T) - \alpha \right] +1}}$$ where $T$ and $\alpha$ are fitted to numerical spectra and $c(\alpha)$ normalizes the spectrum to unit flux. The values of the parameters $T$ and $\alpha$ have been chosen so as to allow a comparison of our results with those of [@Hax; @Lan00]. As we can see from table 1, our predictions are in close agreement (the difference is at most $20-30 \%$) with [@Lan00]. The results of [@Lan00] have been obtained in a CRPA approach. A variation of $20-30 \%$ is actually to be expected for calculations based on the same approach but using different parametrizations (for example for the single-particle wavefunctions and the effective particle-hole interaction), because of the sensitivity of the flux-averaged cross sections to the detailed strength distributions [@Vol], as we will discuss further. In contrast, both our results and those of ref.[@Lan00] present significant differences with those of [@Hax], obtained using the allowed approximation and including the IAS, the GT and the first-forbidden contributions treated on the basis of the Goldhaber-Teller model.
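The normalization $c(\alpha)T^3$ can be handled numerically rather than in closed form. The sketch below (illustrative Python, not the authors' code) builds unit-flux spectra for the $(T,\alpha)$ pairs of table 1 and prints their mean energies; for $\alpha=0$ the mean of this spectrum is $\approx 3.15\,T$.

```python
import numpy as np

def fermi_dirac(E, T, alpha):
    """Unnormalized supernova spectrum  E^2 / (exp(E/T - alpha) + 1)  of Eq. (3)."""
    return E ** 2 / (np.exp(E / T - alpha) + 1.0)

def normalized_spectrum(E, T, alpha):
    """Divide out c(alpha) T^3 numerically so the spectrum has unit flux."""
    f = fermi_dirac(E, T, alpha)
    return f / np.trapz(f, E)

E = np.linspace(0.0, 150.0, 2000)  # MeV grid wide enough to contain the spectrum
for T, alpha in [(6.0, 0.0), (8.0, 0.0), (10.0, 0.0), (6.26, 3.0)]:
    f = normalized_spectrum(E, T, alpha)
    print(T, alpha, round(float(np.trapz(E * f, E)), 2))  # mean neutrino energy
```

A nonzero $\alpha$ ("pinching") shifts and narrows the spectrum at fixed $T$, which is why the $(6.26,3)$ entry of table 1 is comparable to the $(8,0)$ one.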
We have checked that the differences do not come from the higher-order forbidden transitions, which are not included in the calculations of [@Hax]. The three calculations satisfy the same constraints, namely they reproduce the centroids of the resonances and satisfy the sum rules.
We believe that the significant differences (by a factor of 2) with [@Hax] may have two origins. The first might be the way the Coulomb corrections are treated. In [@Hax], the Coulomb distortion of the outgoing electron wave function was taken into account by multiplying the cross section (\[e:1\]) by a Fermi function. In order to see the effect of using only the Fermi function instead of interpolating between the Fermi function and the EMA, we have calculated the reaction cross sections with both prescriptions. As figure 4 shows, the two cross sections have quite different behaviour as a function of the neutrino energy, so that the difference in the flux-averaged cross section may vary according to the particular neutrino flux considered. To get a quantitative idea of the variation, we have calculated the flux-averaged cross sections by convoluting the two curves of fig.4 with (\[e:3\]). If we use the Fermi function only, the reaction cross sections increase, on average, by $50 \%$.
The second possible origin of the discrepancies between our work, [@Lan00] and [@Hax] might be the sensitivity of the flux-averaged cross sections to the detailed strength distributions in $^{208}Bi$. In fact, it has already been discussed in [@Vol] that, for low-energy neutrinos, the flux-averaged cross sections are very sensitive to the energies of the excited states in the final nucleus. The reason is twofold. First, due to the small electron mass, the non flux-averaged cross section (\[e:1\]) scales as the square of the energy of the states. Second, the energy dependence of the neutrino flux may amplify differences in the non flux-averaged cross sections due to variations in the energies of the states. As discussed in [@Vol], these two effects may modify the flux-averaged reaction cross sections by $20-30 \%$.
To complete our comparison with the calculations of [@Lan00], we have calculated two more flux-averaged cross sections, using the neutrino fluxes of $\nu_{\mu}$ coming from the DIF of $\pi^+$ and of $\nu_{e}$ coming from the DAR of $\mu^+$. The neutrino fluxes $f(E_{\nu})$ were taken from [@Imlay]. These fluxes have been used in the recent $\nu_{\mu} \rightarrow \nu_{e}$ [@LSND1; @KR0], $\bar{\nu}_{\mu} \rightarrow \bar{\nu}_{e}$ [@LSND2; @KR1] and ${\nu}_{\mu} \rightarrow {\nu}_{x}$ [@KR2] experiments performed by the LSND and KARMEN collaborations. The calculated $DAR(\nu_{e},e^-)$ cross section is $\sigma_{DAR}=44.39 \cdot 10^{-40}~cm^2$, which is close to the value of $36.2 \cdot 10^{-40}~cm^2$ obtained in [@Lan00]. In contrast, our $DIF(\nu_{\mu},{\mu}^-)$ cross section is $\sigma_{DIF} = 399.2 \cdot 10^{-40}~cm^2$, whereas that of [@Lan00] is $115 \cdot 10^{-40}~cm^2$. We believe that some of the disagreement may come from differences in the strength distributions of the high-order (higher than 2) forbidden transitions. In fact, contrary to reactions of neutrinos on light nuclei such as carbon, where these states contribute only $20 \%$ of the total DIF cross section, their contribution represents $65 \%$ of the total cross section when the nucleus is as heavy as lead. 
Let us finally discuss the two possible neutrino oscillation $\nu_{\mu},\nu_{\tau} \rightarrow \nu_{e}$ signals based on the spectroscopic properties of $^{208}Bi$ excited in the CC reaction that have been proposed recently. In [@Hax], it was shown that the 2-neutron events associated with the deexcitation of $^{208}Bi$ are very sensitive to the mean electron neutrino energy. This signal relies on the fact that most of the IAS, GT and first-forbidden strength distributions are above the $2n$ emission threshold ($14.98~MeV$) in $^{208}Bi$. Our results show that not only the allowed and spin-dipole strengths are above this threshold, but also a fraction of the strength distributions associated with other forbidden transitions (fig.3) will contribute to the $2n$ decay. All the arguments given in [@Hax] are based on the statistical calculations of $1n$ and $2n$ decays. The direct $1n$ emission represents about $50 \%$ of the total width in the case of the IAS, and $5-10 \%$ in the case of the GT [@Col94].
In [@Lan00], it was pointed out that the energy distribution of the neutrons in the $1n$ events should form a peak at low energy, more or less pronounced according to the occurrence or absence of oscillations. This peak comes from the GT strength distribution at around $7.6~MeV$, which is located above the $1n$ emission threshold at $6.9~MeV$. Our GT distribution also shows a peak at around $7.5~MeV$. We have checked that its location is not sensitive to the choice of the effective forces used. Still, one should be careful in drawing conclusions, because the predictions of different models for the energy location and strength of that peak are at variance.
In summary, we have presented the non flux-averaged $^{208}Pb(\nu_{e},e^-)^{208}Bi$ and $^{208}Pb(\nu_{\mu},\mu^-)^{208}Bi$ reaction cross sections, calculated in a self-consistent charge-exchange Random-Phase-Approximation with Skyrme effective forces. These predictions can be employed for very different purposes, such as the interpretation of the recent experiments on neutrino oscillations performed by the LSND collaboration (where reactions induced by neutrinos on lead contribute significantly to the background) and the evaluation of the feasibility of future projects in which lead would be used as a detector for supernova neutrinos. We have emphasized that forbidden transitions contribute significantly to the neutrino-nucleus reaction cross sections even at “astrophysical” neutrino energies, and that they should be included in present r-process nucleosynthesis calculations. We have discussed the present status of the theoretical predictions for the reaction cross sections of $\nu_e$ with the typical energies given by present supernova models. While our calculations agree with those of ref.[@Lan00], which are also based on the RPA, both sets of calculations significantly disagree with those of ref.[@Hax]. We point out that the origin of the discrepancy might be mainly the different treatment of the Coulomb corrections, but also the sensitivity of the reaction cross sections to the detailed energy spectrum of the final nucleus. We have also compared our flux-averaged reaction cross sections, for $\nu_{\mu}$ coming from the DIF of $\pi^+$ and for $\nu_{e}$ coming from the DAR of $\mu^+$, with those of [@Lan00]. As expected, the DAR cross sections are very close. In contrast, our DIF cross section differs significantly from that of [@Lan00]. 
We have pointed out that the two predictions may differ because of differences in the strength distributions of forbidden transitions of high multipolarity, which represent the main contribution in reactions of neutrinos on nuclei as heavy as lead. Finally, we have discussed our results in relation to recently proposed signals for measuring supernova neutrino oscillations.
This work was supported by the US-Israel Binational Science Foundation.
[99]{}
, a Report on the “Workshop on Neutrino Nucleus Physics Using a Stopped Pion Neutrino Facility”, May 22-26, 2000, Oak Ridge, Tennessee.
Y.Z.Qian et al., Phys. Rev. C [**55**]{}, 1532 (1997). I.N.Borzov and S.Goriely, Phys. Rev. C [**62**]{}, 035501-1 (2000).
C.Athanassopoulos and the LSND collaboration, Phys. Rev. Lett. [**81**]{}, 1774 (1998); C.Athanassopoulos and the LSND collaboration, Phys. Rev. C [**58**]{}, 2489 (1998).
C.Athanassopoulos and the LSND collaboration, Phys. Rev. Lett. [**77**]{}, 3082 (1996); C.Athanassopoulos and the LSND collaboration, Phys. Rev. Lett. [**75**]{}, 2650 (1995).
P.J.Doe et al., nucl-ex/0001008.
D.Cline et al., Astro.Lett. and Communications, [**27**]{}, 403 (1990).
P.F.Smith, Astro. Phys. [**8**]{}, 27 (1997).
C.K.Hargrove et al., Astro. Phys. [**5**]{}, 183 (1996).
G.M.Fuller, W.C.Haxton and G.C.McLaughlin, Phys. Rev. D [**59**]{}, 085005 (1999).
J.F. Beacom, R.N. Boyd and A. Mezzacappa, Phys. Rev. Lett. [**85**]{}, 3568 (2000); J.F. Beacom, R.N. Boyd and A. Mezzacappa, Phys. Rev. D [**63**]{}, 073011 (2001).
C.Athanassopoulos and the LSND collaboration, Phys. Rev. C [**55**]{}, 2078 (1997).
D.A.Krauker et al., Phys. Rev. C [**45**]{}, 2450 (1992); R.C.Allen et al., Phys. Rev. Lett. [**64**]{}, 1871 (1990).
B.E.Bodmann and the KARMEN collaboration, Phys. Lett. [**B332**]{}, 251 (1994); J.Kleinfeller et al., in [*Neutrino ‘96*]{}, eds. K.Enquist, H.Huitu and J.Maalampi (World Scientific, Singapore, 1997).
E.Kolbe and K.Langanke, Phys. Rev. C [**63**]{}, 025802 (2001).
C. Volpe, N.Auerbach, G.Colò, T. Suzuki, N. Van Giai, Phys.Rev. C [**62**]{}, 015501 (2000).
R. Surman and J. Engel, Phys. Rev. C [**58**]{}, 2526 (1998).
T.Kuramoto, M.Fukugita, Y.Kohyama and K.Kubodera, Nucl. Phys. [**A512**]{}, 711 (1990).
J.Engel, Phys. Rev. C [**57**]{}, 2004 (1998).
E.Kolbe and K.Langanke, private communication.
G. Colò, N. Van Giai, P.F. Bortignon and R.A. Broglia, Phys. Rev. C [**50**]{}, 1496 (1994).
M. Beiner, H. Flocard, N. van Giai and Ph. Quentin, Nucl. Phys. [**A238**]{}, 29 (1975).
N. Van Giai and H. Sagawa, Phys. Lett. [**B106**]{}, 379 (1981).
N.Auerbach and A.Klein, Nucl.Phys. [**A395**]{}, 77 (1983).
R.G.T. Zegers et al., Phys. Rev. C [**63**]{}, 034613 (2001).
R.Imlay, private communication.
K.Eitel, [*“Proceedings of the 32nd Rencontres de Moriond, Electroweak Interactions and Unified Theories”*]{}, Les Arcs, 15th-22nd March 1997.
K.Eitel and B.Zeitnitz for the KARMEN collaboration, Nucl. Phys. Proc. Suppl. 77, 212 (1999).
B.Armbruster et al., Phys. Rev. [**C57**]{}, 3414 (1998).
  $(T,\alpha)$   this work   ref.[@Lan00]   ref.[@Hax]
  -------------- ----------- -------------- ------------
  $(6,0)$        14.06       11.            27.84
  $(8,0)$        25.3        25.            57.99
  $(10,0)$       34.91       45.            96.14
  $(6.26,3)$     25.21       21.            47.50

  : Flux-averaged cross sections ($10^{-40}~cm^2$) obtained by convoluting the inclusive cross sections of fig.1 with the Fermi-Dirac spectrum (3) for neutrinos emitted in a supernova explosion. Different values of the temperature $T$ and of the parameter $\alpha$ are considered. The results of recent calculations are shown for comparison. \[tab:1\]
---
abstract: 'Given an edge-weighted tree $\TT$ with $n$ leaves, sample the leaves uniformly at random without replacement and let $W_k$, $2 \le k \le n$, be the length of the subtree spanned by the first $k$ leaves. We consider the question, “Can $\TT$ be identified (up to isomorphism) by the joint probability distribution of the random vector $(W_2, \ldots, W_n)$?” We show that if $\TT$ is known [*a priori*]{} to belong to one of various families of edge-weighted trees, then the answer is, “Yes.” These families include the edge-weighted trees with edge-weights in general position, the ultrametric edge-weighted trees, and certain families with equal weights on all edges such as $(k+1)$-valent and rooted $k$-ary trees for $k \ge 2$ and caterpillars.'
address:
- |
Department of Statistics\
University of California\
367 Evans Hall \#3860\
Berkeley, CA 94720-3860\
U.S.A.
- |
Department of Mathematics\
University of California\
970 Evans Hall \#3840\
Berkeley, CA 94720-3840\
U.S.A.
author:
- 'Steven N. Evans'
- Daniel Lanoue
title: Recovering a tree from the lengths of subtrees spanned by a randomly chosen sequence of leaves
---
[^1]
Introduction {#sec:Introduction}
============
Background and motivation
-------------------------
What features of an edge-weighted tree identify it uniquely up to isomorphism, perhaps within some class of such trees? Here an [*edge-weighted tree*]{} is a connected, acyclic finite graph $\TT$ with vertex set $\VV(\TT)$ and edge set $\EE(\TT)$ which is equipped with a function $\WW_\TT: \EE(\TT) \to \bR_{++} := (0,\infty)$. The value of $\WW_\TT(e)$ for an edge $e \in \EE(\TT)$ is called the [*weight*]{} or the [*length*]{} of $e$. Two such trees $\TT'$ and $\TT''$ are isomorphic if there is a bijection $\sigma: \VV(\TT') \to \VV(\TT'')$ such that:
- $\{u,v\} \in \EE(\TT')$ if and only if $\{\sigma(u), \sigma(v)\} \in \EE(\TT'')$,
- $\WW_{\TT'}(\{u,v\}) = \WW_{\TT''}(\{\sigma(u), \sigma(v)\})$ for all $\{u,v\} \in \EE(\TT')$.
The question above is, more formally, one of asking for a given class of edge-weighted trees $\bT$ about the possible sets $\bU$ and functions $\Phi: \bT \to \bU$ such that for all $\TT',\TT'' \in \bT$ we have $\Phi(\TT') = \Phi(\TT'')$ if and only if $\TT'$ and $\TT''$ are isomorphic. If the class $\bT$ consists of edge-weighted trees for which all edges have length $1$ (we will call such objects [*combinatorial trees*]{} for the sake of emphasis), then determining whether two trees in $\bT$ are isomorphic is just a particular case of the standard graph isomorphism problem. The general graph isomorphism problem has been the subject of a large amount of work in combinatorics and computer science – [@MR0485586] already speaks of the “graph isomorphism disease” – and, in particular, there are many results on reconstructing the isomorphism type of a graph from the isomorphism types of subgraphs of various sorts (see, for example, the review [@MR1161466]). There is also a substantial volume of somewhat parallel research on graph isomorphism in computational chemistry (see, for example, [@MR3052391] for a review). There seems to be considerably less work on determining isomorphism (in the obvious sense) of edge-weighted graphs; of course, in order for two edge-weighted graphs to be isomorphic the underlying combinatorial graphs must be isomorphic, but this does not imply that the best way for checking that two edge-weighted graphs are isomorphic proceeds by first determining whether the underlying combinatorial graphs are isomorphic and then somehow testing whether some isomorphism of the combinatorial graphs is still an isomorphism when the edge-weights are considered.
We begin with a discussion of previous results that address various aspects of the problem of determining when two edge-weighted or combinatorial trees are isomorphic.
A result in [@MR0332540] gives the following criterion for a bijection $\sigma: \VV(\TT') \to \VV(\TT'')$, where $\TT'$ and $\TT''$ are combinatorial trees, to be an isomorphism:\
if $v_0, v_1, \ldots, v_m$ is any sequence from $\VV(\TT') \sqcup \VV(\TT'')$ such that $v_0 = v_m$ and $$\{v_i, v_j\} \in \EE(\TT')
\sqcup \EE(\TT'')
\sqcup \{\{u,\sigma(u)\} : u \in \VV(\TT')\}
\Longleftrightarrow i - j \equiv \pm 1 \mod m,$$ then $m=4$.
The above result is elegant, but, of course, one does not need to apply it to all possible bijections to determine whether two combinatorial trees are isomorphic: there is a much more explicit and efficient procedure, which we now describe for the sake of completeness. First of all, suppose that $\TT'$ and $\TT''$ have distinguished vertices $\rho'$ and $\rho''$ and, in addition to the requirements in the above definition of an isomorphism $\sigma$, we require that $\sigma$ maps $\rho'$ to $\rho''$; that is, we have rooted trees and we require that an isomorphism maps the root of one tree to the root of the other. The presence of a root allows us to think of a combinatorial tree as a directed graph, where the head of an edge is the vertex that is closer to the root and the tail is the vertex farther from the root. The children of a vertex are the adjacent vertices that are farther from the root and, more generally, the descendants of a vertex $u$ are those vertices $v$ such that the path from the root to $v$ passes through $u$. The subtree spanned by a vertex $u$ and its descendants contains no other vertices and can be thought of as a combinatorial tree rooted at $u$, and we call this subtree the subtree below $u$. Then, two rooted, combinatorial trees $\TT'$ and $\TT''$ are isomorphic if the two roots have the same number of children, say $m$, and there is an ordering of these children for each tree such that the subtree below the $i^{\mathrm{th}}$ child of the root of $\TT'$ is isomorphic (as a rooted, combinatorial tree) to the subtree below the $i^{\mathrm{th}}$ child of the root of $\TT''$. This observation can be turned into an efficient algorithm (see, for example, [@MR0413592 Example 3.2]). Now, two combinatorial trees are isomorphic if there is some choice of roots such that the resulting rooted, combinatorial trees are isomorphic. A [*center*]{} of a combinatorial tree is a vertex $c$ such that $$\max_{v \in \VV(\TT)} r_\TT(c,v)
=
\min_{u \in \VV(\TT)} \max_{v \in \VV(\TT)} r_\TT(u,v),$$ where $r_\TT(u,v)$ is the number of edges in the unique path between $u$ and $v$ for $u,v \in \VV(\TT)$, and a combinatorial tree has either a unique center or two centers that are adjacent. It is therefore possible to determine if two combinatorial trees are isomorphic by rooting each of them at their various centers and checking if any two such rooted, combinatorial trees are isomorphic.
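The center-and-recursion procedure just described can be sketched compactly. The following Python code is an illustrative implementation, not taken from any of the cited works: it finds the centers by repeatedly peeling off leaves, assigns each rooted tree a canonical parenthesis string (the sorted concatenation of its children's strings), and declares two trees isomorphic when they agree for some choice of centers.

```python
def centers(adj):
    """Peel leaves layer by layer; the last one or two surviving vertices are the centers."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    layer = [v for v in adj if degree[v] <= 1]
    remaining = len(adj)
    while remaining > 2:
        remaining -= len(layer)
        nxt = []
        for v in layer:
            for u in adj[v]:
                degree[u] -= 1
                if degree[u] == 1:
                    nxt.append(u)
        layer = nxt
    return layer

def canonical(adj, root, parent=None):
    """Canonical name of the subtree below `root`: sorted names of the children's subtrees."""
    names = sorted(canonical(adj, u, root) for u in adj[root] if u != parent)
    return "(" + "".join(names) + ")"

def isomorphic(adj1, adj2):
    """Combinatorial trees are isomorphic iff their canonical forms agree at some centers."""
    forms1 = {canonical(adj1, c) for c in centers(adj1)}
    forms2 = {canonical(adj2, c) for c in centers(adj2)}
    return bool(forms1 & forms2)

# A 5-vertex path and a 5-vertex star: same size, different shape.
path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
star5 = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(isomorphic(path5, path5))  # True
print(isomorphic(path5, star5))  # False
```

Sorting the children's names plays the role of the "ordering of these children" in the criterion above, so equal strings correspond exactly to isomorphic rooted trees.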
We, however, are interested in whether there are “statistics” of a more numerical character that can be used to decide tree isomorphism. For combinatorial trees, one somewhat obvious possibility is the multiset of eigenvalues of some matrix associated with the tree, such as the adjacency matrix or the distance matrix. Unfortunately, the results of [@schwenk; @MR1231010; @MR750401; @MR1609509; @matsen2011ubiquity; @MR2956206] show that not only is the isomorphism type of a tree not uniquely determined by the spectrum of its adjacency matrix, but for various ensembles of combinatorial trees, if one picks a tree uniformly at random from those in the ensemble with $n$ vertices, then the probability that there is another tree in the ensemble with an adjacency matrix that has the same spectrum converges to one as $n \to \infty$. The results of [@matsen2011ubiquity] can be used to show that an analogous phenomenon is present when one considers the spectrum of the matrix of leaf-to-leaf distances.
Two trees have adjacency matrices with the same spectrum if and only if the characteristic polynomials of the adjacency matrices are equal. Given some irreducible representation of the symmetric group on the number of letters equal to the dimension of a square matrix, the immanantal polynomial of the matrix is constructed in the same manner as the characteristic polynomial except that the determinant is replaced by a similarly defined object for which the sign character is replaced by the character of the representation. One might hope that the immanantal polynomials are more successful at deciding isomorphism of combinatorial trees, but a result of [@MR1231010] shows that if the adjacency matrices of two combinatorial trees have the same characteristic polynomials, then they have the same immanantal polynomials for every irreducible representation. We note that [@MR0228370] already contains an example of two combinatorial trees with adjacency matrices that are explicitly shown to have the same immanantal polynomial.
The greedoid Tutte polynomial of a combinatorial tree $\TT$ encodes for each $i$ and $\ell$ the number of subtrees of $\TT$ that have $i$ internal vertices and $\ell$ leaves. It was conjectured in [@MR1369283] that this information identifies the isomorphism type of a combinatorial tree. However, it was shown in [@MR2234989] that there are infinitely many pairs of nonisomorphic caterpillars that share the same greedoid Tutte polynomial: a caterpillar is a combinatorial tree that consists of some number of internal vertices along a single path and leaves that are each adjacent to one of the internal vertices. This contrasts with the situation for rooted, combinatorial trees; it is shown in [@MR967486] that there is a two-variable polynomial defined for all rooted, directed graphs (and hence, in particular, for rooted, combinatorial trees) such that two rooted, combinatorial trees have the same polynomial if and only if they are isomorphic. The polynomial in [@MR967486] is defined recursively, but it is not hard to see that it encodes in a compact manner the total number of vertices in the tree, the number of children of the root, the number of vertices in each of the subtrees below the children of the root, and so on.
The chromatic symmetric function of a graph was introduced in [@MR1317387]. A proper coloring of a finite graph is a function $\kappa$ from the vertices of the graph to $\bN$ such that adjacent vertices are assigned different values. We can introduce an equivalence relation on the proper colorings by declaring that two colorings $\kappa'$ and $\kappa''$ are equivalent if there is a bijection $\pi : \bN \to \bN$ such that $\kappa'' = \pi \circ \kappa'$. For a graph with $m$ vertices, each equivalence class gives rise to a partition $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_k > 0$ of $m$ by taking, for any $\kappa$ in the equivalence class, $\lambda_i$ to be the $i^{\mathrm{th}}$ largest of the cardinalities $\#\{v : \kappa(v) = j\}$ as $j$ ranges over $\bN$. The chromatic symmetric function encodes for each partition of $m$ the number of equivalence classes of colorings that give rise to that partition. It was conjectured in [@MR1317387] that nonisomorphic combinatorial trees have distinct chromatic symmetric functions. It was shown in [@MR2382514] that this conjecture is true for caterpillars and that paper also reports on computational results verifying that the conjecture holds for the class of trees with at most $23$ vertices. Further work related to the conjecture for the special case of trees with a single centroid is contained in [@MR3147202].
Our point of departure in this paper is the well-known fact [@Zaretskii_65; @MR0237362; @buneman1971recovery; @MR0363963] that an edge-weighted tree can be reconstructed from its matrix of leaf-to-leaf distances (see [@felsenstein] for an indication of the importance of this observation in the statistical reconstruction of phylogenetic trees). In fact, an edge-weighted tree with $n$ leaves can be reconstructed from the collection of total lengths of subtrees spanned by all subsets of $m$ leaves provided $n \ge 2m-1$ [@MR2064171]. We remark that the total length of the subtree spanned by a set of leaves is an important quantity in phylogenetics where it is called the phylogenetic diversity of the corresponding set of taxa [@MR2359353].
Given these results, one might imagine that the multiset of leaf-to-leaf distances suffices to identify the isomorphism type of an edge-weighted tree. This is certainly not the case. For example, consider the two combinatorial caterpillars $\TT'$ and $\TT''$ with $25$ leaves each, where $\TT'$ has $3$ internal vertices $a',b',c'$ in order along a path that are adjacent respectively to $2,11,12$ leaves, and $\TT''$ has $3$ internal vertices $a'',b'',c''$ in order along a path that are adjacent respectively to $3,14,8$ leaves. Taking the $\binom{25}{2}$ pairs of distinct leaves in $\TT'$, we see that the distance $2$ appears $\binom{2}{2} + \binom{11}{2} + \binom{12}{2} = 122$ times, the distance $3$ appears $2 \times 11 + 11 \times 12 = 154$ times, and the distance $4$ appears $2 \times 12 = 24$ times. Similarly, taking the $\binom{25}{2}$ pairs of distinct leaves in $\TT''$, we see that the distance $2$ appears $\binom{3}{2} + \binom{14}{2} + \binom{8}{2} = 3 + 91 + 28 = 122$ times, the distance $3$ appears $3 \times 14 + 14 \times 8 = 154$ times, and the distance $4$ appears $3 \times 8 = 24$ times. Probabilistically, we have just shown that if we pick two leaves uniformly at random without replacement from an edge-weighted tree, then the isomorphism type of the tree is not uniquely identified by the probability distribution of the distance between the two leaves.
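These counts are easy to verify mechanically. Below is a small Python sketch; the encoding of a caterpillar by its per-vertex leaf counts is our own illustrative choice, using the fact that two leaves hanging off internal vertices $i$ and $j$ of the path are at distance $|i-j|+2$.

```python
from collections import Counter

def pair_distance_counts(leaf_counts):
    # leaf_counts[i] = number of leaves adjacent to internal vertex i of the path;
    # leaves at internal vertices i and j (i < j) are at distance (j - i) + 2,
    # while sibling leaves (same internal vertex) are at distance 2
    c = Counter()
    for i, ni in enumerate(leaf_counts):
        c[2] += ni * (ni - 1) // 2                      # sibling pairs
        for j in range(i + 1, len(leaf_counts)):
            c[j - i + 2] += ni * leaf_counts[j]
    return c

# the two caterpillars from the example have identical distance multisets
print(pair_distance_counts([2, 11, 12]))
print(pair_distance_counts([3, 14, 8]))
```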
Note in this last example that if we looked at the multisets of lengths of subtrees spanned by three leaves, then we would see the length $3$ appearing $\binom{2}{3} + \binom{11}{3}+\binom{12}{3} = 385$ times for $\TT'$ and $\binom{3}{3} + \binom{14}{3}+\binom{8}{3} = 421$ times for $\TT''$, and hence the probability distribution of the length of the subtree spanned by three leaves chosen uniformly at random is not the same for the two trees.
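The triple counts can be checked the same way: in a caterpillar, only sibling triples (three leaves at a common internal vertex) span a subtree of length $3$, since any other triple uses at least one path edge. A sketch:

```python
from math import comb

def sibling_triples(leaf_counts):
    # triples of leaves sharing one internal vertex span a subtree of length 3;
    # comb(n, 3) is 0 when n < 3, so small vertices contribute nothing
    return sum(comb(n, 3) for n in leaf_counts)

print(sibling_triples([2, 11, 12]))   # 385
print(sibling_triples([3, 14, 8]))    # 421
```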
In order to proceed further, we need to introduce some more notation. Write $\LL(\TT)$ for the set of leaves of an edge-weighted tree $\TT$. Given a subset $K$ of $\LL(\TT)$, let $\WW_\TT(K)$ be the length of the subtree spanned by $K$; that is, $\WW_\TT(K)$ is the sum of the lengths of the edges in the smallest connected subgraph of $\TT$ with a vertex set that contains $K$.
It is possible to calculate the total length of $\TT$, that is, $\WW_\TT(\LL(\TT))$, using the following result from [@MR2053839] that extends one for the special case of $3$-valent trees in [@pauplin2000direct]. Write $d_\TT(v)$ for the degree of an interior vertex $v$ of $\TT$ (that is, $v \in \VV(\TT) \setminus \LL(\TT)$). For distinct leaves $x,y \in \LL(\TT)$ denote by $I_\TT(x,y)$ the set of interior vertices on the unique path in $\TT$ between $x$ and $y$ and put $$h_\TT(x,y) := \prod_{v \in I_\TT(x,y)} ((d_\TT(v) - 1)!)^{-1}.$$ Let $r_\TT(x,y)$ be the sum of the lengths of the edges in the path between $x$ and $y$. Then, $$\WW_\TT(\LL(\TT))
=
\sum_{\{x,y\} \subseteq \LL(\TT), x \ne y} h_\TT(x,y) r_\TT(x,y).$$ Of course, a similar formula gives $\WW_\TT(K)$ for any $K \subseteq \LL(\TT)$; the path between a pair of leaves of the subtree is the same as the path between them in $\TT$, the length of this path is the same in the subtree as it is in $\TT$, but the degree of an interior vertex of the subtree can be less than its degree as an interior vertex of $\TT$.
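As a sanity check of the displayed formula, the following sketch evaluates the weighted sum over leaf pairs on a hypothetical $3$-valent quartet tree (leaves `a,b` under `u`, leaves `c,d` under `v`, interior edge `u`-`v`; all names and lengths are ours) and recovers the total length directly.

```python
from itertools import combinations
from math import factorial

# tree as dict: vertex -> {neighbor: edge length}
def path(tree, x, y):
    # depth-first search for the unique path from x to y
    stack = [(x, [x])]
    while stack:
        v, p = stack.pop()
        if v == y:
            return p
        for w in tree[v]:
            if len(p) < 2 or w != p[-2]:       # do not step back to the parent
                stack.append((w, p + [w]))

def total_length_via_formula(tree, leaves):
    total = 0.0
    for x, y in combinations(leaves, 2):
        p = path(tree, x, y)
        r = sum(tree[a][b] for a, b in zip(p, p[1:]))   # r_T(x, y)
        h = 1.0
        for v in p[1:-1]:                               # interior vertices on the path
            h /= factorial(len(tree[v]) - 1)            # ((d_T(v) - 1)!)^{-1}
        total += h * r
    return total

T = {'a': {'u': 1}, 'b': {'u': 2}, 'c': {'v': 3}, 'd': {'v': 4},
     'u': {'a': 1, 'b': 2, 'v': 5}, 'v': {'c': 3, 'd': 4, 'u': 5}}
print(total_length_via_formula(T, 'abcd'))  # 15.0 = 1 + 2 + 3 + 4 + 5
```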
Suppose that $\# \LL(\TT) = n$ and $Y_1, \ldots, Y_n$ is a uniformly distributed random listing of $\LL(\TT)$; that is, $Y_1, \ldots, Y_n$ is the result of sampling the leaves of $\TT$ uniformly at random without replacement. Set $W_k := \WW_\TT(\{Y_1, \ldots, Y_k\})$ for $2 \le k \le n$; that is, the random variable $W_k$ is the length of the subtree spanned by the first $k$ of the randomly chosen leaves. We write $\cW_\TT$ for the $(n-1)$-dimensional random vector $(W_2, \ldots, W_n)$ and call this random vector the [*random length sequence*]{} of $\TT$.
In this paper we address the following question.
[Can we reconstruct the edge-weighted tree $\TT$ up to isomorphism from the joint probability distribution of the random length sequence $\cW_{\TT}$?]{} \[q:main\]
Another way of framing this question is the following. Write $y_1, \ldots, y_n$ for the leaves of $\TT$ and let $\cJ_\TT$ be the multiset with cardinality $n!$ that results from listing the $(n-1)$-dimensional vectors $$(\WW_\TT(\{y_{\pi(1)},y_{\pi(2)}\}),
\WW_\TT(\{y_{\pi(1)},y_{\pi(2)},y_{\pi(3)}\}),
\ldots, \WW_\TT(\{y_{\pi(1)}, \ldots ,y_{\pi(n)}\}))$$ as $\pi$ ranges over the permutations of $[n]:=\{1,\ldots,n\}$. We stress that $\cJ_\TT$ is a multiset; that is, we do not know which increasing sequences of lengths go with which ordered listings of the leaves.
[ Can we reconstruct the edge-weighted tree $\TT$ up to isomorphism from the multiset of length sequences $\cJ_\TT$?]{}
We end this section with some remarks about the problem of reconstructing trees from various so-called [*decks*]{}, as this subject has some similarities to the questions we consider. In [@MR0120127], Ulam asked whether it is possible to reconstruct the isomorphism type of a graph with at least $3$ vertices from the isomorphism types of the subgraphs obtained by deleting each of the vertices. This question was resolved in the affirmative for combinatorial trees in [@MR0087949]. Moreover, later results established that it is not necessary to know the forests obtained by deleting every vertex. For example, it was shown in [@MR0200190] that it suffices to know the subtrees obtained by deleting leaves. This latter result was strengthened in [@MR0256926], where it was found that it is only necessary to know which nonisomorphic forests are obtained and not what the multiplicity of each isomorphism type is, and in [@MR0260614], where it was shown that it suffices to take only those leaves $p$ that are [*peripheral*]{} in the sense that $$\max_{v \in \VV(\TT)} r_\TT(p,v)
=
\max_{u \in \VV(\TT)} \max_{v \in \VV(\TT)} r_\TT(u,v).$$ Along the same lines, it was established in [@MR680306] that it is enough to take only the nonleaf vertices, provided that there are at least three of them. The line of inquiry in [@MR786484] is the most similar to ours: an example was presented of two trees for which the respective sets of vertices may be paired up in such a way that for each pair the sizes of the trees in the forests produced by removing each element of the pair from its tree are the same, and a necessary and sufficient condition was given for a tree to be uniquely reconstructible from this sort of data, which the authors of [@MR786484] call the [*number deck*]{} of the tree.
Overview of the main results
----------------------------
We will answer Question \[q:main\] in the affirmative for a few different classes of trees. Some classes will have general edge-weights and some classes will be combinatorial trees. It is clear that in the case of general edge-weights we must restrict to trees that have no vertices with degree $2$ because otherwise we can subdivide any edge into arbitrarily many edges with the same total length and the joint probability distribution of the random length sequence will be unchanged – see . We call such trees [*simple*]{}. The terms irreducible or homomorphically irreducible are also used in the literature.
![Two non-isomorphic edge-weighted trees that cannot be distinguished by the joint probability distribution of their random length sequences.[]{data-label="fig:NonSimple cex"}](NonSimpleCex.png){width="40.00000%"}
Our first result is for the class of [*stars*]{}; that is, edge-weighted trees with $n \ge 3$ leaves that have a single interior vertex. Note that such trees are simple. For any edge-weighted tree with $n$ leaves, $W_n$ is a constant (the total length of the tree) and $W_n - W_{n-1}$ is a uniformly distributed random pick from the lengths of the $n$ edges that are adjacent to one of the leaves. The following simple result is immediate from this observation.
[For $n \ge 3$ the isomorphism type of a star is uniquely determined by the joint probability distribution of its random length sequence.]{} \[thrm:main star\]
The simple trees with two leaves all consist of a single edge and have a random length sequence $(W_2)$, where $W_2$ is the length of that edge, and so the isomorphism type of such a tree is uniquely determined by the joint probability distribution of its random length sequence. The simple trees with three leaves are stars, and it follows from that the isomorphism type of such a tree is uniquely determined by the joint probability distribution of its random length sequence.
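The reconstruction of a star from its length sequences can be sketched concretely. In the toy code below (our own encoding, not from the text), the star's sequences are enumerated exhaustively and the multiset of pendant edge lengths is read off from the distribution of $W_n - W_{n-1}$.

```python
from itertools import permutations
from collections import Counter

def star_length_sequences(edge_lengths):
    # for a star, the subtree spanned by k leaves uses exactly their k pendant
    # edges, so W_k is the sum of the first k pendant lengths in the ordering
    return [tuple(sum(pi[:k]) for k in range(2, len(pi) + 1))
            for pi in permutations(edge_lengths)]

def recover_edge_lengths(seqs):
    # W_n - W_{n-1} is a uniform pick from the n pendant edge lengths,
    # so its empirical frequencies reveal each length's multiplicity
    last_gaps = Counter(s[-1] - s[-2] for s in seqs)
    n = len(seqs[0]) + 1
    lengths = []
    for gap, cnt in last_gaps.items():
        lengths += [gap] * (cnt * n // len(seqs))
    return sorted(lengths)

print(recover_edge_lengths(star_length_sequences((1.0, 2.0, 4.0, 8.0))))
```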
We next consider simple, edge-weighted trees with four leaves.
[For $2 \le n \le 4$, the isomorphism type of a simple, edge-weighted tree $\TT$ with $n$ leaves is uniquely determined by the joint probability distribution of its random length sequence.]{} \[thrm:main n4\]
The proof of this result is via consideration of possible cases. Similar proofs could be attempted for larger numbers of leaves, but the main reason we include the result is to show how such a proof for even a small number of leaves leads to quite a few cases and because we will need the case of four leaves later.
It is well-known that any simple, combinatorial tree with labeled leaves can be reconstructed from the simple, combinatorial trees spanned by each subset of four leaves (the so-called quartets) [@MR2060009 Theorem 6.3.7]. With this and in mind, one might imagine that the isomorphism type of a simple, edge-weighted tree can be determined from the joint probability distribution of $(W_2, W_3, W_4)$. However, putting such a strategy into practice would seem to be rather complicated because there can be two sets of leaves $\{y_1', y_2', y_3', y_4'\}$ and $\{y_1'', y_2'', y_3'', y_4''\}$ such that $\{y_1', y_2', y_3', y_4'\} \ne \{y_1'', y_2'', y_3'', y_4''\}$ but $\WW_\TT(\{y_1', y_2'\}) = \WW_\TT(\{y_1'', y_2''\})$, $\WW_\TT(\{y_1', y_2', y_3'\}) = \WW_\TT(\{y_1'', y_2'', y_3''\})$, and $\WW_\TT(\{y_1', y_2', y_3',y_4'\}) = \WW_\TT(\{y_1'', y_2'', y_3'',y_4''\})$. One way of ruling out such annoying algebraic coincidences is to assume that the edge-weighted tree $\TT$ has [*edge-weights in general position*]{}, by which we mean that the sums of the lengths of any two different (not necessarily disjoint) subsets of edges of $\TT$ are not equal.
[The isomorphism type of a simple, edge-weighted tree $\TT$ with edge-weights in general position is uniquely determined by the joint probability distribution of its random length sequence.]{} \[thrm:main generalposition\]
The last family of edge-weighted trees with general edge-weights whose elements we can identify up to isomorphism from the joint probability distributions of their random length sequences is the class of [*ultrametric*]{} trees. For the sake of completeness, we now define this class. Recall that for leaves $i, j \in \LL(\TT)$ we denote by $r_\TT(i, j)$ the distance between them; that is, $r_\TT(i,j)$ is the sum of the lengths of the edges on the unique path between $i$ and $j$. The edge-weighted tree $\TT$ is ultrametric if for any leaves $i, j, k \in \LL(\TT)$ we have $$r_\TT(i, k) \leq r_\TT(i, j) \vee r_\TT(j, k),$$ from which it follows that for any leaves $i, j, k \in \LL(\TT)$ at least two of the distances $r_\TT(i, j)$, $r_\TT(i, k)$, and $r_\TT(j, k)$ are equal while the third is no greater than that common value. Equivalently, an edge-weighted tree $\TT$ is ultrametric if, when it is thought of as a real tree (that is, a metric space where the edges are treated as real intervals of varying lengths given by their edge-weights – see, for example, [@MR2351587]), then there is a (unique) point $\rho$ called the root (which may be in the interior of an edge) such that the distance from $\rho$ to a leaf is the same for all leaves. We will make use of both definitions. It is immediate from the former definition that the subtree of an ultrametric tree spanned by a subset of leaves is itself ultrametric.
[The isomorphism type of an ultrametric, simple, edge-weighted tree $\TT$ is uniquely determined by the joint probability distribution of its random length sequence.]{} \[thrm:main ultrametric\]
[The proof of establishes an even stronger result. Namely, the isomorphism type of an ultrametric, simple, edge-weighted tree $\TT$ is uniquely determined by the minimal element of $\cJ_\TT$ in the lexicographic order.]{} \[rem:ultrametric min enough\]
[We call attention to a subtle point in the statements of and . Both results say that if we are given the joint probability distribution of the random length sequence of an edge-weighted tree $\TT$ – information that certainly includes the number of leaves of $\TT$ – and we know, a priori, that $\TT$ has a certain extra property (edge-weights in general position or ultrametricity), then we can determine the isomorphism type of $\TT$. The theorems do not, however, say whether it is possible to determine from the joint probability distribution of its random length sequence whether a simple, edge-weighted tree $\TT$ has its edge-weights in general position or is ultrametric. We do not have results that settle this question, but we say some more about it in and believe it is an interesting area for future research.]{} \[rem:ultrametric determination\]
Observe that if $\TT$ is an edge-weighted tree, $a$ is any vertex of $\TT$, and $c$ is a constant such that $c \ge \max\{r_\TT(a,i) : i \in \LL(\TT)\}$, then $\tilde r_\TT: \LL(\TT) \times \LL(\TT) \to \bR_+$ defined by $$\tilde r_\TT(i,j) := c + \frac{1}{2}(r_\TT(i,j) - r_\TT(a,i) - r_\TT(a,j)), \quad i \ne j,$$ and $$\tilde r_\TT(i,i) := 0,$$ is an ultrametric on $\LL(\TT)$ that arises from suitable edge-weights on $\TT$. The metric $\tilde r_\TT$ is often called the [*Farris transform*]{} of $r_\TT$ – see [@MR2311928] for a review of the many appearances of this object in various areas from phylogenetics to metric geometry. It might be hoped that an affirmative answer to for general edge-weighted trees will follow from . However, we have been unable to find an argument which shows that the joint probability distribution of the random length sequence of the tree $\TT$ equipped with the new edge-weights is determined by the joint probability distribution of the random length sequence for the original edge-weights.
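The Farris transform is straightforward to experiment with numerically. The sketch below uses the leaf-to-leaf distances of a hypothetical quartet tree (pendant lengths $1,2,3,4$, interior edge $5$), takes the base point $a$ to be leaf $1$ and $c = 10$, and checks the three-point ultrametric condition.

```python
from itertools import combinations

def farris_transform(r, leaves, a, c):
    # Farris transform: t(i,j) = c + (r(i,j) - r(a,i) - r(a,j)) / 2
    return {(i, j): c + 0.5 * (r[i, j] - r[a, i] - r[a, j])
            for i, j in combinations(leaves, 2)}

def is_ultrametric(t, leaves):
    d = lambda x, y: t[(x, y)] if (x, y) in t else t[(y, x)]
    # three-point condition: d(i,k) <= max(d(i,j), d(j,k)) for distinct i,j,k
    return all(d(i, k) <= max(d(i, j), d(j, k))
               for i in leaves for j in leaves for k in leaves
               if len({i, j, k}) == 3)

# distances in the hypothetical quartet tree; leaf 1 doubles as base point a
r = {(1, 2): 3, (1, 3): 9, (1, 4): 10, (2, 3): 10, (2, 4): 11,
     (3, 4): 7, (1, 1): 0}
leaves = [1, 2, 3, 4]
t = farris_transform(r, leaves, a=1, c=10)
print(t)
```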
Suppose that $\TT$ is a rooted, simple combinatorial tree with root $\rho$. We can define a partial order on $\VV(\TT)$ by declaring that $x \le y$ if $x$ is on the unique path from $\rho$ to $y$. Two vertices $x,y \in \VV(\TT)$ have a unique greatest lower bound in this partial order that we write as $x \wedge y$ and call the [*most recent common ancestor*]{} of $x$ and $y$. The map $\hat r_\TT: \LL(\TT) \times \LL(\TT) \to \bR_+$ defined by $$\hat r_\TT(i,j) := \#\{k \in \LL(\TT) : i \wedge j < k\}$$ is an ultrametric on $\LL(\TT)$ and hence it arises from a collection of edge-weights $\hat \WW_\TT$ on $\TT$. A directed edge $(x,y)$ in $\TT$ with $x \le y$ is necessarily of the form $x = i \wedge j = i \wedge k$ and $y = j \wedge k$ for some $i,j,k \in \LL(\TT)$. If $e = \{x,y\}$ is the corresponding undirected edge, then $$\begin{split}
\hat \WW_\TT(e)
& = \frac{1}{2}(\hat r_\TT(i,j) - \hat r_\TT(j,k)) \\
& = \frac{1}{2}(\#\{\ell \in \LL(\TT) : x < \ell\} - \#\{\ell \in \LL(\TT) : y < \ell\}). \\
\end{split}$$ Therefore, if $\TT'$ is a subtree of $\TT$ spanned by some set of leaves $K \subseteq \LL(\TT)$ and $\DD(\TT')$ is the set of directed edges of $\TT'$, then we have that the length of $\TT'$ is $$\begin{split}
\hat \WW_\TT(K)
& = \frac{1}{2}\sum_{(x,y) \in \DD(\TT')}\left(\sum_{\ell \in \LL(\TT)} {\mathbbold{1}}\{x < \ell\} - {\mathbbold{1}}\{y < \ell\}\right) \\
& = \frac{1}{2} \#\{((x,y),\ell) \in \DD(\TT') \times \LL(\TT) : x < \ell, \, y \not < \ell\}. \\
\end{split}$$ The following result is immediate from and .
[The isomorphism type of a simple, combinatorial tree $\TT$ is uniquely determined by the minimal element of the set $\cJ_\TT$ of length sequences obtained after designating a root for $\TT$ and equipping $\TT$ with the edge-weights $\hat \WW_\TT$.]{}
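To illustrate the construction of $\hat r_\TT$, here is a sketch on a small rooted tree; the parent-pointer encoding and vertex names are our own. The value $\hat r_\TT(i,j)$ counts the leaves strictly below the most recent common ancestor of $i$ and $j$, and the result satisfies the three-point ultrametric condition.

```python
# rooted simple combinatorial tree via parent pointers; the root 'r' has
# children u, v, e; u has leaf children a, b; v has leaf children c, d
PARENT = {'u': 'r', 'v': 'r', 'a': 'u', 'b': 'u', 'c': 'v', 'd': 'v', 'e': 'r'}
LEAVES = ['a', 'b', 'c', 'd', 'e']

def ancestors(x):
    # x followed by its ancestors up to the root
    path = [x]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def mrca(x, y):
    # first ancestor of y that is also an ancestor of x
    ax = ancestors(x)
    for v in ancestors(y):
        if v in ax:
            return v

def r_hat(x, y):
    # number of leaves strictly below the most recent common ancestor
    v = mrca(x, y)
    return sum(1 for l in LEAVES if v in ancestors(l)[1:])

print(r_hat('a', 'b'), r_hat('a', 'c'))  # 2 5
```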
We now turn our focus to combinatorial trees and drop the assumption of simplicity. That is, all edge-weights are equal to one and there may be vertices with degree two. We answer Question \[q:main\] in the affirmative for two families of combinatorial trees.
![A caterpillar tree. Removing the leaves (white vertices) results in a path of length $5$ (black vertices).[]{data-label="fig:Cat ex"}](CaterpillerEx.png){width="40.00000%"}
First, a combinatorial tree $\TT$ is a [*caterpillar*]{} if the deletion of the leaves along with the edges adjacent to them results in a path with $\ell+1$ vertices (and hence $\ell$ edges) – see, for example, . Choose some direction for the path and number from $0$ to $\ell$ the vertices on the path encountered successively in that direction and write $n_i$ for the number of leaves adjacent to the vertex numbered $i$. Note that $n_0 \ge 1$ and $n_\ell \ge 1$. Two sequences $n_0', \ldots, n_{\ell'}'$ and $n_0'', \ldots, n_{\ell''}''$ correspond to isomorphic trees if and only if $\ell' = \ell'' = \ell$, say, and either $n_i' = n_i''$, $0 \le i \le \ell$ or $n_i' = n_{\ell-i}''$, $0 \le i \le \ell$.
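The isomorphism criterion just stated, equality of the leaf-count sequences up to reversal, can be sketched as a one-line check (a hypothetical helper of ours):

```python
def caterpillar_isomorphic(ns1, ns2):
    # caterpillars encoded by leaf counts n_0, ..., n_l along the internal path;
    # isomorphic iff the sequences agree as read in one of the two directions
    return ns1 == ns2 or ns1 == ns2[::-1]

print(caterpillar_isomorphic([2, 11, 12], [12, 11, 2]))  # True
print(caterpillar_isomorphic([2, 11, 12], [3, 14, 8]))   # False
```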
[The isomorphism type of a caterpillar is uniquely determined by the joint probability distribution of its random length sequence. Furthermore, it is possible to determine from the joint probability distribution of the random length sequence of a combinatorial tree whether the tree is a caterpillar.]{} \[thrm:main caterpillar\]
Our final results are for the classes of (unrooted) [*$(k+1)$-valent*]{} and [*rooted $k$-ary*]{} combinatorial trees. For $k \geq 2$, a $(k+1)$-valent combinatorial tree is a combinatorial tree for which all vertices have degree either $k+1$ (the internal vertices) or $1$ (the leaves). For $k \geq 2$, a rooted $k$-ary combinatorial tree is a combinatorial tree for which one internal vertex (the root) has degree $k$ and the remaining internal vertices have degree $k+1$; the leaves, of course, have degree $1$. When $k = 2$ we refer to a rooted $2$-ary combinatorial tree as a rooted [*binary*]{} combinatorial tree. Attaching an extra vertex via an edge to the root of a rooted $k$-ary tree produces a $(k+1)$-valent combinatorial tree.
[The isomorphism type of a $(k+1)$-valent combinatorial tree (respectively, a $k$-ary tree) is uniquely determined by the joint probability distribution of its random length sequence.]{} \[thrm:main kary1\]
In fact, our proof of leads us to a stronger conclusion.
[Fix $n > 1$. Let $\cT$ be a random $(k+1)$-valent combinatorial tree (respectively, a random $k$-ary combinatorial tree) with $n$ leaves. Then, the probability distribution of the isomorphism type of $\cT$ is uniquely determined by the joint probability distribution of its random length sequence. ]{} \[thrm:main kary2\]
Note that in there are two sources of randomness in the construction of the random length sequence: we first choose a realization of the random $\cT$ and then take an independent uniform random listing of the leaves to build the increasing sequence of subtrees and their lengths.
The rest of the paper consists primarily of proofs of the above results in the order we have presented them. In we briefly discuss further open questions related to Question \[q:main\].
Trees with up to $n = 4$ leaves: Proof of {#sec:n4 trees}
==========================================
We begin by looking at Question \[q:main\] for edge-weighted trees with a small number of leaves and give a proof of that answers Question \[q:main\] in the affirmative for general, simple edge-weighted trees with $n = 2, 3$ or $4$ leaves.
The case of for simple trees with $n = 2$ leaves is trivial, as all such trees have two leaves and one edge, $\cW_\TT = (W_2)$ in this case, and $W_2$ is the length of the edge.
The case of $n = 3$ leaves is only slightly more complicated, as all such trees are star-shaped. Thus, determining $\TT$ from $\cW_{\TT}$ consists of determining its three edge weights. These can be inferred easily from $\cW_{\TT}$ by looking at the distribution of $W_3 - W_2$, which, since $W_3$ is constant (equal to the total length of $\TT$), is distributed as a uniform random choice from the three edge weights.
Finally, we give a proof of in the case when $n = 4$.
For $n = 4$ leaves, there are two possible simple combinatorial trees, and hence two possibilities for the shape of $\TT$. The first is the star-shaped tree with four edges and one interior vertex. The second is the $3$-valent tree with two interior vertices and one interior edge. See .
![The two possible simple combinatorial trees with $n = 4$ leaves.[]{data-label="fig:n4 trees"}](n4Trees.png){width="40.00000%"}
To determine which possibility $\TT$ is, we first look at the distribution of $W_4 - W_3$ to find the lengths of the four edges connecting directly to the four leaves. Call these edges [*pendent*]{}. If the sum of the four pendent edge lengths equals $W_4$, then $\TT$ is star-shaped and we have determined $\TT$ up to isomorphism. If not, then $\TT$ is $3$-valent and the difference between $W_4$ and the sum of the pendent edge lengths is the length $e$ of the interior edge. All that is left to determine $\TT$ up to isomorphism in this second case is determining how the pendent edges pair on each side of the interior edge.
First, if the multiset of the lengths of pendent edges is of the form $\{a, a, a, a\}$ or $\{a, a, a, b \}$, then $\TT$ is already uniquely determined.
Next, if the multiset is of the form $\{a, a, b, b \}$, then we need to distinguish between the case where the leaves with pendent edges of length $a$ are siblings (and thus so are the leaves with pendent edge length $b$) and the case where leaves with pendent edge lengths $a$ and $b$ are paired. In the former case the possible values of $W_2$ are $a+a, b+b, a+b+e$ with respective probabilities $\frac{4}{24}, \frac{4}{24}, \frac{16}{24}$, whereas in the latter case the possible values of $W_2$ are $a+b, a+b+e, a+a+e, b+b+e$ with respective probabilities $\frac{8}{24}, \frac{8}{24}, \frac{4}{24}, \frac{4}{24}$, and we can certainly distinguish between the two cases.
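These two distributions can be confirmed by enumerating all $4! = 24$ leaf orderings. A sketch with illustrative lengths $a = 1$, $b = 2$, $e = 1/2$ (our own choices):

```python
from itertools import permutations
from collections import Counter

def w2_distribution(pairing, lengths, e):
    # pairing: two sibling pairs of leaf indices 0..3; lengths: pendant edge
    # lengths; e: interior edge length; counts are out of 4! = 24 orderings
    side = set(pairing[0])
    c = Counter()
    for pi in permutations(range(4)):
        i, j = pi[0], pi[1]
        d = lengths[i] + lengths[j]
        if (i in side) != (j in side):   # leaves on opposite sides use edge e
            d += e
        c[d] += 1
    return c

a, b, e = 1.0, 2.0, 0.5
d1 = w2_distribution(((0, 1), (2, 3)), (a, a, b, b), e)  # a,a and b,b siblings
d2 = w2_distribution(((0, 2), (1, 3)), (a, a, b, b), e)  # a,b and a,b siblings
print(d1)
print(d2)
```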
If the multiset of pendent edge lengths is of the form $\{a, a, b, c\}$, then there are the following two possibilities:
- the two leaves with pendent edge length $a$ are siblings and the two leaves with pendent edge lengths $b$ and $c$ are siblings, in which case the possible values of $W_2$ are $a+a, a+b+e, a+c+e, b+c$ with respective probabilities $\frac{4}{24}, \frac{8}{24}, \frac{8}{24}, \frac{4}{24}$;
- a leaf with pendent edge length $a$ is the sibling of the one with pendent edge length $b$ and the other leaf with pendent edge length $a$ is the sibling of the one with pendent edge length $c$, in which case the possible values of $W_2$ are $a+b, a+c, a+a+e, a+b+e, a+c+e, b+c+e$ with respective probabilities $\frac{4}{24}, \frac{4}{24}, \frac{4}{24}, \frac{4}{24}, \frac{4}{24}, \frac{4}{24}$.
Suppose without loss of generality that $b<c$. If $a<b<c$, then $\bP\{W_2 = a+a\}$ is $\frac{4}{24}$ for (P1) and $0$ for (P2). If $b<a<c$ or $b<c<a$, then $\bP\{W_2 = a+c+e\}$ is $\frac{8}{24}$ for (P1) and $\frac{4}{24}$ for (P2). In all cases we can distinguish between (P1) and (P2).
Finally, if the multiset of pendent edge lengths is of the form $\{a, b, c, d \}$, then there are the following two possibilities:
- the leaf with pendent edge length $a$ is paired with the one with pendent edge length $b$ and the leaf with pendent edge length $c$ is paired with the one with pendent edge length $d$, in which case the possible values of $W_2$ are $a+b, c+d, a+c+e, a+d+e, b+c+e, b+d+e$ with common probability $\frac{4}{24}$;
- the leaf with pendent edge length $a$ is paired with the one with pendent edge length $c$ and the leaf with pendent edge length $b$ is paired with the one with pendent edge length $d$, in which case the possible values of $W_2$ are $a+c, b+d, a+b+e, a+d+e, b+c+e, c+d+e$ with common probability $\frac{4}{24}$;
- the leaf with pendent edge length $a$ is paired with the one with pendent edge length $d$ and the leaf with pendent edge length $b$ is paired with the one with pendent edge length $c$, in which case the possible values of $W_2$ are $a+d, b+c, a+c+e, a+b+e, c+d+e, b+d+e$ with common probability $\frac{4}{24}$.
Suppose without loss of generality that $a < b < c < d$. Then possibility (P3) holds if and only if $\bP\{W_2=a+b\} > 0$ and possibility (P5) holds if and only if $\bP\{W_2=a+b\} = 0$ and $\bP\{W_2 = b+d+e\}>0$, so we can distinguish between (P3), (P4) and (P5).
The argument in the proof of seems rather [*ad hoc*]{} and it does not suggest a systematic approach to obtaining the analogous result for trees with an arbitrary number of leaves. The number of simple combinatorial trees with $n$ leaves grows so rapidly with $n$ (see, for example, [@felsenstein]) that even for trees with a relatively small fixed number of leaves a case-by-case argument seems rather forbidding. Nonetheless, we do conjecture that an affirmative answer to Question \[q:main\] holds more generally.
Trees in general position: Proof of {#sec:general pos}
====================================
Recall that the edge-weights of a simple, edge-weighted tree $\TT$ are in general position if the sums of the lengths of any two distinct subsets of edges of $\TT$ are not equal.
By assumption, if $\{y_1', \ldots, y_k'\}$ and $\{y_1'', \ldots, y_k''\}$ are two subsets of $\LL(\TT)$ such that $\WW_\TT(\{y_1', \ldots, y_k'\}) = \WW_\TT(\{y_1'', \ldots, y_k''\})$, then $\{y_1', \ldots, y_k'\} = \{y_1'', \ldots, y_k''\}$. Consequently, if $\{y_1', \ldots, y_k'\}$ and $\{y_1'', \ldots, y_k''\}$ are two subsets of $\LL(\TT)$ such that $\WW_\TT(\{y_1', \ldots, y_j'\}) = \WW_\TT(\{y_1'', \ldots, y_j''\})$ for $2 \le j \le k$, then $\{y_1', y_2'\} = \{y_1'', y_2''\}$ and $y_j' = y_j''$ for $3 \le j \le k$.
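A convenient way to produce edge-weights in general position is to take powers of two, since distinct subsets of powers of two have distinct sums. The following sketch (a quartet tree of our own choosing, encoded by the leaf set on one side of each edge) confirms that every leaf subset is then identified by the length of its spanned subtree.

```python
from itertools import combinations

# quartet tree with edge-weights in general position (powers of two)
EDGES = [({1}, 1), ({2}, 2), ({3}, 4), ({4}, 8), ({1, 2}, 16)]
LEAVES = [1, 2, 3, 4]

def spanned_length(K):
    # an edge is in the subtree spanned by K iff K meets both of its sides
    return sum(w for side, w in EDGES if K & side and K - side)

subtree_lengths = {frozenset(K): spanned_length(set(K))
                   for k in range(2, 5)
                   for K in combinations(LEAVES, k)}

# distinct leaf subsets get distinct lengths, as the proof requires
print(len(set(subtree_lengths.values())), len(subtree_lengths))
```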
Recall that $Y_1, \ldots, Y_n$ are the successive randomly chosen leaves used in the construction of $\cW_\TT = (W_2, \ldots, W_n)$.
Because $W_n - W_{n-1}$ is the length of the pendent edge attaching $Y_n$ to the rest of $\TT$, it follows that the set $C := \{\ell > 0 : \bP\{W_n - W_{n-1} = \ell\} > 0\}$ has $n$ elements and $\bP\{W_n - W_{n-1} = \ell\}=\frac{1}{n}$ for each $\ell \in C$. There are at least two leaves of $\TT$ that are siblings, and so there exist $\ell', \ell'' \in C$ such that $\bP\{W_2 = \ell' + \ell''\} > 0$. Fix such a pair of lengths and write $x_1$ and $x_2$ for the (unique) leaves of $\TT$ with pendent edges having respective lengths $\ell'$ and $\ell''$. We have $\bP\{W_2 = \ell' + \ell''\} = \frac{1}{\binom{n}{2}}$, and the event $\{W_2 = \ell' + \ell''\}$ coincides with the event $\{\{Y_1,Y_2\} = \{x_1, x_2\}\}$.
By assumption, the set $D :=
\{\ell > 0 : \bP\{W_3 - W_2 = \ell \, | \, W_2 = \ell' + \ell''\} > 0\}$ has $n-2$ elements and $\bP\{W_3 - W_2 = \ell \, | \, W_2 = \ell' + \ell''\} = \frac{1}{n-2}$ for each $\ell \in D$. Index the values of $D$ as $\ell_3, \ldots, \ell_n$ and write $x_k$, $3 \le k \le n$, for the unique leaf of $\TT$ that is distance $\ell_k$ from the unique vertex of $\TT$ that is adjacent to both of the sibling leaves $x_1$ and $x_2$. We will show that it is possible to determine the leaf-to-leaf distances $r_\TT(x_i, x_j)$, $1 \le i,j \le n$. As we recalled in the Introduction, this information uniquely identifies the isomorphism type of $\TT$.
Again by assumption, the set $E := \{\ell > 0 : \bP\{W_4 = \ell \, | \, W_2 = \ell' + \ell''\} > 0\}$ has $\binom{n-2}{2}$ elements and $\bP\{W_4 = \ell \, | \, W_2 = \ell' + \ell''\} = \frac{1}{\binom{n-2}{2}}$ for each $\ell \in E$. For a given $\ell \in E$ there is a unique ordered pair $(x_i, x_j)$, $3 \le i \ne j \le n$, and a unique $e \ge 0$ such that $$\bP\{W_3 - W_2 = \ell_i, \, W_4 - W_3 = \ell_j - e
\, | \, W_2 = \ell' + \ell'', \, W_4 = \ell\} > 0$$ and $$\bP\{W_3 - W_2 = \ell_j, \, W_4 - W_3 = \ell_i - e
\, | \, W_2 = \ell' + \ell'', \, W_4 = \ell\} > 0,$$ in which case the two conditional probabilities in question are both $\frac{1}{2}$. Moreover, every ordered pair $(x_i, x_j)$, $3 \le i \ne j \le n$, corresponds to some unique $\ell \in E$ and $e \ge 0$ in this way. The event $\{W_2 = \ell' + \ell'', \, W_3 - W_2 = \ell_i, W_4 - W_3 = \ell_j - e, \, W_4 = \ell\}$ coincides with the event $\{\{Y_1,Y_2\} = \{x_1, x_2\}, \, Y_3 = x_i, \, Y_4 = x_j\}$ and the event $\{W_2 = \ell' + \ell'', \, W_3 - W_2 = \ell_j, W_4 - W_3 = \ell_i - e, \, W_4 = \ell\}$ coincides with the event $\{\{Y_1,Y_2\} = \{x_1, x_2\}, \, Y_3 = x_j, \, Y_4 = x_i\}$. Considering the subtree of $\TT$ spanned by $\{x_1, x_2, x_i, x_j\}$ and ignoring the vertices with degree two to produce a simple tree, the leaves $x_i$ and $x_j$ are siblings in this simple tree (as are $x_1$ and $x_2$), and the quantity $e$ is the distance between the vertex in the subtree to which $x_i$ and $x_j$ are adjacent and the vertex to which $x_1$ and $x_2$ are adjacent; the lengths of the pendent edges connecting $x_i$ and $x_j$ to the rest of the subtree are $\ell_i - e$ and $\ell_j - e$. Thus, if the ordered pair $(x_i, x_j)$ corresponds to $\ell \in E$ and $e \ge 0$, then, recalling the notation $r_\TT$ for the path length distance in $\TT$, $r_\TT(x_1,x_2) = \ell'+\ell''$, $r_\TT(x_1, x_i) = \ell' + \ell_i$, $r_\TT(x_1, x_j) = \ell' + \ell_j$, $r_\TT(x_2, x_i) = \ell'' + \ell_i$, $r_\TT(x_2, x_j) = \ell'' + \ell_j$, and $r_\TT(x_i, x_j) = \ell_i + \ell_j - 2e$.
Therefore, the joint probability distribution of the random length sequence $\cW_\TT$ uniquely determines the matrix of leaf-to-leaf distances in $\TT$ and hence the isomorphism type of $\TT$.
Ultrametric trees: Proof of {#sec:ultrametric}
============================
Recall that $\cJ_\TT$ is the set of sequences $(\ell_2, \ldots, \ell_n)$ such that $\bP\{W_k = \ell_k, \, 2 \le k \le n\} > 0$. Write $\prec$ for the usual [*lexicographic*]{} total order on $\cJ_\TT$ (that is, $\ell' \prec \ell''$ if in the first coordinate where the two sequences differ the entry of $\ell'$ is smaller than the entry of $\ell''$). Equivalently, $\ell' \prec \ell''$ if either $\ell_2' < \ell_2''$ or $\ell_2' = \ell_2''$ and for the smallest $k \ge 2$ such that $\ell_{k+1}' - \ell_k' \ne \ell_{k+1}'' - \ell_k''$ we have $\ell_{k+1}' - \ell_k' < \ell_{k+1}'' - \ell_k''$. In this section we prove by showing that the tree $\TT$ is determined up to isomorphism by the minimal element of $\cJ_\TT$.
We use a similar technique (but with a different total order) to establish for $(k+1)$-valent and rooted $k$-ary combinatorial trees in .
Let $(\ell_2, \ell_3, \ldots, \ell_n)$ be the minimal element of $\cJ_\TT$. Write $x_1, x_2, \ldots, x_n$ for an ordering of $\LL(\TT)$ such that $\ell_k = \WW_\TT(\{x_1, x_2, \ldots, x_k\})$ for $k=2, \ldots, n$.
We will establish by induction that for $2 \leq k \leq n$ the ultrametric real tree spanned by the leaves $\{x_1, x_2, \ldots, x_k\}$ can be reconstructed from $(\ell_2, \ell_3, \ldots, \ell_k)$ and, moreover, if we adopt the convention that we draw ultrametric real trees in the plane with the root at the top and leaves along the bottom, then this particular real tree can be embedded in the plane with the leaves $x_1, x_2, \ldots, x_k$ in order from left to right.
The claim is certainly true when $k=2$. Suppose the claim is true for $2,3,\ldots,k$.
Write $\TT_k$ for the ultrametric real tree spanned by $\{x_1, x_2, \ldots, x_k\}$ and denote the height of $\TT_k$ by $h_k$; that is, $h_k$ is the common distance from each of the leaves of $\TT_k$ to the root $\rho_k$ of $\TT_k$. We can, of course, suppose that $\TT_2 \subset \TT_3 \subset \ldots \subset \TT_n$.
If $\ell_{k+1}-\ell_k \ge h_k$, then the ultrametric real tree $\TT_{k+1}$ spanned by $\{x_1, x_2, \ldots, x_k, x_{k+1}\}$ must consist of an arc of length $$h_{k+1} = \frac{1}{2}(\ell_{k+1} - \ell_k + h_k)$$ from the root $\rho_{k+1}$ of $\TT_{k+1}$ to the leaf $x_{k+1}$ and an arc of length $\frac{1}{2}(\ell_{k+1} - \ell_k - h_k)$ from the “new root” $\rho_{k+1}$ to the “old root” $\rho_k$. In this case we can, by the inductive hypothesis, certainly embed $\TT_{k+1}$ in the plane with the leaf $x_{k+1}$ to the right of the leaves $x_1, x_2, \ldots, x_k$.
Assume, therefore, that $\ell_{k+1}-\ell_k < h_k$. Then the ultrametric real tree $\TT_{k+1}$ must consist of $\TT_k$ and an arc of length $\ell_{k+1} - \ell_k$ joining $x_{k+1}$ to a point $y \in \TT_k$. It will suffice to show that $y$ must be on the arc $[\rho_k, x_k]$ that connects $\rho_k$ to $x_k$ because there is a unique ultrametric real tree consisting of $\TT_k$ and an arc of length $\ell_{k+1} - \ell_k$ joining a new leaf to a point on the arc $[\rho_k, x_k]$ (this tree must have root $\rho_k$ and the point where the arc of length $\ell_{k+1} - \ell_k$ attaches to $[\rho_k, x_k]$ must be at distance $h_k - (\ell_{k+1}-\ell_k)$ from $\rho_k$) and, moreover, such a tree can be embedded in the plane with the new leaf to the right of the leaves $\{x_1, x_2, \ldots, x_k\}$.
Suppose, then, that $y$ is not on the arc $[\rho_k, x_k]$. Let $j$ be the maximum of the indices $i < k$ such that $y$ is on the arc connecting $x_i$ to $\rho_k$. Write $u$ for the point that is closest to $x_{j+1}$ in the subtree spanned by $\{x_1, x_2, \ldots, x_j\}$ and $\rho_k$. Write $v$ for the point that is closest to $x_{j+1}$ in the subtree spanned by $\{x_1, x_2, \ldots, x_j\}$. Equivalently, $v$ is the point in the subtree spanned by $\{x_1, x_2, \ldots, x_j\}$ that is closest to $u$. We may, of course, have $u=v$ (which occurs if and only if $h_{j+1} = h_j$). By the inductive hypothesis, $u$ and $v$ are on the arc connecting $x_j$ to $\rho_k$ and $$r_\TT(x_{j+1}, u) + r_\TT(u,v) = \ell_{j+1} - \ell_j.$$
By construction, $y$ is the point closest to $x_{k+1}$ in the subtree spanned by $\{x_1, x_2, \ldots, x_j\}$ and $\rho_k$. Write $w$ for the point closest to $x_{k+1}$ in the subtree spanned by $\{x_1, x_2, \ldots, x_j\}$. Equivalently, $w$ is the point in the subtree spanned by $\{x_1, x_2, \ldots, x_j\}$ that is closest to $y$. We have $$\WW_\TT(\{x_1, \ldots, x_j, x_{k+1}\}) - \ell_j
= r_\TT(x_{k+1}, y) + r_\TT(y,w).$$
By the definition of $j$, the points $y$ and $u$ are on the arc connecting $x_j$ to $\rho_k$ and $r_\TT(u,x_j) > r_\TT(y, x_j)$. This implies that $r_\TT(u,v) \ge r_\TT(y,w)$. It also implies, by ultrametricity, that $$r_\TT(x_{k+1},y) = r_\TT(x_j,y) < r_\TT(x_j,u).$$ Consequently, $$\WW_\TT(\{x_1, \ldots, x_j, x_{k+1}\}) - \ell_j
< \ell_{j+1} - \ell_j.$$ This, however, contradicts the minimality of $(\ell_2, \ldots, \ell_n)$.
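The inductive step in the proof is effectively an algorithm. As an illustration (not part of the proof), the following Python sketch carries out the reconstruction: given a sequence $(\ell_2, \ldots, \ell_n)$ that is assumed to be the minimal element of $\cJ_\TT$, it records the height at which each new leaf's path merges into the planar embedding and returns the full matrix of leaf-to-leaf distances.

```python
def ultrametric_distances(ell):
    """Reconstruct leaf-to-leaf distances in an ultrametric tree from its
    minimal length sequence ell = [l_2, ..., l_n].  Minimality is assumed,
    so each new leaf attaches along the arc from the root to the current
    rightmost leaf, exactly as in the inductive step of the proof."""
    n = len(ell) + 1
    h = ell[0] / 2.0        # height of the tree spanned by x_1, x_2
    merges = [h]            # merges[i]: height where x_{i+1} meets x_{i+2}
    for k in range(1, n - 1):
        inc = ell[k] - ell[k - 1]
        if inc >= h:        # new leaf attaches above the old root
            h = (inc + h) / 2.0
            merges.append(h)
        else:               # new leaf attaches on the arc [rho_k, x_k]
            merges.append(inc)
    # two leaves lie at distance twice the highest merge point between them
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = 2 * max(merges[i:j])
    return d
```

For instance, for the three-leaf ultrametric tree of height $2$ in which $x_1$ and $x_2$ merge at height $1$, the minimal sequence is $(2, 5)$ and the sketch returns the pairwise distances $2, 4, 4$.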
As we noted in , it is interesting to know whether it is possible to determine from the joint probability distribution of the random length sequence whether an edge-weighted tree is ultrametric. The preceding proof of contains a procedure for reconstructing $\TT$ from the minimal element of $\cJ_\TT$ in the lexicographic order when $\TT$ is an ultrametric tree. If $\TT$ is an arbitrary edge-weighted tree and this procedure is applied to the minimal element of $\cJ_\TT$ in the lexicographic order, then it will still produce an ultrametric tree and so a necessary condition for $\TT$ to be ultrametric is that the joint probability distribution of the random length sequence of this ultrametric tree coincides with the joint probability distribution of $\cW_\TT$.
Along the same lines, suppose that $\TT$ is an arbitrary edge-weighted tree and, thinking of $\TT$ as a real tree, we root it at the unique point $\rho$ such that $$\max_{v \in \LL(\TT)} r_\TT(\rho,v)
=
r^*
:=
\frac{1}{2}\max_{u \in \LL(\TT)} \max_{v \in \LL(\TT)} r_\TT(u,v).$$ Then $\rho$ will have $k$ children for some $k$. Let $m_i$, $1 \le i \le k$, be the number of leaves $v$ in the subtree below the $i^{\mathrm{th}}$ child of $\rho$ such that $r_\TT(\rho,v) = r^*$. It is clear that $\TT$ is ultrametric if and only if $m_1 + \cdots + m_k = n$. Let $n_1, \ldots, n_\ell$ be a listing of the nonzero terms in the list $m_1, \ldots, m_k$. Note for $2 \le j \le \ell$ that $$\bP\{W_2 = 2 r^*, \ldots, W_j = j r^*\}
=
j ! \frac{1}{n(n-1) \cdots (n-j+1)}
\sum_{1 \le h_1 < \ldots < h_j \le \ell}
n_{h_1} \cdots n_{h_j}$$ and $$\max\{j \ge 2 : \bP\{W_2 = 2 r^*, \ldots, W_j = j r^*\} > 0\}
=
\ell.$$ Thus, the joint probability distribution of $\cW_\TT$ determines $\ell$ and the values of the elementary symmetric polynomials of degrees $2 \le j \le \ell$ evaluated at $n_1, \ldots, n_\ell$, and we want to know whether $n_1 + \cdots + n_\ell$, the value of the elementary symmetric polynomial of degree $1$ evaluated at $n_1, \ldots, n_\ell$, is $n$. The elementary symmetric polynomials of degrees $1,2,\ldots,\ell$ in $\ell$ real variables are algebraically independent over the reals, and so we cannot expect to recover $n_1 + \cdots + n_\ell$ from the values of the other elementary symmetric polynomials. However, there are inequalities connecting the values of the various elementary symmetric polynomials that can be used to establish necessary conditions and sufficient conditions for $\TT$ to be ultrametric. For example, set $$p_1 := \frac{1}{\ell}(n_1 + \cdots + n_\ell)$$ and $$\begin{split}
p_j
& :=
\frac{1}{\binom{\ell}{j}}
\sum_{1 \le h_1 < \ldots < h_j \le \ell}
n_{h_1} \cdots n_{h_j} \\
& =
\frac{1}{\binom{\ell}{j}} \frac{n(n-1) \cdots (n-j+1)}{j!}
\bP\{W_2 = 2 r^*, \ldots, W_j = j r^*\}, \quad 2 \le j \le \ell. \\
\end{split}$$ If $\alpha_1, \ldots, \alpha_\ell$ and $\beta_1, \ldots, \beta_\ell$ are positive constants such that $$\alpha_1 + 2 \alpha_2 + \cdots + \ell \alpha_\ell
=
\beta_1 + 2 \beta_2 + \cdots + \ell \beta_\ell$$ and $$\label{eq:alpha beta}
\alpha_j + 2 \alpha_{j+1} + \cdots + (\ell-j+1) \alpha_\ell
\ge
\beta_j + 2 \beta_{j+1} + \cdots + (\ell-j+1) \beta_\ell, \quad
2 \le j \le \ell,$$ then, by [@MR944909 Theorem 77, Chapter II] $$\prod_{j=1}^\ell p_j^{\alpha_j} \le \prod_{j=1}^\ell p_j^{\beta_j}.$$ Thus, if $\alpha_2, \ldots, \alpha_\ell$ and $\beta_2, \ldots, \beta_\ell$ satisfy the inequalities and $$\gamma := \sum_{j=2}^\ell j (\beta_j - \alpha_j),$$ then $$p_1
\le
\left(
\prod_{j=2}^\ell p_j^{\beta_j - \alpha_j}
\right)^{\frac{1}{\gamma}}$$ when $\gamma > 0$, and the opposite inequality holds when $\gamma < 0$. This observation leads to necessary conditions and sufficient conditions for $\TT$ to be ultrametric.
\[rem:more ultrametric determination\]
As we will see in for $(k+1)$-valent and rooted $k$-ary combinatorial trees, a somewhat similar argument based on the consideration of length sequences that are minimal with respect to a suitable order leads to a stronger result in that case. There we can not only determine $\TT$ from the joint probability distribution of its random length sequence, but if we have a random tree $\cT$ with a fixed number of leaves, then it is possible to determine the distribution of $\cT$ from the joint probability distribution of the random length sequence obtained by first picking a realization of $\cT$ and then independently picking a random ordering of the leaves to build a random length sequence.
Formally, we have some space $\bT$ of isomorphism types of trees, a corresponding space $\bS$ of possible length sequences, and a probability kernel $\nu$ from $\bT$ to $\bS$, where, for $\TT \in \bT$, $\nu(\TT, \cdot)$ is the element of $\cP(\bS)$, the space of probability measures on $\bS$, that is the joint probability distribution of the random length sequence built from $\TT$. An affirmative answer to for a particular $\bT$ means that the map $\TT \mapsto \nu(\TT, \cdot)$ from $\bT$ to $\cP(\bS)$ is injective. Given an element $\mu$ of $\cP(\bT)$, the space of probability measures on $\bT$, let $\mu \nu \in \cP(\bS)$ be defined as usual by $\mu \nu(B) = \int_\bT \nu(\TT,B) \, \mu(d\TT)$ for $B \subseteq \bS$. The stronger results obtained in say that, in the situations considered there, the map $\mu \mapsto \mu \nu$ from $\cP(\bT)$ to $\cP(\bS)$ is injective.
One can ask if an analogous strengthening is also true for ultrametric trees. A proof along the lines of that given for doesn’t appear to apply immediately in this situation where the relevant space $\bT$ is uncountable rather than finite. We leave this as one of many open questions.
Caterpillar trees: Proof of {#sec:caterpillar}
============================
Recall that a caterpillar is a (not necessarily simple) combinatorial tree such that deleting the leaves of the tree results in a path consisting of $\ell+1$ vertices (and hence $\ell$ edges of length $1$).
Choosing one end of the path, we can label the vertices on the path consecutively with $0,1,\ldots,\ell$ and denote by $n_r$ the number of leaves attached to vertex $r$ of the path. Both $n_0$ and $n_\ell$ are non-zero, but the remaining $n_r$ may be zero.
The isomorphism types of caterpillars with $n$ leaves are thus seen to be in a bijective correspondence with equivalence classes of nonnegative integer sequences $(n_0, n_1, \ldots, n_{\ell-1}, n_\ell)$, where $n = n_0 + \cdots + n_\ell$ and $n_0, n_{\ell} \ne 0$, and we declare that $(n_0, n_1, \ldots, n_{\ell-1}, n_\ell)$ and $(n_\ell, n_{\ell-1}, \ldots, n_1, n_0)$ are equivalent.
The proof of the following, which establishes the first claim in , is straightforward and we omit it.
[A combinatorial tree $\TT$ with $n$ leaves is a caterpillar with an associated path of length $\ell$ if and only if $$\max \{ k : \operatorname{\mathbb{P}}\{ W_2 = k + 2 \} > 0\} = \ell$$ and $W_n = \ell + n$ almost surely. ]{} \[prop:det if caterpillar\]
We now turn to the proof of the main claim in .
Consider a box with $n$ tickets. Each ticket has a label belonging to $\{0,1,\ldots,\ell\}$ and there are $n_i$ tickets with label $i$ for $0 \le i \le \ell$. Let $X_1, X_2, \ldots, X_n$ be the result of drawing tickets uniformly at random from the box without replacement and noting their labels. Set $$K_r := \max_{1 \le j \le r} X_j - \min_{1 \le j \le r} X_j.$$ It is clear that $(W_2, W_3, \ldots, W_n)$ has the same joint probability distribution as $(K_2 + 2, K_3 + 3, \ldots, K_n+n)$, and so it suffices to show that it is possible to determine $\{(n_0, n_1, \ldots, n_{\ell-1}, n_\ell), (n_\ell, n_{\ell-1}, \ldots, n_1, n_0)\}$ from a knowledge of the joint probability distribution of $\cK := (K_2, \ldots, K_n)$ (that is, it is possible to determine up to a reflection the vector that gives the number of tickets with each label).
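The running ranges $K_r$ are straightforward to compute from a realization of the labels. The following Python sketch (purely illustrative) does so and applies the correspondence $W_r = K_r + r$, which reflects the fact that the subtree spanned by the first $r$ leaves consists of $K_r$ path edges plus one pendant edge per leaf.

```python
def K_sequence(labels):
    """Running ranges K_r = max(X_1,...,X_r) - min(X_1,...,X_r), r = 2..n."""
    lo = hi = labels[0]
    out = []
    for x in labels[1:]:
        lo, hi = min(lo, x), max(hi, x)
        out.append(hi - lo)
    return out

def W_from_labels(labels):
    """Length sequence of the spanned subtrees of the caterpillar:
    W_r = K_r + r (K_r path edges plus r pendant edges)."""
    return [k + r for r, k in enumerate(K_sequence(labels), start=2)]
```

For example, drawing leaves attached at path vertices $0, 2, 1$ in that order gives $K = (2, 2)$ and $W = (4, 5)$.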
To begin with, note that, as in , $$\max\{k : \bP\{K_2 = k\} > 0\} = \ell,$$ and so we can determine $\ell$ from the joint probability distribution of $\cK$.
Observe next that $$\begin{split}
\bP\{K_2 = \ell\}
& = \bP\{(X_1, X_2) \in \{(0,\ell), (\ell,0)\}\} \\
& = 2\frac{n_0 n_\ell}{n(n-1)}, \\
\end{split}$$ and $$\max\{k : \bP\{K_2 = 0, \ldots, K_k = 0, \, K_{k+1} = \ell\} > 0\}
= n_0 \vee n_\ell.$$ We can thus determine the multiset $\{n_0, n_\ell\}$ and, in particular, $n_0 + n_\ell$.
For $1 \le r < \frac{\ell}{2}$ we have $$\begin{split}
& \bP\{K_2 = r, \, K_3 = \ell \} \\
& \quad =
\bP\{(X_1, X_2, X_3) \in
\{(0,r,\ell), (r,0,\ell), (\ell,\ell-r,0), (\ell-r,\ell,0)\}
\} \\
& \quad =
\frac{2 n_0(n_r+n_{\ell-r})n_\ell}{n(n-1)(n-2)}, \\
\end{split}$$ and so we can determine $n_r+n_{\ell-r}$. If $\ell$ is even, then $$\begin{split}
& \bP\{K_2 = \frac{\ell}{2}, \, K_3 = \ell\} \\
& \quad =
\bP\left\{(X_1, X_2, X_3) \in
\left\{\left(0,\frac{\ell}{2},\ell\right), \left(\frac{\ell}{2},0,\ell\right),
\left(\ell,\frac{\ell}{2},0\right), \left(\frac{\ell}{2},\ell,0\right)\right\}
\right\} \\
& \quad =
\frac{4 n_0 n_{\frac{\ell}{2}} n_\ell}{n(n-1)(n-2)}, \\
\end{split}$$ and so we can determine $n_{\frac{\ell}{2}}$.
Also, $$\begin{split}
\bP\{K_2 = 0\}
& = \sum_{r=0}^\ell \bP\{X_1 = r, X_2 = r\} \\
& = \frac{\sum_{r=0}^\ell n_r(n_r-1)}{n(n-1)} \\
& = \frac{\sum_{r=0}^\ell n_r^2 - n} {n(n-1)} \\
\end{split}$$ and, for $1 \le k \le \ell$, $$\begin{split}
\bP\{K_2 = k\}
& = \sum_{r=0}^{\ell-k} \bP\{(X_1, X_2) \in \{(r,r+k), (r+k,r)\}\} \\
& = \frac{2 \sum_{r=0}^{\ell-k} n_r n_{r+k}}{n(n-1)}. \\
\end{split}$$ We can therefore determine $\sum_{r=0}^{\ell-k} n_r n_{r+k}$ for $0 \le k \le \ell$.
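As a sanity check on the two displayed formulas, the law of $K_2$ can also be computed by brute-force enumeration of ordered pairs of tickets. A Python sketch (illustrative, using exact rational arithmetic; the label counts chosen below are arbitrary):

```python
from fractions import Fraction

def K2_dist_enum(counts):
    """Exact law of K_2 = |X_1 - X_2| by enumerating ordered ticket pairs.
    counts[r] = n_r, the number of tickets labelled r."""
    tickets = [r for r, c in enumerate(counts) for _ in range(c)]
    n = len(tickets)
    dist = {}
    for i in range(n):
        for j in range(n):
            if i != j:
                k = abs(tickets[i] - tickets[j])
                dist[k] = dist.get(k, Fraction(0)) + Fraction(1, n * (n - 1))
    return dist

def K2_dist_formula(counts):
    """The closed forms from the text for P{K_2 = 0} and P{K_2 = k}."""
    n, ell = sum(counts), len(counts) - 1
    dist = {}
    z = sum(c * c for c in counts) - n
    if z:
        dist[0] = Fraction(z, n * (n - 1))
    for k in range(1, ell + 1):
        s = sum(counts[r] * counts[r + k] for r in range(ell - k + 1))
        if s:
            dist[k] = Fraction(2 * s, n * (n - 1))
    return dist
```

Both routines agree, e.g., for the caterpillar with counts $(n_0, n_1, n_2) = (2, 0, 1)$, where $\bP\{K_2 = 2\} = 2/3$.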
We claim that the information we have just derived suffices to determine $\{(n_0, n_1, \ldots, n_{\ell-1}, n_\ell), (n_\ell, n_{\ell-1}, \ldots, n_1, n_0)\}$. That is, if $n_0', \ldots, n_\ell'$ is a sequence with $$n_0 + \cdots + n_\ell = n_0' + \cdots + n_\ell' = n,$$ $$n_r+ n_{\ell-r} = n_r'+ n_{\ell-r}'$$ for $0 \le r \le \ell$, and $$\sum_{r=0}^{\ell-k} n_r n_{r+k} = \sum_{r=0}^{\ell-k} n_r' n_{r+k}'$$ for $0 \le k \le \ell$, then either $n_r = n_r'$ for $0 \le r \le \ell$ or $n_r = n_{\ell-r}'$ for $0 \le r \le \ell$.
To see that this is so, introduce the Fourier transforms $$g(z) := \sum_{k=0}^\ell n_k e^{i z k}$$ and $$g'(z) := \sum_{k=0}^\ell n_k' e^{i z k}$$ for $z \in \bC$. These are entire functions that uniquely determine $n_0, \ldots, n_\ell$ and $n_0', \ldots, n_\ell'$. Note that $$\sum_{k=0}^\ell n_{\ell-k} e^{i z k}
= e^{i z \ell} g(-z),$$ and a similar formula holds for $g'$. It will thus suffice to show that either $g(z) = g'(z)$ or $g(z) = e^{i z \ell} g'(-z)$ (equivalently, $g'(z) = e^{i z \ell} g(-z)$).
It follows from the assumption that $$\sum_{r=0}^{\ell-k} n_r n_{r+k} = \sum_{r=0}^{\ell-k} n_r' n_{r+k}'$$ for $0 \le k \le \ell$ that if we define $N: \bZ \to \bZ$ by $$N(j) =
\begin{cases}
n_j,& \quad 0 \le j \le \ell, \\
0,& \quad \text{otherwise,}
\end{cases}$$ and define $N'$ similarly, then $$\sum_{\{r,j \in \bZ: r-j = k\}} N(r) N(j)
=
\sum_{\{r,j \in \bZ: r-j = k\}} N'(r) N'(j)$$ for all $k \in \bZ$ and hence $$g(z) g(-z) = g'(z) g'(-z)$$ for all $z \in \bC$. By Theorem 2.2 in [@rosenblatt1982structure], there exist finitely supported functions $C: \bZ \to \bZ$ and $D: \bZ \to \bZ$ such that if we set $$\phi(z) := \sum_{k \in \bZ} C(k) e^{i z k}$$ and $$\psi(z) := \sum_{k \in \bZ} D(k) e^{i z k},$$ then $$g(z) = \phi(z) \psi(z)$$ and $$g'(z) = \phi(z) \psi(-z).$$
It follows from the assumption that $$n_r+ n_{\ell-r} = n_r'+ n_{\ell-r}'$$ for $0 \le r \le \ell$ that $$g(z) + e^{i z \ell} g(-z)
=
g'(z) + e^{i z \ell} g'(-z)$$ for all $z \in \bC$. Therefore, $$\phi(z) \psi(z) + e^{i z \ell} \phi(-z) \psi(-z)
=
\phi(z) \psi(-z) + e^{i z \ell} \phi(-z) \psi(z)$$ and hence $$(\phi(z) - e^{i z \ell} \phi(-z))(\psi(z) - \psi(-z)) = 0$$ for all $z \in \bC$. Because the functions $z \mapsto \phi(z) - e^{i z \ell} \phi(-z)$ and $z \mapsto \psi(z) - \psi(-z)$ are both entire, we must have either that $\phi(z) = e^{i z \ell} \phi(-z)$ for all $z \in \bC$ or $\psi(z) = \psi(-z)$ for all $z \in \bC$. If $\phi(z) = e^{i z \ell} \phi(-z)$ for all $z \in \bC$, then $$g'(z) = \phi(z) \psi(-z)
= e^{i z \ell} \phi(-z) \psi(-z) = e^{i z \ell} g(-z)$$ and $n_r = n_{\ell-r}'$ for $0 \le r \le \ell$. If $\psi(z) = \psi(-z)$ for all $z \in \bC$, then $$g'(z) = \phi(z) \psi(-z)
= \phi(z) \psi(z) = g(z)$$ and $n_r = n_r'$ for $0 \le r \le \ell$.
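The reflection ambiguity in the conclusion is unavoidable, and the algebra above has a concrete combinatorial shadow: reversing the sequence of label counts leaves every lag sum $\sum_r n_r n_{r+k}$ (and every sum $n_r + n_{\ell-r}$) unchanged. A short Python illustration with arbitrary counts:

```python
def lag_sums(n_seq):
    """All sums sum_r n_seq[r] * n_seq[r + k] for k = 0, ..., len(n_seq) - 1."""
    L = len(n_seq)
    return [sum(n_seq[r] * n_seq[r + k] for r in range(L - k))
            for k in range(L)]

counts = [1, 2, 0, 3]              # arbitrary example counts
reversed_counts = counts[::-1]
# The two caterpillars encoded by these count vectors are mirror images,
# and their random length sequences have identical distributions.
assert lag_sums(counts) == lag_sums(reversed_counts)
```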
$(k+1)$-valent and rooted $k$-ary trees {#sec:kary trees}
=====================================
We now turn our focus to the cases of $(k+1)$-valent and rooted $k$-ary trees. Recall that a [*$(k+1)$-valent tree*]{} is a tree with all vertices of degree either $k+1$ or $1$. For $k \geq 2$ a [*rooted $k$-ary tree*]{} is a tree with one vertex of degree $k$ and the rest of degree either $k+1$ or $1$. We refer to the rooted $2$-ary tree as a [*rooted binary tree*]{}. Note that any rooted $k$-ary tree is obtained by removing one leaf from a suitable $(k+1)$-valent tree.
Our general proof methodology for these families of trees is similar to that used in for ultrametric trees. We first define a particular class of sequences that can appear as elements of $\cJ_\TT$ (the down-split sequences) and a total order on such sequences. We then show that the minimal down-split sequence in $\cJ_{\TT}$ uniquely identifies $\TT$.
The idea of the proof is the same for all $k$ and depends on the following fact.
[Let $\TT$ be a $(k+1)$-valent tree or a rooted $k$-ary tree and let $\SS$ be a subtree of $\TT$. Then $\SS$ is a rooted $k$-ary tree if and only if $$\# \EE(\SS) = \frac{k}{k - 1}( \# \LL(\SS) - 1).$$]{} \[lem:k subtree size\]
Because $\SS$ is a subtree of $\TT$, every interior vertex of $\SS$ has degree at most $k+1$. Write $d_1 := \# \LL(\SS), d_2, \ldots, d_{k+1}$ for the numbers of vertices of $\SS$ of degrees $1, 2, \ldots, k+1$, respectively. We need to show that $d_j=0$ for $1 < j \le k-1$ and $d_k=1$, or, equivalently, that $d_k=1$ and $d_{k+1} = \sum_{j=2}^{k+1} d_j - 1
= \# \VV(\SS) - d_1 - 1
= \# \EE(\SS) - d_1$. This is in turn equivalent to showing that $$\sum_{j=2}^{k+1} j d_j = k + (k+1) (\# \EE(\SS) - d_1),$$ which, by the “handshaking identity” $$2 \# \EE(\SS) = \sum_{j=1}^{k+1} j d_j,$$ becomes $$2 \# \EE(\SS) - d_1 = k + (k+1) (\# \EE(\SS) - d_1)$$ or, upon rearranging, $$\# \EE(\SS) = \frac{k}{k-1} (d_1 - 1) = \frac{k}{k-1} (\# \LL(\SS) - 1).$$
For simplicity of notation we present the details of the proof for the case of (unrooted) $3$-valent trees and rooted binary trees (that is, $k=2$). We end in with a discussion of the extension to general $k$.
$3$-valent and rooted binary trees {#sec:binarytrees}
----------------------------------
Our proof of begins with an analysis of random length sequences for marked (also known as planted) $3$-valent trees. A [*marked $3$-valent tree*]{} $(\TT, v)$ is a $3$-valent tree $\TT$ together with a distinguished leaf $v$ of $\TT$. We define the [*modified random length sequence*]{} $\cW_{(\TT, v)}$ of $(\TT, v)$ to be the random length sequence $\cW_{\TT}$ of $\TT$ conditioned on $Y_1 = v$.
Down-split sequences
--------------------
We need to distinguish some particular sequences that appear in the support of $\cW_{(\TT,v)}$.
As usual, we can define a partial order on $\VV(\TT)$ by declaring that $x$ precedes $y$ if $x \ne y$ and $x$ lies on the path between $v$ and $y$, and we can extend this partial order to a total order $<$ such that if $w,x,y,z$ are such that $w$ and $x$ are not comparable in the partial order but $w < x$, $w$ precedes $y$ in the partial order, and $x$ precedes $z$ in the partial order, then $y < z$. Such a total order corresponds to embedding $\TT$ in the plane and listing the elements of $\VV(\TT)$ in the order they are encountered as one walks around $\TT$ starting from $v$.
Suppose that $v=y_1 < y_2 < \ldots < y_n$ is the ordered listing of $\LL(\TT)$. Set $s_k = \WW_\TT(\{y_1, \ldots, y_k\})$, $2 \le k \le n$, and write $k_s$ for the smallest index $k$ with $s_k = 2k-2$. If $s_k = 2k-2$, then the subtree spanned by the $k$ leaves $\{y_1, \ldots, y_k\}$ has $2k-2$ edges and hence, by , this subtree is a rooted binary tree. If we write $o$ for the vertex adjacent to the marked leaf $v$, denote by $v',v''$ the other two vertices adjacent to $o$, and suppose that $v' < v''$, then it must be the case that $\{y_2, \ldots, y_{k_s}\} = \{y \in \LL(\TT) : v' \le y\}$. Write $\TT'$ (respectively, $\TT''$) for the subtree of $\TT$ consisting of $o$ and the vertices $u$ such that $v'$ (respectively, $v''$) is on the path from $o$ to $u$. The sequence $(s_2', \ldots, s_{n'}'):= (s_2-1, \ldots, s_{k_s}-1)$ satisfies $s_k' = \WW_{\TT'}(\{o, y_2, \ldots, y_k\})$ for $2 \le k \le n' = k_s$. The sequence $(s_2'', \ldots, s_{n''}'') := (s_{k_s + 1} - (2k_s - 2), \ldots, s_{n} - (2k_s - 2) )$ satisfies $s_k'' = \WW_{\TT''}(\{o, y_{k_s + 1}, \ldots, y_{k_s+ k - 1}\})$ for $2 \le k \le n'' = n - k_s + 1$.
\[rem:down-split construction\]
\[def:downsplit\] A [*down-split sequence*]{} is an element of the class of increasing sequences of positive integers defined recursively as follows. The sequence $$s = (1)$$ is a down-split sequence.
A sequence $s = (s_2, \ldots, s_n)$, $n > 2$, is down-split if $$\{ 2 \leq k < n \colon s_k = 2k - 2 \} \ne \emptyset$$ and, setting $$k_s = \inf \{ 2 \leq k < n \colon s_k = 2k - 2 \},$$
- $(s_2 - 1, \ldots, s_{k_s} - 1)$ is down-split,
- $(s_{k_s + 1} - (2k_s - 2), \ldots, s_{n} - (2k_s - 2) )$ is down-split.
The index $k_s$ is the [*splitting index*]{} of $s$.
[For $n=3$, the sequence $s = (s_2,s_3) = (2,3)$ is a down-split sequence. Here $k_s = 2$, $(s_2 - 1, \ldots, s_{k_s} - 1) = (1)$ and $(s_{k_s + 1} - (2k_s - 2), \ldots, s_{n} - (2k_s - 2) ) = (1)$.]{}
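The recursive definition translates directly into a checker. The Python sketch below (illustrative only; the list `s` stores $s_2$ at index $0$) decides whether a sequence is down-split by locating the splitting index and recursing on the two halves.

```python
def is_down_split(s):
    """Check the recursive definition of a down-split sequence.
    s is a list [s_2, ..., s_n] with s_2 stored at index 0."""
    if s == [1]:
        return True
    n = len(s) + 1
    # splitting index: smallest 2 <= k < n with s_k = 2k - 2
    ks = next((k for k in range(2, n) if s[k - 2] == 2 * k - 2), None)
    if ks is None:
        return False
    left = [x - 1 for x in s[: ks - 1]]
    right = [x - (2 * ks - 2) for x in s[ks - 1 :]]
    return is_down_split(left) and is_down_split(right)
```

For instance, `is_down_split([2, 3])` returns `True`, matching the example above, while `is_down_split([2, 4])` returns `False`, consistent with the fact recorded below that $s_n = 2n-3$ for every down-split sequence.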
The following result is immediate from .
[For every marked $3$-valent tree $(\TT,v)$ there is at least one down-split sequence $s$ with $$\operatorname{\mathbb{P}}\{ \cW_{(\TT,v)} = s \} > 0.$$]{} \[lem:exist of split\]
We record the following fact for later use.
[If $s = (s_2,\ldots, s_n)$ is a down-split sequence then $s_n = 2n - 3$.]{} \[lem:size of downsplit\]
This follows easily by induction. If $s$ splits at $k_s$, then, as $$(s_2', \ldots, s_{n'}')
=
(s_{k_s + 1} - (2k_s - 2), \ldots, s_n - (2k_s - 2))$$ is a down-split sequence with $n' = n - k_2 + 1$, we have by the inductive hypothesis that $$s_{n} - (2k_s - 2) = 2( n - k_s + 1) - 3$$ and the claim follows.
\[ex:nonunique split seq\] Given any down-split sequence $s$, it is possible to reverse the argument in and construct a marked $3$-valent tree with a suitable total ordering on its vertices such that $s$ is the corresponding down-split sequence. However, a marked $3$-valent tree $(\TT,v)$ is not uniquely identified by an arbitrary down-split sequence in the support of $\cW_{(\TT,v)}$, as the example in shows.
![Two marked binary trees $(\hat \TT,v)$ and $(\check \TT,v)$ with particular realizations of the random selection of leaves.[]{data-label="fig:DownSplitCex"}](DownSplitCex1.png "fig:"){width="40.00000%"} ![Two marked binary trees $(\hat \TT,v)$ and $(\check \TT,v)$ with particular realizations of the random selection of leaves.[]{data-label="fig:DownSplitCex"}](DownSplitCex2.png "fig:"){width="40.00000%"}
Write $(\hat Y_1, \ldots, \hat Y_n)$ and $(\check Y_1, \ldots, \check Y_n)$ for the random selections of the leaves of $\hat \TT$ and $\check \TT$. Suppose that the realizations are such that $\hat Y_k = \check Y_k \in \bar \TT$ for $4 \le k \le n$ and that these leaves of the subtree $\bar \TT$ appear in an order of the type discussed in . The corresponding realizations for the modified random length sequences are equal. The common value $(3,4,\ldots)$ is a down-split sequence with splitting index $3$. Thus, two non-isomorphic marked $3$-valent trees can have a common down-split sequence in the supports of their modified random length sequences. Note that the common down-split sequence results from taking the leaves of $\check \TT$ according to an order of the type described in , but this is not the case for $\hat \TT$.
With in mind we see that it would be useful to have a way of recognizing down-split sequences in the support of $\cW_{(\TT,v)}$ that result from realizations where the leaves are selected in an order that arises from a suitable total order on the vertices of $\TT$. The key is the following total order on down-split sequences. We re-use the notation $\prec$ that was used in for the lexicographic order.
Define a total order $\prec$ on the set of down-split sequences of a given length recursively as follows. Firstly, $(1) \prec (1)$ does not hold. Next, let $s, r$ be down-split sequences indexed by $\{2, \ldots, n\}$ with respective splitting indices $k_s$ and $k_r$. Set $$s' = (s_2 - 1, \ldots, s_{k_s} - 1 ),
\quad
r' = ( r_2 - 1, \ldots, r_{k_r} - 1),$$ and $$s'' = (s_{k_s + 1} - (2k_s - 2), \ldots, s_n - (2k_{s} - 2) ),
\quad
r'' = (r_{k_r + 1} - (2k_r - 2), \ldots, r_n - (2k_{r} - 2) ).$$ Declare that $$s \prec r$$ if $$k_s < k_r$$ or $$k_s = k_r \quad \text{and} \quad s' \prec r'$$ or $$k_s = k_r \quad \text{and} \quad s' = r' \quad \text{and} \quad s'' \prec r''.$$
The next result follows easily by induction.
[The binary relation $\prec$ is a total order on the set of down-split sequences of a given length.]{}
The [*minimal down-split sequence*]{} for a marked $3$-valent tree $(\TT,v)$ is the minimal element (with respect to the total order $\prec$) of the set $$\{ s \text{ down-split }\colon \operatorname{\mathbb{P}}\{ \cW_{(\TT,v)} = s \} > 0 \}.$$
We now proceed to establish some results that culminate in showing that $(\TT,v)$ is determined by its minimal down-split sequence.
[Let $(\TT,v)$ be a marked $3$-valent tree, with modified random length sequence $\cW_{(\TT, v)} = (W_2, \ldots, W_n)$ constructed from the random sequence of leaves $(Y_1, \ldots, Y_n)$ with $Y_1 = v$. Denote by $o$ the vertex adjacent to the marked leaf $v$ and denote by $v',v''$ the other two vertices adjacent to $o$. Write $\TT'$ (respectively, $\TT''$) for the subtree of $\TT$ consisting of $o$ and the vertices $u$ such that $v'$ (respectively, $v''$) is on the path from $o$ to $u$. Set $$m := \inf\{ k \colon \operatorname{\mathbb{P}}\{ W_k = 2k - 2 \} > 0 \}.$$ Then $W_m = 2m - 2$ if and only if $$Y_2, \ldots, Y_m \in \LL(\TT') \text{ and } Y_{m + 1}, \ldots, Y_n \in \LL(\TT''),$$ or [*vice versa*]{}.]{} \[lem:split subtrees\]
If $Y_2, \ldots, Y_m \in \LL(\TT')$ and $Y_{m + 1}, \ldots, Y_n \in \LL(\TT'')$, then the subtree spanned by $\{v,Y_2, \ldots, Y_m\}$ consists of the leaf $v$ adjoined to $\TT'$ via an edge to the vertex $o$. This subtree is a rooted binary tree with root $o$. It follows from that $W_m = 2m - 2$.
For the other direction, assume that $W_m = 2m -2$. By , the subtree $\SS$ spanned by $\{v, Y_2, \ldots, Y_m\}$ is a rooted binary tree with $m$ leaves. We have $\LL(\SS) \subseteq \LL(\TT)$, $v \in \LL(\SS)$, and $\LL(\SS) \setminus \{v\} \subseteq (\LL(\TT') \setminus \{o\}) \cup (\LL(\TT'') \setminus \{o\})$. We need to show that $\SS$ consists of the leaf $v$ adjoined to either $\TT'$ or $\TT''$ via an edge to the vertex $o$ that is common to both $\TT'$ and $\TT''$.
By the construction prior to the statement of we know that $$m \leq \# \LL(\TT') \wedge \# \LL(\TT'')$$ and so if $\LL(\TT') \setminus \{o\} \subseteq \LL(\SS) \setminus \{v\}$, then $\LL(\TT'') \cap \LL(\SS) = (\LL(\TT'') \setminus \{o\}) \cap (\LL(\SS) \setminus \{v\}) = \emptyset$ and similarly with the roles of $\TT'$ and $\TT''$ reversed.
We can rule out the possibility that $\LL(\SS)$ intersects both $\LL(\TT')$ and $\LL(\TT'')$ as follows. If $\LL(\TT') \cap \LL(\SS) \ne \emptyset$ and $\LL(\TT'') \cap \LL(\SS) \ne \emptyset$, then $\LL(\TT') \cap \LL(\SS)$ must be a proper subset of $\LL(\TT') \setminus \{o\}$ and $\LL(\TT'') \cap \LL(\SS)$ must be a proper subset of $\LL(\TT'') \setminus \{o\}$. If $\LL(\TT') \cap \LL(\SS)$ is a proper, nonempty subset of $\LL(\TT') \setminus \{o\}$, then $\SS$ must have a degree $2$ vertex that belongs to $\VV(\TT') \setminus \{o\}$, and similarly for $\TT''$. However, $\SS$ is a rooted binary tree and cannot have two or more vertices of degree $2$.
Finally, we need to rule out the possibility that $\LL(\SS) \setminus \{v\}$ is a proper subset of $\LL(\TT') \setminus \{o\}$ or $\LL(\TT'') \setminus \{o\}$. However, if $\LL(\SS) \setminus \{v\}$ is a proper subset of $\LL(\TT') \setminus \{o\}$, then $\SS$ would have at least one degree $2$ vertex that belongs to $\VV(\TT') \setminus \{o\}$ as well as the degree $2$ vertex $o$, which contradicts $\SS$ being a rooted binary tree. The same argument holds with $\TT''$ in place of $\TT'$.
[Let $(\TT,v)$ be a marked $3$-valent tree with modified random length sequence $\cW_{(\TT, v)} = (W_2, \ldots, W_n)$. Then $$m:= \inf\{ k \colon \operatorname{\mathbb{P}}\{ W_k = 2k - 2 \} > 0 \}$$ is the splitting index for the minimal down-split sequence for $(\TT,v)$.]{} \[cor:split subtrees\]
If $k_s$ is the splitting index of any down-split sequence $s$ in the support of $\cW_{(\TT,v)}$, then $s_{k_s} = 2k_s - 2$ by definition. Thus $\operatorname{\mathbb{P}}\{ W_{k_s} = 2k_s - 2 \} > 0$ and hence $m \le k_s$.
On the other hand, let $o,v',v'', \TT',\TT''$ be as in the statement of . It follows from that result that $m = \# \LL(\TT') \wedge \# \LL(\TT'')$. By the construction in (applied with the roles of $\TT'$ and $\TT''$ interchanged if $m = \# \LL(\TT'')$), we may construct a down-split sequence for $(\TT,v)$ that has splitting index $m$. By the definition of the total order $\prec$, the splitting index for the minimal down-split sequence for $(\TT,v)$ is at most $m$.
[Let $s$ be the minimal down-split sequence for a marked $3$-valent tree $(\TT,v)$. There is no other marked $3$-valent tree for which $s$ is the minimal down-split sequence.]{} \[prop:det by downsplit\]
We will prove this by induction. The claim is clearly true for the down-split sequence $s = (1)$.
Let $(\TT, v)$ be a marked $3$-valent tree and $s$ the minimal down-split sequence for $(\TT,v)$. Define $o,v',v'', \TT',\TT''$ as in the statement of . Let $k_s$ be the splitting index of $s$. Let $y_1, \ldots, y_n$ be an ordered listing of $\LL(\TT)$ such that $\WW_{(\TT,v)}(\{y_1, \ldots, y_k\}) = s_k$ for $2 \le k \le n$. By and we must either have $\{y_2, \ldots, y_{k_s}\} = \LL(\TT') \setminus \{o\}$ and $\{y_{k_s + 1}, \ldots, y_{n}\} = \LL(\TT'') \setminus \{o\}$ or the analogous conclusion with the roles of $\TT'$ and $\TT''$ interchanged holds (if $\# \LL(\TT') \ne \# \LL(\TT'')$, then only one alternative is possible). We may suppose without loss of generality that the choice of $v'$ and $v''$ is such that the first alternative holds.
Set $$s' := (s_2 - 1, \ldots, s_{k_s} - 1), \quad s'' := (s_{k_s + 1} - (2k_s - 2), \ldots, s_{n} - (2k_s - 2) ).$$ By definition, $s'$ and $s''$ are down-split sequences. Because $\operatorname{\mathbb{P}}\{ \cW_{(\TT, v)} = s \} > 0$, we have $$\operatorname{\mathbb{P}}\{\cW_{(\TT', o)} = s' \} > 0$$ and $$\operatorname{\mathbb{P}}\{\cW_{(\TT'', o)} = s''\} > 0.$$
We claim that $s'$ must be the minimal down-split sequence for $(\TT', o)$. To see this, note that if there were a down-split sequence $\tilde{s}'$ with $\tilde{s}' \prec s'$ such that $$\operatorname{\mathbb{P}}\{ \cW_{(\TT', o)} = \tilde{s}'\} > 0,$$ then, writing $$\bar{m} := (m, \ldots, m)$$ for a positive integer $m$, we would have $$\operatorname{\mathbb{P}}\{\cW_{(\TT,v)} = (\tilde{s}' + \overline{1}, s'' + \overline{2k_s - 2})\} > 0$$ and, by definition of the total order $\prec$, $$(\tilde{s}'+ \overline{1}, s'' + \overline{2k_s - 2}) \prec (s' + \overline{1}, s''+ \overline{2k_s - 2}) = s.$$ This, however, contradicts the minimality of $s$. Similarly, $s''$ is the minimal down-split sequence for $(\TT'', o)$. By induction, $(\TT', o)$ and $(\TT'', o)$ are uniquely determined.
Since $(\TT,v)$ is obtained by gluing $(\TT',o)$ and $(\TT'', o)$ together at the shared vertex $o$ and attaching the marked leaf $v$ to $o$ by an edge, we see that $(\TT, v)$ is also determined by $s$.
While the proof of is not in the form of an explicit reconstruction procedure, the argument clearly leads to an algorithm for building a marked $3$-valent tree $(\TT, v)$ from the corresponding minimal down-split sequence. Namely, $(\TT,v)$ is simply the recursion tree that results from parsing $s$ as a down-split sequence as in , with leaves corresponding to edges that terminate in the sequence $(1)$.
![A marked $3$-valent tree with its leaves ordered minimally and the corresponding parse tree for the minimal down-split sequence.[]{data-label="fig:ParseTree"}](ParseEx1.png "fig:"){width="40.00000%"} ![A marked $3$-valent tree with its leaves ordered minimally and the corresponding parse tree for the minimal down-split sequence.[]{data-label="fig:ParseTree"}](ParseEx2.png "fig:"){width="40.00000%"}
### Proof of and {#sec:first half of main proof}
From we are able to easily prove for (unmarked) $3$-valent trees.
Let $\TT$ be a fixed (unknown) $3$-valent tree with $n$ leaves and let $\cW_{\TT}$ be its random length sequence. Conditional on $Y_1$, $\cW_{\TT}$ is the modified random length sequence of the marked $3$-valent tree $(\TT, Y_1)$. Thus, if $$\operatorname{\mathbb{P}}\{\cW_{\TT} = s\} > 0,$$ then there must be some leaf $v \in \TT$ such that $$\operatorname{\mathbb{P}}\{ \cW_{(\TT, v)} = s \} > 0.$$ Let $s^*$ be the minimal element of the set $$\{ s \text{ down-split } \colon \operatorname{\mathbb{P}}\{\cW_{\TT} = s\} > 0 \}.$$ Then $s^*$ must be the minimal down-split sequence for $(\TT, v)$ for at least one leaf $v$ of $\TT$. By we can reconstruct $(\TT, v)$ and hence $\TT$ from $s^*$.
The above argument can be pushed further to prove for $\cT$ a random $3$-valent tree.
Let $\cT$ be a random $3$-valent tree with $n$ leaves and random length sequence $\cW_{\cT}$.
Given a $3$-valent tree $\TT$ with $n$ leaves, let $s^\TT$ be the minimal element of the set of down-split sequences of the marked $3$-valent trees $(\TT,v)$ as $v$ ranges over $\LL(\TT)$. We equip the set of $3$-valent trees with $n$ leaves with a total order that, with a slight abuse of notation, we denote by $\prec$ by declaring that $\TT' \prec \TT''$ if $s^{\TT'} \prec s^{\TT''}$. Note that if $\TT' \prec \TT''$, then $\bP\{\cW_{\TT'} = s^{\TT'}\} > 0$ and $\bP\{\cW_{\TT''} = s^{\TT'}\} = 0$. Now, for each choice of $\TT$ we have $$\bP\{\cW_{\cT} = s^{\TT}\}
=
\sum_{\TT'} \bP\{\cT = \TT'\} \bP\{\cW_{\TT'} = s^{\TT}\}
=
\sum_{\TT' \preceq \TT} \bP\{\cT = \TT'\} \bP\{\cW_{\TT'} = s^{\TT}\},$$ and the conclusion that we can recover $\bP\{\cT = \TT\}$ as $\TT$ ranges over the $3$-valent trees with $n$ leaves follows simply from the observation that if $b$ is a row vector of length $N$ and $A$ is an $N \times N$ matrix that has all entries below the diagonal zero and all entries on the diagonal strictly positive, then there is a unique row vector $x$ of length $N$ such that $b = x A$.
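The final linear-algebra observation can be made concrete: with the trees enumerated in $\prec$ order, $A$ is upper triangular with strictly positive diagonal, so $x$ is recovered from $b = xA$ by forward substitution. A minimal numerical sketch (the matrix entries below are hypothetical, standing in for the probabilities $\bP\{\cW_{\TT'} = s^{\TT}\}$):

```python
import numpy as np

def recover_distribution(b, A):
    """Recover the row vector x with b = x A, where A is upper
    triangular with strictly positive diagonal, by forward
    substitution: b_j = sum_{i <= j} x_i A_{ij}."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(len(b))
    for j in range(len(b)):
        x[j] = (b[j] - x[:j] @ A[:j, j]) / A[j, j]
    return x

# Hypothetical 3x3 "likelihood" matrix for three trees in prec order.
A = np.array([[0.5, 0.2, 0.1],
              [0.0, 0.4, 0.3],
              [0.0, 0.0, 0.6]])
x_true = np.array([0.2, 0.5, 0.3])   # distribution of the random tree
b = x_true @ A                       # observable left-hand side
assert np.allclose(recover_distribution(b, A), x_true)
```

Uniqueness is immediate from the substitution: each $x_j$ is determined by $b_j$, the previously computed $x_i$, and the positive diagonal entry $A_{jj}$.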
Up-split sequences
------------------
We now prove and for rooted binary trees. Analogous to the objects we introduced for marked $3$-valent trees, we begin with a definition of a class of sequences that will appear in the support of the random length sequence of a rooted binary tree.
An [*up-split sequence*]{} is an element of the class of increasing sequences of nonnegative integers defined recursively as follows.
The sequence $$s = (0)$$ is an up-split sequence.
A sequence $s = (s_1, \ldots, s_n)$, $n>1$, is an up-split sequence if $$\{1 \leq k < n \colon s_k = 2k - 2 \} \ne \emptyset$$ and, setting $$k_s := \sup \{1 \leq k < n \colon s_k = 2k - 2 \},$$
- $(s_1, \ldots, s_{k_s})$ is an up-split sequence,
- $(s_{k_s + 1} - (2k_s - 1), \ldots, s_n - (2k_s - 1))$ is a down-split sequence.
The index $k_s$ is the [*splitting index*]{} of $s$.
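The recursive definition translates directly into a recognizer. The sketch below uses 0-indexed Python lists for the paper's 1-indexed sequences and takes the recognizer for down-split sequences — defined earlier in the paper and not reproduced here — as a callable argument:

```python
def is_upsplit(s, is_downsplit):
    """Check whether the list s is an up-split sequence.  The
    recognizer for down-split sequences is supplied by the caller
    as `is_downsplit`."""
    n = len(s)
    if n == 1:
        return s == [0]                  # base case: the sequence (0)
    # Splitting index k_s: the largest k < n with s_k = 2k - 2.
    candidates = [k for k in range(1, n) if s[k - 1] == 2 * k - 2]
    if not candidates:
        return False
    ks = max(candidates)
    head = s[:ks]                              # (s_1, ..., s_{k_s})
    tail = [v - (2 * ks - 1) for v in s[ks:]]  # shifted remainder
    return is_upsplit(head, is_downsplit) and is_downsplit(tail)
```

For instance, with a stub that accepts only the base down-split sequence $(1)$, the sequence $(0,2)$ arising from the rooted binary tree with two leaves is accepted, while $(0,1)$ is rejected.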
Suppose that $\TT$ is a rooted binary tree with root $o$. In a manner similar to the construction in we can define a partial order on $\VV(\TT)$ by declaring that $x$ precedes $y$ if $x \ne y$ and $x$ is on the path between $o$ and $y$, and we can extend this partial order to a total order $<$ such that if $w,x,y,z$ are such that $w$ and $x$ are not comparable in the partial order but $w < x$, $w$ precedes $y$ in the partial order, and $x$ precedes $z$ in the partial order, then $y < z$. Suppose that $y_1 < y_2 < \ldots < y_n$ is the ordered listing of $\LL(\TT)$. Set $s_1:=0$ and $s_k := \WW_\TT(\{y_1, \ldots, y_k\})$, $2 \le k \le n$. Then $(s_1, \ldots, s_n)$ is an up-split sequence. The leaves $y_1, \ldots, y_{k_s}$ and $y_{k_s+1}, \ldots, y_n$ respectively span the two binary subtrees $\TT'$ and $\TT''$ that are rooted at the two children of the root $o$. The subtree spanned by $o$ and $y_{k_s+1}, \ldots, y_n$ is a $3$-valent tree.
![A rooted binary tree split as a rooted binary subtree $\TT'$ and a marked $3$-valent tree $(\TT'', o)$.[]{data-label="fig:UpsplitEx"}](UpsplitEx.png){width="60.00000%"}
\[ex:up-split construction\]
The following analogue of is clear from .
[For every rooted binary tree $\TT$, there is at least one up-split sequence $s$ with $$\operatorname{\mathbb{P}}\{ (0,\cW_{\TT}) = s\} > 0.$$]{} \[lem:exist upsplit\]
The following analogue of can be established using a similar inductive proof.
[If $s = (s_1, \ldots, s_n)$ is an up-split sequence then $s_n = 2n - 2$.]{} \[lem:size of upsplit\]
Define a total order $\ll$ on the set of up-split sequences of a given length recursively as follows. Firstly, $(0) \ll (0)$ does not hold. Next, let $s$ and $r$ be two up-split sequences indexed by $\{1,\ldots,n\}$ with respective splitting indices $k_s$ and $k_r$. When $k_s = k_r$, write $k$ for the common value and set $$s' = \left(s_1, \ldots, s_k \right),
\quad
r' = \left( r_1, \ldots, r_k \right)$$ and $$s'' = \left(s_{k + 1} - (2k - 1), \ldots, s_n - (2k - 1) \right),
\quad
r'' = \left(r_{k + 1} - (2k - 1), \ldots, r_n - (2k - 1) \right).$$ Declare that $$s \ll r$$ if $$k_s > k_r$$ or $$k_s = k_r \quad \text{and} \quad s' \ll r'$$ or $$k_s = k_r \quad \text{and} \quad s' = r' \quad
\text{and} \quad s'' \prec r''.$$
Note that, for up-split sequences $s$ and $r$, $s \ll r$ implies that the splitting index of $s$ is [*greater than or equal*]{} to the splitting index of $r$. For down-split sequences $u$ and $t$, $u \prec t$ implies that the splitting index of $u$ is [*less than or equal*]{} to the splitting index of $t$. This change in the direction of the inequalities matches the switch in the definition of the splitting index from an infimum for down-split sequences to a supremum for up-split sequences.
The next result follows easily by induction.
[The binary relation $\ll$ is a total order on the set of up-split sequences of a given length.]{}
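As an illustration, the order can be implemented by following the recursion literally. The comparison of the shifted down-split tails relies on the order $\prec$ defined earlier, which the sketch below takes as a callable; the example sequences in the usage note are chosen purely for illustration:

```python
def splitting_index(s):
    """Splitting index of an up-split sequence s (0-indexed list):
    the largest k < n with s_k = 2k - 2 in 1-based indexing."""
    n = len(s)
    return max(k for k in range(1, n) if s[k - 1] == 2 * k - 2)

def upsplit_lt(s, r, downsplit_lt):
    """The total order << on up-split sequences of equal length.
    `downsplit_lt` implements the order `prec` on down-split
    sequences (defined earlier in the paper)."""
    if s == [0] and r == [0]:
        return False                       # (0) << (0) does not hold
    ks, kr = splitting_index(s), splitting_index(r)
    if ks != kr:
        return ks > kr                     # larger splitting index comes first
    k = ks
    s1, r1 = s[:k], r[:k]                  # up-split prefixes
    s2 = [v - (2 * k - 1) for v in s[k:]]  # shifted down-split tails
    r2 = [v - (2 * k - 1) for v in r[k:]]
    if s1 != r1:
        return upsplit_lt(s1, r1, downsplit_lt)
    return downsplit_lt(s2, r2)
```

For example, `upsplit_lt([0, 2, 4], [0, 1, 4], ...)` returns `True` regardless of the supplied `downsplit_lt`, since the splitting indices are $2$ and $1$ respectively.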
The [*minimal up-split sequence*]{} for a rooted binary tree $\TT$ is the minimal element (with respect to the total order $\ll$) of the set $$\{ s \text{ up-split }\colon \operatorname{\mathbb{P}}\{ \cW_{\TT} = s \} > 0 \}.$$
The up-split sequence analogues of and are the following and they are proved in essentially the same manner.
[Given a binary tree $\TT$ with root $o$, let $\TT'$ and $\TT''$ be the binary subtrees rooted at the two children of $o$. Set $$m := \sup \{ 1 \leq k < n \colon \operatorname{\mathbb{P}}\{ W_k = 2k - 2 \} > 0 \}.$$ Then $W_m = 2m - 2$ if and only if $Y_1, \ldots, Y_m \in \TT'$ and $Y_{m + 1}, \ldots, Y_n \in \TT''$ or [*vice versa*]{}.]{} \[lem:upsplit subtrees\]
[Let $\TT$ be a rooted binary tree with random length sequence $\cW_{\TT} = (W_2, \ldots, W_n)$. Then $$m:= \sup\{ 1 \le k < n \colon \operatorname{\mathbb{P}}\{ W_k = 2k - 2 \} > 0 \}$$ is the splitting index for the minimal up-split sequence for $\TT$.]{} \[cor:upsplit subtrees\]
The following analogue of for up-split sequences follows from and in essentially the same manner that followed from and .
[Let $s$ be the minimal up-split sequence for a rooted binary tree $\TT$. There is no other rooted binary tree for which $s$ is the minimal up-split sequence.]{} \[prop:det by upsplit\]
Clearly, completes the proof of . To establish in the case of $\cT$ a random rooted binary tree, we need only repeat the argument of the proof of given in for $3$-valent trees.
$(k+1)$-valent and rooted $k$-ary combinatorial trees {#sec:kary}
-----------------------------------------------------
The proof of the extension and to $(k+1)$-valent and rooted $k$-ary combinatorial trees for $k \geq 3$ is very similar to the $k = 2$ case and involves the introduction of suitable notions of down-split and up-split sequences along with appropriate total orders on these sets of sequences. The only difference is that both types of split sequences are now split into $k$ smaller sequences, instead of just two. We leave the details to the reader.
Open problems {#sec:open probs}
=============
The original conjecture, Question \[q:main\], remains open in general, both for simple trees with arbitrary edge weights (not in general position) and for combinatorial trees. An even more general question is suggested by .
[Let $\cT$ be a random tree with probability distribution supported either on the set of simple trees with $n$ leaves and general edge weights or the set of combinatorial trees with $n$ leaves. Can the probability distribution of $\cT$ be determined uniquely from the joint probability distribution of the random length sequence $\cW_{\cT}$?]{} \[q:open general\]
Even if the answer to Question \[q:open general\] is “no”, the answer may still be “yes” if the probability distribution of $\cT$ is known [*a priori*]{} to belong to some particular family of probability distributions. There are, of course, many families of probability models for random trees with $n$ leaves that are described by a small number of parameters (for example, conditioned Galton-Watson models or the various preferential attachment models), and perhaps the values of these parameters can be determined from the joint probability distribution of the random length sequence of a random tree that is known [*a priori*]{} to be distributed according to a member of one of these families.
[What are the necessary and sufficient conditions on a vector for there to be an edge-weighted tree $\TT$ such that the vector is in the support of $\cW_\TT$?]{}
We remarked in the Introduction that the focus of this paper is superficially similar to that in [@MR786484], where the problem of reconstructing a combinatorial tree from its number deck (the sizes of the subtrees in the forests produced by deleting each vertex) was studied. The lists of lists that are the number deck of some combinatorial tree are characterized in [@MR846676].
[Are there more parsimonious quantities derived from the joint probability distribution of the random length sequence that still carry a lot of information about $\TT$? For example, how much information about $\TT$ is contained in the expectation $(\bE[W_2], \ldots, \bE[W_n])$ of the random length sequence and is it possible to characterize those vectors which can arise as the expectation of the random length sequence?]{}
[GMOY95]{}
Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman, *The design and analysis of computer algorithms*, Addison-Wesley Publishing Co., Reading, Mass.-London-Amsterdam, 1975, Second printing, Addison-Wesley Series in Computer Science and Information Processing. [MR ]{}[0413592 (54 \#1706)]{}
A. R. Bednarek, *A note on tree isomorphisms*, J. Combinatorial Theory Ser. B **16** (1974), 194–196. [MR ]{}[0332540 (48 \#10867)]{}
Shankar Bhamidi, Steven N. Evans, and Arnab Sen, *Spectra of large random trees*, J. Theoret. Probab. **25** (2012), no. 3, 613–654. [MR ]{}[2956206]{}
Phillip Botti and Russell Merris, *Almost all trees share a complete set of immanantal polynomials*, J. Graph Theory **17** (1993), no. 4, 467–476. [MR ]{}[MR1231010 (94g:05053)]{}
J. A. Bondy, *On [K]{}elly’s congruence theorem for trees*, Proc. Cambridge Philos. Soc. **65** (1969), 387–397. [MR ]{}[0260614 (41 \#5238)]{}
[to3em]{}, *A graph reconstructor’s manual*, Surveys in combinatorics, 1991 ([G]{}uildford, 1991), London Math. Soc. Lecture Note Ser., vol. 166, Cambridge Univ. Press, Cambridge, 1991, pp. 221–252. [MR ]{}[1161466 (93e:05071)]{}
Peter Buneman, *The recovery of trees from measures of dissimilarity*, Mathematics in the archaeological and historical sciences (F.R. Hodson, D.G. Kendall, and P. Tautu, eds.), Edinburgh University Press, Edinburgh, 1971, pp. 387–395.
[to3em]{}, *A note on the metric properties of trees*, J. Combinatorial Theory Ser. B **17** (1974), 48–50. [MR ]{}[0363963 (51 \#218)]{}
A. Dress, K. T. Huber, and V. Moulton, *Some uses of the [F]{}arris transform in mathematics and phylogenetics—a review*, Ann. Comb. **11** (2007), no. 1, 1–37. [MR ]{}[2311928 (2008a:05072)]{}
Mircea V. Diudea, *Hosoya-[D]{}iudea polynomials revisited*, MATCH Commun. Math. Comput. Chem. **69** (2013), no. 1, 93–100. [MR ]{}[3052391]{}
David Eisenstat and Gary Gordon, *Non-isomorphic caterpillars with identical subtree data*, Discrete Math. **306** (2006), no. 8-9, 827–830. [MR ]{}[2234989 (2006m:05057)]{}
Steven N. Evans, *Probability and real trees*, Lecture Notes in Mathematics, vol. 1920, Springer, Berlin, 2008, Lectures from the 35th Summer School on Probability Theory held in Saint-Flour, July 6–23, 2005. [MR ]{}[MR2351587]{}
J. Felsenstein, *Inferring phylogenies*, Sinauer Press, Sunderland, MA, 2004.
Philippe Flajolet, Xavier Gourdon, and Conrado Mart[í]{}nez, *Patterns in random binary search trees*, Random Structures Algorithms **11** (1997), no. 3, 223–244. [MR ]{}[1609509 (98m:68042)]{}
Gary Gordon and Elizabeth McMahon, *A greedoid polynomial which distinguishes rooted arborescences*, Proc. Amer. Math. Soc. **107** (1989), no. 2, 287–298. [MR ]{}[967486 (90a:05046)]{}
Gary Gordon, Eleanor McDonnell, Darren Orloff, and Nen Yung, *On the [T]{}utte polynomial of a tree*, Proceedings of the [T]{}wenty-sixth [S]{}outheastern [I]{}nternational [C]{}onference on [C]{}ombinatorics, [G]{}raph [T]{}heory and [C]{}omputing ([B]{}oca [R]{}aton, [FL]{}, 1995), vol. 108, 1995, pp. 141–151. [MR ]{}[1369283 (96j:05036)]{}
G. H. Hardy, J. E. Littlewood, and G. P[ó]{}lya, *Inequalities*, Cambridge Mathematical Library, Cambridge University Press, Cambridge, 1988, Reprint of the 1952 edition. [MR ]{}[944909 (89d:26016)]{}
Frank Harary and Ed Palmer, *The reconstruction of a tree from its maximal subtrees*, Canad. J. Math. **18** (1966), 803–810. [MR ]{}[0200190 (34 \#89)]{}
Klaas Hartmann and Mike Steel, *Phylogenetic diversity: from combinatorics to ecology*, Reconstructing evolution, Oxford Univ. Press, Oxford, 2007, pp. 171–196. [MR ]{}[2359353]{}
Paul J. Kelly, *A congruence theorem for trees*, Pacific J. Math. **7** (1957), 961–968. [MR ]{}[0087949 (19,442c)]{}
I. Krasikov, M. N. Ellingham, and Wendy Myrvold, *Legitimate number decks for trees*, Ars Combin. **21** (1986), 15–17. [MR ]{}[846676 (87j:05067)]{}
I. Krasikov and J. Sch[ö]{}nheim, *The reconstruction of a tree from its number deck*, Discrete Math. **53** (1985), 137–145, Special volume on ordered sets and their applications (L’Arbresle, 1982). [MR ]{}[786484 (86d:05088)]{}
J. Lauri, *Proof of [H]{}arary’s conjecture on the reconstruction of trees*, Discrete Math. **43** (1983), no. 1, 79–90. [MR ]{}[680306 (83k:05080)]{}
Bennet Manvel, *Reconstruction of trees*, Canad. J. Math. **22** (1970), 55–60. [MR ]{}[0256926 (41 \#1581)]{}
Frederick A Matsen and Steven N Evans, *Ubiquity of synonymity: almost all large binary trees are not uniquely identified by their spectra or their immanantal polynomials.*, Algorithms for molecular biology: AMB **7** (2011), no. 1, 14–14.
Jeremy L. Martin, Matthew Morin, and Jennifer D. Wagner, *On distinguishing trees by their chromatic symmetric functions*, J. Combin. Theory Ser. A **115** (2008), no. 2, 237–253. [MR ]{}[2382514 (2008m:05115)]{}
Rosa Orellana and Geoffrey Scott, *Graphs with equal chromatic symmetric functions*, Discrete Math. **320** (2014), 1–14. [MR ]{}[3147202]{}
Yves Pauplin, *Direct calculation of a tree length using a distance matrix*, Journal of Molecular Evolution **51** (2000), no. 1, 41–47.
L. Pachter and D. Speyer, *Reconstructing trees from subtree weights*, Appl. Math. Lett. **17** (2004), no. 6, 615–621. [MR ]{}[2064171 (2005b:05066)]{}
Ronald C. Read and Derek G. Corneil, *The graph isomorphism disease*, J. Graph Theory **1** (1977), no. 4, 339–363. [MR ]{}[0485586 (58 \#5412)]{}
Joseph Rosenblatt and Paul D. Seymour, *The structure of homometric sets*, SIAM Journal on Algebraic Discrete Methods **3** (1982), no. 3, 343–350.
Allen J. Schwenk, *Almost all trees are cospectral*, New Directions in the Theory of Graphs, Academic Press, New York, 1973, pp. 275–307.
Jean-Marc Steyaert and Philippe Flajolet, *Patterns and pattern-matching in trees: an analysis*, Inform. and Control **58** (1983), no. 1-3, 19–58. [MR ]{}[750401 (86h:68070)]{}
J. M. S. Sim[õ]{}es Pereira, *A note on the tree realizability of a distance matrix*, J. Combinatorial Theory **6** (1969), 303–310. [MR ]{}[0237362 (38 \#5650)]{}
Charles Semple and Mike Steel, *Phylogenetics*, Oxford Lecture Series in Mathematics and its Applications, vol. 24, Oxford University Press, Oxford, 2003. [MR ]{}[2060009 (2005g:92024)]{}
[to3em]{}, *Cyclic permutations and evolutionary trees*, Adv. in Appl. Math. **32** (2004), no. 4, 669–680. [MR ]{}[2053839 (2005g:05042)]{}
Richard P. Stanley, *A symmetric function generalization of the chromatic polynomial of a graph*, Adv. Math. **111** (1995), no. 1, 166–194. [MR ]{}[1317387 (96b:05174)]{}
James Turner, *Generalized matrix functions and the graph isomorphism problem*, SIAM J. Appl. Math. **16** (1968), 520–526. [MR ]{}[0228370 (37 \#3951)]{}
S. M. Ulam, *A collection of mathematical problems*, Interscience Tracts in Pure and Applied Mathematics, no. 8, Interscience Publishers, New York-London, 1960. [MR ]{}[0120127 (22 \#10884)]{}
K.A. Zaretskii, *Constructing trees from the set of distances between pendant vertices*, Uspehi Matematiceskih Nauk. **20** (1965), 90–92.
[^1]: SNE supported in part by NSF grant DMS-0907630 and NIH grant 1R01GM109454-01.
---
abstract: 'We study the high-energy emission of the Galactic black hole candidate GX 339–4 using *INTEGRAL*/SPI and simultaneous *RXTE*/PCA data. By the end of January 2007, when it reached its peak luminosity in hard X-rays, the source was in a bright hard state. The SPI data from this period show a good signal-to-noise ratio, allowing a detailed study of the spectral energy distribution up to several hundred keV. As a main result, we report on the detection of a variable hard spectral feature ($\geq$$150$keV) which represents a significant excess with respect to the cutoff power law shape of the spectrum. The SPI data suggest that the intensity of this feature is positively correlated with the $25$–$50$keV luminosity of the source and the associated variability time scale is shorter than $7$hours. The simultaneous PCA data, however, show no significant change in the spectral shape, indicating that the source is not undergoing a canonical state transition. We analyzed the broad band spectra in the light of several physical models, assuming different heating mechanisms and properties of the Comptonizing plasma. For the first time, we performed quantitative model fitting with the new versatile Comptonization code <span style="font-variant:small-caps;">belm</span>, accounting self-consistently for the presence of a magnetic field. We show that a magnetized medium subject to pure non-thermal electron acceleration provides a framework for a physically consistent interpretation of the observed $4$–$500$keV emission. Moreover, we find that the spectral variability might be triggered by the variations of only one physical parameter, namely the magnetic field strength. Therefore, it appears that the magnetic field is likely to be a key parameter in the production of the Comptonized hard X-ray emission.'
author:
- 'R. Droulans, R. Belmont, J. Malzac and E. Jourdain'
bibliography:
- 'biblist.bib'
title: 'Variability and spectral modeling of the hard X-ray emission of GX 339–4 in a bright low/hard state'
---
Introduction
============
GX 339–4 was discovered in the early 70’s by the MIT X-ray detector aboard the OSO-7 mission [@markert1973]. The source is classified as a low mass X-ray binary (LMXB; @shahbaz2001) from upper limits on the optical luminosity of the companion star. It is believed to harbor a black hole, for which @hynes2003 derived a mass function of $5.8 \pm 0.5\,\rm M_{\sun}$. The inclination of the system is yet uncertain. A study of the binary parameters [@zdziarski2004] revealed a plausible lower limit of $i \geq 45^{\circ}$, while @cowley2002 suggested $i \leq 60^{\circ}$ because the system is not eclipsing. On the other hand, spectral fits of the Fe $K_{\alpha}$ region with Chandra [@miller2004a] and XMM Newton [@miller2004b; @reis2008] clearly favor lower values for the inner disc inclination (typically $i \sim 20^{\circ}$). This may indicate that the inner accretion disc is warped, making it appear at a lower inclination than the orbital plane. Similarly, there is no certainty concerning the distance to the source. Resolving the velocity structure along the line of sight, @hynes2004 obtained a conservative lower limit of $d \geq 6$kpc. @zdziarski2004 analyzed the binary parameters of the system and found the most plausible distance to be around $8$kpc. In this work, we use $d = 8$kpc, $i = 50^{\circ}$ and, assuming a small companion mass, we infer $M=13\,\rm M_{\sun}$ from the mass function.
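The quoted mass follows from the standard relation between the mass function and the compact object mass, $f(M) = M \sin^3 i / (1+q)^2$, where $q$ is the companion-to-black-hole mass ratio. With $f(M) = 5.8\,\rm M_{\sun}$, $i = 50^{\circ}$ and $q \rightarrow 0$, the arithmetic can be checked directly:

```python
import math

# Mass function of a binary: f(M) = M sin^3(i) / (1 + q)^2, where
# q = M_companion / M_BH.  Neglecting the companion mass (q -> 0)
# gives M = f(M) / sin^3(i).
f_M = 5.8                      # mass function in solar masses (Hynes et al. 2003)
i = math.radians(50.0)         # adopted inclination
M = f_M / math.sin(i) ** 3     # black hole mass in solar masses
print(round(M, 1))             # -> 12.9, i.e. M ~ 13 M_sun
```

This is the origin of the $M=13\,\rm M_{\sun}$ value adopted above; a larger companion mass or a lower inclination would both push the estimate upward.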
Black hole binaries (BHBs) are powerful engines producing high-energy radiation up to the $\gamma$-ray domain. From an observational perspective, they appear in four spectral states, namely, quiescent, low/hard, intermediate and high/soft (@tanaka1995, @belloni2005; see also @mr06 for a slightly different classification).
[*Soft states:*]{} In the soft state, the energy spectrum is dominated by a soft ($\leq 10$keV) component which is attributed to thermal emission from an optically thick geometrically thin accretion disc [@ss73] extending down to the last stable orbit. Above $10$keV, the emission is characterized by a complex hard X-ray continuum with no clear presence of a high-energy cutoff (see e.g. @gierlinski1999 [@motta2009; @caballero-garcia2009]). This component can be attributed to inverse Compton scattering of soft photons (UV, soft X) in a hybrid thermal/non-thermal electron plasma (the so-called ’corona’) (see e.g. the review by @dgk07, hereafter DGK07).
The non-thermal particles in the corona may be generated due to the magnetic field. Indeed, the Parker instability is able to transport a significant fraction of the accretion power above and below the disc [@galeev1979; @uzdensky2008], where the energy may then be dissipated through magnetic reconnection. Alternatively, Fermi acceleration at relativistic shocks is also expected to produce non-thermal particle distributions which can explain the observed steep power law emission in hard X-rays.
[*Hard states:*]{} In the hard state, the energy spectrum is very different. The disc emission almost vanishes and the spectrum is dominated by a hard power law component (photon index $\Gamma =$ 1.4 – 2.0) with a nearly exponential cutoff ($E_{\rm cut}$ = 50 – 150keV) (see e.g. @zdziarski1998). Such a spectrum is well described by thermal Comptonization in a hot, optically thin electron-proton plasma [@sunyaev1979; @zg04]. Soft $\gamma$-ray observations of several BHBs additionally revealed a high-energy excess with respect to a thermal Comptonization model, suggesting that in some cases at least some level of non-thermal electron acceleration is also required. For instance, COMPTEL observed a non-thermal tail in the averaged hard state spectrum of Cygnus X-1 [@mcconnell2002] while OSSE and SPI detected such a feature during bright hard states of GX 339-4 [@johnson1993; @wardzinski2002; @joinet2007].
[*Intermediate states:*]{} Intermediate states are observed during transitions between the hard and the soft states and usually show the characteristic features of both [@miyamoto1991; @mendez1997; @kong2002]. For GX 339–4, @belloni2005 defined two different varieties of intermediate state: the hard intermediate state (HIMS) and the soft intermediate state (SIMS). The transition from the HIMS to the SIMS of the 2004 outburst of GX 339–4 is reported in @delsanto2008.
According to a popular scenario, the different spectral states can be explained through changes in the geometry of the accretion flow. The weakness of the thermal component in the hard state is generally interpreted as a consequence of a truncated accretion disc (DGK07), which, in its inner parts, is replaced by a hot, advection dominated accretion flow (ADAF; @shapiro1976 [@narayan1994; @yuan2007]). In these solutions, gravitational energy is converted into thermal energy of protons, which in turn heat the electrons through Coulomb collisions. This process naturally forms the quasi thermal electron distributions that are required to explain the typical hard state spectra. Moreover, recent models of hot accretion flows include a small non-thermal component that can account for the sometimes observed high-energy excess.
However, a number of recent results seem to question this paradigm. First, it is notoriously difficult to estimate the inner disc radius in the hard state and its recession is a highly debated topic [see e.g. @cabanac2009]. Although several reports support the truncated disc scenario [@tomsick2009; @done2009], others suggest that even in hard states the accretion disc may extend down to a few Schwarzschild radii [@miller2006; @reis2008], which would be inconsistent with the standard ADAF scenario. Second, hot accretion flows generally exist only if the optical depth is small ($\tau_T \ll1$), which implies that the electron-proton coupling is weak and therefore allows high proton temperatures. However, it is difficult to align such a small optical depth with both the hard X-ray spectral slope and the thermal cutoff energy. When the spectral slope is reproduced, classic ADAF models lead to higher electron temperatures than those observed in BHBs, as it was shown for Cygnus X-1 or GX 339-4 [@yuan2004; @mb09 hereafter MB09]. Recent Monte-Carlo simulations, which account for global Compton cooling and general relativistic effects [@xie2010], yield self-consistent solutions in which the peak energy of the spectrum is reduced with respect to previous results. However, it is still not straightforward to accommodate the inferred high-energy cutoff to the observations, even if gravitational redshift is accounted for.
As an alternative to the ADAF-like models, the Comptonizing medium in the hard state could as well be powered by the same non-thermal mechanisms that are believed to accelerate the electrons in the soft state, i.e. diffusive shock acceleration or magnetic reconnection. Such a model naturally accounts for the presence of a non-thermal component in the high-energy spectrum. In addition, MB09 showed that the steady state electron distribution can appear quasi thermal even if acceleration mechanisms are purely non-thermal (see also @poutanen2009). These authors studied the thermalizing effects of the magnetic field, since it was pointed out by @ghisellini1998 that the very fast emission and absorption of synchrotron photons (the so-called ’synchrotron boiler effect’) is able to thermalize the electron distribution in a few light-crossing times. Using <span style="font-variant:small-caps;">belm</span>, a new code which includes this effect [@bmm08], MB09 qualitatively explained the variety of spectral states observed in the prototypical BHB Cygnus X-1. The model is consistent with a disc recession in the hard state [@poutanen2009], but since the weakness of the soft component could as well result from a lower disc temperature [@beloborodov1999; @malzac2001], it does not necessarily require a change in the geometry of the accretion flow (MB09).
In this paper, we use observations of GX 339–4 with SPI/*INTEGRAL* and PCA/*RXTE* to study the spectral energy distribution of the bright hard state from the 2007 outburst. We analyze the spectral variability in the framework of the models cited herebefore. The data and the associated reduction procedures are described in Section 2. Section 3 is dedicated to a phenomenological analysis of the high-energy behavior while in Section 4 we present an extensive physical analysis of the broad band spectra. The results and their implications are discussed in Section 5 and summarized in the conclusive section of the paper.
Observations
============
The here presented data on GX 339–4 were obtained with the *INTEGRAL* and *RXTE* observatories at the end of January 2007. At that time, the source was very bright in hard X-rays (cf. Figure \[outburst\]) and @motta2009 identified its spectro-temporal properties to be characteristic of the hard state.
![Overview of the 2007 outburst showing the daily light curves of GX 339–4 obtained by *SWIFT*/*BAT* (15 – 50keV ; top panel) and *RXTE*/*ASM* (1.3 – 12keV ; lower panel). The vertical dotted lines indicate the time period during which the here presented SPI data were obtained.[]{data-label="outburst"}](Fig1.ps){width="\columnwidth"}
Instruments
-----------
Our analysis focuses on the results obtained by the spectrometer SPI [@vedrenne2003], which is one of the two main instruments aboard the international gamma ray observatory *INTEGRAL*. Operating in the 20keV – 8MeV band, SPI uses germanium detectors to provide high spectral resolution while imaging is performed through the coded mask technique (Roques et al. 2003). The observational strategy of the *INTEGRAL* mission is to sample the region of interest by means of 30 – 40 minute long fixed pointings (the so-called science-windows), each separated by a $2^{\circ}$ angular distance (see @jensen2003 for details). In order to extend the spectral coverage to lower energies, we also considered simultaneous 4 – 28keV data from the Proportional Counter Array (PCA) on *RXTE* [@bradt1993].
  Instruments   Obs. ID          MJD start    MJD stop     Exp time (ks)
  ------------- ---------------- ------------ ------------ ---------------
  SPI/IBIS      525              $54130.6$    $54132.2$    $107$
  SPI/IBIS      low (cutoff)     —            —            $36$
  SPI/IBIS      high (excess)    —            —            $36$
  PCA/HEXTE     92035-01-01-02   $54131.10$   $54131.17$   $3.7$
  PCA/HEXTE     92035-01-01-04   $54132.08$   $54132.15$   $3.7$

\[tab1\]
Data and reduction methods
--------------------------
The SPI data from the GX 339–4 region were obtained during *INTEGRAL* revolution 525, lasting from 2007 January 30 to February 1 (cf. Table 1). We selected the science-windows where GX 339–4 was less than $12^{\circ}$ off the central axis and which did not show any contamination by solar flares or radiation belt exit/entry. This resulted in a set of 50 useful pointings representing $107$ks of observational coverage. In order to determine the emitting sources in the field of view, we performed 25 – 50keV imaging with the <span style="font-variant:small-caps;">spiros</span> software [@skinner2003]. Aside from the main source, 4U 1700–377, OAO 1657–415, IGR J16318–4848 and GX 340+0 were detected above a $5\,\sigma$ threshold. The positions of these sources were then given as a priori information to a specific flux-extraction algorithm, using the SPI instrument response for sky-model fitting. In a first run of the software, we allowed the background normalization as well as the source fluxes to vary between successive pointings. In this way we determined the most appropriate variability time scale for each component. The background normalization was more or less stable during the SPI observation, therefore we assumed no background variability in the final reduction process. Some of the sources in the field, however, showed considerable flux variations. Most notably, 4U 1700–377 was extremely variable, with a 20 – 50keV flux per science-window ranging from $0$ to $700$mCrab.
There are two *RXTE* observations which are simultaneous with the *INTEGRAL* exposures (see Table 1). The first observation took place towards the middle and the second towards the end of revolution 525 (cf. the shaded areas in Figure \[lc\]). For both observations, we downloaded the standard products from the HEASARC website[^1] and analyzed the data in <span style="font-variant:small-caps;">xspec</span> v11.3.2 [@arnaud1996]. Apart from a global $3$ per cent flux difference, the PCA spectra from both observations are compatible within the error bars and were co-added using the <span style="font-variant:small-caps;">addspec</span> routine from the <span style="font-variant:small-caps;">ftools</span> package. Following @motta2009, we added a systematic error of $0.6$ per cent to each PCA channel.
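The quadrature combination of statistical and fractional systematic errors applied to the PCA channels can be sketched as follows (a minimal illustration; the function name and array layout are ours, not part of the reduction pipeline):

```python
import numpy as np

def add_systematics(rate, stat_err, sys_frac=0.006):
    """Combine statistical and fractional systematic errors in quadrature.

    rate     : count rate per PCA channel
    stat_err : statistical (Poisson) error per channel
    sys_frac : fractional systematic error (0.6 per cent here)
    """
    rate = np.asarray(rate, dtype=float)
    stat_err = np.asarray(stat_err, dtype=float)
    return np.sqrt(stat_err**2 + (sys_frac * rate)**2)

# A channel dominated by systematics: 100 ct/s with a 0.1 ct/s statistical error
err = add_systematics([100.0], [0.1])
```

For bright sources the systematic term dominates, which is why the added 0.6 per cent matters for the fit statistics.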
SPI light-curve and data subsets
--------------------------------
Figure \[lc\] shows the 25 – 50keV SPI light curve of GX 339–4 obtained during revolution 525. Each bin represents one pointing or science-window, corresponding to a time scale of about $40$ min. Overall, the 25 – 50keV flux shows no particular long-term evolution, indicating that the source remained in the bright hard state during the $\sim$$1.5$ day long observation. However, the light curve reveals minor variability (on a time scale of hours or less) around the average flux value ($\langle F \rangle = 657.1 \pm 3.8$mCrab). To investigate whether this variability could be linked to changes in the spectral energy distribution, we fixed two arbitrary flux bounds ($F_{\rm low}=640$mCrab and $F_{\rm high}=670$mCrab) and grouped the science-windows with $F_{\rm scw}<F_{\rm low}$ and $F_{\rm scw}>F_{\rm high}$, respectively. For each of the two resulting data subsets, color-coded in blue and red and referred to as *low* and *high* respectively, we then produced the averaged 25 – 500keV spectrum. Both subsets consist of an equal number of science-windows, which allows a straightforward comparison of the spectra. Since the source flux is exceptionally high, the spectral extraction yields significant results even though the number of science-windows is relatively small.
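The science-window selection by flux threshold can be sketched as follows (the flux values below are synthetic examples, not the actual light curve; only the $640$/$670$ mCrab bounds come from the text):

```python
import numpy as np

# Per-science-window 25-50 keV fluxes in mCrab (synthetic example values;
# the real light curve has 50 pointings scattered around <F> = 657 mCrab)
f_scw = np.array([620., 635., 655., 662., 675., 690., 648., 701., 630., 668.])

F_LOW, F_HIGH = 640.0, 670.0            # flux bounds quoted in the text

low  = np.where(f_scw < F_LOW)[0]       # science windows for the *low* subset
high = np.where(f_scw > F_HIGH)[0]      # science windows for the *high* subset
```

Science windows with fluxes between the two bounds belong to neither subset, which is what keeps the two groups well separated in flux.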
![25 – 50keV SPI light curve from revolution 525. The pointings selected for the high and low flux data sets are shown in red and blue, respectively. The cyan line represents the average flux from the SPI observation. The shaded areas indicate the time periods of the simultaneous PCA observations.[]{data-label="lc"}](Fig2.ps){width="\columnwidth"}
High energy emission
====================
SPI spectra
-----------
The resulting 25 – 500keV spectra differ by $\sim$$14$ per cent in total flux and were separately fitted in <span style="font-variant:small-caps;">xspec</span> with a simple cutoff power-law model. The uncertainty on the model parameters is given at the $90$ per cent confidence level ($\Delta\chi^2=2.7$). As the SPI data alone do not allow us to constrain the photon index and the cutoff energy simultaneously, we fixed the former to $\Gamma=1.3$ and left the latter free to vary. The model provides a good fit to the *low* spectrum ($\chi^2/22 =0.75$) and we infer a significant cutoff at $E_{\rm c}=58.6\pm2.2$keV. For the *high* spectrum, the marked curvature around $40$keV still suggests the presence of a cutoff, fitted at $E_{\rm c}=56.2\pm2.1$keV. With respect to this model, however, we observe a significant excess above $150\,$keV, leading to an unacceptable fit quality ($\chi^2/28 =1.82$). To account for the high-energy excess, we added a second power law with fixed photon index $\Gamma=1.6$[^2]. The presence of this second component, which reduces the cutoff energy to $E_{\rm c}=49.5\pm3.8$keV but improves the fit to $\chi^2/27 =1.03$, is statistically required, as shown by the <span style="font-variant:small-caps;">ftest</span> [@bevington1992]: we infer a probability of $P_{\textsc{ftest}} = 6.1 \times 10^{-5}$ that the improvement of the fit is a chance event. We therefore conclude that the two SPI spectra differ essentially in their highest-energy emission ($>150\,$keV).
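The quoted chance probability can be reproduced from the reduced chi-square values with the classical F-test for an additional model component; a sketch using `scipy.stats.f` (the inputs are the rounded values quoted above, so the result only approximates the quoted $6.1\times10^{-5}$):

```python
from scipy.stats import f

def ftest_prob(chi2_i, dof_i, chi2_f, dof_f):
    """Chance probability that adding (dof_i - dof_f) free parameters
    improves the fit from chi2_i to chi2_f (classical F-test)."""
    fstat = ((chi2_i - chi2_f) / (dof_i - dof_f)) / (chi2_f / dof_f)
    return f.sf(fstat, dof_i - dof_f, dof_f)

# Reduced chi-squares quoted for the *high* SPI spectrum:
# cutoff power law alone (chi2/28 = 1.82) vs. with a second power law (chi2/27 = 1.03)
p = ftest_prob(1.82 * 28, 28, 1.03 * 27, 27)
```

A probability of a few times $10^{-5}$ means the second power law is very unlikely to be a statistical fluctuation.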
Imaging and spectral robustness
-------------------------------
As the flux-extraction process of the SPI data can be sensitive to the background modelling and to the a priori information about the distribution of emitting sources in the field of view, we double-checked our results. In the 150 – 450keV <span style="font-variant:small-caps;">spiros</span> image drawn from the *high* data set, GX 339–4 is detected without any a priori information at a flux level of $347.4 \pm 49.9$mCrab and a $7.0\, \sigma$ significance. This contrasts with a non-detection in the *low* data set, where the flux significance at the source position is below $2.5\, \sigma$. We note that the maximum level of the uniformly distributed residuals is around $4\, \sigma$ in both images.
Since GX 339–4 is the only source detected above $150$keV, we used a sky model with a single point source and re-extracted the high-energy part ($>$$150\,$keV) of the spectra. We also tested different background models along with various variability time scales. The obtained results are all perfectly compatible within the $1\, \sigma$ errors, showing that the spectra are insensitive to the details of the flux extraction process. We conclude that the respective presence of a cutoff and a high-energy excess in the SPI data is robust and significant. As a consequence, we rename the *low* and *high* spectra to *cutoff* and *excess*, respectively.
![*Top panel*: 150–450keV SPI lightcurve with a binning of $7$h. The flux is given in units of $10^5$ counts s$^{-1}$. The cyan cross indicates the average flux and flux uncertainty. *Middle panel*: \[150–450\]/\[50–150\]keV hardness ratio as a function of time. *Lower panel*: \[150–450\]/\[50–150\]keV hardness ratio as a function of the 25 – 50keV flux.[]{data-label="hardness"}](Fig3.ps){width="\columnwidth"}
Time evolution
--------------
The presence and absence of a high-energy excess in the high and low flux spectra, respectively, suggests that the appearance of this feature is correlated with the 25 – 50keV flux of the source. To investigate this issue further, we extracted light curves in the 25 – 50, 50 – 150 and 150 – 450keV energy bands with a binning of $10$ science-windows ($\approx$$7$h) and plotted the \[150 – 450\]/\[50 – 150\]keV hardness ratio as a function of time and as a function of the 25 – 50keV flux. From these plots, shown in Figure \[hardness\], it is clear that the strength of the high-energy excess is correlated with the 25 – 50keV flux. Since a different time binning ($8$ or $12$ science-windows, for instance) leads to the same results and since the 25 – 50keV flux is a good tracer of the total X-ray luminosity, we conclude that the intensity of the high-energy tail is most likely correlated with the total X-ray luminosity of the source.
For statistical reasons, the SPI data do not allow us to quantify the exact time scale of the phenomenon. Nevertheless, from the 25 – 50keV science-window light curve (cf. Figure \[lc\]) and the above analysis, we can infer that the time scale of the observed spectral variability is shorter than $7$h.
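The rebinning and hardness-ratio computation described above can be sketched as follows (the function name and the synthetic rates are ours; only the group size of 10 science windows comes from the text):

```python
import numpy as np

def binned_hardness(soft, hard, nbin=10):
    """Rebin per-science-window count rates into groups of `nbin`
    (about 7 h here) and return the hard/soft hardness ratio per bin."""
    n = (len(soft) // nbin) * nbin            # drop any incomplete last bin
    soft_b = np.asarray(soft[:n], float).reshape(-1, nbin).sum(axis=1)
    hard_b = np.asarray(hard[:n], float).reshape(-1, nbin).sum(axis=1)
    return hard_b / soft_b

# Synthetic example: 20 science windows, constant soft rate, rising hard rate
soft = np.ones(20)
hard = np.linspace(0.1, 0.3, 20)
hr = binned_hardness(soft, hard)              # two 10-scw bins
```

With a rising hard-band rate the second bin is harder than the first, mimicking the correlation between the hardness ratio and the source flux.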
Cross-check with other instruments
----------------------------------
| Instrument | Model | $(\chi^2/\nu)_{\rm i}$ | $(\chi^2/\nu)_{\rm f}$ | $P_{\textsc{ftest}}$ |
|:-----------|:------|:-----------------------|:-----------------------|:---------------------|
| SPI | <span style="font-variant:small-caps;">cutoffpl</span> | $36/26$ | $18/25$ | $4 \times 10^{-5}$ |
| SPI | <span style="font-variant:small-caps;">eqpair</span> | $47/26$ | $22/25$ | $2 \times 10^{-5}$ |
| IBIS/ISGRI | <span style="font-variant:small-caps;">cutoffpl</span> | $32/29$ | $26/28$ | $2 \times 10^{-2}$ |
| IBIS/ISGRI | <span style="font-variant:small-caps;">eqpair</span> | $39/29$ | $25/28$ | $5 \times 10^{-4}$ |
| HEXTE | <span style="font-variant:small-caps;">cutoffpl</span> | $59/44$ | $47/43$ | $2 \times 10^{-3}$ |
| HEXTE | <span style="font-variant:small-caps;">eqpair</span> | $66/44$ | $47/43$ | $1 \times 10^{-4}$ |
| SPI+IBIS+HEXTE | <span style="font-variant:small-caps;">cutoffpl</span> | $145/103$ | $105/102$ | $1 \times 10^{-8}$ |
| SPI+IBIS+HEXTE | <span style="font-variant:small-caps;">eqpair</span> | $141/103$ | $99/102$ | $2 \times 10^{-9}$ |

\[ftests\]
Since the detection of a high-energy tail is a critical issue, we cross-checked the SPI results with other instruments. We analyzed the data obtained simultaneously by the soft gamma-ray imager IBIS/ISGRI [@ubertini2003] aboard *INTEGRAL* and by the high-energy X-ray timing experiment HEXTE aboard *RXTE*. The HEXTE spectra from the two simultaneous *RXTE* observations (cf. Table 1) have been presented by @motta2009, who kindly provided us with the reduced data. Both spectra are compatible within the $1\sigma$ errors and were co-added to obtain better statistics at high energies. The total SPI and IBIS/ISGRI spectra (averaged over the whole *INTEGRAL* revolution 525) have been presented by @caballero-garcia2009, who report the detection of a high-energy excess above 150keV. We re-extracted the IBIS/ISGRI data using the standard OSA 8.0 software package[^3] and jointly fitted the averaged SPI, IBIS/ISGRI and HEXTE spectra in <span style="font-variant:small-caps;">xspec</span>. We find good agreement between the three instruments, not only in spectral shape but also in normalization. In addition, the data from all three instruments confirm the presence of a high-energy excess with respect to both a phenomenological cutoff power law and a thermal Comptonization model (cf. Figure \[total\], Table 2). For the latter we used <span style="font-variant:small-caps;">eqpair</span> [@coppi1999 cf. section 4.2] with non-thermal heating turned off ($l_{\rm nth}=0$). The reflection amplitude is poorly constrained and was therefore fixed to $\Omega/2\pi=0.35$ (the average value derived from the physical analysis, cf. Table 3).
The two IBIS/ISGRI spectra averaged separately over the *low* and *high* data sets are consistent with the SPI results. However, due to the shorter exposure times, the high-energy excess is no longer required in the ISGRI data. This is not surprising since the sensitivity of ISGRI above 200keV is lower than the sensitivity of SPI. As a consequence, we did not consider the IBIS/ISGRI data in the remainder of the paper because they do not improve the high-energy constraints with respect to the models. The HEXTE data, on the other hand, are obtained with even shorter exposure times and are thus unsuitable for our group-wise spectral analysis.
Broadband spectral analysis
===========================
In order to understand the processes that are likely to cause the observed X/$\gamma$-ray emission, we analyzed the broad-band PCA/SPI spectra in the light of different physical Comptonization models. Since the spectral variability occurs at high energies only, each of the two SPI spectra (cf. section 3.1) was jointly fitted with the co-added PCA spectrum. A variable multiplicative factor was added to account for cross-calibration uncertainties as well as for the luminosity difference between the two SPI spectra. This factor never deviates from unity by more than $8$ per cent. Except for slight changes in the $\chi^{2}$ values, neither the qualitative nor the quantitative results are affected when using one of the single PCA spectra instead of the co-added spectrum.
General model
-------------
The high-energy source in GX 339–4 is modeled by a spherical, magnetized, fully ionized proton-electron/positron plasma of radius $R$. The emission of the plasma is derived by self-consistent computations of the equilibrium electron distribution accounting for Compton scattering, synchrotron emission/absorption, pair production/annihilation and Coulomb collisions (*e-e* and *e-p*). The Thomson optical depth of the plasma is given by $\tau_{\rm T}=\tau_{\rm ion}+\tau_{\rm pair}$, where $\tau_{\rm ion}$ represents the optical depth of ionization electrons and $\tau_{\rm pair}$ is the opacity that arises from pair production.
The plasma properties essentially depend on the magnetic field strength as well as on the power supplied by external sources. The tangled magnetic field strength is parameterized by the magnetic compactness: $$l_{B}=\frac{\sigma_{\rm T}}{m_{\rm e}c^2}R\frac{B^2}{8\pi}.$$ Energy injection is quantified by the compactness parameter: $$l=\frac{\sigma_{\rm T}}{m_{\rm e}c^3}\frac{L}{R},$$ where $L$ is the power supplied to the plasma, $\sigma_{\rm T}$ the Thomson cross-section and $m_{\rm e}$ the electron rest mass. The general model comprises three possible channels for providing energy to the coupled electron-photon system: (i) non-thermal electron acceleration ($l_{\rm nth}$), (ii) thermal heating of the electron distribution ($l_{\rm th}$) and (iii) external soft radiation from a geometrically thin accretion disc ($l_{\rm s}$). The non-thermal acceleration processes are mimicked by continuous electron injection at a power-law rate with index $\Gamma_{\rm{inj}}$ (i.e. $n(\gamma) \propto \gamma^{-\Gamma_{\rm inj}}$) and Lorentz factors ranging from $\gamma_{\rm{min}}$ to $\gamma_{\rm max}$. The thermal heating of the electron distribution could be caused by Coulomb interactions with a population of hot ions. The incident radiation from an accretion disc is modeled by a blackbody component of temperature $kT_{\rm bb}$. Since the temperature of this component cannot be constrained by our data, we follow @delsanto2008 and fix $kT_{\rm bb}$ to the fiducial value of $300$eV. All the injected energy is ultimately radiated away, so that in steady state the total compactness satisfies $l=l_{\rm nth}+l_{\rm th}+l_{\rm s}$.
For a given source flux $F$, the total compactness can be estimated as: $$l = 100 \left(\frac{F}{F_0}\right) \left(\frac{d}{8 \rm{kpc}}\right)^2 \left(\frac{13 M_{\sun}}{M}\right) \left(\frac{30R_{\rm G}}{R}\right),
\label{comp}$$ where $M$ is the mass of the black hole and $d$ the distance to the source. Here we expressed $l$ in terms of $F_{0}=2.6 \times 10^{-8}$erg s$^{-1}$cm$^{-2}$, the observed average 4 – 500keV flux of GX 339–4. Assuming a spherical plasma of radius $R=30$R$_{\rm G}$, a black hole mass of $M=13$M$_{\sun}$ and a distance of $d=8$kpc, this leads to a total compactness of $l \sim 100$.
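The scaling above can be written as a one-line helper; with the fiducial flux, distance, mass and radius it returns $l=100$ by construction (the function name and argument defaults are ours):

```python
# Total compactness l from the scaling relation; the defaults reproduce
# the fiducial l ~ 100 quoted in the text.
F0 = 2.6e-8  # observed average 4-500 keV flux of GX 339-4 (erg s^-1 cm^-2)

def compactness(F=F0, d_kpc=8.0, M_sun=13.0, R_rg=30.0):
    """l = 100 (F/F0) (d/8 kpc)^2 (13 Msun/M) (30 R_G/R)."""
    return 100.0 * (F / F0) * (d_kpc / 8.0)**2 * (13.0 / M_sun) * (30.0 / R_rg)

l_fid = compactness()            # fiducial parameters -> l = 100
l_near = compactness(d_kpc=4.0)  # halving the distance quarters l
```

The quadratic distance dependence is the dominant source of uncertainty in $l$, which is why the text treats $l \sim 100$ as an order-of-magnitude estimate.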
Besides being a source of soft seed photons, the accretion disc may also give rise to Compton reflection of the incident hard X-rays from the Comptonizing plasma. The reflected emission is calculated using the viewing-angle dependent Green’s functions [@magdziarz1995], but accounting for general relativistic effects. The spatial extension of the possibly ionized disc is parameterized by its inner and outer radii, fixed at $R_{\rm in}=6$R$_{\rm G}$ and $R_{\rm out}=400$R$_{\rm G}$.
Moreover, the model takes into account the K$\alpha$ fluorescence of the Fe elements in the disc (<span style="font-variant:small-caps;">diskline</span>, @fabian1989) as well as the interstellar absorption (<span style="font-variant:small-caps;">phabs</span>). For the latter, we follow @reis2008 and assume a fixed neutral hydrogen column density of $N_{\rm H} = 5.2 \times 10^{21}\, \rm{cm}^{-2}$.
Since our data sets are energy- and sensitivity-limited and since such a complex model is partly degenerate, spectral fitting cannot provide simultaneous constraints on all the parameters. Therefore, starting from the general model, we define several canonical sub-models in which some of the parameters are held fixed.
Thermal heating
---------------
Given that the relative ratio of the heating mechanisms (thermal versus non-thermal) is very difficult to constrain, we investigated the extreme situations in which there is only one possible channel for providing energy to the electrons.
First, we consider the case where the energy injection is purely thermal ($l_{\rm th}\neq 0$, $l_{\rm nth}=0$). The seed photons may arise either from an external source (i.e. the accretion disc), or, in presence of a magnetic field, from internally generated synchrotron emission. However, since the magnetic field is believed to give rise to non-thermal electron acceleration, we limited our analysis to the standard case where magnetic processes are neglected ($l_{B}=0$).
The main model parameters are the power supplied to the Comptonizing electrons $l_{\rm th}$, the power contained in the soft photons from the disc $l_{\rm s}$, the Thomson optical depth of the plasma $\tau_{\rm ion}$, the reflection amplitude $\Omega/2\pi$, the ionization parameter of the reflecting material $\xi$ and the normalization factor between the two instruments. To calculate the model spectra, we use the hybrid thermal/non-thermal Comptonization code <span style="font-variant:small-caps;">eqpair</span> [@coppi1999]. A detailed description of the code and its application to Cygnus X-1 can be found in @gierlinski1999.
Since the model spectrum depends strongly on the ratio $l_{\rm th}/l_{\rm s}$, but only weakly on the absolute values of the compactness parameters, we fix $l_{\rm s}$ to a constant value in order to improve the fitting strategy. For $l_{\rm s}$ in the range $0.5<l_{\rm s}<100$, the spectral shape constrains $l_{\rm th}/l_{\rm s}$ to be always of the order of $5$. We therefore set $l_{\rm s}=15$, so that the total compactness parameter $l=l_{\rm th}+l_{\rm s}$ is consistent with Eq. \[comp\].
For the *cutoff* spectrum, the pure thermal model delivers a good description of the data. The best fit ($\chi^2/69 = 0.83$) yields a compactness ratio of $l_{\rm th}/l_{\rm s}=3.89^{+0.05}_{-0.08}$, a moderate opacity of $\tau_{\rm ion}=2.40^{+0.04}_{-0.08}$ and a significant ionized reflection component ($\Omega/2\pi=0.44\pm0.04$ ; $\xi=330^{+70}_{-60}$). The equilibrium temperature of the electrons is found at $32$keV.
For the *excess* spectrum, however, the thermal model is not appropriate. The best fit is of poor quality, with a reduced chi-square of $\chi^2/75 = 1.40$. We note that the degradation comes almost entirely from the high-energy channels, which are responsible for $\chi^2/\nu = 65/23$ of the total $\chi^2$. The model clearly fails to reproduce the high-energy excess, supporting the interpretation that the emission above $100$keV is linked to non-thermal processes. We conclude that a pure thermal model is not adequate to explain the observed spectral behavior.
Non-thermal acceleration with external Comptonization
-----------------------------------------------------
Now we investigate the opposite case, in other words we assume that the energy injection is purely non-thermal ($l_{\rm nth}\neq0$, $l_{\rm th}=0$). The rest of the model remains unchanged, i.e. we consider an external source of soft photons (of fixed compactness $l_{\rm s}=15$) and we neglect the magnetic field ($l_{B}=0$).
Since the electrons can partly thermalize through Coulomb collisions, the equilibrium distribution is hybrid (i.e. thermal/non-thermal), even if the whole power is supplied via non-thermal injection. The acceleration processes are phenomenologically described by the model parameters $l_{\rm nth}$, $\Gamma_{\rm{inj}}$, $\gamma_{\rm{min}}$ and $\gamma_{\rm{max}}$. Here again, we define two sub-models in order to reduce the number of free parameters. In the first model, we set $(\gamma_{\rm{min}},\gamma_{\rm{max}})=(1.3,1000)$ and analyze the spectra in terms of $l_{\rm nth}$ and $\Gamma_{\rm{inj}}$. This configuration is frequently used in the literature and provides a familiar context in which to discuss our results. In the second model, we adopt a more novel approach and investigate the effects of a varying maximum energy of the accelerated electrons. We keep $\gamma_{\rm min}=1.3$ but fix the spectral index to the fiducial value $\Gamma_{\rm{inj}}=2.5$, while $l_{\rm{nth}}$ and $\gamma_{\rm{max}}$ are free to vary. For convenience, we call these models the ECM1 and the ECM2 (for External Comptonization Models), respectively. The associated model spectra are computed with the Comptonization code <span style="font-variant:small-caps;">eqpair</span> [@coppi1999] and the fitting results are summarized in Table 3.
### The injection index
For the *cutoff* spectrum, the ECM1 provides a good fit to the data ($\chi^2/68 = 0.84$, cf. Figure \[eq\] left). The observed cutoff shape is well reproduced by a very soft injected electron distribution. Our best fit yields a spectral index of $\Gamma_{\rm inj}=3.95^{+1.55}_{-0.70}$; this parameter was then frozen at its best-fit value to determine the errors on the other free parameters.
We find a compactness ratio of $l_{\rm nth}/l_{\rm s}=4.35\pm0.08$, implying a non-thermal compactness of $l_{\rm nth}=65.0\pm1.0$. The optical depth of ionization electrons is fitted at $\tau_{\rm ion}=2.97\pm0.07$. Because of the soft injected electron distribution, there is only very little pair production, increasing the total optical depth to $\tau_{\rm T}=2.99\pm0.07$. The observed spectrum strongly requires a moderate amount of Compton reflection, with a fitted amplitude of $\Omega/2\pi=0.26^{+0.03}_{-0.01}$ and an ionization factor of $\xi=710\pm120$. Freezing $\Omega/2\pi$ to zero leads to a dramatically worse fit (F-test probability $<10^{-20}$).
For the *excess* spectrum, the best fit ($\chi^2/74 = 1.10$, cf. Figure \[eq\] right) requires a substantially harder power law injection, with a fitted index of $\Gamma_{\rm inj}=2.55^{+0.35}_{-0.15}$. With respect to the *cutoff* spectrum, the compactness ratio increased by $\sim$$15$ per cent to $l_{\rm nth}/l_{\rm s}=5.06^{+0.10}_{-0.08}$. The non-thermal compactness now yields $l_{\rm nth}=75.9^{+1.5}_{-1.2}$ and the total optical depth increased to $\tau_{\rm T}=3.87^{+0.07}_{-0.15}$, which is mainly due to the enhanced production of pairs. The fitted reflection amplitude and disc ionization, however, remain stable ($\Omega/2\pi=0.25^{+0.02}_{-0.03}$ ; $\xi=600\pm240$).
In order to explore the dependence on compactness, we abandoned our fiducial hypothesis $l_{\rm s}=15$ and fitted the *excess* spectrum with a variable soft compactness. We find $90$ per cent confidence intervals of $0.3 < l_{\rm s} < 250$ and $4.5<l_{\rm nth}/l_{\rm s}<5.9$, which confirms that the compactness ratio does not depend on the total compactness of the source. Although our analysis is unable to uniquely determine the total compactness, the upper and lower limits on the allowed range are well constrained by the spectral shape [@gierlinski1999]. The upper bound represents the limit at which, despite an extremely soft injected electron distribution ($\Gamma_{\rm inj}=4.50^{+0.03}_{-0.05}$), the plasma is completely pair dominated ($\tau_{\rm ion} \sim 10^{-4}$; $\tau_{\rm T} \sim 3.0$; $kT_{\rm e} \sim 11.5$keV). Above this threshold, the growing amount of pairs can no longer be balanced by a softer injection spectrum, leading to an inaccurate reproduction of the thermal peak in the photon spectrum (cf. paragraph 5.2). The lower bound corresponds to the limit at which the cooling of the high-energy particles is dominated by Coulomb instead of Compton losses, leading to an underprediction of the intensity of the high-energy tail. A change from $l_{\rm s}=0.3$ to $l_{\rm s}=0.25$ implies a degradation of $\Delta$$\chi^2=+5$ at $74$ d.o.f., showing that the threshold is well established. Hence, in the framework of an external Comptonization model involving non-thermal electron acceleration up to $\gamma_{\rm max}=1000$, the total compactness of the hard X-ray source can be conservatively constrained by means of pure spectral analysis to be $2 < l < 1500$.
### The maximum particle energy
Now, in the ECM2, we freeze $\Gamma_{\rm{inj}}$ to $2.5$ but allow for variations of the maximum Lorentz factor of the accelerated electrons. For the *cutoff* spectrum, we again obtain a very good fit to the data ($\chi^2/68 = 0.85$). The effects of a much harder slope are effectively balanced by a very low cutoff energy of the injected electron distribution. Indeed, we find $\gamma_{\rm max}=4.0^{+1.3}_{-0.3}$, which means that the kinetic energies of the accelerated particles span only about a factor of $10$. The other parameters are not much affected: we find $l_{\rm nth}/l_{\rm s}=4.25^{+0.08}_{-0.10}$ and $\tau_{\rm ion} \simeq \tau_{\rm T}=3.06^{+0.09}_{-0.06}$. Similarly, the reflection parameters remain stable ($\Omega/2\pi=0.23^{+0.03}_{-0.16}$ ; $\xi=850^{+140}_{-200}$).
For the *excess* spectrum, the fixed injection index is consistent with the best-fit value obtained with the ECM1. Nevertheless, there are some differences between the results obtained with the two models. Most notably, particle acceleration up to only $\gamma_{\rm max}=21.6^{+39.0}_{-7.9}$ is sufficient to reproduce the observed high-energy tail. Moreover, a truncated electron distribution allows us to improve the quality of the best fit to $\chi^2/74 = 1.00$. The total opacity of the plasma is found to be equal to the value obtained with a non-truncated distribution, but the pair yield is reduced due to the lack of very energetic particles. With respect to the ECM1, the other parameters are only marginally affected and remain consistent within the $90$ per cent confidence errors.
| Model | Spec | $\Gamma_{\rm{inj}}$ | $\gamma_{\rm max}$ | $l_{\rm nth}/l_{\rm s}$ | $l_{B}$ | $\tau_{\rm ion}$ | $\tau_{\rm T}$ | $kT_{\rm e}$ (keV) | $\Omega/2\pi$ | $\xi$ | $\chi^2_{\nu}$ (d.o.f.) |
|:------|:-----|:--------------------|:-------------------|:------------------------|:--------|:-----------------|:---------------|:-------------------|:--------------|:------|:------------------------|
| ECM1 | cutoff | $3.95^{+1.55}_{-0.70}$ | $1000^{\star}$ | $4.35{\pm0.07}$ | — | $2.98{\pm0.07}$ | $2.99{\pm0.07}$ | $20.1$ | $0.26^{+0.03}_{-0.01}$ | $710{\pm120}$ | $0.84 (68)$ |
| ECM1 | excess | $2.55^{+0.35}_{-0.15}$ | $1000^{\star}$ | $5.06^{+0.10}_{-0.08}$ | — | $3.31^{+0.07}_{-0.15}$ | $3.87^{+0.07}_{-0.15}$ | $14.3$ | $0.25^{+0.02}_{-0.03}$ | $600{\pm240}$ | $1.10 (74)$ |
| ECM2 | cutoff | $2.5^{\star}$ | $4.0^{+1.3}_{-0.3}$ | $4.25^{+0.08}_{-0.10}$ | — | $3.06^{+0.09}_{-0.06}$ | $3.07^{+0.09}_{-0.06}$ | $19.1$ | $0.23^{+0.03}_{-0.02}$ | $850^{+140}_{-200}$ | $0.85 (68)$ |
| ECM2 | excess | $2.5^{\star}$ | $21.6^{+39}_{-7.9}$ | $4.94^{+0.11}_{-0.10}$ | — | $3.69^{+0.04}_{-0.13}$ | $3.87^{+0.04}_{-0.13}$ | $14.5$ | $0.21^{+0.03}_{-0.02}$ | $1240^{+290}_{-240}$ | $1.00 (74)$ |
| ICM1 | cutoff | $3.47_{-0.31}^{+0.66}$ | $1000^{\star}$ | — | $740^{+200}_{-140}$ | $2.26{\pm0.18}$ | $2.26{\pm0.18}$ | $34.2$ | $0.43{\pm0.05}$ | $300^{+120}_{-100}$ | $0.96 (68)$ |
| ICM1 | excess | $2.50^{+0.32}_{-0.08}$ | $1000^{\star}$ | — | $25.2{\pm3.6}$ | $2.14^{+0.14}_{-0.17}$ | $2.45^{+0.14}_{-0.17}$ | $25.7$ | $0.44^{+0.06}_{-0.03}$ | $260^{+110}_{-80}$ | $1.06 (74)$ |
| ICM2 | cutoff | $2.5^{\star}$ | $15.2^{+5.5}_{-1.3}$ | — | $283^{+43}_{-36}$ | $2.30^{+0.17}_{-0.16}$ | $2.30^{+0.17}_{-0.16}$ | $33.4$ | $0.43^{+0.04}_{-0.03}$ | $300^{+120}_{-100}$ | $0.96 (68)$ |
| ICM2 | excess | $2.5^{\star}$ | $190^{+110}_{-105}$ | — | $27.3^{+4.0}_{-3.4}$ | $2.49{\pm0.18}$ | $2.73{\pm0.18}$ | $23.9$ | $0.39^{+0.04}_{-0.04}$ | $310^{+150}_{-130}$ | $0.92 (74)$ |

($^{\star}$: parameter fixed during the fit.)

\[tab2\]
Non-thermal acceleration with internal Comptonization
-----------------------------------------------------
Finally, we consider models in which the observed X/$\gamma$-ray spectra are produced in a magnetized plasma. To emphasize the effects of the magnetic field, we study the case where all the seed photons are internally generated by synchrotron emission ($l_{\rm s}=0$, $l_B\neq0$). As in the non-magnetic models analyzed previously, we adopt the same two configurations to describe the acceleration mechanism phenomenologically. For convenience, we call these models the ICM1 and the ICM2 (for Internal Comptonization Models), respectively. To calculate the model spectra, we used the new versatile Comptonization code <span style="font-variant:small-caps;">belm</span> [@bmm08]. One of the main differences with <span style="font-variant:small-caps;">eqpair</span> is that <span style="font-variant:small-caps;">belm</span> accurately accounts for self-absorbed cyclo-synchrotron radiation from the sub-relativistic to the ultra-relativistic regime. We compared the spectra obtained by both codes for the best-fit parameters of the non-magnetized models; the relative differences are smaller than 3 per cent at all energies. The fitting results are summarized in Table 3.
### The injection index
First, we employ the commonly used configuration, i.e. we fix $(\gamma_{\rm min},\gamma_{\rm max})=(1.3,1000)$. A qualitative analysis shows that the model spectrum below $30$keV is rather insensitive to the individual values of $l_{\rm nth}$, $l_B$ and $\Gamma_{\rm{inj}}$, but depends strongly on a combination of all three parameters (see also MB09). The situation is similar to that in the non-magnetic case (cf. section 4.2), although the dependence is slightly more complicated. On the other hand, the high-energy spectrum ($>$$100$keV) is mostly determined by the injection index $\Gamma_{\rm{inj}}$. As a consequence, we use Eq. \[comp\] to fix $l_{\rm nth}=100$, while the broad-band spectra allow us to disentangle the degeneracy between $l_{B}$ and $\Gamma_{\rm{inj}}$.
Due to the detailed treatment of the microphysics in the <span style="font-variant:small-caps;">belm</span> code, real-time fits in <span style="font-variant:small-caps;">xspec</span> are very time consuming. In order to make the fitting process more efficient, we tabulated the model spectra[^4]. The resulting table file has three dimensions (corresponding to the parameters $l_{B}$, $\Gamma_{\rm{inj}}$ and $\tau_{\rm ion}$) and the fits are performed through interpolation between the tabulated spectra.
In order to account for Compton reflection from a cold disc, the <span style="font-variant:small-caps;">belm</span> model is convolved with an angle-dependent reflection routine based on the <span style="font-variant:small-caps;">pexriv</span> model by @magdziarz1995, but taking into account the distortions due to general relativistic effects. The free parameters of the ICM1 are hence $l_{B}$, $\Gamma_{\rm{inj}}$, $\tau_{\rm ion}$, $\Omega/2\pi$, $\xi$, the iron line energy and the normalization factor between PCA and SPI.
For the *cutoff* spectrum, the ICM1 provides a good fit to the data ($\chi^2/68 = 0.96$). In comparison with the ECM1, the high-energy rollover is better reproduced. However, as can be seen from the residuals in Figure \[ren\_ginj\] (left), the fit is slightly less accurate in the iron line region around $6.4$keV. There are also some positive residuals below $5$keV, possibly hinting at the need for a soft disc component. Since we focus on the spectral behavior at high energies, we did not investigate these issues any further. The best fit is obtained with an electron spectral index of $\Gamma_{\rm inj}=3.47_{-0.31}^{+0.66}$. The magnetic compactness is fitted at $l_B=740^{+200}_{-140}$, which corresponds to a magnetic field strength of $B=2.0^{+1.0}_{-0.9}\times 10^7$ G. The plasma is found to be of moderate optical thickness, with fitted $\tau_{\rm ion}=2.26 \pm 0.18$. Due to the fast synchrotron cooling of the small number of high-energy particles, the produced pair yield is negligible. As with the ECM1, the data strongly require ionized Compton reflection, with a fitted amplitude and ionization factor of $\Omega/2\pi=0.43\pm0.05$ and $\xi=300^{+120}_{-100}$, respectively.
For the *excess* spectrum, the ICM1 again provides a good description of the data (cf. Figure \[ren\_ginj\] right). The best fit yields $\chi^2/74 = 1.06$ and the high-energy data constrain the injection index to $\Gamma_{\rm inj}=2.50^{+0.32}_{-0.08}$. This result is roughly equal to the value obtained with the ECM1. However, for the same injected electron distribution, the high-energy tail of the ICM1 spectrum is slightly steeper. Indeed, contrary to a non-magnetized model, the Compton losses compete with the synchrotron losses and a significant fraction of the energy radiated by the non-thermal leptons is emitted in the optical/UV range rather than in hard X-rays. Accounting for the high-energy tail therefore requires more high-energy particles, i.e. a smaller $\Gamma_{\rm inj}$. We find a magnetic compactness of $l_B=25.2\pm3.6$, which given our assumptions translates to $B=(3.76\pm0.14)\times 10^6$ G. This means that in the framework of this model, the magnetic field strength would drop by a factor of $5.5$ during a change from the *cutoff* to the *excess* spectrum.
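The conversion between fitted compactness and field strength can be sketched as follows, assuming the standard definition $l_B = \sigma_{\rm T} R B^2/(8\pi m_{\rm e} c^2)$ and the source parameters adopted in this paper ($R=30$R$_{\rm G}$, $M=13$M$_{\sun}$); the quoted values are reproduced to within a few per cent:

```python
import math

# Convert a fitted magnetic compactness l_B into a field strength B, using
# l_B = sigma_T * R * U_B / (m_e c^2) with U_B = B^2 / (8 pi), and the
# geometry assumed in the text (R = 30 R_G, 13 M_sun).  CGS units.
SIGMA_T = 6.652e-25        # Thomson cross section [cm^2]
ME_C2   = 8.187e-7         # electron rest energy [erg]
G       = 6.674e-8         # gravitational constant [cgs]
C       = 2.998e10         # speed of light [cm/s]
M_SUN   = 1.989e33         # solar mass [g]

def b_field(l_B, mass_msun=13.0, radius_rg=30.0):
    """Magnetic field strength [G] implied by a magnetic compactness l_B."""
    r_g = G * mass_msun * M_SUN / C**2   # gravitational radius [cm]
    R = radius_rg * r_g                  # source size [cm]
    return math.sqrt(8.0 * math.pi * l_B * ME_C2 / (SIGMA_T * R))

print(b_field(740.0))   # ~2.0e7 G, the cutoff-spectrum value
print(b_field(25.2))    # ~3.7e6 G, the excess-spectrum value
```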
The Thomson optical depth of ionization electrons is fitted at $\tau_{\rm ion}=2.14^{+0.14}_{-0.17}$. Contrary to the results obtained with the ECM1, the $\tau_{\rm ion}$ values for both spectra are compatible within the $90$ per cent confidence errors. The reflection routine yields $\Omega/2\pi=0.44^{+0.06}_{-0.03}$ and $\xi=260^{+110}_{-80}$, which means that these characteristics did not change either.
### The maximum particle energy
In the ICM2, we keep $l_{\rm nth}=100$ and $\gamma_{\rm min}=1.3$, but now allow for variations of $\gamma_{\rm max}$. The injection index is instead fixed to $\Gamma_{\rm inj}=2.5$, and we generate a new fitting table which again has three dimensions, corresponding to the free parameters $l_{B}$, $\gamma_{\rm max}$ and $\tau_{\rm ion}$.
For the *cutoff* spectrum, we obtain a good fit to the data ($\chi^2/68 = 0.96$; shown in Figure \[ren\_gmax\] left), qualitatively equivalent to the best fit obtained with the ICM1. Again, this shows that the effects of a very soft injection slope can be mimicked by a much harder injected distribution that is truncated at a certain particle energy. We find $\gamma_{\rm max}=15.2^{+5.5}_{-1.3}$, which is significantly higher than the fitted maximum particle energy in the ECM2. The inferred magnetic compactness is reduced with respect to the ICM1, that is to say $l_B=280^{+43}_{-36}$, which for a medium of typical size $R=30$R$_{\rm G}$ corresponds to $B=1.25^{+0.5}_{-0.4}\times 10^7$G. The other parameters are not much affected, namely we find $\tau_{\rm ion}\simeq\tau_{\rm T}=2.30^{+0.16}_{-0.16}$ and from the reflected component we infer $\Omega/2\pi=0.43^{+0.04}_{-0.03}$ and $\xi=300^{+120}_{-100}$.
For the *excess* spectrum, we note that the injection index is fixed to the best fit value obtained with the ICM1. However, allowing for $\gamma_{\rm max}$ to vary (i.e. allowing for a truncated electron distribution), the fit may be improved considerably. For $\gamma_{\rm max}=190^{+110}_{-105}$, we obtain the best description of the *excess* spectrum, with a reduced $\chi^2$ of $\chi^2/74 = 0.92$ (cf. Figure \[ren\_gmax\] right). Compared to the non-magnetized model, the maximum electron energy is again found to be much higher. This is expected since in the magnetic models, the seed photons have a lower average energy than in the non-magnetized models. Indeed, in the ICMs, the high-energy photons are produced from single Compton scattering off the synchrotron emission ($E_{\rm s} \sim 0.01$keV) while in ECMs the seed photons originate from the disc emission ($E_{\rm s} \sim 1$keV). Therefore, in order to upscatter the seed photons to $200$keV, the electrons must have average Lorentz factors of $\gamma\sim12$ in the ECMs and $\gamma\sim 120$ in the ICMs.
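The Lorentz factors quoted above follow from the single-scattering, Thomson-regime estimate $E_{\rm out} \approx (4/3)\gamma^2 E_{\rm s}$; a quick numerical check:

```python
import math

# Lorentz factor needed to upscatter seed photons of energy E_seed to an
# observed energy E_out in one Compton scattering, using the Thomson-regime
# estimate E_out ~ (4/3) * gamma^2 * E_seed.
def gamma_needed(E_out_keV, E_seed_keV):
    return math.sqrt(3.0 * E_out_keV / (4.0 * E_seed_keV))

# Disc seed photons (~1 keV, the ECM case) vs synchrotron seed photons
# (~0.01 keV, the ICM case), both scattered up to 200 keV:
print(gamma_needed(200.0, 1.0))    # ~12
print(gamma_needed(200.0, 0.01))   # ~122
```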
The magnetic compactness remains equal to the value inferred with the ICM1, namely $l_B=27.3^{+4.0}_{-3.4}$. A transition from the cutoff to the excess spectrum thus requires a factor $3.2$ decrease of the magnetic field strength. The opacity from ionization electrons yields $\tau_{\rm ion}=2.49\pm0.18$, while the total optical depth is found to be $\tau_{\rm T}=2.73\pm0.18$. This shows again that the ionization opacity does not change between the two spectra. Finally, the fitted reflection amplitude and ionization parameter are found to be consistent for both spectra regardless of the acceleration model (cf. Table 3). We thus conclude that in the framework of a strongly magnetized medium, the constraints on these parameters are very robust.
----------------------------------- ------------------------------------
{width="8.55cm"} {width="8.55cm"}
----------------------------------- ------------------------------------
Discussion
==========
The SPI observations showed that during the bright hard state of the 2007 outburst, the highest energy emission ($>$$150$keV) of GX 339–4 was variable. While the spectral shape at lower energies (4–150keV) remained more or less constant, we detected the significant appearance/disappearance of a high-energy tail. The strength of this hard tail, varying on a time scale of less than $7$hours, is found to be positively correlated with the total X-ray luminosity of the source and places interesting constraints on the physical processes which could be responsible for the high-energy emission.
The pure thermal model
----------------------
The clear detection of a cutoff energy in the *cutoff* spectrum indicates that the Comptonizing electron distribution is quasi Maxwellian. Thus, it is not surprising that the spectrum can be explained by assuming only thermal heating of the plasma. As suggested by the advection dominated accretion flow models, this could be achieved through Coulomb interactions with a thermal distribution of hot protons. The temperature of the protons can be estimated from the thermal compactness, the electron temperature and the optical depth of the plasma (cf. formula (4) in MB09). For $l_{\rm th}=100$, we infer a proton temperature of the order of $1$MeV. This is of the same order of magnitude as the proton temperature estimated in MB09 for the canonical hard state of Cygnus X-1, which in comparison to the hard state of GX 339–4 analyzed here shows a hotter electron plasma ($kT_{\rm e} \sim 85$keV) but a lower compactness ($l_{\rm th} \sim 5$). As mentioned in MB09, proton temperatures of the order of $1$MeV are significantly lower than what is expected in typical two-temperature accretion flows, namely $kT_{\rm i} > 10$MeV.
In any case, pure thermal heating is not enough since it is not able to explain the appearance of the observed high-energy tail. We conclude that either the hard excess is independent of the thermal component, in which case its origin is located outside the innermost regions, or that both components are linked, in which case at least some level of non-thermal heating is required.
On the other hand, both broad band spectra can be successfully explained by models involving *only* non-thermal electron acceleration. This suggests that the Comptonizing medium in hard states could be powered by the same non-thermal mechanisms that are believed to power the accretion disc corona in soft states. These possibilities, which depend on the nature of the plasma (magnetized or not) and on the origin of the seed photons, are discussed in the next paragraphs.
The non-magnetic case
---------------------
In the framework of a non-magnetized model, the variability of the high-energy spectrum can be explained by changes in the properties of the involved acceleration processes. Namely, the fits suggest a small variation of the total power supplied to the plasma along with a significant variation of either the spectral index (in the ECM1) or the cutoff energy (in the ECM2) of the non-thermal electron distribution.
In the ECM1, a change from the *cutoff* to the *excess* spectrum requires that the spectral index typically drops from $\Gamma_{\rm inj}=4.0$ to $2.5$. This implies that the average energy of the accelerated particles $\langle E_{\rm inj}\rangle$ rises from $1.0$ to $1.8$MeV. In the ECM2, at constant spectral index, the best fits show that relatively low maximum Lorentz factors are sufficient to reproduce the data. In this case, the appearance of the high-energy excess requires an increase of $\gamma_{\rm max}$ from $4$ to $22$, which implies that the average energy of the accelerated electrons rises accordingly, from $1.0$ to $1.5$MeV. In both cases, such variations can be explained by the possible non-stationarity of the inherent acceleration mechanisms. For instance, in the framework of shock acceleration, the properties of the accelerated particles depend on the shock strength [@webb1984; @spitkovsky2008]. Acceleration by reconnection depends on several physical parameters such as the local geometry of the reconnection zone [@zenitani2007] and the number of reconnection sites (if the particles are accelerated stochastically by successive acceleration events in different sites [@anastasiadis1997; @dauphin2007]). All these properties may undergo variations with time and could hence explain the observed variability.
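The average energies quoted above follow from the first moment of the injected power-law distribution; the sketch below evaluates $\langle\gamma\rangle m_{\rm e}c^2$ in closed form for $(\gamma_{\rm min},\gamma_{\rm max})=(1.3,1000)$. The exact fitted parameters differ slightly, so the $\Gamma_{\rm inj}=2.5$ case comes out near $1.9$ rather than the quoted $1.8$MeV:

```python
# Mean energy <gamma> * m_e c^2 of electrons injected with a power law
# dN/dgamma ~ gamma^(-p) between gamma_min and gamma_max, in closed form
# (valid for p != 1 and p != 2).
ME_C2_MEV = 0.511  # electron rest energy [MeV]

def mean_injected_energy(p, g_min=1.3, g_max=1000.0):
    num = (g_max**(2.0 - p) - g_min**(2.0 - p)) / (2.0 - p)
    den = (g_max**(1.0 - p) - g_min**(1.0 - p)) / (1.0 - p)
    return ME_C2_MEV * num / den

print(mean_injected_energy(4.0))   # ~1.0 MeV: the soft, cutoff-spectrum case
print(mean_injected_energy(2.5))   # ~1.9 MeV: the hard, excess-spectrum case
```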
------------------------------------- -------------------------------------
{width="8.55cm"} {width="8.55cm"}
------------------------------------- -------------------------------------
Our results are consistent with those of @gierlinski2005, who studied the energy dependent variability of non-magnetized Comptonization models in response to varying physical parameters. Although the variations of $\Gamma_{\rm inj}$ and $\gamma_{\rm max}$ could not explain the observed rms spectra of XTE J1650–500 and XTE J1550–564, they found that changes in these parameters produce strong variations in the X-ray spectrum above 50 keV, which is what is required here to reproduce the present data.
The spectral analysis suggests that a transition from the *cutoff* to the *excess* spectrum additionally requires a $15$ per cent increase of the total power supplied to the plasma. Considering a spherical medium of fixed radius and constant illumination from the accretion disc, this increase is consistent with the $14$ per cent difference in the observed 4–500keV luminosity. In addition, the total optical depth of the plasma is found to increase by about $30$ per cent, which is mainly due to the enhanced pair production occurring in the harder acceleration regime.
Motivated by size and luminosity estimates, we assumed a constant soft photon compactness of $l_{\rm s}=15$. As mentioned earlier, the above results are only weakly dependent on the exact value of $l_{\rm s}$. Very high or very low values of the illumination compactness, however, turned out to be inconsistent with the observed spectra. Indeed, since the ratio $l_{\rm nth}/l_{\rm s}$ is robustly constrained, strong illumination requires an efficient acceleration mechanism (i.e. large $l_{\rm nth}$) to reproduce the spectral slope at lower energies ($<20$keV). Consequently, this generates very energetic radiation which produces large amounts of $e^-$$e^+$ pairs through photon-photon annihilation. This, in turn, reduces the equilibrium temperature of the plasma since more particles have to share the same amount of energy. Hence, above a certain soft photon compactness, the equilibrium temperature will be too low to be consistent with the observed thermal peak of the spectrum.
Reciprocally, since the thermal part of the electron distribution is roughly determined by the balance between acceleration and Compton cooling, decreasing $l_{\rm s}$ at constant $l_{\rm nth}/l_{\rm s}$ has no significant effect on the lower energy part of the photon spectrum. However, a major fraction of the high-energy tail results from the Comptonization by mildly relativistic particles ($2<\gamma<10$). Below a certain soft photon compactness, the cooling of these mildly relativistic particles is no longer dominated by the Compton losses but by Coulomb interactions with the lower-energy thermal electrons. Thus, decreasing $l_{\rm s}$ at constant $l_{\rm nth}/l_{\rm s}$ provides a weaker acceleration rate while the cooling remains constant. This reduces the intensity of the high-energy tail up to the point where the model predictions are no longer consistent with the data.
In conclusion, independently of any geometric argument, we obtain conservative bounds on the total compactness of the X-ray emitting plasma, i.e. $2<l<1500$. These limits are consistent with the estimates derived from geometric arguments, but unfortunately not very constraining. Nevertheless, the robustness of the fitted compactness ratio $l_{\rm nth}/l_{\rm s} \simeq 4.5$ allows us to conclude that the luminosity of the cold disc represents at most $\sim$$20$ per cent of the luminosity of the Comptonized component, possibly much less if the plasma is magnetized.
The magnetic case
-----------------
Using the new code <span style="font-variant:small-caps;">belm</span> [@bmm08], we showed that the hard X-ray behavior of GX 339–4 in the bright hard state can be explained by assuming pure non-thermal electron acceleration and subsequent Comptonization of the self-consistently produced synchrotron photons. The model requires no incident radiation from the accretion disc and assumes constant power injection into the magnetized plasma. As in the non-magnetic case, the spectral variability can be mimicked by two different configurations of the acceleration model, involving either a variable power law slope (in the ICM1) or a variable maximum energy (in the ICM2) of the injected electron distribution.
In principle, these models allow us to estimate the average magnetic field strength of the plasma. However, since the fits provide precise constraints only on the ratio $l_{B}/l$, the uncertainties on the total compactness (cf. equation (\[comp\])) propagate into the estimate of $l_B$. To discuss our results, we express $l_B$ as a fraction of the magnetic compactness at equipartition with the radiation field, $l_{B_{\rm R}}$. As we have the approximate dependence $l_{B_{\rm R}} \propto l\times (1+\tau_{\rm T}/3)$ (cf. equation (8) in MB09), the ratio $l_{B}/l_{B_{\rm R}}$ does not depend on the uncertainties regarding the source size and distance and is therefore a good indicator of the importance of the magnetic processes. In addition, $l_{\rm s}=0$ was assumed in order to study the physics of a strongly magnetized medium, but it cannot be excluded that both the synchrotron flux *and* a soft disc component contribute to the cooling of the non-thermal particles. If the medium is additionally illuminated by cold disc photons, less synchrotron cooling is required to reproduce the slope of the lower energy spectrum, implying that the fitted values of the magnetic compactness are in fact conservative upper limits. In the ICM1, we infer $l_{B}/l_{B_{\rm R}}\leq18$ from the *cutoff* and $l_{B}/l_{B_{\rm R}}\leq0.58$ from the *excess* spectrum. In the ICM2, the fitted magnetic compactness is lower for the *cutoff* spectrum: we find $l_{B}/l_{B_{\rm R}}\leq6.0$ and $l_{B}/l_{B_{\rm R}}\leq0.60$ for the *cutoff* and *excess* spectra, respectively.
As mentioned above, the magnetic models do not require any disc blackbody photons to produce the observed 4–500keV spectra. If the accretion disc extends down close to the black hole (as required by the accretion disc corona models), the paucity of soft disc photons can be explained if the Comptonizing coronal material is outflowing at a mildly relativistic speed [@beloborodov1999; @malzac2001]. Using the formulae (5) and (7) of @beloborodov1999, we estimate that bulk velocities of at least $0.6\,c$ are required for the cooling of the corona to be dominated by synchrotron self-Compton. Comptonization off a dynamical corona may then blueshift the emerging spectrum, but these corrections remain moderate at $\sim$$0.6\,c$ and have not been included in the spectral fits. On the other hand, the disc may as well be truncated (as required by the hot flow models), since the data do not explicitly require relativistic smearing of the reflection features. Thus, as long as the particle acceleration is essentially non-thermal, the ICMs may apply to both geometries.
If the electron cooling is dominated by synchrotron self-Compton ($l_{\rm s}/l_{\rm nth}\ll 1/5$ from the non-magnetic models), the fits suggest that the magnetic field strength is roughly in equipartition with the radiation field. From a qualitative fit of the canonical hard state spectrum of Cygnus X-1, MB09 constrained the magnetic energy density to be strictly below equipartition ($l_{B}/l_{B_{\rm R}}<0.3$). As a consequence, even if our results are consistent with the results obtained for Cygnus X-1, the magnetic field could play a more important role in the physics of GX 339–4, at least in the cutoff state.
In both magnetic models, a transition between the two spectra can be explained by the variations of only two parameters. In any case, the magnetic compactness $l_B$ needs to be variable, along with either the injection slope (in the ICM1) or the maximum energy of the accelerated particles (in the ICM2). All other fit parameters are found to remain constant within the $90$ per cent confidence errors. To reproduce the *cutoff* to *excess* transition with the ICM1, the magnetic field has to decrease by a factor of $5.4$ and the injection slope drops from $\Gamma_{\rm inj}=3.5$ to $2.5$, while in the ICM2, the magnetic field decreases by a factor of $3.2$ and the maximum energy increases from $\gamma_{\rm max}=15.2$ to $187$.
Regardless of the precise acceleration process involved in the accretion flow, the inferred change in the magnetic field strength is expected to have an impact on the cutoff energy of the accelerated electron distribution. Indeed, the maximum energy of the particles is achieved when the energy losses become larger than the gains. In the magnetized models investigated in this paper, the energy losses of the most energetic particles are dominated by synchrotron cooling (Compton and Coulomb cooling are significantly smaller), which obviously depends on the magnetic field strength. In the framework of diffusive shock acceleration for instance, it has been shown that when synchrotron losses are included, the maximum Lorentz factor of the accelerated relativistic electrons satisfies $\gamma_{\rm max} \propto 1/(K B^2)$, where $K$ is the diffusion coefficient (see e.g. @webb1984 or @marcowith1999). If $K$ is constant, this predicts a maximum energy of $\gamma_{\rm max} \propto 1/B^2$. However, depending on the assumptions made to describe the acceleration mechanism, the diffusion coefficient can depend both on the particle energy and the magnetic field strength: $K(\gamma,B)$. In the frequently used Bohm limit, it is assumed that $K\propto \gamma /B$, implying that the maximum Lorentz factor of the accelerated particles is expected to follow $\gamma_{\rm max}\propto B^{-1/2}$. Although our results are not consistent with the Bohm predictions, the overall behaviour remains that the cutoff energy decreases with an increasing magnetic field strength. Moreover, other acceleration mechanisms, such as magnetic reconnection for instance, may give different predictions.
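These scalings can be checked numerically against the ICM2 fits: a factor $3.2$ drop in $B$ predicts a $\gamma_{\rm max}$ increase of only $\sim$$1.8$ in the Bohm limit but $\sim$$10$ for a constant diffusion coefficient, the latter being much closer to the fitted increase of $\sim$$12.5$:

```python
import math

# Observed change in gamma_max across the ICM2 fits, compared with the
# scalings expected when shock acceleration is limited by synchrotron
# losses, gamma_max ~ 1/(K B^2): a constant diffusion coefficient K gives
# gamma_max ~ B^-2, while the Bohm limit K ~ gamma/B gives gamma_max ~ B^-1/2.
B_drop   = 3.2                 # factor by which B decreases (cutoff -> excess)
observed = 190.0 / 15.2        # fitted increase of gamma_max, ~12.5

bohm_prediction       = math.sqrt(B_drop)  # ~1.8, far below the observed value
constant_K_prediction = B_drop**2          # ~10.2, much closer

print(observed, bohm_prediction, constant_K_prediction)
```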
As mentioned in the previous section, the injection index is not universal and may undergo variations in response to changing physical conditions in the acceleration region. However, while mere synchrotron cooling implies that $\gamma_{\rm max}$ and $B$ are necessarily anti-correlated, the physical connection between the variations of $\Gamma_{\rm inj}$ and the magnetic field strength remains less obvious. Hence, in the context of a magnetized plasma, the model with variable $\gamma_{\rm max}$ is slightly favored. In any case, the variability of the acceleration mechanism was modeled through the variations of only one parameter (either the injection index or the cutoff energy), although it cannot be excluded that both parameters undergo simultaneous variations. We emphasize, however, that such models are more complicated, as they imply more varying parameters, and would not significantly improve the quality of the fits.
In the context of the ICM2, we investigated the impact of the parameter variations on the electron distribution and the involved radiation mechanisms. Although the lower energy part of the model photon spectra is similar in shape, the underlying electron distributions are quite different (cf. Figure \[spec\_part\]). Indeed, the inferred variations of $B$ and $\gamma_{\rm max}$ not only change the non-thermal part of the distribution but also the temperature of the thermalized component. Figure \[rad\_mec\] compares the contributions from the various radiation mechanisms to the model photon spectra, showing separately the Comptonization off thermal and non-thermal particles[^5]. As expected, the parameter variations strongly increase the ratio of the non-thermal to the thermal component, resulting in the production of the high-energy tail.
In conclusion, the ICM2 provides a framework for a simple, physically motivated interpretation of the data, showing that the spectral variability could be triggered by the variation of only one single parameter, namely the magnetic field strength. Assuming a spherical medium of radius $R=30$R$_{\rm G}$, a black hole mass of $M=13$M$_{\sun}$ and a distance of $d=8$kpc, we infer a factor $3$ variation of the magnetic field strength, between $B\leq 3.9^{+1.5}_{-1.3}\times 10^6$G and $B\leq 1.25^{+0.50}_{-0.45}\times 10^7$G. If the fitted average spectra provide a good estimate of the individual spectra from the single science windows, the time scale of this evolution is at most of the order of hours. Using the standard $\alpha$-prescription [@ss73], the viscous time scale of the accretion disc at a radius $R$ is given by $t_{\rm visc} = \alpha^{-1} \left(H/R\right)^{-2} t_{\rm K}$, where $H/R$ is the aspect ratio of the disc and $t_{\rm K}$ the Keplerian period. Using the lower limit $\alpha \ge 0.01$ (quiescent disc) and $H/R \simeq 0.1$ (thin disc), we obtain $t_{\rm visc} < 10$min for the typical source size $R=30$R$_{\rm G}$. The viscous timescale of ADAF-like models is much shorter. Therefore, even if the geometry of the X-ray emitting region and its dynamical evolution remain uncertain, global changes on time scales of hours are not unrealistic.
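The viscous time estimate can be reproduced with a short back-of-the-envelope calculation, here taking $t_{\rm K}$ as the full orbital period $2\pi/\Omega_{\rm K}$; using $1/\Omega_{\rm K}$ instead reduces the result by $2\pi$, so the estimate is an order-of-magnitude statement either way:

```python
import math

# Order-of-magnitude viscous time t_visc = (1/alpha) * (R/H)^2 * t_K at
# R = 30 R_G around a 13 M_sun black hole, with t_K the Keplerian orbital
# period 2*pi/Omega_K.  CGS units.
G, C, M_SUN = 6.674e-8, 2.998e10, 1.989e33

def viscous_time(alpha=0.01, h_over_r=0.1, mass_msun=13.0, radius_rg=30.0):
    """Viscous time scale [s] of a thin alpha-disc at the given radius."""
    R = radius_rg * G * mass_msun * M_SUN / C**2   # radius [cm]
    t_k = 2.0 * math.pi * math.sqrt(R**3 / (G * mass_msun * M_SUN))
    return t_k / (alpha * h_over_r**2)

print(viscous_time() / 60.0)   # ~11 minutes
```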
{width="17cm"}
Summary and Conclusion
======================
We presented an analysis of the high-energy emission of GX 339–4 in a luminous hard state. With respect to the standard cutoff shape of the hard state spectrum, the 25 – 500keV *INTEGRAL*/SPI data revealed the appearance of a variable high-energy excess. The intensity of this hard excess seems to be positively correlated with the total X-ray luminosity and the associated time scale is shorter than $7$hours. We explored the possible physical origins of this variability through an extensive analysis of two averaged spectra, one showing the typical cutoff shape and one showing this prominent high-energy excess.
We used simultaneous *RXTE*/PCA data to extend the spectral coverage down to $4$keV and fitted the broad band spectra with a variety of physical Comptonization models. Models involving only thermal heating can be ruled out since they are not able to reproduce the high-energy tail. This feature thus confirms that in luminous hard states, the Comptonizing plasma of GX 339–4 contains a fraction of non-thermal particles. Models involving only non-thermal electron acceleration, on the other hand, showed that the thermal part of the spectrum can be produced by an initially non-thermal (power law) distribution, which rapidly thermalizes under the effects of synchrotron self-absorption and/or *e-e* Coulomb collisions.
The relatively good signal to noise ratio of the high-energy channels ($>$$150$keV) allowed us to derive meaningful constraints on the model parameters. Depending on the nature of the plasma (magnetized or not), the transition between the two averaged spectra requires variations of at least two parameters. We found that a magnetized medium subject to non-thermal electron acceleration provides the framework for a simple and physically consistent interpretation of the data. Indeed, we showed that the spectral variability can be triggered by variations of the magnetic field *alone*, implying a subsequent variation of the maximum energy of the accelerated particles.
The quantitative constraints derived from this model yield a very conservative upper limit on the average magnetic field strength in the Comptonizing plasma and suggest that in the bright hard state, the magnetic energy density could reach equipartition with the radiative energy density.
In conclusion, the presented results suggest that magnetic processes are likely to play a crucial role in the production of the high-energy emission of GX 339–4. In luminous hard states, the Comptonized emission could originate from a magnetized corona essentially powered through non-thermal particle acceleration, similarly to what is believed to happen in soft states. The hard X-ray emission in both spectral states may therefore be the consequence of a common physical phenomenon.
The SPI project has been completed under the responsibility and leadership of the CNES. The authors are grateful to the ASI, CEA, DLR, ESA, INTA, NASA, and OSTC for support. They also acknowledge financial support from CNRS, ANR and GDR PCHE in France. The authors thank the *Swift*/*BAT* and *RXTE*/*ASM* teams for publicly providing the monitor results. R.D. thanks M. del Santo and the IBIS team at INAF/IASF-Roma for their kind assistance with the IBIS/ISGRI data analysis. We thank S. Motta for providing the reduced HEXTE data and C. Cabanac for valuable discussions. Last but not least we thank the anonymous referee whose comments substantially improved the quality of the paper.
[^1]: http://heasarc.gsfc.nasa.gov/docs/archive.html
[^2]: If this parameter is left free to vary, the fitted $90$ per cent confidence interval is $\Gamma \in [0.3 , 1.9]$
[^3]: available from the *INTEGRAL* Science Data Centre (ISDC)
[^4]: We used the <span style="font-variant:small-caps;">wftbmd</span> routine (publicly available at the heasarc website) to create the appropriate FITS file required by the <span style="font-variant:small-caps;">atable</span> model in <span style="font-variant:small-caps;">xspec</span>
[^5]: The low energy part of the particle distributions were fitted with a Maxwellian and particles of energy one order of magnitude larger than the inferred temperature were considered to be non-thermal.
---
abstract: |
The Crab Nebula is the brightest TeV gamma-ray source in the sky and has been used for the past 25 years as a reference source in TeV astronomy, for calibration and verification of new TeV instruments. The High Altitude Water Cherenkov Observatory (HAWC), completed in early 2015, has been used to observe the Crab Nebula at high significance across nearly the full spectrum of energies to which HAWC is sensitive. HAWC is unique for its wide field-of-view, nearly 2 sr at any instant, and its high-energy reach, up to 100 TeV. HAWC’s sensitivity improves with the gamma-ray energy. Above $\sim$1 TeV the sensitivity is driven by the best background rejection and angular resolution ever achieved for a wide-field ground array.
We present a time-integrated analysis of the Crab using 507 live days of HAWC data from 2014 November to 2016 June. The spectrum of the Crab is fit to a function of the form $\phi(E) = \phi_0 (E/E_{0})^{-\alpha -\beta\cdot{\rm{ln}}(E/E_{0})}$. The data are well fit with values of $\alpha=2.63\pm0.03$, $\beta=0.15\pm0.03$, and log$_{10}(\phi_0~{\rm{cm}^2}~{\rm{s}}~{\rm{TeV}})=-12.60\pm0.02$ when $E_{0}$ is fixed at 7 TeV and the fit applies between 1 and 37 TeV. Study of the systematic errors in this HAWC measurement is discussed and estimated to be $\pm$50% in the photon flux between 1 and 37 TeV.
Confirmation of the Crab flux serves to establish the HAWC instrument’s sensitivity for surveys of the sky. The HAWC survey will exceed sensitivity of current-generation observatories and open a new view of 2/3 of the sky above 10 TeV.
bibliography:
- 'bibliography.bib'
title: 'Observation of the Crab Nebula with the HAWC Gamma-Ray Observatory'
---
Introduction {#sec:intro}
============
The Crab Pulsar Wind Nebula (the Crab Nebula or the Crab) occupies a place of special distinction in the history of high-energy astrophysics. It was the first high-confidence TeV detection in 1989 using the Whipple telescope [@whipplecrabdiscovery] and is the brightest steady source in the Northern sky above 1 TeV. It has been observed with imaging atmospheric Cherenkov telescopes (IACTs) since [@cangaroocrab; @hegracrab; @hesscrab; @veritascrab; @magiccrabnebula]. The first observation using a ground array was the 2003 Milagro detection [@milagrocrab], and the signal was subsequently seen in other ground arrays [@tibetcrab; @argocrab].
The TeV emission arises from inverse-Compton (IC) up-scattering of low-energy ambient photons by energetic electrons accelerated in shocks surrounding the central pulsar [@crabinversecompton]. Photons from synchrotron emission of the electrons themselves are likely the dominant IC target with sub-dominant contributions from the cosmic microwave background and the extragalactic background light [@crabmodeling]. Despite rare flaring emission below 1 TeV [@agilecrabflare; @fermiflare], and a potential TeV flare [@argoflare], the Crab is generally believed to be steady at higher energies [@hesscrabflarelimits; @veritascrabflarelimits; @argocrab]. Consequently, the Crab Nebula has been adopted as the reference source in TeV astronomy and is a reliable beam of high-energy photons to use for calibrating and understanding new TeV gamma-ray instruments.
The High Altitude Water Cherenkov (HAWC) observatory is a new instrument sensitive to multi-TeV hadron and gamma-ray air showers, operating at latitude of +19$^\circ$N at an altitude of 4,100 meters in the Sierra Negra, Mexico. HAWC consists of a large 22,000 $\rm{m}^2$ area densely covered with 300 Water Cherenkov Detectors (WCDs), of which 294 have been instrumented. Each WCD consists of a 7.3-meter diameter, 5-meter tall steel tank lined with a plastic bladder and filled with purified water. Figure \[fig:layout\] shows a schematic of the WCD and an overhead view of the full instrument. At the bottom of each WCD, three 8-inch Hamamatsu R5912 photomultiplier tubes (PMTs) are anchored in an equilateral triangle of side length 3.2 meters, with one 10-inch high-quantum efficiency Hamamatsu R7081 PMT anchored at the center.
A high-energy photon impinging on the atmosphere above HAWC initiates an extensive electromagnetic air shower. The resulting mix of relativistic electrons, positrons and gamma rays propagates to the ground in a thin pancake of particles at nearly the speed of light. Energetic particles that reach the instrument can interact in the water and produce optical light via Cherenkov radiation. The high altitude of HAWC sets the scale for the photon energy that can be detected. At HAWC’s altitude, the shower from a 1 TeV photon from directly overhead will have about 7% of the original photon energy left when the shower reaches the ground. The fraction of energy reaching the ground rises to $\sim$28% at 100 TeV. The detector is fully efficient to gamma rays with a primary energy above $\sim$1 TeV. Lower-energy photons can be detected when they fluctuate to interact deeper in the atmosphere than is typical.
The voltages on the HAWC PMTs are chosen to match the PMT gains across the array. PMT pulses are amplified, shaped, passed through two discriminators at approximately 1/4 and 4 PEs [@hawc111bsl], and digitized. The length of time that a PMT pulse spends above these thresholds (time-over-threshold, or ToT) is used to estimate the total amount of charge collected in the PMT. Noise arises from several sources, including PMT afterpulsing, fragments of sub-threshold air showers, and PMT dark noise. Due to the combined effect of these sources, the 8-inch PMTs have a hit rate (a hit being each time the PMT signal crosses the 1/4 PE threshold) of 20–30 kHz and the 10-inch PMTs have a hit rate of 40–50 kHz.
The data from the front-end electronics is digitized with commercial time-to-digital converters (TDCs) and passed to a farm of computers for real-time triggering and processing. Events are preserved by the computer farm if they pass the trigger condition: a simple multiplicity trigger, requiring some number of PMTs, $N_{\rm{thresh}}$, to be hit within a 150 ns window. Hits from 500 ns before a trigger to 1000 ns after it are also saved for reconstruction. During the operation of HAWC, $N_{\rm{thresh}}$ has varied between 20 and 50. The trigger rate at the time of writing, due primarily to hadronic cosmic-ray air showers, is $\sim$24 kHz with $N_{\rm{thresh}}=28$.
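As a rough illustration, the multiplicity condition above can be sketched as a sliding window over time-ordered hits. The function below is a simplified stand-alone sketch, not the HAWC DAQ code; the hit representation (a flat list of hit times) and the function name are assumptions for illustration.

```python
from bisect import bisect_right

def find_triggers(hit_times_ns, n_thresh=28, window_ns=150.0):
    """Simple multiplicity trigger sketch: return the start times of
    windows in which at least n_thresh hits arrive within window_ns.
    """
    times = sorted(hit_times_ns)
    triggers = []
    for i, t0 in enumerate(times):
        # Count hits in [t0, t0 + window_ns] using binary search.
        j = bisect_right(times, t0 + window_ns)
        if j - i >= n_thresh:
            triggers.append(t0)
    return triggers
```

With $N_{\rm{thresh}}=28$, a burst of 30 hits inside one window triggers, while isolated noise hits separated by more than the window do not.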
![The left panel shows a schematic of a single HAWC WCD including the steel tank, the covering roof, the three 8-inch Hamamatsu R5912 PMTs, and one 10-inch Hamamatsu R7081-MOD PMT. The tanks are 7.3 meters in diameter and 5 meters high. Water is filled to a depth of 4.5 meters with 4.0 meters of water above each PMT. The right panel shows the layout of the completed HAWC instrument, covering 22,000 $\rm{m}^2$. The location of each WCD is indicated by a large circle and PMTs are indicated with smaller circles. The gap in the center hosts a building with the data acquisition system. []{data-label="fig:layout"}](wcd.eps "fig:"){width="48.00000%"} ![The left panel shows a schematic of a single HAWC WCD including the steel tank, the covering roof, the three 8-inch Hamamatsu R5912 PMTs, and one 10-inch Hamamatsu R7081-MOD PMT. The tanks are 7.3 meters in diameter and 5 meters high. Water is filled to a depth of 4.5 meters with 4.0 meters of water above each PMT. The right panel shows the layout of the completed HAWC instrument, covering 22,000 $\rm{m}^2$. The location of each WCD is indicated by a large circle and PMTs are indicated with smaller circles. The gap in the center hosts a building with the data acquisition system. []{data-label="fig:layout"}](layout.eps "fig:"){width="48.00000%"}
The reconstruction process involves determining the direction of each event, its size, and the likelihood that it is a photon. A first-look reconstruction is applied at the HAWC site. In this analysis, all the data has been reconstructed again (the fourth revision, or Pass 4, of the reconstruction process) off-site in order to have a uniform dataset and the best calibrations available. The chief background to gamma-ray observation is the abundant hadronic cosmic-ray population. Individual gamma-ray-induced air showers can be distinguished from cosmic-ray showers by their topology and by the deeply penetrating particles that reach the ground in hadronic showers.
The strength of HAWC over the IACT technique is that photon showers may be detected across the entire $\sim$ 2 sr field-of-view of the instrument, day or night, regardless of weather conditions. As such, HAWC is uniquely suited to study the long-duration light curve of objects and to search for flaring sources in real time. Additionally, since sources are observed on every transit, HAWC obtains thousands of hours of exposure on each source, greatly improving the sensitivity to the highest-energy photons.
Section \[sec:reconstruction\] outlines the algorithms by which the direction, size, and type (photon or hadron) of each shower is determined. Section \[sec:crabsignal\] describes the identification of the gamma-ray signal from the Crab Nebula. The fit to the Crab energy spectrum, including a treatment of systematic errors, is described in Section \[sec:spectralfit\]. Finally, a discussion of the result is presented in Section \[sec:discussion\], including a comparison to prior spectra measured by peer experiments and a computation of the sensitivity of the HAWC instrument, anchored in the agreement of the HAWC measurement to other experiments.
Air Shower Reconstruction {#sec:reconstruction}
=========================
Events from the detector are reconstructed to determine the arrival direction of the primary particle and the size of the resulting air shower on the ground, a proxy for the primary particle’s energy. Table \[tab:recosteps\] summarizes the steps in reconstruction of HAWC events as explained below.
To illustrate the event reconstruction, Figure \[fig:sampleevent\] shows a strong gamma-ray candidate from the Crab Nebula. In Section \[sec:simulation\], the simulation is briefly described as it is key to evaluating the reconstruction process. Section \[sec:calib\] describes the calibration, by which the time and light level in individual PMTs are determined. Section \[sec:size\] discusses the selection of PMT signals for reconstruction and the event size measurement. The direction reconstruction occurs in two steps, first the core reconstruction, described in Section \[sec:corereconstruction\], and then the direction determination, described in Section \[sec:directionreconstruction\]. The air shower core, the dense concentration of particles along the direction of the original primary, is needed to make the best reconstruction of the air shower’s direction since the air shower arrival front is delayed from a pure plane, depending on the distance from the core. The identification of photon candidates is presented in Section \[sec:ghsep\]. The directional fit is iterated to suppress noise and this iteration is explained in Section \[sec:refinement\].
Step Description Hit Selection
------ ------------------------------------ -------------------------------------------------
1 Calibration
2 Hit Selection
3 Center-of-Mass Core Reconstruction Selected Hits
4 SFCF Core - First Pass Selected Hits
5 Direction - First Pass Selected Hits
6 SFCF Core - Second Pass Selected Hits within 50 ns of First-pass Plane
7 Direction - Second Pass Selected Hits within 50 ns of First-pass Plane
8 Compactness Selected Hits within 20 ns of Second-pass Plane
9 PINCness Selected Hits within 20 ns of Second-pass Plane
: Steps in the HAWC Event Reconstruction. The best core and direction reconstructions are applied with gradual narrowing of the hits used.[]{data-label="tab:recosteps"}
Simulation {#sec:simulation}
----------
The HAWC instrument is modeled using a combination of community-standard simulation packages and custom software. The CORSIKA package (v7.4000) is used for simulation of air showers, propagating the primary particles through the atmosphere to the ground [@corsika]. At ground level, a GEANT 4 simulation (v4.10.00) of the shower particles is used to propagate the ground-level particles through the HAWC tanks and to track the Cherenkov photons to the faces of the PMTs [@geant4].
The response of the PMTs and the calibration are approximated with a custom simulation that assumes that recorded light is faithfully detected with some efficiency and an uncertainty in the logarithm of the total charge recorded. Decorrelated single PE noise is added. The absolute PMT efficiency for detecting Cherenkov photons is established by scaling the simulated PMT response to vertical muons to match data. Most muons passing through HAWC are minimum ionizing with nearly constant energy loss. Vertical muons, therefore, are a nearly constant light source and convenient for establishing the total PMT efficiency. Simulated events are subsequently reconstructed by the same procedure as experimental data to study the performance of the algorithms.
Calibration {#sec:calib}
-----------
The first step in the reconstruction process is calibration, the processes by which true time and light level in each PMT are estimated from the TDC-measured threshold-crossing times of each PMT [@calibrationicrc2013; @calibrationicrc2015].
The calibration associates the measured ToT in each PMT with the true number of PEs. To give a sense of scale, the ToT for a single PE crossing the low-threshold discriminator (about 1/4 PE) is $\sim$100 ns. Above a few PEs, the higher-threshold (about 4 PEs) ToT is used for charge assignment and a high-threshold ToT of 400 ns roughly corresponds to a charge of $10^4$ PEs. The time scale for these ToTs is determined by the shaping of the front-end electronics and is chosen to be longer than the characteristic arrival time distribution of PEs during an air shower so as to integrate the whole air shower arrival into one PMT hit.
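Since the ToT-to-charge mapping is monotonic, it can be applied as an interpolated lookup. The sketch below is illustrative only: the tabulated values are invented (anchored loosely to the stated $\sim$400 ns $\approx 10^4$ PEs point), whereas the real per-PMT calibration curves are measured by the calibration system.

```python
import numpy as np

# Hypothetical high-threshold calibration table: ToT (ns) vs log10(PEs).
# Only the 400 ns ~ 10^4 PE point is taken from the text; the rest are
# placeholder values for illustration.
TOT_NS    = np.array([100.0, 200.0, 300.0, 400.0])
LOG10_PES = np.array([0.5,   2.0,   3.0,   4.0])

def tot_to_charge(tot_ns):
    """Interpolate the calibration curve in log10(charge) and return PEs."""
    return 10.0 ** np.interp(tot_ns, TOT_NS, LOG10_PES)
```

Interpolating in log charge keeps the mapping well behaved over the several decades of charge spanned by air shower hits.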
In addition to the PE measurement, the calibration system accounts for electronic slewing of the PMT waveform: Lower-PE waveforms cross threshold later than contemporaneous high-PE waveforms.
Subsequent reconstruction algorithms treat all PMTs, the 8-inch and 10-inch, as identical, despite the larger size and greater efficiency of the 10-inch PMTs. To accommodate this, an effective charge $Q_{\rm{eff}}$ is defined. For $Q_{\rm{eff}}$, PE values from the central 10-inch PMTs are scaled by a factor of 0.46 to place them on par with the 8-inch PMTs.
Finally, each PMT has a single calibrated timing offset that accounts for the different cable lengths and any other timing delays that may differ from PMT to PMT. These delays are measured to within a few ns by the calibration system and are refined to sub-ns precision by iterated fits to hadronic air showers. Since the hadronic background is isotropic, the point of maximum cosmic-ray density is overhead and the PMT timing offsets are chosen to ensure this is true. A final small ($\sim$0.2$^\circ$) rotation of all events is performed to ensure that the Crab Nebula appears in its known location.
Throughout the analysis, the Crab is assumed to be at a location of 83.63$^\circ$ right ascension and 22.01$^\circ$ declination, in the J2000.0 epoch, taken from [@hegracrab]. While the pulsar position is known more precisely (e.g. [@oldcrabpulsarradio]), this precision is sufficient for use in HAWC.
Hit Selection and Event Size Bins {#sec:size}
---------------------------------
As described in Section \[sec:intro\], the HAWC DAQ records 1.5 $\mu$s of data from all PMTs that have a hit during an air shower event. A subset of these hits are selected for the air shower fit. To be used for the air shower fit, hits must be found between -150 and +400 ns around the trigger time. Hits are removed if they occur shortly after a high-charge hit under the assumption that these hits are likely contaminated with afterpulses. Additionally, hits are removed if they have a pattern of TDC crossings that is not characteristic of real light; they cannot be calibrated accurately. Finally, each channel has an individual maximum calibrated charge, typically a few thousand PEs, but no more than 10$^4$ PEs, above which the PMTs are not used. Above $\sim$10$^4$ PEs, corresponding to a ToT of $\sim$400 ns, prompt afterpulsing in the PMTs can artificially lengthen the ToT measurement giving a false measurement. Channels are considered available for reconstruction if they have a live PMT taking data which has not been removed by one of these cuts.
The angular error and the ability to distinguish photon events from hadron events is strongly dependent on the energy and size of events on the ground. We adopt analysis cuts and an angular resolution description that depends on this measured size. The data is divided into 9 size bins, $\mathcal{B}$, as outlined in Table \[table:cuts\]. The size of the event is defined as the ratio of the number of PMT hits used by the event reconstruction to the total number of PMTs available for reconstruction, $f_{\rm{hit}}$. This definition allows for relative stability of the binning when PMTs are occasionally taken out of service.
For this analysis, events are only used if they have more than 6.7% of the available PMTs seeing light. Since typically 1000 PMTs are available, a minimum of roughly 70 hit PMTs is needed for an event. This is substantially higher than the trigger threshold. The data between the trigger threshold and the threshold for $\mathcal{B}=1$ in this analysis consists of real air showers, and techniques to recover these events and lower the energy threshold, beyond what is presented here, are under study.
$\mathcal{B}$ $f_{\rm{hit}}$ $\psi_{68}$ $\mathcal{P}$ Maximum $\mathcal{C}$ Minimum Crab Excess Per Transit
--------------- ---------------- ------------- ----------------------- ----------------------- -------------------------
1 6.7 - 10.5% 1.03 $<$2.2 $>$7.0 68.4 $\pm$ 5.0
2 10.5 - 16.2% 0.69 3.0 9.0 51.7 $\pm$ 1.9
3 16.2 - 24.7% 0.50 2.3 11.0 27.9 $\pm$ 0.8
4 24.7 - 35.6% 0.39 1.9 15.0 10.58 $\pm$ 0.26
5 35.6 - 48.5% 0.30 1.9 18.0 4.62 $\pm$ 0.13
6 48.5 - 61.8% 0.28 1.7 17.0 1.783 $\pm$ 0.072
7 61.8 - 74.0% 0.22 1.8 15.0 1.024 $\pm$ 0.053
8 74.0 - 84.0% 0.20 1.8 15.0 0.433 $\pm$ 0.033
9 84.0 - 100.0% 0.17 1.6 3.0 0.407 $\pm$ 0.032
: Cuts used for the analysis. The definition of the size bin $\mathcal{B}$ is given by the fraction of available PMTs, $f_{\rm{hit}}$, that record light during the event. Larger events are reconstructed better and $\psi_{68}$, the angular bin that contains 68% of the events, reduces dramatically for larger events. The parameters $\mathcal{P}$ and $\mathcal{C}$ (Section \[sec:ghsep\]) characterize the charge topology and are used to remove hadronic air shower events. Events with a $\mathcal{P}$ less than indicated and a $\mathcal{C}$ greater than indicated are considered photon candidates. The cuts are established by optimizing the statistical significance of the Crab and trend toward harder cuts at larger size events. The number of excess events from the Crab in each $\mathcal{B}$ bin per transit is shown as well. []{data-label="table:cuts"}
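The assignment of an event to a size bin $\mathcal{B}$ from Table \[table:cuts\] amounts to a threshold lookup on $f_{\rm{hit}}$. The sketch below uses the bin edges from the table; the function name and interface are illustrative.

```python
from bisect import bisect_right

# f_hit bin edges from Table [table:cuts], as fractions rather than percent.
FHIT_EDGES = [0.067, 0.105, 0.162, 0.247, 0.356, 0.485, 0.618, 0.740, 0.840, 1.0]

def size_bin(n_hit, n_available):
    """Return the size bin B (1-9), or None if below the analysis threshold."""
    f_hit = n_hit / n_available
    if f_hit < FHIT_EDGES[0]:
        return None
    # Bin 9 is the overflow bin, including f_hit = 100%.
    return min(bisect_right(FHIT_EDGES, f_hit), 9)
```

Defining the bin by the *fraction* of available PMTs (rather than a raw hit count) keeps the binning stable when individual PMTs are taken out of service.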
Figure \[fig:energy\] shows the distribution of true energies as a function of the $\mathcal{B}$ of the events. The distribution of energies naturally depends heavily on the source itself, both its spectrum and the angle at which it culminates during its transit. A pure power-law spectrum with a shape of $E^{-2.63}$ and a declination of 20$^\circ$ was assumed for this figure. As $\mathcal{B}$ is a simple variable — containing no correction for zenith angle, impact position, or light level in the event — the energy distribution of $\mathcal{B}$ bins is wide. Section \[sec:improvements\] discusses planned improvements to this event parameter that will measure the energy of astrophysical gamma rays better.
Bin $\mathcal{B}=9$ bears particular attention. It is an “overflow” bin containing events which have between 84% and 100% of the PMTs in the detector seeing light. Typically, a 10 TeV photon will hit nearly every sensor and the $\mathcal{B}$ variable has no dynamic range above this energy. This limit is not intrinsic to HAWC and variables that utilize the light level seen in PMTs on the ground, similar to what was used in the original sensitivity study [@sensipaper], have dynamic range above 100 TeV. These variables, not used in this analysis, will improve the identification of high-energy events. This is discussed further in Section \[sec:improvements\].
![Fits to the true energy distribution of photons from a source with a spectrum of the form $E^{-2.63}$ at a declination of +20$^\circ$N for $\mathcal{B}$ between 1 and 9, summed across a transit of the source. Better energy resolution and dynamic range can be achieved with a more sophisticated variable that takes into account the zenith angle of events and the total light level on the ground. The curves have been scaled to the same vertical height for display. []{data-label="fig:energy"}](energy.eps){width="65.00000%"}
Core Reconstruction {#sec:corereconstruction}
-------------------
In an air shower, the concentration of secondary particles is highest along the trajectory of the original primary particle, termed the air shower core. Determining the position of the core on the ground is key to reconstructing the direction of the primary particle. In the sample event, Figure \[fig:sampleevent\], the air shower core is evident in Figure \[fig:sampleevent\]a. The image is an overhead view of the HAWC detector with circles indicating the WCD location and the PMTs within the WCDs. The colors indicate the amount of light (measured in units of PEs) seen in each PMT. The air shower core is evident as the point of maximum PE density.
The PE distribution on the ground is fit with a function that decreases monotonically with the distance from the shower core. The signal in the $i$th PMT, $S_i$, is presumed to be
$$S_i = S(A, \vec{x}, \vec{x}_i) = A \Big(\frac{1}{2\pi \sigma^2}e^{-{|\vec{x}_i - \vec{x}|^2}/{2\sigma^2}} + \frac{N}{(0.5 + {|\vec{x}_i - \vec{x}|} /{R_{m}})^3}\Big)
\label{eqn:sfcf}$$
where $\vec{x}$ is the core location, $\vec{x}_i$ is the location of the measurement, $R_{m}$ is the Molière radius of the atmosphere, approximately 120 m at HAWC altitude, $\sigma$ is the width of the Gaussian, and $N$ is the normalization of the tail. Fixed values of $\sigma=10$ m and $N=5 \cdot 10^{-5}$ are used. This leaves three free parameters, the core location and overall amplitude, $A$.
The functional form used in this algorithm, termed the Super Fast Core Fit (SFCF), is a simplification of a modified Nishimura-Kamata-Greisen (NKG) function [@nkgfunction] and is chosen for rapid fitting of air shower cores. The NKG function has an additional free parameter, the shower age, and involves computationally intensive power law and gamma function evaluation. The SFCF hypothesis in Equation \[eqn:sfcf\] is similar, but numerical minimization converges faster because the function is simpler, the derivatives can be computed analytically, and there is no pole at the core location.
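A minimal numerical sketch of the SFCF hypothesis of Equation \[eqn:sfcf\] is shown below, with the fixed parameter values quoted in the text. The array interface and function name are illustrative, and the fit itself (minimizing a goodness-of-fit over $A$ and the core location $\vec{x}$) is not shown.

```python
import numpy as np

R_MOLIERE = 120.0  # m, approximate Moliere radius at HAWC altitude
SIGMA     = 10.0   # m, fixed Gaussian width
N_TAIL    = 5e-5   # fixed normalization of the tail

def sfcf(amplitude, core_xy, pmt_xy):
    """Expected SFCF signal S_i in each PMT.

    amplitude : overall amplitude A
    core_xy   : (x, y) core position in meters
    pmt_xy    : (n, 2) array of PMT positions in meters
    """
    r = np.linalg.norm(np.asarray(pmt_xy, dtype=float) - np.asarray(core_xy, dtype=float), axis=-1)
    gauss = np.exp(-r**2 / (2.0 * SIGMA**2)) / (2.0 * np.pi * SIGMA**2)
    tail = N_TAIL / (0.5 + r / R_MOLIERE) ** 3
    return amplitude * (gauss + tail)
```

Note that the 0.5 offset in the tail term keeps the function finite at $r=0$, which is the absence of a pole mentioned above.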
Figure \[fig:sampleevent\]b shows the recorded charge in each PMT as a function of the PMT’s distance along the ground to the reconstructed shower core. The fit for this event is shown along with the PINCness moving average from Section \[sec:ghsep\]. While the full NKG function would describe the lateral distribution better, the SFCF form allows rapid identification of the shower center, which is sufficient for the present analysis. Cores can be localized to a median error of $\sim$2 meters for large events ($\mathcal{B}=8$) and $\sim$4 meters for small events ($\mathcal{B}=3$) for gamma-ray events with a core that lands on the HAWC detector. The error in reconstructing the shower core increases as the core moves further from the array. For example, a shower with a core that is 50 meters from the edge of the array will have an error in the location of the core of $\sim$35 meters.
Direction Reconstruction {#sec:directionreconstruction}
------------------------
To first order, the air shower particles arrive on a plane defined by the speed of light and the direction of the primary particle. In fact, the shower front has a slight conical shape centered at the air shower core. Several effects lead to this shape. First, particles far from the core arrive late due to multiple scattering and longer travel distances. Second, the multiple scattering of particles at the edges of the shower leads to a broader arrival time distribution than at the core. Since the number of particles decreases with increasing distance from the shower core, there are fewer opportunities to sample from the particle arrival time distribution. This decrease in sampling leads to a delay in the measured arrival time of the shower.
The conical shape of the air shower front can be easily seen with the sample event in Figure \[fig:sampleevent\]. Figure \[fig:sampleevent\]c shows, for each PMT in the sample event, the calibrated time each PMT saw light. The color trend is due to the inclined direction of the air shower. Taking out this inclination, we can see the curved shower front. The event in Figure \[fig:sampleevent\] was chosen for display because — taken from a high signal-to-background sample of data — it is very likely a photon from the Crab. If we assert the origin of this particle is the Crab Nebula, we know the air shower plane precisely and can make Figure \[fig:sampleevent\]d: We adjust the times of each PMT hit assuming a pure planar air shower originating at the Crab Nebula. Figure \[fig:sampleevent\]d shows the plane-corrected time as a function of the PMT distance from the core of the air shower along the ground. A pure planar shower would be a horizontal line on this figure. The delay of particles far from the core is evident.
In the present analysis, we use the reconstructed core location to correct for this effect. A combined curvature/sampling correction — a function of the distance of hits from the shower core and the total charge recorded in the PMT — is utilized for this correction. The curvature/sampling correction is based on a combination of simulation and Crab observations. The rough functional form is tabulated using gamma-ray simulation. The simulation-optimized curvature/sampling correction yields a measured angular resolution approximately a factor of 2 worse than predicted from simulation. The origin of this discrepancy is likely due to some oversimplification of the electronics simulation. Repeated fits to the Crab have yielded a modification to the curvature/sampling correction that is a simple quadratic function of the distance between a hit and the shower core. While the origin of the discrepancy is under investigation, it amounts to a relatively small correction, approximately 2 ns/100 meters. Nonetheless, the improvement in the angular resolution is nearly a factor of two for all $\mathcal{B}$. Remaining disagreement between the simulated and measured angular resolution is adopted as a systematic error.
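The application of the curvature/sampling correction can be sketched as subtracting an expected delay, as a function of distance from the core, from each hit time before the plane fit. The coefficients below are illustrative: the quadratic term corresponds to the quoted $\sim$2 ns at 100 meters, while the linear term is a placeholder; the real correction is tabulated from simulation and also depends on the charge recorded in each PMT.

```python
def correct_hit_times(t_ns, r_m, curvature_ns_per_m=0.04, quad_ns_per_m2=2e-4):
    """Subtract a hypothetical curvature/sampling delay from raw hit times
    so that a simple plane fit can be applied to the corrected front.

    t_ns : raw hit times (ns); r_m : hit distances from the core (m).
    """
    return [t - (curvature_ns_per_m * r + quad_ns_per_m2 * r * r)
            for t, r in zip(t_ns, r_m)]
```

For a hit 100 m from the core, these placeholder coefficients remove 4 ns of base curvature plus the 2 ns data-derived quadratic modification.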
After correcting for the sampling and curvature, the angular fit is a simple $\chi^2$ planar fit and has been described before [@milagrocrab].
Photon/Hadron Separation {#sec:ghsep}
------------------------
Hadronic cosmic rays are the most abundant particles producing air showers in HAWC and constitute the chief background to high-energy photon observation. The air showers produced by high-energy cosmic rays and gamma rays differ: gamma-ray showers are pure electromagnetic showers with few muons or pions. Conversely, hadronic cosmic rays produce hadronic showers rich with pions, muons, other hadronic secondaries, and structure due to the showering of daughter particles created with high transverse momentum. In HAWC, these two types of showers appear quite different, particularly for showers above several TeV.
Figure \[fig:ghsepexample\] shows the lateral distributions for two showers, an obvious cosmic ray (left) and a strong photon candidate (right) from the Crab Nebula. The effective light level $Q_{\rm{eff}}$ falls off for hits further from the shower core in both showers, but in the hadronic shower there are sporadic high-charge hits far from the air shower’s center. This clumpiness is characteristic of hadronic showers and arises from a combination of penetrating particles (primarily muons) and hadronic sub-showers which are largely absent in photon-induced showers.
![Lateral distribution functions of an obvious cosmic ray (left) and a photon candidate from the Crab Nebula (right). The cosmic ray has isolated high-charge hits far from the shower core due to penetrating particles in the hadronic air shower. These features are absent in the gamma-ray shower. []{data-label="fig:ghsepexample"}](pinc.big-cr.eps "fig:"){width="45.00000%"} ![Lateral distribution functions of an obvious cosmic ray (left) and a photon candidate from the Crab Nebula (right). The cosmic ray has isolated high-charge hits far from the shower core due to penetrating particles in the hadronic air shower. These features are absent in the gamma-ray shower. []{data-label="fig:ghsepexample"}](pinc.rec.event-4.7606.eps "fig:"){width="45.00000%"}
Two parameters are used to identify cosmic-ray events. The first parameter, compactness, was used in the sensitivity study [@sensipaper]. The variable ${\rm{CxPE}}_{40}$ is the effective charge measured in the PMT with the largest effective charge outside a radius of 40 meters from the shower core. We then define the compactness, $\mathcal{C}$, as
$$\mathcal{C} = {{\rm{N_{hit}}} \over {\rm{CxPE_{40}}}}$$
where ${\rm{N_{hit}}}$ is the number of hit PMTs during the air shower. ${\rm{CxPE}}_{40}$ is typically large for a hadronic event, so $\mathcal{C}$ is small for hadronic events and large for photon candidates.
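The compactness computation reduces to a maximum over hits outside the 40 m radius; a minimal sketch (function name and interface assumed for illustration):

```python
def compactness(n_hit, charges, distances_m, radius_m=40.0):
    """C = N_hit / CxPE40: number of hit PMTs divided by the largest
    effective charge recorded more than radius_m from the shower core.
    """
    outside = [q for q, r in zip(charges, distances_m) if r > radius_m]
    if not outside:
        # No hits beyond the radius: maximally photon-like by this measure.
        return float('inf')
    return n_hit / max(outside)
```

A hadronic shower with an isolated high-charge muon hit far from the core yields a large ${\rm{CxPE}}_{40}$ and therefore a small $\mathcal{C}$.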
In addition to the largest hit outside the core, the “clumpiness” of the air shower is quantified with a parameter $\mathcal{P}$, termed the PINCness of an event (short for Parameter for Identifying Nuclear Cosmic-rays). $\mathcal{P}$ is defined using the lateral distribution function of the air shower, seen in Figure \[fig:ghsepexample\]. Each of the PMT hits, $i$, has a measured effective charge $Q_{\rm{eff},i}$. $\mathcal{P}$ is computed using the logarithm of this charge $\zeta_{i}={\rm{log}}_{10}(Q_{\rm{eff},i})$. For each hit, an expectation is assigned $\langle\zeta_i\rangle$ by averaging the $\zeta_i$ in all PMTs contained in an annulus containing the hit, with a width of 5 meters, centered at the core of the air shower.
$\mathcal{P}$ is then calculated using the $\chi^2$ formula:
$$\mathcal{P}= {1 \over N} {\sum_{i=1}^{N} { {(\zeta_i - \langle\zeta_i\rangle)^2} \over{ {\sigma_{\zeta_i}}^2} }}$$
The errors $\sigma_{\zeta_i}$ are assigned from a study of a sample of strong gamma-ray candidates in the vicinity of the Crab.
The $\mathcal{P}$ variable essentially requires axial smoothness. Figure \[fig:ghsepexample\] shows the moving average $\langle\zeta_i\rangle$ for two sample events. The hadronic event in Figure \[fig:ghsepexample\] is “clumpy” and has several hits that differ sharply from the moving average, yielding a large $\mathcal{P}$.
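The PINCness computation can be sketched as follows. For simplicity the sketch uses a single fixed $\sigma_{\zeta}$ for all hits, whereas the real errors are measured from gamma-ray-like events near the Crab; the interface and the fixed value are illustrative assumptions.

```python
import math
from collections import defaultdict

def pincness(charges, distances_m, ring_width_m=5.0, sigma_zeta=0.4):
    """Chi^2-like smoothness of the lateral distribution in log10 charge.

    Hits are grouped into annuli of width ring_width_m around the core;
    each hit's zeta = log10(Q_eff) is compared to its annulus mean.
    """
    zetas = [math.log10(q) for q in charges]
    rings = defaultdict(list)
    for z, r in zip(zetas, distances_m):
        rings[int(r // ring_width_m)].append(z)
    means = {k: sum(v) / len(v) for k, v in rings.items()}
    n = len(zetas)
    return sum((z - means[int(r // ring_width_m)]) ** 2
               for z, r in zip(zetas, distances_m)) / (n * sigma_zeta ** 2)
```

An axially smooth lateral distribution gives $\mathcal{P} \approx 0$, while isolated high-charge hits inside an otherwise low-charge annulus inflate it.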
The $\mathcal{C}$ and $\mathcal{P}$ variables are well-modeled in simulation. Figure \[fig:compactdistribution\] shows the measured distribution of $\mathcal{C}$ in the vicinity of the Crab Nebula (the Crab region) and in an annular reference region around the Crab (the background region). The background region is scaled to have the same solid angle as the Crab region. The distributions in the vicinity of the Crab are made of a combination of hadronic cosmic-rays and true photons from the Crab. Figures \[fig:compactdistribution\]c and \[fig:compactdistribution\]d show these distributions with the background distribution subtracted. The subtraction yields the data-measured distribution of $\mathcal{C}$ for gamma rays from the Crab. Figure \[fig:pincdistribution\] is a comparable figure for $\mathcal{P}$. Figures \[fig:compactdistribution\] and \[fig:pincdistribution\] are compared to a simulation prediction from the final fitted flux from Section \[sec:spectralfit\]; the simulation agreement is evident.
Noise and Fit Refinement {#sec:refinement}
------------------------
HAWC’s outer 8-inch PMTs individually trigger at some 20–30 kHz and the 10-inch central PMTs at 40–50 kHz. Of this random noise, roughly half (with large uncertainties) is believed to be due to real shower fragments and roughly half due to non-shower sources like radioactive contaminants in the PMT glass or PMT afterpulsing. Approximately 1 kHz of the noise is “high-PE” noise from individual accidental air-shower muons (10–200 PEs) and can be correlated between the PMTs within the WCD that the muon hits.
This noise can bias the air shower reconstruction and the muon noise has the potential, if not removed, to confuse the identification of gamma-ray showers because a single muon is enough to indicate that a shower is of hadronic origin.
In order to achieve the best direction reconstruction and to avoid falsely rejecting true photons, the SFCF core reconstruction and the plane fit are each performed twice as outlined in Table \[tab:recosteps\]. During the first pass, all selected hits are used to locate the core and initial direction. After this first “rough” fit, hits that are more than $\pm$50 ns from the curvature/sampling corrected air shower plane are removed and the shower is fit a second time.
The computation of photon/hadron separation variables is done with hits that are within $\pm$20 ns of the curvature/sampling-corrected air shower plane. With these cuts, only $\sim$4% of gamma-ray events will have an accidental muon contributing to the photon/hadron variable computation and risk being falsely rejected.
Crab Nebula Signal {#sec:crabsignal}
==================
Once individual events are reconstructed, the identification and characterization of sources proceeds. Section \[sec:dataset\] discusses the first 553 days of data-taking. Section \[sec:bkg\] describes how the residual cosmic-ray background, after gamma/hadron separation cuts, is estimated in the vicinity of the Crab. Section \[sec:angres\] describes the validation of the angular resolution. Section \[sec:selection\] describes the optimization of photon/hadron discrimination cuts.
Dataset {#sec:dataset}
-------
We consider here data taken by HAWC between 2014 November 26 and 2016 June 2, a total elapsed time of 553 days. The detector was not taking data for a cumulative time of 40 days for various operational reasons. Additionally, a further 7 days of data was rejected due to trigger rate instability. This yields a total livetime of 506.7 days, an average fractional livetime of 92%. Figure \[fig:livetime\] shows the fractional livetime achieved in blocks of 10 days. With the exception of one period of extended downtime (due to a failure of the power transformer at the site), during no single 10-day block was the detector live less than 75% of the time.
The occasional downtime does not heavily bias the exposure. Figure \[fig:livetime\] shows the exposure of the instrument, measured as the fractional deviation of the number of shower events observed as a function of reconstructed right ascension. The anisotropy in cosmic-ray arrival direction is subdominant at $\sim$10$^{-3}$ [@milagrolsa] and we can conclude that the exposure is flat to within $\pm$2%. During the 507-day livetime, the Crab is visible above a zenith angle of 20$^\circ$ for $\sim$1400 hours and above a zenith angle of 45$^\circ$ for $\sim$3200 hours.
![The left panel shows the livetime fraction of HAWC during the first 553 days of data-taking. Data is shown averaged over 10-day increments along with the average from the entire dataset. The right panel shows the exposure of the instrument, measured with the reconstructed event direction, as a function of right ascension. The overhead sky is nearly uniformly exposed with a maximum deviation from uniformity of less than $\pm$2%.[]{data-label="fig:livetime"}](livetime.eps "fig:"){width="45.00000%"} ![The left panel shows the livetime fraction of HAWC during the first 553 days of data-taking. Data is shown averaged over 10-day increments along with the average from the entire dataset. The right panel shows the exposure of the instrument, measured with the reconstructed event direction, as a function of right ascension. The overhead sky is nearly uniformly exposed with a maximum deviation from uniformity of less than $\pm$2%.[]{data-label="fig:livetime"}](exposure.eps "fig:"){width="45.00000%"}
Background Estimation {#sec:bkg}
---------------------
Even with strict photon/hadron discrimination cuts, the data is still dominated by hadronic cosmic-ray events. Fortunately, the directions of hadronic cosmic rays are randomized by magnetic deflection in transit from their sources and the population of cosmic rays is isotropic to a few parts in 10$^{3}$ [@milagrolsa]. Gamma-ray sources, by comparison, appear as localized “bumps” on this smooth background. In order to identify gamma-ray sources, this background contamination must be estimated.
We utilize an algorithm called direct integration, developed for the analysis of Milagro data [@milagrocrabspectrum]. Because of the strong cosmic-ray rejection capability of HAWC, the number of background events observed in the highest-energy $\mathcal{B}$ bins is very sparse. To compensate for this sparseness, the direct integration background estimate is smoothed over $0.5^\circ$. This smoothing has been studied using simulated data sets and does not adversely affect the background estimate.
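As an illustration only, a minimal one-dimensional sketch of the direct-integration idea (the HAWC implementation works in two dimensions, within integration windows over which the local acceptance is assumed stable; the binning and function names here are hypothetical):

```python
import numpy as np

def direct_integration_1d(event_ha, event_sid, nbins=360):
    """1-D sketch of the direct-integration background estimate.

    event_ha:  per-event hour angle in degrees (local coordinate)
    event_sid: per-event sidereal time in degrees
    An event's right ascension is (sidereal time - hour angle) mod 360.
    The local acceptance and the all-sky rate are assumed independent.
    """
    edges = np.linspace(0.0, 360.0, nbins + 1)
    width = 360.0 / nbins
    # Local-coordinate acceptance: normalized hour-angle distribution.
    acc, _ = np.histogram(np.asarray(event_ha) % 360.0, bins=edges, density=True)
    # All-sky event rate versus sidereal time (counts per time bin).
    rate, _ = np.histogram(np.asarray(event_sid) % 360.0, bins=edges)
    # Expected background counts versus right ascension: convolve the
    # acceptance with the rate over the sidereal-time bins.
    bkg = np.zeros(nbins)
    for i in range(nbins):        # right-ascension bin
        for j in range(nbins):    # sidereal-time bin
            k = int(((edges[j] - edges[i]) % 360.0) // width)
            bkg[i] += rate[j] * acc[k] * width
    return bkg
```

Because the acceptance is normalized, the estimated background integrates to the total number of events, and an isotropic event sample yields a flat background map.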
Knowing the background at each pixel in the sky, we can evaluate the likelihood that there is a gamma-ray source at a specific location and the photon flux from each source.
Point Spread Function {#sec:angres}
---------------------
The point spread function, $\rho(\psi)$, describes how accurately the directions of gamma-ray events are reconstructed. Here, $\psi$ is the space-angle difference between the true photon arrival direction and the reconstructed direction. To a good approximation, the point spread function of HAWC is the sum of two 2-dimensional Gaussians with different widths.
$$\rho(\psi) = \alpha G_{1}(\psi) + (1-\alpha) G_{2}(\psi)
\label{fcn:psf}$$
where $G_{i}$ is a Gaussian distribution with width $\sigma_{i}$
$$G_{i} (\psi)= {1 \over{ {2 \pi \sigma_{i}^2}}} {e^{-{\psi^2 \over {2\sigma_{i}}^2 }}}$$
which is normalized to unity across the unit sphere.
Figure \[fig:angresexhibit\] exhibits the measured angular resolution in HAWC data in two size bins $\mathcal{B}=3$ and $\mathcal{B}=8$. The solid-angle density of recorded events $dN/d\Omega$ in the vicinity of the Crab is shown as a function of $\psi^2$. Bins of $\psi^2$ have constant solid angle (in the small-angle approximation), so any remaining cosmic-ray background shows up as a flat component and the gamma rays are evident as a peak near $\psi^2=0$. The improvement in angular resolution for larger events is clear.
Fits to this functional form of Equation \[fcn:psf\] can have highly coupled parameters. It is more useful and traditional to quantify the resulting fits with the 68% containment radius, $\psi_{68}$, the angular radius around the true photon direction in which 68% of events are reconstructed. Figure \[fig:angres\] shows $\psi_{68}$, for each $\mathcal{B}$ of the analysis, measured on the Crab and predicted from simulation. At best, events are localized to within 0.17$^\circ$, the best angular resolution achieved for a wide-field ground array.
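A numerical sketch of extracting $\psi_{68}$ from the two-component Gaussian PSF of Equation \[fcn:psf\]; the parameter values used below are illustrative, not HAWC's fitted values:

```python
import math

def containment(psi, alpha, sigma1, sigma2):
    """Fraction of events reconstructed within angle psi of the true
    direction, for a two-component 2-D Gaussian PSF."""
    frac = lambda s: 1.0 - math.exp(-psi ** 2 / (2.0 * s ** 2))
    return alpha * frac(sigma1) + (1.0 - alpha) * frac(sigma2)

def psi68(alpha, sigma1, sigma2, target=0.68):
    """68% containment radius, found by bisection on the monotone
    containment fraction."""
    lo, hi = 0.0, 10.0 * max(sigma1, sigma2)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if containment(mid, alpha, sigma1, sigma2) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the single-Gaussian limit ($\alpha=1$) this reduces to $\psi_{68}=\sigma\sqrt{-2\ln(0.32)}\approx1.51\,\sigma$; a broad second component inflates $\psi_{68}$, which is why the fitted parameters are summarized by the containment radius rather than quoted individually.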
Knowing the angular resolution is critical to subsequent steps of the analysis. Figure \[fig:angresexhibit\] indicates that the simulated angular resolution is in good agreement with measurements of the Crab Nebula. This is important because the angular resolution of HAWC for objects at declinations above and below the Crab will differ. While the measured PSF at the position of the Crab cannot be easily extrapolated to other declinations, the simulation can be used to predict the shape of the PSF at any declination. Therefore, the data-simulation agreement shown in Figure \[fig:angresexhibit\] is an important verification step.
![The figure shows the measured angular resolution, the angular bin required to contain 68% of the photons from the Crab, as a function of the event size, $\mathcal{B}$. The measurements are compared to simulation. The measured and predicted angular resolutions are close enough that using the simulated angular resolution for measuring spectra is a sub-dominant systematic error. []{data-label="fig:angres"}](data-vs-mc-angres.eps){width="65.00000%"}
Cut Selection and Gamma-Ray Efficiency {#sec:selection}
--------------------------------------
The two parameters described in Section \[sec:ghsep\], the compactness, $\mathcal{C}$, and PINCness, $\mathcal{P}$, are used to remove hadrons and keep gamma rays. Events are removed using simple cuts on these variables and the cuts depend on the size bin, $\mathcal{B}$, of the event. The cuts are chosen to maximize the statistical significance with which the Crab is detected in the first 337 days of the 507-day dataset. Concerns about using the data itself to optimize the cuts are minimal for a source as significant as the Crab.
![The figure shows the fraction of gamma rays and background hadron events passing photon/hadron discrimination cuts as a function of the event size, $\mathcal{B}$. Good efficiency for photons is maintained across all event sizes with hadron efficiency approaching 1$\times$10$^{-3}$ for high-energy events. []{data-label="fig:ghsep"}](efficiency.eps){width="65.00000%"}
Table \[table:cuts\] shows the cuts chosen for each $\mathcal{B}$ bin. The rates of events across the entire sky going into the 9 bins, after hadron rejection cuts, vary dramatically, from $\sim$500 Hz for $\mathcal{B}=1$ to $\sim$0.05 Hz for $\mathcal{B}=9$. Figure \[fig:ghsep\] shows the predicted efficiency for gamma rays (from simulation) along with the measured efficiency for hadronic background under these cuts. The efficiency for photons is greater than 30% in every bin while, at best, only 2 in $10^3$ hadrons are kept. The efficacy of the cuts is a strong function of the event size, primarily because larger cosmic-ray events produce many more muons than gamma-ray events of a similar size.
The limiting rejection at high energies is better than predicted in the sensitivity design study [@sensipaper]. The original study was conservative in estimating the rejection power that HAWC would ultimately achieve. With more than a year of data, we now know the hadron rejection of the cuts and can accurately compute the background efficiency.
Spectral Fit {#sec:spectralfit}
============
Knowing the angular resolution and the background in each $\mathcal{B}$, the energy spectrum of the Crab Nebula may be inferred from the measured data. Section \[sec:likelihood\] describes the likelihood fit to the data. Section \[sec:likelihoodresults\] describes the resulting measurement, and Section \[sec:systematics\] describes the systematic errors to which this measurement is subject.
Likelihood Analysis {#sec:likelihood}
-------------------
The HAWC data is fit using the maximum likelihood approach to find the physical flux of photons from the Crab [@wilkstheorem; @liff]. In this approach, the likelihood of the observations is computed under two “nested” hypotheses, where some number of free parameters is fixed in one model. This can be used to conduct a likelihood ratio test by forming a test statistic, TS, that quantifies how incompatible the data is with a pure background hypothesis, or to test the improvement gained from additional free parameters in the functional form of the hypothesis spectrum.
The likelihood function is formed over the small (on the scale of the angular resolution) spatial pixels within 2 degrees of the Crab. Each pixel $p$ has an expected number of background events, $B_{p}$, and, for a specific flux model, an expected number of true photons, $S_{p}(\vec{a})$, where $\vec{a}$ denotes the parameters of our spectral model of the Crab. The predicted photon counts fall off from the source according to the assumed point spread function. The likelihood $\mathcal{L}(\vec{a})$ is then the simple Poisson probability of obtaining the measured events in each pixel, $M_p$, under the assumption of the flux given by $\vec{a}$. The $\mathcal{B}$ dependence of each term in Equation \[eqn:likelihood\] is suppressed.
$${\rm{ln}}(\mathcal{L}(\vec{a})) = \sum_{\mathcal{B}=1}^{9} \sum_{p=1}^{N} {\rm{ln}} \left( {{( B_{p} +S_{p}(\vec{a}) )} ^{M_p} e^{-( B_{p} +S_{p}(\vec{a}) )} \over {M_p!}} \right)
\label{eqn:likelihood}$$
Specifically, we fit a differential photon flux $\phi(E)$ of the log parabola (LP) form:
$$\phi(E) = \phi_0 (E/E_{0})^{-\alpha -\beta\cdot{\rm{ln}}(E/E_{0})}
\label{eqn:logparabola}$$
Here, $\phi_0$ is the flux at $E_{0}$, $\alpha$ is the primary spectral index and $\beta$ is a second spectral index that governs the changing spectral power across the energy range of the fit. In this formulation, $E_{0}$ is not fitted but is chosen to minimize correlations between the free parameters in the fit. When fit to an LP function, $E_{0}=7~{\rm{TeV}}$ produces good results.
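For concreteness, the LP form can be evaluated as follows; the default parameter values are the best-fit values reported in Section \[sec:likelihoodresults\], with $\phi_0$ in ${\rm cm^{-2}\,s^{-1}\,TeV^{-1}}$:

```python
import math

def log_parabola(E, phi0=10 ** -12.60, alpha=2.63, beta=0.15, E0=7.0):
    """Differential flux phi(E) = phi0 (E/E0)^(-alpha - beta ln(E/E0)).
    E and E0 in TeV; defaults are the quoted best-fit values."""
    x = E / E0
    return phi0 * x ** (-alpha - beta * math.log(x))
```

At the pivot energy $E_0$ the flux equals $\phi_0$ by construction, and for $\beta>0$ the local spectral index steepens with increasing energy.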
Fit Results {#sec:likelihoodresults}
-----------
We find the parameters for $\vec{a}$ that maximize the likelihood function under signal and background hypotheses and quantify the error region of $\vec{a}$ using Wilks’ Theorem [@wilkstheorem]. Figure \[fig:likelihoodspace\_logparabola\] shows the corresponding spaces of $\alpha$, $\beta$ and $\phi_0$ for the LP fit that are consistent with HAWC data at 1 and 2$\sigma$. The maximum likelihood occurs at $\alpha=2.63\pm0.03$, $\beta=0.15\pm0.03$, and log$_{10}(\phi_0~{\rm{cm}^2}~{\rm{s}}~{\rm{TeV}})=-12.60\pm0.02$. At this best flux the TS, compared to the background-only hypothesis, is 11225, a more than 100$\sigma$ detection.
The TS between an unbroken power-law hypothesis (with $\beta=0$) and the full LP fit is 142, so the spectrum is inconsistent with an unbroken power law at 12$\sigma$.
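As a rough sanity check of the quoted significances, Wilks' theorem makes the TS asymptotically chi-squared distributed in the number of extra free parameters; treating each TS as a one-effective-parameter case, the significance in sigma is approximately $\sqrt{\rm TS}$:

```python
import math

# Approximate one-parameter conversion from TS to Gaussian sigma.
detection_sigma = math.sqrt(11225)   # background-only vs. best fit
curvature_sigma = math.sqrt(142)     # beta = 0 vs. free beta
```

This reproduces the "more than 100$\sigma$" detection and the $\sim$12$\sigma$ rejection of the unbroken power law.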
![Likelihood space around Crab best fit. Shown are the best-fit flux and the region of fluxes allowed at 1 and 2$\sigma$. The space shown is the $\phi_0$, $\alpha$ and $\beta$ space from Equation \[eqn:logparabola\] with a pivot energy of $E_{0}=$7 TeV. []{data-label="fig:likelihoodspace_logparabola"}](Norm-vs-Alpha.eps "fig:"){width="30.00000%"} ![Likelihood space around Crab best fit. Shown are the best-fit flux and the region of fluxes allowed at 1 and 2$\sigma$. The space shown is the $\phi_0$, $\alpha$ and $\beta$ space from Equation \[eqn:logparabola\] with a pivot energy of $E_{0}=$7 TeV. []{data-label="fig:likelihoodspace_logparabola"}](Norm-vs-Beta.eps "fig:"){width="30.00000%"} ![Likelihood space around Crab best fit. Shown are the best-fit flux and the region of fluxes allowed at 1 and 2$\sigma$. The space shown is the $\phi_0$, $\alpha$ and $\beta$ space from Equation \[eqn:logparabola\] with a pivot energy of $E_{0}=$7 TeV. []{data-label="fig:likelihoodspace_logparabola"}](Alpha-vs-Beta.eps "fig:"){width="30.00000%"}
We quantify the energy range of this fit two ways. First, we take the spectral fit solution and compute the lower 10% quantile of true energy for $\mathcal{B}$=1 and the upper 90% quantile of true energy for $\mathcal{B}$=9. These are 375 GeV and 85 TeV respectively. This is the energy range over which, under the fitted hypothesis, most of the measured photons are expected to lie.
A more conservative approach focuses on the lowest and highest energies where HAWC data could definitively reveal a sharp cutoff in the spectrum. To do this, we separately fit functions of the forms:
$$\phi(x)=
\begin{cases}
0 & \text{if } E\geq E_{\rm{high}} \\
\phi_{0} E^{-\alpha}, & \text{otherwise}
\end{cases}$$ and
$$\phi(x)=
\begin{cases}
0 & \text{if } E\leq E_{\rm{low}} \\
\phi_{0} E^{-\alpha}, & \text{otherwise}
\end{cases}$$
to find the highest $E_{\rm{low}}$ and lowest $E_{\rm{high}}$ that are, at 1$\sigma$, inconsistent with the HAWC observation. With this approach, we believe that we have a positive detection of photons from the Crab between 1 and 37 TeV. This is not to say that higher- or lower-energy photons cannot be part of the HAWC observation, but using the event size $\mathcal{B}$ to measure the energy of photons limits the dynamic range of the observation. Other sources at other declinations may yield different answers.
![The figure shows the measured, background-subtracted number of photons from the Crab in each $\mathcal{B}$ bin. To get the total number of photons, the signal from the Crab is fit for each $\mathcal{B}$ separately. The measurements are compared to prediction from simulation assuming the Crab spectrum is at the HAWC measurement. The fitted spectrum is a good description of the data, with no evidence of bias in the residuals. []{data-label="fig:excess"}](fit-and-residuals.eps){width="65.00000%"}
Systematic Errors {#sec:systematics}
-----------------
Table \[tab:systematics\] summarizes the major systematic errors contributing to the measurement of the Crab spectrum with HAWC. These systematic errors have been investigated by computing the spectrum from the Crab under varying assumptions to study the stability of the results under perturbation of the assumptions.
For spectral measurements, systematic errors in three quantities are shown: the overall flux, the measured spectral index, and the energy scale. The errors are summed in quadrature to arrive at a total systematic error. In addition to these, a systematic error in the absolute pointing of the instrument has been studied.
### Charge Resolution and Relative Quantum Efficiency
The charge resolution captures how much individual PMT charge measurements vary for a fixed input light level; it is estimated to be 10-15% from studies using the HAWC calibration system. Additionally, PMTs vary in their photon detection efficiency by 15–20%. These factors are not simple numbers: they vary with the light level in the detector and can change with the arrival time distribution during air showers. Varying these assumptions changes $\mathcal{P}$ and $\mathcal{C}$, and therefore the event passing rates, which propagates into the measured spectrum.
### PMT Absolute Quantum Efficiency
PMTs have an efficiency for converting photons impinging on their surface into PEs detected by the PMT, typically between 20-30%. Of course a single “efficiency” number vastly simplifies the situation: the efficiency is divided between the efficiency for producing a PE and the efficiency for collecting a PE, varies across the face of the PMT, and is wavelength dependent. Additionally, the absorption of the water itself is wavelength dependent. Much of this is modeled, but the simulation carries uncertainties in the treatment and is difficult to validate. The calibration system, in particular, cannot yield the absolute PMT efficiency because it requires establishing the efficiency of the calibration system’s optical path to the PMTs much more precisely than is known. Furthermore, the calibration laser emits green light, so its results must be extrapolated to the blue Cherenkov light.
Instead, the absolute efficiency is established by selecting vertical muons in HAWC tanks by their timing properties. Vertical muons are typically minimum ionizing with a relatively constant energy loss. The simulated response to vertical muons is scaled to match data. Nevertheless, we estimate a $\pm10\%$ uncertainty in the absolute PMT efficiency which propagates to the errors in the spectra of sources.
### Time Dependence, PMT Layout and Crab Optimization
The HAWC instrument has changed over time. The main change is that PMTs or channels are occasionally removed during maintenance. Repaired channels have been re-calibrated and the calibration constants have been changed occasionally for other reasons. The event size bins, $\mathcal{B}$, are based on fractions of available PMTs to mitigate the impact of the varying numbers of PMTs. Furthermore, a single simulation with a single representative PMT layout is used to model the detector and this simplification results in a corresponding systematic error.
Different detector layouts were simulated to bound the impact of sporadic tubes being added or removed. As confirmation, the passing rate of background cosmic rays through the photon/hadron discrimination cuts was studied and shows drifts comparable to those seen in the simulation studies.
Additionally, the cuts used in the analysis were established by maximizing the statistical significance of the observations of the Crab Nebula during the first 337 days of data. While not strictly an [*[a priori]{}*]{} measurement, the Crab is strong enough in HAWC data that there is very little bias to the final measurement from this optimization process (and no bias for other, weaker sources).
To investigate any potential bias, the data was divided into two pieces, a 337-day and a 170-day dataset. The 337-day dataset corresponds to the period over which the cut optimization was done and is the only dataset that could have an over-optimization bias. The Crab spectrum was then measured separately in each dataset. The fitted spectra differ by $\pm$10% in the flux and $\pm$0.1 in the spectral index, similar to what is expected from the varying number of PMTs.
It is unclear whether the different Crab spectra in the 337-day and 170-day datasets are due to over-tuning on the Crab or to the detector changing later in the data-taking; the two effects are of similar size. Whatever its origin, the effect is a sub-dominant, but non-negligible, systematic error.
### Angular Resolution
The chief uncertainties in the angular resolution arise from a mismatch between the data and the simulation and from the spectral dependence of the angular resolution. The impact of the angular resolution has been studied by reconstructing the Crab spectrum under different angular resolution hypotheses.
### Late Light Simulation
The single largest source of systematic error is how late light in the air shower is treated. Simulation suggests that the arrival time distribution of PEs at the PMTs should be well within $\sim$10 ns. Nevertheless, the distributions of $\mathcal{C}$ and $\mathcal{P}$ in background cosmic rays, as well as the raw PE distributions themselves, suggest some mis-modeled effect above about 50 PEs.
In dedicated studies using the calibration system, late light has been seen to extend ToT measurements, thereby distorting the measured charge in PMTs, but an arrival time distribution much wider than expected from simulation is needed to explain the data.
Efforts to understand this systematic are aimed at measuring the entire PMT waveform for a sample of PMTs to better understand the arrival time distribution of PEs without requiring simulation. It is likely that this systematic error will be better understood in the future, but currently it dominates.
Systematic Overall Flux Spectral Index log$_{10}$(E)
--------------------------------------------------- -------------- ---------------- ---------------
Charge Resolution/ Relative Quantum Efficiency $\pm$ 20% $\pm$ 0.05 $<\pm$ 0.1
PMT Absolute Quantum Efficiency $\pm$ 15% $\pm$ 0.05 $<\pm$ 0.1
Time Dependence, PMT Layout and Crab Optimization $\pm$ 10% $\pm$ 0.1 $<\pm$ 0.1
Angular Resolution $\pm$ 20% $\pm$ 0.1
Late Light Simulation $\pm$ 40% $\pm$ 0.15 $<\pm$ 0.15
Total Flux $\pm$ 50% $\pm$ 0.2 $<$ 0.2
: Summary of primary contributions to HAWC systematic error in measuring photon fluxes. The different effects are described in the text. Systematics in the overall flux, the spectral index of sources, and the energy scale are shown. The systematics claims are conservative and are likely to improve with more understanding and better modeling.[]{data-label="tab:systematics"}
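The quadrature sums in the table can be checked directly; combining the flux and spectral-index columns, for example:

```python
import math

def quadrature(terms):
    """Total systematic as the quadrature sum of independent contributions."""
    return math.sqrt(sum(t * t for t in terms))

# Contributions from the table (flux and spectral-index columns):
flux_total = quadrature([0.20, 0.15, 0.10, 0.20, 0.40])    # ~0.52 -> quoted +/-50%
index_total = quadrature([0.05, 0.05, 0.10, 0.10, 0.15])   # ~0.22 -> quoted +/-0.2
```

The late-light term dominates both sums, consistent with the discussion above.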
### Absolute Pointing
The absolute pointing error is estimated to be no more than $0.1^\circ$ for sources that transit above 45$^\circ$ in HAWC. It is estimated to be no more than $0.3^\circ$ for higher-inclination sources.
The absolute pointing of the HAWC instrument is impacted by the timing calibration as discussed in Section \[sec:calib\]. Each PMT has a calibrated offset to account for different cable lengths and other timing delays. These offsets are established coarsely by repeated reconstructions to force the peak of maximum cosmic-ray density to be overhead. With this correction, the location of the Crab is within 0.2$^\circ$ of its true location. A final detailed alignment is performed to put the Crab Nebula in its known location. The Crab itself must be in the correct location, by construction.
The absolute pointing error on other sources has been studied two ways. First, the Crab location has been fit using only events in bands of reconstructed zenith angle. The Crab location drifts by no more than $0.1^\circ$ up to a zenith angle of 45$^\circ$. Above that inclination, the Crab is weakly detected and we cannot independently demonstrate better than $0.3^\circ$ absolute pointing error. Furthermore, the Crab location has been reconstructed separately using data from each of the 9 $\mathcal{B}$ bins and they agree to within $0.1^\circ$.
Finally, other bright known sources, the blazars Markarian 421 and Markarian 501, agree with their known locations to within 0.1$^\circ$.
Discussion {#sec:discussion}
==========
Comparison to Other Experiments
-------------------------------
Figure \[fig:crabfittedflux\] shows the Crab spectrum measured with HAWC between 1 and 37 TeV compared to the spectrum reported by other experiments. It is consistent with prior measurements within the systematic errors of the HAWC measurement.
![Crab photon energy spectrum measured with HAWC and compared to other measurements using other instruments [@hesscrab2015; @magiccrabnebula; @veritascrab2015; @tibet100tevul; @tibetcrab; @argocrab]. The red band shown for HAWC is the ensemble of fluxes allowed at 1$\sigma$ and the best fit is indicated with a dark red line. The light red band indicates the systematic extremes of the HAWC flux. []{data-label="fig:crabfittedflux"}](crab-flux.eps){width="65.00000%"}
A number of improvements to the HAWC measurement — most notably the inclusion of a proper energy reconstruction — will reduce the systematic errors and increase the dynamic range of our measurements. The observation of the Crab validates this analysis for subsequent application to the sources across the rest of the sky.
Performance Figures
-------------------
Figure \[fig:aeff\] shows the effective area of HAWC, in this analysis, to photons arriving within 13$^\circ$ from overhead. The effective area is defined as the geometrical area over which events are detected, convolved with the efficiency for detecting events. The exact conditions for a photon to be considered detected are complicated for this analysis because we perform a likelihood fit over all pixels in the vicinity of the Crab, so even poorly reconstructed photons play some role in the analysis. In order to have a well-formed effective area, we consider only photons reconstructed within the 68% containment radius from Table \[tab:recosteps\]. For comparison, Figure \[fig:aeff\] includes the progression of cuts, from the effective area without any photon/hadron discrimination and without a strong angular accuracy cut, to the full analysis cuts. The effective area can exceed the geometrical area of the instrument (about 2$\times10^4$ m$^2$) because events with a core location off the detector occasionally pass the imposed cuts.
![The effective area for HAWC for events within 13$^\circ$ from overhead. To show the progression of analysis cuts, we show curves without any photon/hadron discrimination, insisting only that events reconstruct within 4$^\circ$ of their true direction. Requiring events to be reconstructed within their 68% containment radius lowers the effective area, and photon/hadron discrimination cuts lower it further. With a requirement that events be reconstructed on the detector, the effective area flattens at roughly half the physical area of the instrument.[]{data-label="fig:aeff"}](aeff.eps){width="65.00000%"}
The effective area computed here is lower than in [@hawcgrbsensi]. For this analysis, developed for steady, multi-TeV sources, we employ tight angular cuts and have a higher energy threshold than used in the initial design study. Furthermore, the re-triggering, defined by the lower edge of $\mathcal{B}=1$, limits the effective area below 1 TeV.
Figure \[fig:diffsensi\] shows the computed differential sensitivity of HAWC to sources at the declination of the Crab utilizing the procedure of [@sensipaper] with the analysis presented here. A point source of differential photon spectrum $E^{-2.63}$ is simulated and fitted using the full likelihood fit. The flux required to be detected at 5$\sigma$ 50% of the time is shown for each bin, $\mathcal{B}$. The lines for each $\mathcal{B}$ are shown with a width corresponding to the width required to contain 68% of the events under the $E^{-2.63}$ hypothesis. A correction is made to adjust the $\mathcal{B}$ separation to a quarter decade in true energy, and the result is fitted. The sensitivity prediction from [@sensipaper] suffered from uncertain background at the highest energies. Now, with more than a year of data, we know the background precisely and can set the cuts appropriately. This has resulted in a more accurate (and more sensitive) analysis above 10 TeV. Below about 1 TeV, for a number of reasons, the sensitivity is somewhat worse than predicted in the original study. The background is larger than the original simulation-only prediction. Furthermore, in the current analysis we employ a relatively high cut (defined by $\mathcal{B}=1$) so that improperly modeled noise can be neglected.
![The quasi-differential sensitivity of HAWC as a function of photon energy, compared to existing IACTs [@veritasdifferential; @hesscrab2015; @magicdifferential] and the Large Area Telescope on the Fermi Gamma-Ray Space Telescope [^1]. We show the flux, assuming a source with a differential energy spectrum $E^{-2.63}$, required to produce a 5$\sigma$ detection 50% of the time. This flux is shown in light red for each of the 9 $\mathcal{B}$ bins, with a width in energy corresponding to the central 68% containment energies in each bin. These values are adjusted to find the equivalent quarter-decade-separated flux sensitivities, and a fit to these values is shown in dark red. The 507-day observation of HAWC corresponds to $\sim$3000 hours of a source at a declination of 22$^\circ$ within HAWC’s field-of-view. HAWC’s one-year sensitivity surpasses a 50-hour observation by current-generation IACTs at $\sim$10 TeV. []{data-label="fig:diffsensi"}](differential-sensitivity.eps){width="80.00000%"}
Anticipated Improvements {#sec:improvements}
------------------------
The main limitation of this analysis is the reliance on the number of PMTs, used for the definition of $\mathcal{B}$, to simultaneously constrain the energy of photons, the angular resolution, and the photon/hadron efficiency. Figure \[fig:energy\] shows that this is a poor energy estimator, with each bin $\mathcal{B}$ spanning roughly an order of magnitude in energy. More critically, an overhead $\sim$10 TeV photon can trigger nearly every PMT in HAWC if the core lands near the center of the detector. Consequently, $\mathcal{B}=9$ is an overflow bin containing everything above $\sim$10 TeV. These limitations can be removed with an event parameter that accounts for the light level in the event and the specific geometry and inclination angle of events. Approaches like this are under development. The planned deployment of a sparse “outrigger” array should further increase the sensitivity to photons above 10 TeV [@hawcoutriggers].
Additionally, the principal systematic error (the modeling of late light) is conservatively estimated here and is being studied using the calibration system. It is likely that the effects of late light will be better modeled in the future.
Finally, the threshold for this analysis is established by including only events where more than 6.7% of the PMTs detect light. The typical number of live, calibrated PMTs is $\sim$1000, corresponding to a threshold of $\sim$70 PMTs. Events with 20–30 PMTs could be reconstructed if the noise could be confidently identified. A relatively high event size threshold is used in this analysis to reduce its dependence on the modeling of noise hits. Planned improvements in the modeling should lower the energy threshold of the spectrum analysis in future studies.
The HAWC instrument is performing well with survey sensitivity exceeding current-generation instruments above 10 TeV, sensitivity which HAWC maintains across much of its field-of-view. The all-sky survey conducted by HAWC probes unique flux space and reveals the highest-energy photon sources in the northern sky. Understanding the Crab gives confidence in the survey results.
We acknowledge the support from: the US National Science Foundation (NSF); the US Department of Energy Office of High-Energy Physics; the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory; Consejo Nacional de Ciencia y Tecnología (CONACyT), M[é]{}xico (grants 271051, 232656, 260378, 179588, 239762, 254964, 271737, 258865, 243290, 132197); L’OREAL Fellowship for Women in Science 2014; Red HAWC, M[é]{}xico; DGAPA-UNAM (grants RG100414, IN111315, IN111716-3, IA102715, 109916); VIEP-BUAP; PIFI 2012, 2013, PROFOCIE 2014, 2015; the University of Wisconsin Alumni Research Foundation; the Institute of Geophysics, Planetary Physics, and Signatures at Los Alamos National Laboratory; Polish Science Centre grant DEC- 2014/13/B/ST9/945; Coordinaci[ó]{}n de la Investigaci[ó]{}n Científica de la Universidad Michoacana.
[^1]: Pass 8 Sensitivity: https://www.slac.stanford.edu/exp/glast/groups/canda/lat\_Performance.htm
---
abstract: 'Given a von Neumann algebra $M$ with a faithful normal finite trace, we introduce the so called finite tracial algebra $M_f$ as the intersection of $L_p$-spaces $L_p(M, \mu)$ over all $p\geq1$ and over all faithful normal finite traces $\mu$ on $M.$ Basic algebraic and topological properties of finite tracial algebras are studied. We prove that all derivations on these algebras are inner.'
author:
- 'Sh.A. Ayupov $^{1,*},$ R.Z. Abdullaev $^2$, K.K. Kudaybergenov $^3$'
title: '**On a certain class of operator algebras and their derivations**'
---
$^1$ *Institute of Mathematics and Information Technologies, Uzbekistan Academy of Science, Dormon Yoli str. 29, 100125, Tashkent, Uzbekistan*
*and*
*Abdus Salam International Centre for Theoretical Physics, Trieste, Italy,*
e-mail: *sh\[email protected]*
$^{2}$ *Institute of Mathematics and Information Technologies, Uzbekistan Academy of Science, Dormon Yoli str. 29, 100125, Tashkent, Uzbekistan,*
e-mail: *[email protected]*
$^{3}$ *Karakalpak state university, Ch. Abdirov str. 1, 142012, Nukus, Uzbekistan,*
e-mail: *[email protected]*
**AMS Subject Classifications (2000):** 46L51, 46L52, 46L57, 46L07.
**Key words:** von Neumann algebra, faithful normal finite trace, non commutative $L_p$-spaces, Arens algebra, finite tracial algebra, derivations.
Corresponding author
Introduction
============
In the present paper we introduce a new class of algebras, the so called *finite tracial algebras*, which are defined as the intersection of non commutative $L_p$-spaces $L_p(M,\mu)$ [@Yed] over all $p\in [1,\infty)$ and over all faithful normal finite (f.n.f.) traces $\mu$ on a von Neumann algebra $M.$ Equivalently, a finite tracial algebra $M_f$ is the intersection of all non commutative Arens algebras $L^\omega(M,\mu)=\bigcap\limits_{p\geq 1}L_p(M,\mu),$ over all f.n.f. traces $\mu.$ It is known that Arens algebras are metrizable locally convex \*-algebras with respect to the topology generated by the system of $L_p$-norms for a fixed trace. Algebraic and topological properties of Arens algebras have been investigated in the papers [@Abd]- [@Alb], [@Are], [@Ino].
In the present paper we study basic properties of finite tracial algebras with the topology generated by all $L_p$-norms $\{\|\cdot\|^{\mu}_{p}\}$, where $p\in [1,\infty)$ and $\mu$ runs over all f.n.f. traces on the given von Neumann algebra $M.$ We prove that a finite tracial algebra $M_f$ is metrizable or reflexive if and only if the center of the von Neumann algebra $M$ is finite dimensional; in this case $M_f$ coincides with an appropriate Arens algebra. We also give a necessary and sufficient condition for $M_f$ to coincide (as a set) with $M.$ But even in this case one has a new topology on the von Neumann algebra $M.$ We obtain also a description of the dual space for the algebra $M_f.$
Finally we prove that every derivation on a solid subalgebra of the Arens algebra $L^\omega(M,\tau)$ is inner. In particular we obtain that the algebra $M_f$ admits only inner derivations.
Throughout the paper we consider a von Neumann algebra $M$ with a f.n.f. trace. Therefore $M$ is a finite von Neumann algebra and thus all closed densely defined operators affiliated with $M$ are measurable with respect to $M$, i. e. the set of all such operators coincides with the algebra $S(M)$ of all measurable operators and hence also with the algebra $LS(M)$ of all locally measurable operators affiliated with $M$; moreover the center of $S(M)=LS(M)$ coincides with the set of operators affiliated with the center of $M.$
Preliminaries
=============
Let $M$ be a von Neumann algebra with the positive cone $M^+$ and let $\textbf{1}$ denote the identity operator in $M.$
A positive linear functional $\mu$ is called a *finite trace* if $\mu(u^\ast xu)=\mu(x)$ for all $x\in M$ and each unitary operator $u\in M.$
A finite trace $\mu$ is said to be *faithful* if for $x\in M^{+},$ $\mu(x)=0$ implies that $x=0.$
A finite trace $\mu$ is *normal* if given any monotone net $\{x_\alpha\}$ increasing to $x\in M$, one has $\mu(x)=\sup\mu(x_\alpha).$
Let $\tau$ be a fixed faithful normal finite (f.n.f.) trace on a von Neumann algebra $M.$ The Radon — Nikodym theorem [@Seg Theorem 14] implies that given any f.n.f. trace $\mu$ on $M$ there exists a positive operator $h\in L^{1}(M,\tau)$ affiliated with the center of $M$ such that $\mu(x)=\tau(hx)$ for all $x\in M.$ This operator $h$ is called the Radon — Nikodym derivative of the trace $\mu$ with respect to the trace $\tau$ and it is denoted as $\frac{\textstyle d\mu}{\textstyle d\tau}.$
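For orientation, in the commutative case the above reduces to the classical Radon — Nikodym theorem (a standard illustration added here; it is not used in the sequel). Taking $M=L^\infty(0,1)$ with the trace $\tau(f)=\int_0^1 f(t)\,dt$, every f.n.f. trace $\mu$ on $M$ has the form

```latex
\mu(f)=\int_0^1 f(t)\,h(t)\,dt,\qquad
h=\frac{d\mu}{d\tau}\in L_1(0,1),\quad h>0\ \text{a.e.}
```

Here the center is all of $M$, so the condition that $h$ be affiliated with the center is automatic.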
We recall [@Seg], [@Yed] that given a f.n.f. trace $\tau$ on a von Neumann algebra $M$ the space $L_p(M,\tau)$ , $p\in [1,\infty)$, is defined as $$L_p(M,\tau)=\{x\in S(M):\,\,|x|^p\in L_1(M,\tau)\}.$$ The space $L_p(M,\tau)$ equipped with the norm $\|x\|_p=(\tau(|x|^p))^{1/p}$ is a Banach space and its dual space coincides with $L_q(M,\tau)$ where $\frac{\textstyle 1}{\textstyle p}+\frac{\textstyle 1}{\textstyle q}=1,$ and the duality is given by $$\langle x,a\rangle=f_a(x)=\tau(ax),$$ for all $f_a\in L_{p}(M,\tau)^{\ast},\,\,\,\,a\in L_{q}(M,\tau)$ (see [@Yed Theorem 4.4]).
Following [@Ino] consider the intersection $$L^\omega(M,\tau)=\bigcap\limits_{p\in [1,\infty)}L_p(M,\tau).$$ It is known (see also [@Abd], [@Alb], [@Are]), that $L^\omega(M,\tau)$ is a complete locally convex $\ast$-algebra with respect to the topology $t^{\tau}$ generated by the system of norms $\{\|\cdot\|_p\}_{p\in[1,\infty)}.$
Each operator $a \in\bigcup\limits_{q\in(1,\infty)}L_q(M,\tau)$ defines a continuous linear functional $f_a$ on $(L^\omega(M,\tau),t^\tau)$ by the formula $f_a(x)=\tau (ax),$ and conversely given an arbitrary continuous linear functional $f$ on the algebra $(L^\omega(M,\tau),t^\tau)$ there exists an element $a\in\bigcup\limits_{q\in(1,\infty)}L_q(M,\tau)$ such that $f(x)=\tau (ax).$
Finite Tracial Algebras
=======================
Let $M$ be a finite von Neumann algebra. Denote by $\mathcal{F}$ the set of all f.n.f. traces on $M$ and from now on suppose that $\mathcal{F}\neq\varnothing.$
Consider the space $$M_f=\bigcap\limits_{\mu\in \mathcal{F}}\bigcap\limits_{p\in[1,\infty)}
L_{p}(M,\mu)=\bigcap_{\mu\in\mathcal{F}}L^{\omega}(M, \mu).$$ On the space $M_f$ one can consider the topology $t,$ generated by the system of norms $\{\|\cdot\|_{p}^{\mu}: \mu\in \mathcal{F}, p\in[1,\infty)\}.$
Since each Arens algebra $L^{\omega}(M,\mu),\,\,\mu\in\mathcal{F},$ is a complete locally convex topological $\ast$-algebra in $S(M)$, from the above definition one easily obtains the following
$(M_f,t)$ is a complete locally convex topological $\ast$-algebra.
**Definition.** The topological $\ast$-algebra $M_f$ is called the *finite tracial algebra* with respect to the von Neumann algebra $M.$
**Remark.** Finite tracial algebras present examples of so called $GW^{\ast}$-algebras in the sense of [@Kun].
Recall (see [@Kun]) that a topological $\ast$-algebra $(A, t_A)$ is called a $GW^{\ast}$-algebra, if $A$ has a $W^{\ast}$-subalgebra $B$ with $(\textbf{1}+x^{\ast}x)^{-1}\in B$ for all $x\in A$ and the unit ball of $B$ is $t_A$-bounded.
The finite tracial algebra $M_f$ is a $GW^{\ast}$-algebra.

*Proof.* Since $M\subset M_f$ it is sufficient to show that the unit ball in $M$ is $t$-bounded in $M_f$.
Let $x\in M$, $\|x\|_\infty\leq 1$. For $\mu\in \mathcal{F}$, and $1\leq p< \infty$ we have $$\|x\|_{p}^{\mu}=\|x\textbf{1}\|_p^\mu\leq \|x\|_\infty\|\textbf{1}\|_p^\mu\leq \mu(\textbf{1})^{\frac{1}{p}},$$ i. e. $\|x\|_p^\mu\leq \mu(\textbf{1})^{\frac{1}{p}}$ for all $x\in M$, $\|x\|_\infty\leq 1$. This means that the unit ball of $M$ is $t$-bounded in $M_f$. Therefore $M_f$ is a $GW^{\ast}$-algebra. $\blacksquare$
The algebra $M_f$ contains $M$ but it is a rather small algebra, since it is contained in all $L_p(M,\mu)$ for all $p\geq 1$ and f.n.f. traces $\mu$ on $M.$ The following result gives necessary and sufficient conditions for $M_f$ to coincide with $M.$
For a finite von Neumann algebra $M$ the following conditions are equivalent
i\) $M_f=M;$
ii\) $M$ is a finite sum of homogeneous type $I_n,\, n\in\mathbb{N}$ von Neumann algebras.
The proof of this theorem consists of several auxiliary propositions which are interesting in their own right. Let us start with the commutative case.
Let $M$ be a von Neumann algebra with a faithful normal finite trace and let $Z$ be its center. Then the center of the algebra $M_f$ coincides with $Z$, i. e. $Z(M_f)=Z$. In particular if $M$ is commutative then $M_f=M$.
*Proof.* Let $M$ be a von Neumann algebra with a faithful normal finite trace $\tau$, and $\tau(\textbf{1})=1$.
Consider $x\in Z(M_f)$, $x\geq 0$, and let $x=\int\limits_0^\infty \lambda d e_\lambda$ be the spectral resolution of $x$. Since $x\in Z(M_f)$ and $M\subset M_f$, we have that $e_\lambda \in Z$ for all $\lambda\in \mathbb{R}$. Passing if necessary to the element $\varepsilon \textbf{1}+x$ we may suppose without loss of generality that $e_1=0$.
For $n\in \mathbb{N}$ set $$p_n=e_{(n+1)^2}-e_{n^2}$$ and $$y=\sum_{n\in \mathbb{N}}n^2 p_n.$$ Since $xp_n\geq n^2p_n$ for all $n\in \mathbb{N}$, we have that $0\leq y\leq x$ and hence $y\in M_f$.
Let $$F=\{n\in \mathbb{N}:t_n=\tau(p_n)\neq 0\}$$ and $$h=\sum_{n\in F}\frac{1}{n^2 t_n} p_n \in Z(S(M)).$$ Since $$\bigvee\limits_{n=1}^{m}p_n=\bigvee\limits_{n=1}^{m}(e_{(n+1)^2}-e_{n^2})=
\sum\limits_{n=1}^{m}(e_{(n+1)^2}-e_{n^2})=e_{(m+1)^2}-e_1=e_{(m+1)^2}\uparrow \textbf{1},$$ one has that $$\bigvee\limits_{n=1}^{\infty} p_n=\textbf{1}.$$ Therefore there exists $h^{-1}\in S(M)$. Further we have $$\tau(h)=\sum_{n\in F}\frac{1}{n^2t_n}\tau(p_n)=\sum_{n\in F}\frac{1}{n^2t_n}t_n=
\sum_{n\in F}\frac{1}{n^2}\leq\sum_{n\in \mathbb{N}}\frac{1}{n^2}<\infty,$$ i.e. $h\in L_1(M,\tau)$.
Put $\mu(\cdot)=\tau(h\cdot)$. Since $y\in M_f$, it follows that $y\in L_1(M,\mu)$. Therefore $\mu(y)<\infty$.
On the other hand $$hy=\sum_{n\in F}\frac{1}{n^2t_n}p_n \sum_{n\in \mathbb{N}}n^2p_n=\sum_{n\in F}\frac{1}{t_n}p_n,$$ and thus $$\mu(y)=\tau(hy)=\sum_{n\in F}\frac{1}{t_n}\tau(p_n)=\sum_{n\in F}\frac{1}{t_n}t_n=\sum_{n\in F}1=|F|,$$ where $|F|$ is the cardinality of the set $F$. Since $\mu(y)<\infty$ this implies that $F$ is a finite set. Let $k=\max\{n:n\in F\}$. Then $\tau(p_n)=0$ for all $n>k$, and since $\tau$ is faithful we have that $p_n=0$ for all $n>k$, i.e. $e_{(n+1)^2}=e_{n^2}$. But $e_{n^2}\uparrow\textbf{1}$ and thus $e_{n^2}=\textbf{1}$ for all $n>k$. This means that $0\leq x\leq (k+1)^2\textbf{1}$, i.e. $x\in Z$.
The proof is complete. $\blacksquare$
Let $M$ be a type $I_n,\, n\in\mathbb{N}$ von Neumann algebra. Then $M_f=M.$
*Proof.* By [@Tak Ch. V, Theorem 1.27] the von Neumann algebra $M$ of type $I_n\,\,\,(n\in\mathbb{N})$ can be represented as $M=Z\otimes B(H_n),$ where $Z$ is the center of $M$ and $H_n$ is the $n$-dimensional Hilbert space. Put $\mathcal{F}_Z=\{\tau|_{Z}:\tau\in\mathcal{F}\}.$ Therefore from Proposition 3.1 we obtain
$$M_f=\bigcap\limits_{p\in[1,\infty)}\bigcap\limits_{\tau\in \mathcal{F}}L_{p}(M,\tau)=
\bigcap\limits_{p\in[1,\infty)}\bigcap\limits_{\mu\in \mathcal{F}_Z}
L_{p}(Z,\mu)\otimes B(H_n)=$$ $$=\left(\bigcap\limits_{p\in[1,\infty)}\bigcap\limits_{\mu\in \mathcal{F}_Z}L_{p}(Z,\mu)\right)\otimes B(H_n)=
Z_{f}\otimes B(H_n)=$$ $$=Z\otimes B(H_n)=M,$$ i.e. $M_f=M.$
The proof is complete. $\blacksquare$
Let $M$ be a finite von Neumann algebra which is isomorphic to the direct sum of an infinite number of homogeneous type $I_n\,\,\,(n\in\mathbb{N})$ von Neumann algebras. Then $M_f\neq M.$
*Proof.* Suppose that $M={\sum\limits_{k\in K}}^{\oplus}M_k,$ where $K$ is an infinite subset of $\mathbb{N},$ and $M_k$ is a homogeneous type $I_k$ von Neumann algebra.
Since the set $K$ is infinite, there exists a sequence $\{k_n\}\subset K$ such that $k_n\geq 2^n$ for all $n\in \mathbb{N}$. We have that $$M_{k_n}=Z_{k_n}\otimes B(H_{k_n}),$$ where $Z_{k_n}$ is the center of $M_{k_n}$ and $$N_n=\textbf{1}_n\otimes B(H_{2^n})\subset M_{k_n}.$$ Therefore the algebra $M$ contains a subalgebra \*-isomorphic to the algebra $N={\sum\limits_{n\in \mathbb{N}}}^\oplus N_n$.
Hence, without loss of generality we may assume that $M={\sum\limits_{n\in \mathbb{N}}}^{\oplus}N_n,$ where $N_n=B(H_{2^{n}})$ is the algebra of all $2^n\times 2^n$ matrices over $\mathbb{C}.$ On each $N_n$ we consider the unique tracial state (i. e. normalized f.n.f. trace) $\mu_n$ and define on $M$ the following f.n.f. trace $$\tau(x)=\sum\limits_{n\in \mathbb{N}}2^{-n}\mu_{n}(x_{n}),$$ where $x={\sum\limits_{n\in \mathbb{N}}}^{\oplus}x_n\in M.$ Then every f.n.f. trace $\mu$ on $M$ has the form $$\mu(x)=\tau(hx)=\sum\limits_{n\in \mathbb{N}}2^{-n}\mu_{n}(h_{n}x_{n})=
\sum\limits_{n\in \mathbb{N}}2^{-n}\alpha_{n}\mu_{n}(x_{n}),$$ where $$h={\sum\limits_{n\in \mathbb{N}}}^{\oplus}h_n=
{\sum\limits_{n\in \mathbb{N}}}^{\oplus}\alpha_n\textbf{1}_{n}\in L_1(M,\tau),$$ i. e. $\alpha_n>0,\,\,n\in \mathbb{N},$ and $\sum\limits_{n\in \mathbb{N}}2^{-n}\alpha_n<\infty.$
Take a minimal projection $p_n$ in each $N_n=B(H_{2^{n}}).$ Then $\mu_n(p_n)=\frac{\textstyle 1}{\textstyle 2^n}.$
Consider the unbounded element $x={\sum\limits_{n\in \mathbb{N}}}^{\oplus}n p_n$ in $S(M)\setminus M$ and let us prove that $x\in M_f.$ For every f.n.f. trace $\mu$ on $M$ one has that $$\mu(x^p)=\sum\limits_{n\in \mathbb{N}}2^{-n}\alpha_n\mu_n(n^p p_n)=
\sum\limits_{n\in \mathbb{N}}2^{-n}\alpha_nn^p2^{-n}<\infty,$$ because $n^p2^{-n}<1$ for sufficiently large $n\in\mathbb{N}.$ Therefore $x\in L_p(M,\mu)$ for all $p\geq 1$ and every f.n.f. trace $\mu\in\mathcal{F},$ i. e. $x\in M_f.$
The proof is complete. $\blacksquare$
Let $M$ be a type $II_1$ von Neumann algebra with a f.n.f. trace $\tau.$ Then $M_f\neq M$.
*Proof.* Suppose that the trace $\tau$ is normalized, i. e. $\tau(\textbf{1})=1$, and denote by $\Phi$ the canonical center-valued trace on $M$. Since $M$ is of type $II_1$ there exists a projection $p_1$ such that $$p_1\sim \textbf{1}-p_1.$$ Therefore from $\Phi(p_1)+\Phi(p_1^{\perp})=\Phi(\textbf{1})=\textbf{1}$ and $\Phi(p_1)=\Phi(\textbf{1}-p_1)=\Phi(p_1^{\perp})$ we obtain that $$\Phi(p_1)=\Phi(p_1^{\perp})=\frac{\textstyle 1}{\textstyle 2}\textbf{1}.$$
Suppose that we have constructed mutually orthogonal projections $p_1,\,p_2,\,
\cdots, \,p_n$ in $M$ such that $$\Phi(p_k)=\frac{\textstyle 1}{\textstyle 2^{k}}\textbf{1},\,k=\overline{1, n}.$$ Set $e_n=\sum\limits_{k=1}^{n}p_k.$ Then $\Phi(e_n^{\perp})=\frac{\textstyle 1}{\textstyle 2^{n}}\textbf{1}.$ Now take a projection $p_{n+1}\leq e_n^{\perp}$ such that $$p_{n+1}\sim e_n^{\perp}-p_{n+1},$$ i. e. $$\Phi(p_{n+1})=\frac{\textstyle 1}{\textstyle
2^{n+1}}\textbf{1}.$$
In this manner we obtain a sequence $\{p_n\}_{n\in\mathbb{N}}$ of mutually orthogonal projections such that $$\Phi(p_n)=\frac{\textstyle
1}{\textstyle 2^{n}}\textbf{1},\,n\in\mathbb{N}.$$ It is clear that $\tau(p_n)=\tau(\Phi(p_n))=\frac{\textstyle 1}{\textstyle 2^{n}}, \,\,\,n\in\mathbb{N}.$
From $$\sum\limits_{n=1}^{\infty}||np_n||_{1}^{\tau}=
\sum\limits_{n=1}^{\infty}\tau(np_n)=
\sum\limits_{n=1}^{\infty}\frac{\textstyle n}{\textstyle
2^{n}}<\infty,$$ it follows that the element $x=\sum\limits_{n=1}^{\infty}np_n$ belongs to $ L_{1}(M, \tau),$ and it is unbounded, i. e. $ x\notin M.$
On the other hand for an arbitrary central element $h\in L_{1}(M, \tau), h> 0,$ and $n\in\mathbb{N}$ we have $$\tau(hp_n)=\tau(\Phi(hp_n))=\tau(h\Phi(p_n))=\tau(h\frac{\textstyle
1}{\textstyle 2^{n}}\textbf{1})=\frac{\textstyle 1}{\textstyle
2^{n}}\tau(h).$$ Therefore for an arbitrary f.n.f. trace $\mu$ on $M$ with $\frac{\textstyle d\mu}{\textstyle d\tau}=h$ we have $$\mu(|x|^{p})=\mu(x^{p})=\tau(h x^{p})=
\tau(h\sum\limits_{n=1}^{\infty}n^{p}p_n)=$$ $$=\sum\limits_{n=1}^{\infty}n^{p}\tau(hp_n)=\tau(h)\sum\limits_{n=1}^{\infty}\frac{\textstyle
n^{p}}{\textstyle 2^{n}}<\infty,$$ i. e. $x\in L_p(M,\mu)$ for all $p\geq 1$ and every f.n.f. trace $\mu.$ Therefore $x\in M_f\setminus M.$
The proof is complete. $\blacksquare$
*Proof of Theorem 3.2.* The implication $(i)\Rightarrow (ii)$ follows from Propositions 3.3 and 3.4, while $(ii)\Rightarrow (i)$ follows from Proposition 3.2.
The proof is complete. $\blacksquare$
Now let us describe continuous linear functionals on the space $(M_f, t)$.
Given any $\mu\in \mathcal{F}$, $1<q<\infty$, and $a\in L_q(M,\mu)$ the functional $\varphi(x)=\mu(xa)$, $x\in M_f$, is a continuous linear functional on $(M_f, t)$. Conversely for any continuous linear functional $\varphi$ on $(M_f,t)$ there exist $\mu\in\mathcal{F},\,\, 1<q<\infty,\,\, a\in L_{q}(M,\mu)$ such that $$\varphi(x)=\mu(x a),\,\,\,x\in M_f.$$
*Proof.* Let $\mu\in\mathcal{F},\,\, 1<q<\infty,\,\, a\in L_{q}(M,\mu).$ Put $$\varphi_a(x)=\mu (x a),\,\,\,x\in M_f.$$ Take $p\in\mathbb{R}$ such that $\frac{\textstyle 1}{\textstyle p}+\frac{\textstyle 1}{\textstyle q}=1.$ Since $$|\varphi_a(x)|=|\mu (x a)|\leq ||a||_{q}^{\mu}||x||_{p}^{\mu}$$ for all $x\in M_f,$ one has that $\varphi_a$ is a continuous linear functional on $(M_f,t).$
Conversely, let $\varphi$ be a continuous linear functional on $(M_f,t).$ By [@Yo Corollary 1 on p.43] there exist $\mu\in\mathcal{F},\,\, 1\leq p<\infty,\,\,c>0$, such that $$|\varphi(x)|\leq c||x||_{p}^{\mu}$$ for all $x\in M_f.$ Since $M\subset M_f$ and $M$ is $\|\cdot\|_p^\mu$-dense in $L_p(M,\mu)$, the functional $\varphi$ can be uniquely extended to a continuous linear functional on $L_p(M, \mu)$. By [@Yed Theorem 4.4] there exists $a\in L_{q}(M,\mu),\,
\frac{\textstyle 1}{\textstyle p}+\frac{\textstyle 1}{\textstyle q}=1$, such that $$\varphi(x)=\mu(x a)$$ for all $x\in L_{p}(M,\mu).$ In particular $$\varphi(x)=\mu (x a)$$ for all $x\in M_f,$ i.e. $\varphi=\varphi_a.$
The proof is complete. $\blacksquare$
If the von Neumann algebra $M$ is a factor then it has a unique (up to a scalar multiple) f.n.f. trace $\mu.$ In this case the finite tracial algebra $M_f$ coincides with the Arens algebra $L^{\omega}(M,\mu)$ and the topology $t$ coincides with the topology $t^{\mu}$ generated by the system of norms $\{\|\cdot\|^{\mu}_{p}\}_{p\geq 1}.$ The following theorem describes the general case where this phenomenon occurs.
Recall some notions from the theory of linear topological spaces. Let $E$ be a locally convex linear topological space. A closed absolutely convex absorbing set in $E$ is called a barrel. If each barrel in $E$ is a neighborhood of zero, then $E$ is said to be a *barreled space*.
It is known ([@Yo], Theorem 2, p.200 ) that every reflexive locally convex space is barreled.
Let $M$ be a finite von Neumann algebra and suppose that $\mathcal{F}\neq \varnothing$ is the family of all f.n.f. traces on $M.$ The following conditions are equivalent:
\(i) $M_f=L^{\omega}(M,\mu)$ for some (and hence for all) $\mu\in \mathcal{F};$
\(ii) $(M_f,t)$ is metrizable;
\(iii) $(M_f, t)$ is reflexive;
\(iv) the center $Z$ of $M$ is finite dimensional, i. e. $M=\sum\limits_{i=1}^{m}M_{i},$ where all $M_i$ are $I_n$-factors or $II_1$-factors.
*Proof.* Suppose that $Z$ is finite dimensional. Then $M$ is a finite direct sum of factors $M_i,\,\,\,i=\overline{1,n}.$ For each factor $M_i$ the algebras $(M_i)_f$ and $L^\omega(M_i, \mu_i)$ coincide and the topology $t_i$ is the same as $t_{i}^{\mu_i}.$ Therefore $$M_{f}=(\sum\limits_{i=1}^{n}M_{i})_{f}=\sum\limits_{i=1}^{n}(M_{i})_{f}=
\sum\limits_{i=1}^{n}L^\omega(M_i, \mu_i)=L^\omega(M, \mu),$$ where $\mu={\sum\limits_{i=1}^{n}}\mu_{i}\in \mathcal{F},$ i. e. $M_{f}=L^\omega(M, \mu).$
Now since the topology $t^{\mu}$ on the Arens algebra $L^\omega(M, \mu)$ is metrizable [@Abd] it follows that $t=t^\mu$ is also metrizable.
It is known [@Abd2] that for finite traces $\mu$ the Arens algebra $(L^\omega (M, \mu), t^\mu)$ is reflexive and hence $(M_f, t)$ is also reflexive.
Therefore $($*iv*$)$ implies $($*i*$)$, $($*ii*$)$ and $($*iii*$)$.
$($*i*$)$ $\Rightarrow$ $($*iv*$)$. Suppose that $M_f=L^\omega(M, \mu)$ for an appropriate $\mu\in \mathcal{F}$ and assume, contrary to $($*iv*$)$, that the center $Z$ is infinite dimensional. Then there exists a sequence of mutually orthogonal projections $\{p_n\}$ in $Z$ such that $p_n\neq 0$ for all $n\in \mathbb{N}$. Since the trace $\mu$ is finite one has that $\sum\limits_{k=1}^{\infty}\mu(p_k)<\infty$ and hence there is a subsequence $\{n_k: k\in \mathbb{N}\}$ such that $\mu(p_{n_k})\leq \frac{\textstyle 1}{\textstyle 2^k}$ for all $k$.

Set $$x=\sum\limits_{k=1}^\infty k\, p_{n_k}.$$ For $p\geq 1$ we have $$\mu(|x|^p)=\sum\limits_{k=1}^\infty k^p\mu(p_{n_k})\leq\sum\limits_{k=1}^\infty \frac{k^p}{2^k}<\infty,$$ and hence $x\in L^\omega(M, \mu)=M_f$.
On the other hand $x$ is a central element in $M_f$ and Proposition 3.1 implies that $x\in Z(M_f)=Z\subset M$. But it is clear that the element $x$ is unbounded, i.e. $x\notin M$. The contradiction shows that $Z$ is finite dimensional.
$(ii)\Rightarrow (iv).$ Suppose that $(M_f,t)$ is metrizable. By Theorem 3.1 it is complete and hence it is a Fréchet space. In particular the center of $M_f$ which coincides with $Z_f$ is also a Fréchet space. By Proposition 3.1 $Z_f=Z$ and hence $Z$ is a Fréchet space with respect to the induced topology $t_z=t|_{Z}.$
Consider the identity mapping $$I:(Z, \|\cdot\|_\infty)\rightarrow (Z,t_z)$$ where $\|\cdot\|_\infty$ is the operator norm on $Z.$ From the inequalities $$\|x\|_{p}^{\mu}\leq C_{p}^{\mu}\|x\|_\infty$$ (where $C_{p}^{\mu}$ is an appropriate constant for each $p\geq1,$ $\mu\in\mathcal{F}$) it follows that the mapping $I$ is continuous. Since $(Z,t_z)$ is a Fréchet space, from the Banach inverse mapping theorem ([@Yo], Chapter II, Section 5) we obtain that the inverse mapping $$I^{-1}:(Z,t_z)\rightarrow (Z, \|\cdot\|_\infty)$$ is also continuous. This means that for some $p\in[1,\infty)$ and an appropriate $\mu\in \mathcal{F}$ there exists a constant $K_{p}^{\mu}$ such that $$\|x\|_\infty\leq K_{p}^{\mu}\|x\|_{p}^{\mu}$$ for all $x\in Z$ ([@Yo], Theorem 1, p. 42).
Now suppose that $\dim Z=\infty$. There exists a sequence $\{p_n\}$ of projections in $Z$ such that $p_n\uparrow \textbf{1}$, $p_n\neq p_{n+1}$. Thus $p_n^\perp\neq 0$, $\mu(p_n^\perp)\rightarrow 0$, i.e. $\|p_n^\perp\|_p^\mu\rightarrow 0$. From the inequality $\|x\|_\infty\leq K_{p}^{\mu}\|x\|_{p}^{\mu}$ above we obtain that $\|p_n^\perp\|_\infty \rightarrow 0$.
On the other hand $\|p_n^\perp\|_\infty =1$. This contradiction implies that $Z$ is finite dimensional.
$(iii)\Rightarrow(iv)$. Suppose that $M_f$ is reflexive. Then the center $Z(M_f)=Z$ is also reflexive as a closed subspace of a reflexive space.
The set $$B=\{x\in Z: ||x||_\infty\leq 1\}$$ is a barrel in $(Z, t)$ and since $Z$ is reflexive, we have that $B$ is a neighborhood of zero in $Z$. Therefore there exist $p\geq 1$, $\mu \in \mathcal{F}$ and $\varepsilon >0$ such that $$\{x\in Z: \|x\|_p^\mu\leq \varepsilon\}\subseteq B$$ i.e. $$\|x\|_\infty\leq \varepsilon^{-1}\|x\|_p^\mu$$ for all $x\in Z$. From this as above it follows that $Z$ is finite dimensional.
The proof is complete. $\blacksquare$
**Remark.** In the von Neumann algebra $M$ the operator topology is stronger than the topology $t$, $t$ is stronger than $t^{\mu}$, and $t^{\mu}$ is stronger than each $L_p$-norm topology for any $p\geq 1$.
Derivations on Finite Tracial Algebras
======================================
Derivations on unbounded operator algebras, in particular on various algebras of measurable operators affiliated with von Neumann algebras, form an attractive special case of the general theory of unbounded derivations on operator algebras.
Let $A$ be an algebra over the complex numbers. A linear operator $D:A\rightarrow A$ is called a *derivation* if it satisfies the identity $D(xy)=D(x)y+xD(y)$ for all $x, y\in A$ (Leibniz rule). Each element $a\in A$ defines a derivation $D_a$ on $A$ given as $D_a(x)=ax-xa,\,x\in A.$ Such derivations $D_a$ are said to be *inner derivations*.
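That each $D_a$ indeed satisfies the Leibniz rule is a one-line check:

```latex
D_a(xy)=axy-xya=(ax-xa)y+x(ay-ya)=D_a(x)\,y+x\,D_a(y).
```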
In [@Alb2] we have investigated and completely described derivations on the algebra $LS(M)$ of all locally measurable operators affiliated with a type I von Neumann algebra $M$ and on its various subalgebras. Recently analogous results for the type I case were also obtained in the paper [@Ber1] via a representation of measurable operators as operator-valued functions. Another approach to similar problems in $AW ^{*}$-algebras of type I was suggested in the recent paper [@Gut].
In the paper [@Alb] we have proved the spatiality of derivations on the non commutative Arens algebra $L^{\omega}(M, \tau)$ associated with an arbitrary von Neumann algebra $M$ and a faithful normal semi-finite trace $\tau.$ Moreover if the trace $\tau$ is finite then every derivation on $L^{\omega}(M, \tau)$ is inner.
In this section we prove that each derivation on a finite tracial algebra is inner.
The following result is an immediate corollary of [@Ayu Proposition 3.6].
Let $M$ be a von Neumann algebra with a faithful normal trace $\tau$. Given any derivation $D:M\rightarrow L^\omega (M, \tau)$ there exists an element $a\in L^\omega(M, \tau)$ such that $$D(x)=ax-xa, \,\,\, x\in M.$$
Further we need also the following assertion from [@Ber1 Proposition 6.17].
Let $A$ be a \*-subalgebra of $LS(M)$ such that $M\subseteq A$ and $A$ is solid (that is, if $x\in LS(M)$ and $y\in A$ satisfy $|x|\leq |y|$ then $x\in A$). If $\omega \in LS(M)$ is such that $[\omega, x]\in A$ for all $x\in A$, then there exists $\omega_1\in A$ such that $[\omega, x]=[\omega_1, x]$ for all $x\in A$.
The main result of this section is the following theorem.
Let $M$ be a von Neumann algebra with a faithful normal finite trace $\tau$. If $A\subseteq L^\omega(M, \tau)$ is a solid \*-subalgebra such that $M\subseteq A$, then every derivation on $A$ is inner.
*Proof.* Since $A\subseteq L^\omega(M, \tau)$, by Lemma 4.1 there exists an element $a\in L^\omega(M, \tau)$ such that $$D(x)=ax-xa, \,\,\,\,\,\,\,\, x\in M.$$
Let us show that in fact $$D(x)=ax-xa,\,\,\,\mbox{for all} \,\,\,\, x\in A.$$ Consider $x\in A, \,\, x\geq 0$. Then $(\mathbf{1}+x)^{-1}\in M$. From the Leibniz rule it follows that for each invertible $b\in A$ one has $$D(b)=-b D(b^{-1}) b.$$ Therefore $$D(x)=D(\mathbf{1}+x)=-(\mathbf{1}+x)D((\mathbf{1}+x)^{-1})(\mathbf{1}+x).$$ On the other hand since $(\mathbf{1}+x)^{-1}\in M$ the equality $D(x)=ax-xa$ on $M$ implies that $$D((\mathbf{1}+x)^{-1})=a(\mathbf{1}+x)^{-1}-(\mathbf{1}+x)^{-1}a.$$ Therefore $$-(\mathbf{1}+x)D((\mathbf{1}+x)^{-1})(\mathbf{1}+x)=
-(\mathbf{1}+x)[a(\mathbf{1}+x)^{-1}-(\mathbf{1}+x)^{-1}a](\mathbf{1}+x)=$$ $$=-(\mathbf{1}+x)a+a(\mathbf{1}+x)=ax-xa,$$ i.e. $$D(x)=ax-xa,\,\,\,x\in A, \,\,\, x\geq 0.$$
Since each element from $A$ is a finite linear combination of positive elements, we obtain the equality $D(x)=ax-xa$ for arbitrary $x\in A$.
Now since $A$ is a solid \*-subalgebra in $L^\omega(M, \tau)$ containing $M$, Lemma 4.2 implies that the element $a$ implementing the derivation $D$ may be chosen from the algebra $A$, i.e. $$D(x)=ax-xa,\,\,\,x\in A$$ for an appropriate $a\in A.$
The proof is complete. $\blacksquare$
Since the algebra $M_f$ is a solid \*-subalgebra of $L^\omega(M, \tau)$ and contains $M$, we obtain the following result.
If $M$ is a von Neumann algebra with a faithful normal finite trace, then every derivation on $M_f$ is inner.
**Acknowledgments.** *Part of this work was done within the framework of the Associateship Scheme of the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy. The first author would like to thank ICTP for the kind hospitality and for providing financial support and all facilities (July-August, 2009). This work is supported in part by the DFG 436 USB 113/10/0-1 project (Germany).*
[99]{}
Abdullaev R.Z. Isomorphism of Arens algebras, Siberian J. Industrial Math. 1998, Vol. 1, no 2, p. 3-13.
Abdullaev R.Z. The dual space for Arens algebra, Uzbek. Math. J., 1997, no 2, p. 3-7.
Albeverio S., Ayupov Sh.A., Kudaybergenov K.K. Non commutative Arens algebras and their derivations, J. Func. Anal., 253 (2007), no. 1, p. 287-302.
Structure of derivations on various algebras of measurable operators for type I von Neumann algebras, J. Func. Anal., 256 (2009), no. 9, p. 2917-2943.
Albeverio S., Sh.A. Ayupov, R.Z. Abdullaev. Arens Spaces associated with von Neumann Algebras and Normal States, SFB 611, Universität Bonn, Preprint, No 381, 2008.(to appear in POSITIVITY, doi:10.1007/s11117-009-0008-5)
Innerness of derivations on subalgebras of measurable operators, Lobachevskii J. Math. 29 (2008) 60–67.
Arens R. The space $L^{\omega}(0;1)$ and convex topological rings. Bull. Amer. Math. Soc., 52, 1946, p. 931-935.
Derivations in algebras of operator-valued functions, arXiv.math.OA.0811.0902. 2008.
The Wickstead problem, *Sib. Electron. Mat. Izv.* 5: 293–333, 2008.
Inoue A. On a class of unbounded operators II, Pacific J. Math. 66 (1976) 411-431.
Krein S.G., Petunin Yu.N., Semenov E.M. Interpolation of linear operators, Nauka, Moscow, 1978 (in Russian); English translation: American Math. Soc., Providence, RI, 1982.
Kunze W. Zur algebraischen struktur der GC\*-algebren, Mathematische Nachrichten, 1979, Vol. 88, no 1, p. 7-11.
Segal I. A non-commutative extension of abstract integration. Ann.of Math. 1953, vol. 57, p.401-457.
Takesaki M. Theory of operator algebras. I. New York, Heidelberg, Berlin: Springer, 1979, XII+415 p.
Yeadon F.J. Non-commutative $L_p$-spaces. Math.Proc. Cambridge Phil. Soc., 1975, v. 77, No 1, p.91-102.
Yosida K. Functional Analysis, Springer-Verlag New York Inc., New York, 1968.
---
author:
- |
Morris W. Hirsch\
University of California, Berkeley\
University of Wisconsin, Madison
title: On existence and uniqueness of the carrying simplex for competitive dynamical systems
---
> [*Dedicated to Professor Hal Smith on the occasion of his sixtieth birthday*]{}
Introduction {#introduction .unnumbered}
============
Consider a system of $n$ competing species whose states are characterized by vectors in the closed positive cone $\K:=[0,\infty)^n\subset {\ensuremath{{\bf R}^{n}}}$. When time is discrete the development of the system is given by a continuous map $T{\colon\thinspace}\K \to \K$. When time is continuous the development is governed by a time-periodic system of differential equations $\dot x = F(t,x)\equiv
F(t+1,x)$. In this case $T$ denotes the Poincaré map.
For discrete time the [*trajectory*]{} of a state $x$ is the sequence $\{T^k x\}$, also denoted by $\{x(k)\}$, where $k$ varies over the set ${\ensuremath{{\bf N}}}$ of nonnegative integers. In the case of an autonomous differential equation (i.e., $F$ is independent of $t$), the trajectory of $x$ is the solution curve through $x$, denoted by $T^tx$ or $x(t)$, where $t\in[0,\infty)$. In both cases the [*limit set*]{} ${\ensuremath{\omega(x)}}$ is the set of limit points of sequences $x(t_k)$ where $t_k\to \infty$.
In order to exclude spontaneous generation we assume $T_i(x)=0$ when $x_i=0$. Thus there are functions $G_i{\colon\thinspace}\K\to [0,\infty)$, assumed continuous, such that $$\label{eq:TG}
T_i(x)=x_i G_i(x), \qquad ( x\in \K, \quad i=1,\dots,n)$$ For continuous time we assume the differential equation is a system having the form $\dot x_i=x_iG_i(t,x)$. If $x_i$ is interpreted as the size of species $i$ then $G_i (x)$ is its [*per capita*]{} growth rate.
We take “competition” to mean that increasing any one species does not tend to increase the [*per capita*]{} growth rate of any other species, conventionally modeled by the assumption $$\textstyle \frac{\p G_i}{\p x_j}\le 0, \qquad(i\ne j)$$
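A standard illustration (added here for concreteness; it is not an assumption of the general setup above) is the competitive Lotka–Volterra system, in which the per capita growth rates are

```latex
G_i(x)=r_i\Bigl(1-\sum_{j=1}^{n}a_{ij}x_j\Bigr),\qquad r_i>0,\ a_{ij}\ge 0,
```

so that $\frac{\p G_i}{\p x_j}=-r_i a_{ij}\le 0$ for $i\ne j$, and the competition condition holds.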
A [*carrying simplex*]{} for the map $T$ is a compact invariant hypersurface ${\ensuremath{\Sigma}}\subset \K$ such that every trajectory except the origin is asymptotic with a trajectory in $\Sigma$, and $\Sigma$ is unordered for the standard vector order in $\K$. In the case of an autonomous differential equation we require that ${\ensuremath{\Sigma}}$ be invariant under the maps $T^t$ for all $t\ge 0$. Some maps have no carrying simplices, others have infinitely many. Our main result gives conditions guaranteeing a unique carrying simplex.
Terminology {#terminology .unnumbered}
-----------
A set $Y\subset \K$ is [*positively invariant*]{} under a map or an autonomous differential equation if it contains the trajectories of all its points, so that $T^t Y\subset Y$ for all $t\ge 0$ (here $t\in {\ensuremath{{\bf N}}}$ or $[0,\infty)$ as appropriate). We call $Y$ [*invariant*]{} if $T^t Y= Y$ for all $t\ge 0$.
If $S$ is a differentiable map, its matrix of partial derivatives at $p$ is denoted by $S'(p)$.
The geometry of $\K$ plays an important role. For each subset ${{\mathbf{I}}}\subset\{1,\dots,n\}$ the $\I$’th [*facet*]{} of $\K$ is $$\K_{{\mathbf{I}}}=\{x\in \K{\colon\thinspace}x_j =0 {\ensuremath{\:\Longleftrightarrow\:}}j\notin {{\mathbf{I}}}\}$$ Thus $\K_{\{i\}}$ is the $i$’th positive coordinate axis. A facet is [*proper*]{} if it lies in the boundary of $\K$, meaning $\I\ne \{1,\dots,n\}$. Closures of facets intersect in closures of facets: ${\ensuremath{\overline{\K_{{\mathbf{I}}}}}}\cap{\ensuremath{\overline{\K_{{\mathbf{J}}}}}}={\ensuremath{\overline{\K_{{\mathbf{I}}\cap{\mathbf{J}}}}}}$. The boundary of $\K$ in ${\ensuremath{{\bf R}^{n}}}$, denoted by $\dot \K$, is the union of the proper facets. Each $x\in \K\verb=\=\{0\}$ belongs to the unique facet $\K_{\I(x)}$ where $\I(x):=\{i{\colon\thinspace}x_i > 0\}$.
For each $n\times n$ matrix $A$ and nonempty $\I\subset\{1,\dots,n\}$ we define the principal submatrix $$A_\I:=\left[A_{ij}\right]_{i, j\in \I}$$
The [*vector order*]{} in ${\ensuremath{{\bf R}^{n}}}$ is the relation defined by $x\succeq y {\ensuremath{\:\Longleftrightarrow\:}}x-y \in \K$. We write $x\succ y$ if also $x\ne
y$. For each set $\I\subset\{1,\dots,n\}$ we write $x\succ_{{\mathbf{I}}}
y$ if $x, y\in
{\ensuremath{\overline{\K_{{\mathbf{I}}}}}}$ and $x\succ y$, and $x\succ\succ_{{\mathbf{I}}} y$ if also $ x_i>y_i$ for all $i\in{{\mathbf{I}}}$. The reverse relations are denoted by $\preceq ,
\prec $ and so forth.
The [*closed order interval*]{} defined by $a,
b\in {\ensuremath{{\bf R}^{n}}}$ is $$[a,b]:=\{x\in{\ensuremath{{\bf R}^{n}}} {\colon\thinspace}a\preceq x\preceq b\}$$
Carrying simplices {#carrying-simplices .unnumbered}
==================
A [*carrying simplex*]{} is a set $\Sigma\subset
\K\verb=\=\{0\}$ having the following properties:
(CS1)
: $\Sigma$ is compact and invariant.
(CS2)
: for every $x\in \K\verb=\= \{0\}$ the trajectory of $x$ is asymptotic with some $y\in
\Sigma$, i.e., $\lim_{t\to\infty}|T^t x-T^ty|=0$.
(CS3)
: ${\ensuremath{\Sigma}}$ is [*unordered*]{}: if $x, y \in {\ensuremath{\Sigma}}$ and $x\succeq y$ then $x=y$.
It follows that each line in $\K$ through the origin meets ${\ensuremath{\Sigma}}$ in a unique point. Therefore ${\ensuremath{\Sigma}}$ is mapped homeomorphically onto the unit $(n-1)$-simplex $$\Delta^{n-1}:=\{x\in \K{\colon\thinspace}\sum_i x_i=1\}$$ by the radial projection $x\mapsto x/(\sum_i x_i)$.
Long-term dynamical properties of trajectories are accurately reflected by the dynamics in ${\ensuremath{\Sigma}}$ by (CS1) and (CS2), and (CS3) means that $\Sigma$ has simple topology and geometry. The existence of a carrying simplex has significant implications for limit sets ${\ensuremath{\omega(x)}}$:
- If $x >0$ then ${\ensuremath{\omega(x)}} \subset {\ensuremath{\Sigma}}$, a consequence of (CS2). In particular, ${\ensuremath{\Sigma}}$ contains all nontrivial fixed points and periodic orbits.
- If $a, b \in \K$ are distinct limit points of respective states $x, y \succeq 0$ (possibly the same state), then there exist $i, j$ such that $a_i >b_i, \ a_j < b_j$; this follows from (CS3). Thus either ${\ensuremath{\omega(x)}} ={\ensuremath{\omega(y)}}$, or else there exist $i, j$ such that $$\limsup_{t\to\infty}\, x_i(t)- y_i(t) > 0, \qquad
\liminf_{t\to\infty}\, x_j(t)- y_j(t)< 0$$
In many cases ${\ensuremath{\Sigma}}$ is the [*global attractor*]{} for the dynamics in $\K\setminus\{0\}$, meaning that as $t$ goes to infinity, the distance from $x(t)$ to ${\ensuremath{\Sigma}}$ goes to zero uniformly for $x$ in any given compact subset of $\K\setminus\{0\}$. This implies (Wilson [@Wilson69]) that there is a continuous function $V{\colon\thinspace}\K\setminus\{0\}\to [0,\infty)$ such that if $x\ne 0$ then
- $V(x)=0{\ensuremath{\:\Longleftrightarrow\:}}x\in{\ensuremath{\Sigma}}$,
- $V(x(t)) < V(x) {\ensuremath{\:\Longleftrightarrow\:}}x\notin {\ensuremath{\Sigma}}$,
- $\lim_{t\to\infty} V(x(t))=0$.
We can think of $V$ as an “asymptotic conservation law”. While there are many such functions for any carrying simplex, it is rarely possible to find a formula for any of them.
Before stating results we give two simple examples for $n=1$:
If $T$ is the time-one map for the flow defined by the logistic differential equation $$\dot x =rx (\sigma -x), \quad r, \sigma>0,\qquad (x\ge 0),$$ the carrying simplex is just the classical carrying capacity $\sigma$. Here one can define $V (x)= |x-{\ensuremath{\sigma}}|$ for $x>0$.
Consider the map $$\label{eq:0}
T{\colon\thinspace}[0,\infty) \to [0, \infty),\ Tx= xe^{b-ax}, \quad b, a >0,
\qquad x\in [0,\infty)$$ Note that $$T' (x) = (1-ax) e^{b-ax},\qquad T' (b/a) = 1-b$$ If there is a carrying simplex, it has to be the unique positive fixed point $b/a$, in which case $\lim_{k\to\infty} T^k x = b/a$ for all $x >0$.
[*If $b \le 1$ then $b/a$ is the carrying simplex.* ]{} In this case the maximum value of $T$ is taken uniquely at $1/a \ge b/a$. If $0<x<b/a$ then $x < Tx <b/a$, hence $T^kx \to b/a$. It follows that if some $T^j x <b/a$ then again $T^k x \to b/a$. If the entire orbit of $x$ is $> b/a$ then the sequence $\{T^k x\}$ decreases to a fixed point $\ge b/a$, which must be $b/a$.
[*If $b>2$ there is no carrying simplex.* ]{} For then $|T'(b/a)| > 1$, making $b/a$ a locally repelling fixed point. The only way the trajectory of $y \ne b/a$ can converge to $b/a$ is for $T^j y= b/a$ for some $j >0$. The set of such points $y$ is nowhere dense because $T$ is a nonconstant analytic function, hence there is no carrying simplex. For sufficiently large $b$ the dynamics is chaotic.
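These two regimes are easy to observe numerically. The following sketch (an illustration only; the parameter values are arbitrary choices) iterates the map for $b\le 1$, where positive orbits converge to $b/a$, and for $b>2$, where the fixed point repels.

```python
import math

def T(x, b, a):
    """One step of the map Tx = x * exp(b - a*x)."""
    return x * math.exp(b - a * x)

def orbit_end(x, b, a, k):
    for _ in range(k):
        x = T(x, b, a)
    return x

a = 2.0
# b <= 1: positive orbits converge to the fixed point b/a = 0.4.
x_low = orbit_end(0.05, 0.8, a, 500)
x_high = orbit_end(3.0, 0.8, a, 500)

# b > 2: the fixed point b/a = 1.3 repels; the orbit keeps oscillating.
tail = []
x = 2.6 / a + 1e-6
for k in range(520):
    x = T(x, 2.6, a)
    if k >= 500:
        tail.append(x)
```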
Example \[ex:exMay\], below, is an $n$-dimensional generalization of Equation (\[eq:0\]).
We say that $T$ is [*strictly sublinear*]{} in a set $X\subset \K$ if the following holds: $x\in X$ and $0<{\ensuremath{\lambda}}<1$ imply ${\ensuremath{\lambda}}x \in X$ and $$\label{eq:3bis}
{\ensuremath{\lambda}}T (x) \prec T({\ensuremath{\lambda}}x), \quad (x\in X \setminus\{0\})$$ Thus the restricted map $T|X$ exhibits what economists call “decreasing returns to scale.” A state $x$ [*majorizes*]{} a state $y$ if $x \succ y$, and $x$ [*strictly majorizes*]{} $y$ if $x_i >0$ implies $x_i >
y_i$.
The map $T{\colon\thinspace}\K\to\K$ is [*strictly retrotone*]{} in a subset $X\subset \K$ if for all $x, y \in X$ we have $$\mbox{$Tx$ majorizes $Ty\ \implies \ x$ strictly majorizes $y$}$$ Equivalently: $$\mbox{ $x, y\in X\cap {\ensuremath{\overline{\K_{{\mathbf{I}}}}}}$ \ and \ $Tx\succ Ty \ \implies \
x\succ\succ_{{\mathbf{I}}} y$}$$
The origin is a [*repellor*]{} if $T^{-1}(0)=0$ and there exists ${\ensuremath{\delta}}>0$ and an open neighborhood $W\subset\K$ of the origin such that $\liminf_{k\to\infty} {\ensuremath{\vert T^kx\vert}} \ge {\ensuremath{\delta}}$ uniformly on compact subsets of $W\setminus\{0\}$.
If in addition there is a global attractor ${\ensuremath{\Gamma}}$, as will be generally assumed, then ${\ensuremath{\Gamma}}$ contains a global attractor $\Gamma_0$ for $T|\, \K\setminus\{0\}$.
We will assume that $T$, given by Equation (\[eq:TG\]), has the following properties:
(C0)
: $T^{-1}(0)= 0$ and $G_i (0) > 1$.
The first condition means that no nontrivial population dies out in finite time. The second means that small populations increase.
(C1)
: [*There is a global attractor ${\ensuremath{\Gamma}}$ containing a neighborhood of $0$.* ]{}
Together with (C0) this implies that there is a global attractor $\Gamma_0 \subset {\ensuremath{\Gamma}}$ for $T|\, \K\setminus\{0\}$. The connected component of the origin in $\K\setminus{\ensuremath{\Gamma}}_0$ is the [*repulsion basin*]{} $B(0)$.
(C2)
: [*$T$ is strictly sublinear in a neighborhood of ${\ensuremath{\Gamma}}$.*]{}
This holds when $
0<{\ensuremath{\lambda}}<1\implies G (x) \prec G ({\ensuremath{\lambda}}x)$.
(C3)
: [*$T$ is strictly retrotone in a neighborhood of the global attractor.*]{}
A similar property was introduced by Smith [@Smith86].
Denote the set of boundary points of $\Gamma$ in $\K$ by $\p_\K{\ensuremath{\Gamma}}$.
When [*(C0)—(C3)*]{} hold, the unique carrying simplex is ${\ensuremath{\Sigma}}=\p_\K{\ensuremath{\Gamma}}=\p_\K B(0)$, and ${\ensuremath{\Sigma}}$ is the global attractor for $T|\, \K\setminus\{0\}$.
The proof will appear elsewhere.
The same hypotheses yield further information. It turns out that if $T |{\ensuremath{\Gamma}}$ is locally injective (which Smith assumed), it is a homeomorphism of ${\ensuremath{\Gamma}}$; and in any case the following condition holds:
(C4)
: [*The restriction of $T$ to each positive coordinate axis $\Ko_{\{i\}}$ has a globally attracting fixed point $q_{(i)}$.*]{}
We call $q_{(i)}$ an [*axial*]{} fixed point. Denoting its $i$’th coordinate by $q_i >0$, we set $$q:=(q_1,\dots,q_n)=\sum_i q_{(i)}$$ Smith [@Smith86] shows that (C3) and (C4) imply (C1) with ${\ensuremath{\Gamma}}\subset [0, q]$. In many cases the easiest way to establish a global attractor is to compute the axial fixed points and apply Smith’s result. The following condition implies (C3) for maps $T$ having the form (\[eq:TG\]) when $G$ is $C^1$:
(C5)
: [*If $x\in \K\verb=\=\{0\}$, the matrix $\left[G'(x)\right]_{{{\mathbf{I}}} (x)}$ has strictly negative entries*]{}
For $d\in {\ensuremath{{\bf R}^{n}}}$ we denote the diagonal matrix $D$ with diagonal entries $D_{ii}:=d_i$ by $[d]^{{{\mathsf {diag}}}}$ and also by $[d_i]^{{{\mathsf {diag}}}}$. The $n\times n$ identity matrix is denoted by $I$.
A computation shows that $$T'(x)=[G (x)]^{{{\mathsf {diag}}}} + [x]^{{{\mathsf {diag}}}}G' (x).$$ When $x$ is such that all $G_i (x) >0$, this can be written $$\label{eq:2b}
\begin{split} T'(x)&=[G(x)]^{{{\mathsf {diag}}}}(I- M(x)),\\
M(x) &:= -\left[\frac{x_i}{G_i (x)}\right]^{{{\mathsf {diag}}}}G' (x),
\end{split}$$ and the entries in the $n\times n$ matrix $M(x)$ are $$\label{eq:2a}
\begin{split}
M_{ij}(x)&:=
\frac{-x_i}{G_i (x)}\frac{\p G_i}
{\p x_j} (x),\\
&= -x_i\frac{\p\log G_i}{\p x_j}(x)
\end{split}$$ Note that (C5) implies $M_{ij}(x)> 0$.
The [*spectral radius*]{} $\rho (M)$ of an $n\times n$ matrix $M$ is the maximum of the norms of its eigenvalues. It is a standard result that if $\rho(M) <1$ then $I-M$ is invertible and $(I-M)^{-1}=\sum_{k=0}^\infty M^k$.
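This standard fact is easy to check numerically; the matrix below is an arbitrary example (not from the text) with spectral radius below one.

```python
import numpy as np

M = np.array([[0.2, 0.3],
              [0.1, 0.4]])

rho = max(abs(np.linalg.eigvals(M)))   # spectral radius
# Partial Neumann sum I + M + M^2 + ...; it reproduces (I - M)^{-1}.
neumann = sum(np.linalg.matrix_power(M, k) for k in range(100))
inverse = np.linalg.inv(np.eye(2) - M)
```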
Suppose $G$ is $C^1$. Assume [*(C0), (C1), (C2), (C5)*]{}, let [*(C4)*]{} hold with $[0, q]\subset X$, and assume $$\label{eq:4}
0 \prec x \preceq q \implies \rho (M(x)) <1$$ Then [*(C3)*]{} holds, whence the hypotheses and conclusions of Theorem \[th:mainMAPS\] are valid.
The proof will be given elsewhere. Under the same hypotheses the following conclusions also hold:
- [*$T|\Gamma$ is a diffeomorphism*]{}
- [*if $x\in\Gamma \cap \Ko_\I$ then the matrix $[T'(x)_{{{\mathbf{I}}}}]^{-1}$ has strictly positive entries.*]{}
When (C5) holds, either of the following conditions implies (\[eq:4\]): $$\label{eq:3a}
0\prec x\preceq q\implies \sum_i M_{ij}(x) <1, \quad (j=1,\dots,n)$$ $$\label{eq:3b}
0\prec x\preceq q\implies \sum_j M_{ij}(x) <1 \quad (i=1,\dots,n)$$ Each of these conditions implies that the largest positive eigenvalue of $M(x)$ is the spectral radius by (C5) and the theorem of Perron and Frobenius [@BermanNeumann89], and that this eigenvalue is bounded above by the maximal row sum and the maximal column sum by Gershgorin’s theorem [@BrualdiMellendorf94].
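As a quick numerical check of this chain of implications (an illustration, not part of the text): for an entrywise-positive matrix, the spectral radius is itself a real positive eigenvalue and is bounded by the maximal row sum and the maximal column sum. The random matrix below is an arbitrary test case.

```python
import numpy as np

rng = np.random.default_rng(7)
P = rng.uniform(0.01, 0.18, size=(5, 5))   # an entrywise-positive matrix

eigs = np.linalg.eigvals(P)
spectral_radius = max(abs(eigs))
max_row_sum = P.sum(axis=1).max()          # Gershgorin row bound
max_col_sum = P.sum(axis=0).max()          # Gershgorin column bound
perron_eig = max(eigs, key=abs)            # Perron-Frobenius eigenvalue
```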
Competition models {#competition-models .unnumbered}
==================
In the following illustrative examples we calculate bounds on parameters that make row sums of $M (x)$ obey (\[eq:3b\]), validating the hypotheses and conclusions of Theorems \[th:maps4\] and \[th:mainMAPS\].
Consider a multidimensional version of Equation (\[eq:0\]), based on an ecological model of May & Oster [@May]: $$\label{eq:may}
T{\colon\thinspace}\K\to \K,\quad T_i (x) = x_i\ {\textstyle \exp\big(B_i-\sum_j
A_{ij} x_j\big)}, \qquad
B_i, \;A_{ij}>0$$ This map is not locally injective. In a small neighborhood of the origin $T$ is approximated by the discrete-time Lotka-Volterra map $\hat T$ defined by $(\hat T x)_i = (\exp B_i) x_i (1-\sum_j
A_{ij}x_j)$, but as $\hat T$ does not map $\K$ into itself, it is not useful as a global model. $T$ has a global attractor $\Gamma$ and a source at the origin, so a carrying simplex is plausible. But the special case $n=1$, treated in Example \[th:ex0\], shows that further restrictions are needed.
Condition (C5) holds with $
G_i (x)= {\textstyle \exp\big(B_i-\sum_j
A_{ij} x_j\big)}$. Evidently these functions are strictly decreasing in $x$, which implies $T$ is strictly sublinear. (C4) holds with $q_i=B_i/A_{ii}$, and it can be shown that $\Gamma \subset [0, q]$. In (\[eq:2b\]) the matrix entries are $$\label{eq:mij}
M_{ij}(x)=x_i A_{ij}$$ Therefore Theorem \[th:maps4\] shows that if $$\label{eq:rho}
0\prec x\preceq q \implies \rho (M(x)) < 1$$ then $\p_\K{\ensuremath{\Gamma}}$ is the unique carrying simplex and $T|\Gamma$ is a diffeomorphism. From (\[eq:3a\]), (\[eq:3b\]) and (\[eq:mij\]) we see that (\[eq:rho\]) holds in case one of the following conditions is satisfied: $$\label{eq:rho2}
\mbox{$\displaystyle \frac{B_i}{A_{ii}}\sum_j A_{ij} < 1$ for all
$i$,}$$ or $$\label{eq:rho2a}
\mbox{$\displaystyle \sum_i \frac{B_i}{A_{ii}} A_{ij} < 1$ for all
$j$}$$ These conditions thus imply a unique carrying simplex, by Theorem \[th:maps4\].
To arrive at a biological interpretation of (\[eq:rho2\]), we rewrite it as $$\label{eq:rho3}
q_i \sum_j A_{ij} <1$$ where $q_i :=\frac{B_i}{A_{ii}}$ is the axial equilibrium for species $i$, that is, its stable population in the absence of competitors. Equation (\[eq:may\]) tells us that $A_{ij}$ is the logarithmic rate at which population $j$ inhibits the growth of population $i$. Thus (\[eq:rho3\]) means that the sum of these rates must be small compared to the reciprocal of the single species equilibrium for population $i$. The plausibility of this is left to the reader, as is the biological meaning of (\[eq:rho2a\]).
When $n=1$, Equation (\[eq:may\]) defines the map $Tx= xe^{b-ax}$ of Example \[th:ex0\]. The positive fixed point is $q=b/a$, and both (\[eq:rho2\]) and (\[eq:rho2a\]) boil down to $b<1$, which was shown to imply a unique carrying simplex. That example also showed that there is no carrying simplex when $b>2$. As Equation (\[eq:may\]) reduces to Example \[th:ex0\] on each coordinate axis, we see that Equation (\[eq:may\]) lacks a carrying simplex provided $B_i > 2$ for some $i$.
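A numerical sanity check of the sufficient condition (\[eq:rho2\]) (the coefficient values below are arbitrary choices, not from the text): two distinct interior orbits of the May-Oster map converge to the same interior fixed point, which necessarily lies on the carrying simplex.

```python
import numpy as np

A = np.array([[1.0, 0.2],
              [0.3, 1.0]])
B = np.array([0.5, 0.6])
q = B / np.diag(A)                             # axial equilibria q_i
cond = bool(np.all(q * A.sum(axis=1) < 1))     # the condition q_i * sum_j A_ij < 1

def T_may(x):
    """One step of the May-Oster map T_i(x) = x_i * exp(B_i - sum_j A_ij x_j)."""
    return x * np.exp(B - A @ x)

x = np.array([0.01, 0.9])
y = np.array([0.7, 0.05])
for _ in range(2000):
    x, y = T_may(x), T_may(y)
```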
Consider a competing population model due to Leslie & Gower [@LeslieGower]:
$$\label{eq:lesgow}
T{\colon\thinspace}\K\to\K,\quad T_ix = \frac{C_i x_i}{ 1+ \sum_j A_{ij}x_j},\qquad
C_i,\;A_{ij}>0$$
Note that $T$ need not be locally injective. When $n=1$ all trajectories converge to $0$ if $C \le 1$, and all trajectories with positive initial value converge to $\frac{C-1}{A}$ if $C >1$. The case $n=2$ is thoroughly analyzed by Cushing [*et al. *]{}[@Cushing04].
Here $$G_i (x):= \frac{C_i}{ 1+ \sum_j A_{ij}x_j} >0,$$ hence (C5) holds. We assume $C_i>1$, guaranteeing (C4) with $q_i=
\frac{C_i-1}{A_{ii}}$. In (\[eq:2a\]) we have $$M_{ij}(x)= \frac{x_i A_{ij}}{1+\sum_l A_{il}x_l}
< x_i A_{ij},$$ so the row sums of $M (x)$ are $<1$ for all $x\preceq q$ provided $q_i\sum_j
A_{ij} <1$. Therefore when $$1<C_i < 1+ \frac {A_{ii}}{\sum_j A_{ij}},$$ Theorems \[th:mainMAPS\] and \[th:maps4\] yield the following conclusions: There is a global attractor ${\ensuremath{\Gamma}}\subset [0,q]$, the unique carrying simplex is $\p_\K {\ensuremath{\Gamma}}$, and $T|{\ensuremath{\Gamma}}$ is a diffeomorphism.
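The same kind of numerical check works for the Leslie-Gower map; the coefficients below are arbitrary illustrative choices satisfying the displayed bound on $C_i$, and both interior orbits converge to the interior fixed point.

```python
import numpy as np

A = np.array([[1.0, 0.3],
              [0.2, 1.0]])
C = np.array([1.5, 1.6])
# The bound 1 < C_i < 1 + A_ii / sum_j A_ij from the text.
cond = bool(np.all((C > 1) & (C < 1 + np.diag(A) / A.sum(axis=1))))

def T_lg(x):
    """One step of the Leslie-Gower map T_i(x) = C_i x_i / (1 + sum_j A_ij x_j)."""
    return C * x / (1 + A @ x)

u = np.array([0.05, 1.2])
v = np.array([1.0, 0.02])
for _ in range(3000):
    u, v = T_lg(u), T_lg(v)
```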
Consider a recurrent, fully connected neural network of $n$ cells (or “cell assemblies”, Hebb [@Hebb49]). At discrete times $t=0,1,\dots$, cell $i$ has activation level $x_i(t)\ge 0$ and the state of the system is $x(t):=(x_1(t),\dots,x_n(t))$. Cell $i$ receives an input signal $s_i(x(t))$ which is a weighted sum of all the activations plus a bias term. Its activation is multiplied by a positive transfer function $\tau_i$ evaluated on $s_i$, resulting in the new activation $x_i (t+1)=x_i (t)\tau_i(s_i)$.
We assume each cell’s activation tends to decrease the activations of all cells, but each cell receives a bias that tends to increase its activation. We model this with negative weights $-A_{ij}<0$, positive biases $B_i>0$, and positive increasing transfer functions. For simplicity we assume all the transfer functions are $e^{\ensuremath{\sigma}}$ where ${\ensuremath{\sigma}}{\colon\thinspace}{\ensuremath{{\bf R}}}\to {\ensuremath{{\bf R}}}$ is $C^1$. States evolve according to the law $$T{\colon\thinspace}\K\to\K, \quad T_i(x)=x_i\exp {\ensuremath{\sigma}}(s_i(x)),\qquad
s_i(x):= B_i-\sum_j A_{ij}x_j$$ We also assume $$\label{eq:neural2}
{\ensuremath{\sigma}}(0)=0,\quad {\ensuremath{\sigma}}'(s) >0, \quad \sup {\ensuremath{\sigma}}'(s)={\ensuremath{\gamma}}<\infty, \qquad (s\in{\ensuremath{{\bf R}}})$$ It is easy to verify that (C1), (C2), (C4) and (C5) hold, with $$\label{eq:neural3}
q_i:= \frac{B_i}{A_{ii}},\quad
G_i (x):=\exp {\ensuremath{\sigma}}\big(B_i-\sum_j
A_{ij}x_j\big),\quad M_{ij}(x)= x_i\,{\ensuremath{\sigma}}' (s_i(x))A_{ij}$$ where $M_{ij}(x)$ is defined as in (\[eq:2a\]).
It turns out that for given weights and biases, the system has a unique carrying simplex provided the [*gain parameter*]{} $\gamma$ in (\[eq:neural2\]) is not too large. It suffices to assume $$\label{eq:neural4}
{\ensuremath{\gamma}}< \left[\max_{i}\bigg(\frac{B_i}{A_{ii}}\sum_j
A_{ij}\bigg)\right]^{-1}$$ For then (\[eq:neural2\]), (\[eq:neural3\]), (\[eq:neural4\]) imply (\[eq:3b\]) and hence (C3), so Theorems \[th:mainMAPS\] and \[th:maps4\] imply a unique carrying simplex for $T$.
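To illustrate the gain bound (\[eq:neural4\]) numerically, one can take the linear transfer function $\sigma(s)=\gamma s$ (so that $\sup \sigma' = \gamma$ exactly); the weights and biases below are arbitrary illustrative values, not from the text.

```python
import numpy as np

A = np.array([[1.0, 0.4],
              [0.5, 1.0]])
B = np.array([0.6, 0.8])

gain_bound = 1.0 / max((B / np.diag(A)) * A.sum(axis=1))
gamma = 0.9 * gain_bound          # a gain satisfying the bound

def T_net(x):
    """One network update with sigma(s) = gamma * s."""
    return x * np.exp(gamma * (B - A @ x))

x = np.array([0.1, 1.0])
for _ in range(4000):
    x = T_net(x)
# The orbit settles on the equilibrium activation pattern, where A x = B.
```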
There is a vast literature on neural networks, going back to the seminal book of Hebb [@Hebb49]. Network models of competition were analyzed in the pioneering works of Grossberg [@Grossberg78] and Cohen & Grossberg [@CohenGrossberg83]. Generic convergence in certain types of competitive and cooperative networks is proved in Hirsch [@Hirsch89]. Levine’s book [@Levine00] has mathematical treatments of several aspects of neural network dynamics.
Competitive differential equations {#competitive-differential-equations .unnumbered}
==================================
Consider a periodic differential equation in $\K$: $$\label{eq:per}
\dot u_i = u_iG_i(t, u_1,\dots,u_n)\equiv u_iG_i(t+1, u_1,\dots,u_n),
\quad t, u_i \ge 0, \quad (i=1,\dots,n)$$ where the maps $G_i{\colon\thinspace}[0,\infty)\times\K\to{\ensuremath{{\bf R}}}$ are $C^1$. The solution with initial value $u(0)=x$ is denoted by $t\mapsto T_t x$. Solutions are assumed to be defined for all $t \ge
0$. Each map $T_t$ maps $\K$ diffeomorphically onto a relatively open set in $\K$ that contains the origin. The [*Poincaré map*]{} is $T:= T_1$.
We postulate the following conditions for Equation (\[eq:per\]):
(A1)
: [*total competition: ${\ensuremath{\frac{\partial G_i}{\partial x_j}}}\le 0, \ (i,j=1,\dots,n)$*]{}
(A2)
: [*strong self-competition: $ \sum_{k\in {{\mathbf{I}}} (x)}
{\ensuremath{\frac{\partial G_k}{\partial x_k}}} (t,x)<0$*]{}
(A3)
: [*decrease of large population: $G_i (t,
x) <0$ for $x_i$ sufficiently large.*]{}
This implies existence of a [*global attractor*]{} for the Poincaré map $T$.
(A4)
: [*increase of small populations: $G_i(t, 0) >0$.*]{}
Under these assumptions there are two obvious candidates for a carrying simplex for $T$, namely $\p_\K B$ and $\p_\K{\ensuremath{\Gamma}}$, the respective boundaries in $\K$ of $B(0)$ and ${\ensuremath{\Gamma}}$. Existence of a unique carrying simplex implies $\p_\K B=\p_\K{\ensuremath{\Gamma}}$.
Assume system (\[eq:per\]) has properties [*(A1)—(A4)*]{}. Then there is a unique carrying simplex, and it is the global attractor for the dynamics in $\K\setminus \{0\}$.
The proof, which will be given elsewhere, uses a subtle dynamical consequence of competition discovered by Wang & Jiang: If $u(t), v(t)$ are solutions to Equation (\[eq:per\]) such that for all $i$ $$u_i(t) < v_i(t), \qquad (s< t< s_1),$$ then $${\ensuremath{\frac{d~}{dt}}} \left(\frac{u_i}{v_i}\right) >0, \qquad (s<t< s_1)$$
A competitive, periodic Volterra-Lotka system in $\K$ of the form $$\dot u_i=u_i\big(B_i(t) -\sum_j A_{ij}(t) u_j\big),\qquad
B_i, A_{ij} >0$$ satisfies (A1)—(A4) and thus the conclusion of Theorem \[th:per\].
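For concreteness, the Poincaré map of such a periodic system can be approximated by integrating over one period; the sketch below uses a classical RK4 step, and the periodic coefficients are arbitrary choices satisfying (A1)—(A4), not taken from the text. Two interior orbits of the Poincaré map are driven to the same fixed point, i.e., to a periodic coexistence solution.

```python
import math

# Hypothetical 1-periodic coefficients for a two-species competitive system.
def b(t):
    return (1.0 + 0.2 * math.sin(2 * math.pi * t), 1.2)

A = ((1.0, 0.3),
     (0.4, 1.0))

def f(t, u):
    """Right-hand side u_i * (b_i(t) - sum_j A_ij u_j)."""
    bt = b(t)
    return [u[i] * (bt[i] - sum(A[i][j] * u[j] for j in range(2)))
            for i in range(2)]

def poincare(u, steps=400):
    """Integrate over one period with classical RK4: the Poincare map T = T_1."""
    h = 1.0 / steps
    t = 0.0
    u = list(u)
    for _ in range(steps):
        k1 = f(t, u)
        k2 = f(t + h / 2, [u[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f(t + h / 2, [u[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f(t + h, [u[i] + h * k3[i] for i in range(2)])
        u = [u[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        t += h
    return u

p = [0.2, 1.5]
r = [1.4, 0.1]
for _ in range(120):
    p, r = poincare(p), poincare(r)
```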
Several mathematicians have investigated carrying simplex dynamics for competitive, autonomous Volterra-Lotka systems in $\K$ having the form $$\label{eq:vl}
\dot u_i=u_i \big(B_i -\sum_j A_{ij} u_j\big):=u_i H_i
(u_1,\dots,u_n),\quad B_i, A_{ij} >0$$ The best results are for $n=3$: the interesting dynamics is on a 2-dimensional cell; therefore the Poincaré-Bendixson theorem [@Hartman] precludes any kind of chaos and makes the dynamics easy to analyze. The dynamics for generic systems were classified by M.L. Zeeman [@Zeeman93], with computer graphics exhibited in Zeeman [@ZeemanPics]. She proved that in many cases simple algebraic criteria on the coefficients determine the existence of limit cycles and Hopf bifurcations.
Van den Driessche and Zeeman [@vandenDriesscheZeeman04] applied Zeeman’s classification to model two competing species with species 1, but not species 2, susceptible to disease. They showed that if species 1 can drive species 2 to extinction in the absence of disease, then the introduction of disease can weaken species 1 sufficiently to permit stable or oscillatory coexistence of both species.
Zeeman & Zeeman [@ZeemanZeeman02] showed that generically, but not in all cases, the carrying simplex is uniquely determined by the dynamics in the $2$-dimensional facets of $\K$. Systems with two and three limit cycles have been found by Hofbauer & So [@HofbauerSo], Lu & Luo [@LuLuo02], and Gyllenberg [*et al.*]{} [@GyllenbergYanWang06]. No examples of Equation (\[eq:vl\]) with four limit cycles are known.
More information on the dynamics of Equation (\[eq:vl\]) can be found in [@vandenDriesscheZeeman98; @XiaoLi00; @ZeemanE02; @ZeemanZeeman94; @ZeemanZeeman03].
Background {#background .unnumbered}
==========
In an important paper on competitive maps, Smith [@Smith86] investigated $C^2$ diffeomorphisms $T$ of $\K$. Under assumptions similar to (C0)—(C5) he proved $T$ is strictly retrotone and established the existence of the global attractor ${\ensuremath{\Gamma}}$ and the repulsion basin $B(0)$. He showed that $\p_\K B(0)$ and $\p_\K{\ensuremath{\Gamma}}$ are compact unordered invariant sets homeomorphic to the unit simplex, and each of them contains all periodic orbits except the origin. His conjecture that $\p_\K B=\p_\K{\ensuremath{\Gamma}}$ remains unproved from his hypotheses. He also showed that for certain types of competitive planar maps every bounded trajectory converges, extending earlier results of Hale & Somolinos [@HaleSomolinos], de Mottoni & Schiaffino [@deMottoniSchiaffino81].
Using Smith’s results and those of Hess & Poláčik [@HessPolacik], Wang & Jiang [@WangJifa] obtained unique carrying simplices for competitive $C^2$ maps.
For further results on the smoothness, geometry and dynamics of carrying simplices, see [@Benaim97; @Mierczynski94a; @Mierczynski99; @Mierczynski99a; @Mierczynski99b].
#### Mea culpa
Uniqueness of the carrying simplex for Equation (\[eq:vl\]) was claimed in Hirsch [@Hirsch88a], but M.L. Zeeman [@ZeemanGap] discovered an error in the proof of Proposition 2.3(d).
[99]{} M. Benaïm, On invariant hypersurfaces of strongly monotone maps. J. Differential Eqns. [**137**]{} (1997), 302–319
A. Berman, M. Neumann & R. Stern, “Nonnegative Matrices in Dynamic Systems.” John Wiley & Sons, New York 1989
R. Brualdi & S. Mellendorf, Regions in the complex plane containing the eigenvalues of a matrix. Amer. Math. Monthly [**101**]{} (1994), 975–985
L. Chua & T. Roska, “Cellular Neural Networks Foundations and Applications.” Cambridge University Press, Cambridge, England 2001
M. Cohen & S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Systems Man Cybernet. [**13**]{} (1983), 815–826
J. Cushing, S. Levarge, N. Chitnis & S. Henson, Some discrete competition models and the competitive exclusion principle. J. Difference Eqns. Appl. [**10**]{} (2004) no. 13–15, 1139–1151
S. Grossberg, Competition, decision and consensus. J. Math. Anal. Appl. [**66**]{} (1978), 470-493.
M. Gyllenberg, P. Yan & Y. Wang, A 3D competitive Lotka-Volterra system with three limit cycles: a falsification of a conjecture by Hofbauer and So. Appl. Math. Lett. [**19**]{} (2006), 1–7
J. Hale & A. Somolinos, Competition for fluctuating nutrient. J. Math. Biol. [**18**]{} (1983), 255–280
P. Hartman, “Ordinary Differential Equations.” Wiley, New York 1964
D. Hebb, “The Organization of Behavior.” Wiley, New York 1949
P. Hess and P. Poláčik, Boundedness of prime periods of stable cycles and convergence to fixed points in discrete monotone dynamical systems. SIAM J. Math. Anal. [**24**]{} (1993), 1312-1330
M.W. Hirsch, Systems of differential equations which are competitive or cooperative. III: Competing species. Nonlinearity [**1**]{} (1988), 51–71
M.W. Hirsch, Stability and convergence in strongly monotone dynamical systems. J. Reine Angew. Math. [**383**]{} (1988), 1-58
M.W. Hirsch, Convergent activation dynamics in continuous time neural networks. Neural Networks [**2**]{} (1989), 331–351
M.W. Hirsch & H. Smith, “Monotone dynamical systems, Handbook of Differential Equations: Ordinary Differential Equations, Vol. 2,” 239–258. A. Cañada, P. Drábek & A. Fonda editors. Elsevier North Holland, Boston 2005
M.W. Hirsch & H. Smith, Monotone maps: a review. J. Difference Eqns. Appl. [**11**]{} (2005), 379–398
J. Hofbauer and J. W.-H. So, Multiple limit cycles for three dimensional competitive Lotka-Volterra equations. Appl. Math. Lett. [**7**]{} (1994), 65–70
P. Leslie & J. Gower, The properties of a stochastic model for two competing species. Biometrika [**45**]{} (1958), 316–330
D. Levine, “Introduction to Neural and Cognitive Modeling.” Lawrence Erlbaum Associates, Mahwah, NJ 2000
Z. Lu & Y. Luo, Two limit cycles in three-dimensional Lotka-Volterra systems. Comput. Math. Appl. [**44**]{} (2002), 51–66
R. May & G. Oster, Bifurcations and dynamic complexity in simple ecological models. Amer. Naturalist [**110**]{} (1976), 573–599
J. Mierczyński, The $C\sp 1$ property of carrying simplices for a class of competitive systems of ODEs. J. Differential Eqns. [**111**]{} (1994), 385–409
J. Mierczyński, On smoothness of carrying simplices. Proc. Amer. Math. Soc. [**127**]{} (1999) no. 2, 543–551
J. Mierczyński, On peaks in carrying simplices. Colloq. Math. [**91**]{} (1999), 285–292
J. Mierczyński, Smoothness of carrying simplices for three-dimensional competitive systems: a counterexample. Dynam. Contin. Discrete Impuls. Systems [**6**]{} (1999), 147–154
P. de Mottoni & A. Schiaffino, Competition systems with periodic coefficients: a geometric approach. J. Math. Biol. [**11**]{} (1981) no. 3, 319–335
P. Poláčik and I. Tereščák, Convergence to cycles as a typical asymptotic behavior in smooth strongly monotone discrete-time dynamical systems. Arch. Rational Mech. Anal. [**116**]{} (1992), 339–360
H. Smith, Periodic competitive differential equations and the discrete dynamics of competitive maps. J. Differential Eqns. [**64**]{} (1986) no. 2, 165–194
P. Smolensky, M. Mozer & D. Rumelhart (editors), “Mathematical Perspectives on Neural Networks,” Lawrence Erlbaum Associates, Mahwah NJ 1996
H. Thieme, “Mathematics in Population Biology.” Princeton University Press, Princeton 2003
P. van den Driessche & M.L. Zeeman, Three-dimensional competitive Lotka-Volterra systems with no periodic orbits. SIAM J. Appl. Math. [**58**]{} (1998), 227–234
P. van den Driessche & M.L. Zeeman, Disease induced oscillations between two competing species, SIAM J. Applied Dyn. Sys. [**3**]{} 2004, 604–619 (electronic)
Y. Wang & J. Jiang, Uniqueness and attractivity of the carrying simplex for discrete-time competitive dynamical systems. J. Differential Eqns. [**186**]{} (2002), 611–632
F.W. Wilson, Smoothing derivatives of functions and applications. Trans. Amer. Math. Soc. [**139**]{} (1969), 413–428
D. Xiao & W. Li, Limit cycles for the competitive three dimensional Lotka-Volterra systems. J. Differential Eqns. [**164**]{} (2000), 1–15
E.C. Zeeman, Classification of quadratic carrying simplices in two-dimensional competitive Lotka-Volterra systems. Nonlinearity [**15**]{} (2002), 1993–2018
M.L. Zeeman, personal communication (1995)
M.L. Zeeman, Hopf bifurcations in competitive three-dimensional Lotka-Volterra systems. Dynamics and Stability of Systems [**8**]{} (1993), 189–217
M.L. Zeeman, [http://www.bowdoin.edu/faculty/m/mlzeeman/index.shtml]{}
E.C. Zeeman & M.L. Zeeman, On the convexity of carrying simplices in competitive Lotka-Volterra systems. “Differential Equations, Dynamical Systems, and Control Science,” Lecture Notes in Pure and Appl. Math. [**152**]{} 353–364. Dekker, New York 1994
E.C. Zeeman & M.L. Zeeman, An $n$-dimensional competitive Lotka-Volterra system is generically determined by the edges of its carrying simplex. Nonlinearity [**15**]{} (2002), 2019–2032
E.C. Zeeman & M.L. Zeeman From local to global behavior in competitive Lotka-Volterra systems. Trans. Amer. Math. Soc. [**355**]{} (2003), 713–734 (electronic)
---
abstract: 'We report the low temperature magnetic properties of the [DyScO$_3$]{} perovskite, which were characterized by means of single crystal and powder neutron scattering, and by magnetization measurements. Below $T_{\mathrm{N}}=3.15$ K, Dy$^{3+}$ moments form an antiferromagnetic structure with an easy axis of magnetization lying in the $ab$-plane. The magnetic moments are inclined at an angle of $\sim\pm{28}^{\circ}$ to the $b$-axis. We show that the ground state Kramers doublet of Dy$^{3+}$ is made up of primarily $|\pm 15/2\rangle$ eigenvectors and well separated by crystal field from the first excited state at $E_1=24.9$ meV. This leads to an extreme Ising single-ion anisotropy, $M_{\perp}/M_{\|}\sim{0.05}$. The transverse magnetic fluctuations, which are proportional to $M^{2}_{\perp}/M^{2}_{\|}$, are suppressed and only moment fluctuations along the local Ising direction are allowed. We also found that the Dy-Dy dipolar interactions along the crystallographic $c$-axis are 2-4 times larger than in-plane interactions.'
author:
- 'L. S. Wu'
- 'S. E. Nikitin'
- 'M. Frontzek'
- 'A. I. Kolesnikov'
- 'G. Ehlers'
- 'M. D. Lumsden'
- 'K. A. Shaykhutdinov'
- 'E.-J. Guo'
- 'A. T. Savici'
- 'Z. Gai'
- 'A. S. Sefat'
- 'A. Podlesnyak'
title: 'Magnetic ground state of the Ising-like antiferromagnet [DyScO$_3$]{}'
---
[^1]
Introduction {#Intr}
============
Orthorhombic [DyScO$_3$]{} is a member of the rare-earth perovskite family $RM$O$_3$ where $R$ is a rare-earth ion and $M$ is a transition metal ion. Magnetic properties of these compounds have attracted continued attention due to a number of intriguing physical phenomena, like ferroelectric [@Cohen; @Lee] and multiferroic [@Cheong; @Khomskii] properties, temperature- and field-induced spin reorientation transitions [@Belov], magneto-optical effects [@Kimel], or exotic quantum states at low temperatures [@Mourigal]. In orthoperovskites with $M=$ Fe, Mn, the sublattice of $3d$ moments typically undergoes an ordering transition at several hundreds of Kelvin, whereas the $4f$ sublattice only orders at a few Kelvin, indicating much stronger exchange coupling between the $3d$ moments [@White]. The interaction between the two spin subsystems, however, also plays an important role and often determines the magnetic ground state. For instance, the interplay between the Fe and Dy sublattices gives rise to gigantic magnetoelectric phenomena in [DyFeO$_3$]{} [@Tokunaga]. In the case of non-magnetic $M=$ Al, Sc, or Co (in its low-spin state), the magnetic properties of $RM$O$_3$ are primarily controlled by the electronic structure of the $R^{3+}$ ion and rare-earth inter-site interactions. In turn, the crystalline electric field (CEF) splitting of the lowest-lying $4f$ free-ion state determines the single-ion anisotropy as well as the magnitude of the magnetic moment.
In spite of its three-dimensional perovskite structure, [DyScO$_3$]{} has been reported as a highly anisotropic magnetic system with an antiferromagnetic (AF) transition, $T_{\mathrm{N}} \simeq{3.1}$ K [@Ke; @Raekers; @Bluschke]. The details of the [DyScO$_3$]{} magnetic state and the Dy-Dy interactions at low temperatures have remained poorly understood. Recent studies of [DyScO$_3$]{} have led to conflicting conclusions on the magnetic anisotropy and ground state of the rare-earth subsystem. The compound exhibits strong magnetic anisotropy with moments confined in the $ab$-plane. On the one hand, it was suggested that the easy axis is along the $a$-axis [@Ke]. Indeed, such spin configurations were found in some related isostructural compounds, such as YbAlO$_3$ [@Radha81], TbCoO$_3$ [@Knizek], and SmCoO$_3$ [@Jirak]. On the other hand, recent magnetization measurements of [DyScO$_3$]{} suggest that the easy axis is along the crystallographic $b$ direction [@Bluschke]. Besides, in other perovskites with $R=$ Dy, the easy axis of magnetization was reported to be along the $b$-axis, for example in DyCoO$_3$ [@Knizek] and DyAlO$_3$ [@Schuchert]. Dy$^{3+}$, with electronic configuration $4f^9$, is a Kramers ion whose ground multiplet is split by the CEF into eight doublets. Since the CEF is controlled by the near-neighbor coordination, which is little affected by an isostructural substitution of ligands, the different ground state of [DyScO$_3$]{} compared to other Dy$M$O$_3$ compounds looks puzzling.
In this article we use neutron scattering and magnetization measurements to study the magnetic ground state of [DyScO$_3$]{} in more detail. We show that the magnetic properties at low temperatures are dominated by the ground state Kramers doublet $\lvert\pm{15}/2\rangle$, which is characterized by an Ising single-ion anisotropy and moments fluctuating [*along*]{} the local Ising direction. We find that the Dy$^{3+}$ ordered moments are canted $28^{\circ}$ away from the $b$-axis, which is in agreement with other Dy$M$O$_3$ isostructural perovskites.
Experimental Details
====================
In this work we used high quality single crystals of [DyScO$_3$]{}, which are commercially available because they are commonly used as substrates for epitaxial ferroelectric and multiferroic perovskite thin film growth [@Choi; @Schlom]. For magnetic measurements [DyScO$_3$]{} crystals were oriented using an x-ray Laue machine and then cut with a wire saw to get planes perpendicular to the $a$, $b$ and $c$-axes. From the Laue patterns we estimate the orientation to be within $\sim{1}^{\circ}$. Magnetization was measured using a vibrating sample SQUID magnetometer (MPMS SQUID VSM, Quantum design) in the temperature range $2-300$ K.
Neutron powder diffraction (NPD) was measured with a crushed single crystal of a total mass $\sim{0.5}$ g, at the wide-angle neutron diffractometer WAND (HB-2C) at the HFIR reactor at Oak Ridge National Laboratory (ORNL). The sample was enclosed in a hollow Al cylindrical sample holder, in order to diminish the strong neutron absorption by Dy, and placed into a He flow cryostat to achieve a minimum temperature of $T={1.5}$ K. An incident neutron beam with a wavelength of 1.4827 [Å]{} was selected with a Ge (113) monochromator.
High energy transfer inelastic neutron scattering (INS) experiments were performed at the Fine Resolution Fermi Chopper Spectrometer (SEQUOIA) at the Spallation Neutron Source (SNS) at ORNL [@SEQ1; @SEQ2], using the same powder sample. The data were collected at $T={6}$ K with an incident neutron energy of $E_{\rm{i}}=100$ meV, resulting in an energy resolution of Gaussian shape with full width at half maximum (FWHM) $\sim{3}$ meV at the elastic line.
The single crystal quasielastic mapping and low energy transfer INS measurements were done at the Cold Neutron Chopper Spectrometer (CNCS) [@CNCS1; @CNCS2]. A bar shaped ($0.4\times{4}\times{20}$ mm$^3$) single crystal of mass $\sim{0.2}$ g was oriented in the $(H0L)$ scattering plane. The detector coverage out-of-plane was about $\pm{15}^{\circ}$, so that a limited $Q$ range along the $K$-direction could also be accessed. The data were collected using a fixed incident neutron energy of 3.2 meV. In this configuration, the energy resolution was $\sim{0.07}$ meV (FWHM) at the elastic line.
Data reduction and analysis was done with the <span style="font-variant:small-caps;">Fullprof</span> [@FP], <span style="font-variant:small-caps;">SARA</span>[*[h]{}*]{} [@Wills], <span style="font-variant:small-caps;">MantidPlot</span> [@Mantid] and <span style="font-variant:small-caps;">Dave</span> [@Dave] software packages.
Results and Analysis
====================
Crystal structure and crystal electric field {#sect_CEF}
--------------------------------------------
![(a) Top: Crystal structure of [DyScO$_3$]{}, where a Dysprosium (Dy) atom is surrounded by eight distorted Scandium-Oxygen (Sc-O) octahedra. Bottom: Local environment of Dy$^{3+}$ in the $z=1/4$ plane, considering twelve nearest Oxygen (O) neighbors (four O1 sites in the same $z=1/4$ plane, four O2 sites above and four O2 sites below the $z=1/4$ plane). The red arrows indicate the Dy$^{3+}$ Ising moment direction. (b) Contour plot of the INS spectrum of [DyScO$_3$]{} taken at the SEQUOIA spectrometer at temperature $T=6$ K. The excitation to the first CEF level was observed centered around 25 meV. The scattering intensity decreases with increasing wave vector $|Q|$, indicating the magnetic nature of the transition due to the form factor. (c) Plot of the integrated intensity over wave vector $|Q|=[1,3]$ Å$^{-1}$ as a function of the energy transfer $E$. The first excited CEF level was fitted with a Gaussian function, which peaks at $\Delta_{1}=24.93\pm{0.02}$ meV, as indicated by the red solid line. The gray bar area indicates the instrumental resolution. Inset: Sketch of the eight isolated CEF doublet states ($E_0, E_1,...,E_7$) of Dy$^{3+}$, due to the low local site symmetry, with the ground doublet well separated from all other levels.[]{data-label="CEF"}](cef.pdf){width="1\linewidth"}
[DyScO$_3$]{} crystallizes in a distorted orthorhombic perovskite structure [@Liferovich; @UECKER]. Lattice parameters refined from our powder neutron diffraction data at 10 K are $a={5.4136}(3)$ [Å]{}, $b={5.6690}(1)$ [Å]{}, and $c={7.8515}(1)$ Å, using the conventional $Pbnm$ space group (lattice parameters $a<b<c$). The crystal structure of [DyScO$_3$]{} is illustrated in Fig. \[CEF\](a), where Dysprosium (Dy) atoms are located between eight distorted Scandium-Oxygen (Sc-O) octahedra. Due to the distortion, the point symmetry of the Dy site is lowered to $C_{\rm{s}}$, with only one mirror plane normal to the $c$-axis. Therefore, Dy$^{3+}$ moments, as constrained by the point symmetry, must either point along the $c$-axis or lie in the $ab$-plane. To obtain a quantitative description of the CEF effect, calculations based on the point charge model [@Stevens; @Hutchings] were performed using the software package McPhase [@McPhase]. The twelve nearest Oxygen (O) neighbors around the Dy ion were considered (bottom of Fig. \[CEF\](a)), which preserve the correct local point symmetry of the Dy site and thus constrain the CEF wave functions. In this local environment, the sixteen-fold degenerate $J=15/2$ ($L=5$, $S=5/2$) multiplet ($2J+1=16$) of Dy$^{3+}$ is split into eight doublet states. By local point symmetry, no high-symmetry directions are present in the $ab$ mirror plane. Therefore, the resulting Ising axis is tilted away from the crystal $a$ and $b$ axes. The tilting angle $\varphi$ is determined by the relative distortion of the nearest oxygen octahedra. We transform the old basis ($x, y, z$) along the principal crystal axes ($a, b, c$) to the new basis ($x', y', z'$), with $\varphi$ defined as the tilting angle from the crystal $b$ axis: $$\begin{aligned}
&\mbox{old\ basis} & &\mbox{new\ basis}\\
x&=(1, 0, 0), & x'&=(-\cos\varphi, \sin\varphi, 0)\\
y&=(0, 1, 0), & y'&=(0, 0, 1)\\
z&=(0, 0, 1), & z'&=(\sin\varphi, \cos\varphi, 0)\end{aligned}$$ The ground doublet states take their simplest form when the local $z'$ axis is chosen along the Ising easy axis, in which case no imaginary coefficients remain in the ground state wave functions. The tilting angle can therefore be determined from the CEF calculation by inspecting the ground state wave function for different values of the angle $\varphi$, which yields $\varphi\sim{25}^{\circ}$. The ground state Kramers doublet wave functions are given by $$\begin{gathered}
E_{0\pm} = 0.991\lvert\pm{15}/2\rangle
\mp{0.107}\lvert\pm{13}/2\rangle\\
{-0.081}\lvert\pm{11}/2\rangle
\pm{0.014}\lvert\pm{9}/2\rangle
{-0.004}\lvert\pm{7}/2\rangle\\
\mp{0.007}\lvert\pm{5}/2\rangle
{-0.002}\lvert\pm{3}/2\rangle.
\label{E0}\end{gathered}$$ The calculated excited levels are located at energies 22.9 meV, 37.6 meV, 43.1 meV, 52.2 meV, 63.3 meV, 79.7 meV, and 98.1 meV. This calculated CEF scheme indicates a well separated ground state doublet, which is almost entirely made up of the wave function $\lvert\pm{15}/2\rangle$. The calculated CEF parameters and matrix elements for transitions between these CEF levels are shown in detail in Appendix \[append\]. The calculated wave functions of the first excited doublet are dominated by $\lvert\pm{13}/2\rangle$, and thus the matrix element for excitations between the ground state and the first excited state is large. However, the wave functions of the higher excited levels are dominated by $\lvert\pm{11}/2\rangle$, $\lvert\pm{9}/2\rangle$, ..., $\lvert\pm{3}/2\rangle$, and the matrix elements between the ground state and these higher levels are very weak because of the $\Delta{S}=1$ selection rule. Thus, the CEF calculations indicate that only one excitation, to the first excited doublet, can be observed by INS at low temperatures, when only the ground state is populated.
The CEF calculations based on the point charge model agree well with our INS data. The INS spectrum taken with neutron incident energy $E_{\rm{i}}=100$ meV at $T=6$ K exhibits only one dispersionless excitation, as expected, see Fig. \[CEF\](b). The scattering intensity of this excitation decreases with increasing wave vector $|Q|$, confirming its magnetic origin. The intensity integrated over a wave vector range $|Q|=[1,3]$ [Å]{}$^{-1}$ is shown in Fig. \[CEF\](c) as a function of the energy transfer $E$. By fitting the peak with a Gaussian function (red solid line), the first excited level was determined to be $\Delta_{1}=24.93\pm{0.02}$ meV, which is very close to the calculated value.
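The peak-fitting step can be sketched numerically. Below is a minimal numpy example in which synthetic data stand in for the measured $|Q|$-integrated cut; the peak position (24.93 meV) and the $\sim$3 meV Gaussian resolution (FWHM) are taken from the text, while the amplitude and energy grid are illustrative assumptions.

```python
import numpy as np

FWHM = 3.0                                    # instrumental resolution (meV)
sigma = FWHM / (2 * np.sqrt(2 * np.log(2)))   # Gaussian sigma from FWHM

def gaussian(E, A, E0, sigma):
    """Gaussian line shape used to model the CEF excitation."""
    return A * np.exp(-(E - E0) ** 2 / (2 * sigma ** 2))

# Synthetic stand-in for the |Q|-integrated intensity cut
E = np.arange(15.0, 35.0, 0.1)
I = gaussian(E, A=100.0, E0=24.93, sigma=sigma)

# Simple first-moment estimate of the peak centre; a nonlinear
# least-squares fit (e.g. scipy.optimize.curve_fit) would also work
E0_est = np.sum(E * I) / np.sum(I)
```

On real data one would fit amplitude, centre, and width simultaneously, with the width compared against the instrumental resolution bar shown in Fig. \[CEF\](c).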
Since the ground state is well separated from the other excited CEF levels, the low temperature ($T<\Delta_{1}\simeq{290}$ K) magnetic properties are dominated by the symmetry of the ground state doublet. We can therefore use the low temperature magnetization results to test the calculated ground state wave function, as discussed in the next section.
Magnetization
-------------
![(a) Temperature dependent magnetization of a [DyScO$_3$]{} single crystal with applied field along the $a$, $b$ and $c$-axes. The red dashed line indicates the AF transition at $T_{\rm{N}}=3.15(5)$ K. The magnetization along $c$ was multiplied by a factor 10 for clarity. (b) The field dependent magnetization measured at $T=2$ K. The solid lines are the calculated Brillouin function at 2 K, as explained in the text. (c) Angle dependent magnetization measured at $T=2$ K and field $B=1$ T (blue dots) and $B=3$ T (red dots). The magnetic field was applied in the $ab$-plane. Angle $0^{\circ}$ and $90^{\circ}$ correspond to $B\parallel{b}$ and $B\parallel{a}$, respectively. The green line represents a fit of the experimental data (see main text). Red arrows schematically show a moment configuration of Dy$^{3+}$ at zero field with angle $\varphi=28^{\circ}$ between the $b$-axis and the Dy magnetic moment.[]{data-label="MBTA"}](MvsBvsTvsAngle.pdf){width="0.7\linewidth"}
The temperature dependent magnetization of [DyScO$_3$]{} is shown in Fig. \[MBTA\](a), with the magnetic field applied along the three principal crystallographic axes. A cusp-like anomaly is observed at $T_{\rm{N}}=3.15(5)$ K for all three directions, indicating ordering into an antiferromagnetic phase, consistent with previous reports [@Ke; @Raekers; @Bluschke]. The field dependent magnetization measured in the ordered state exhibits a step-like anomaly when the field is applied in the $ab$-plane, see Fig. \[MBTA\](b). The large observed hysteresis indicates subsequent field-induced first order transitions. Significant anisotropy was observed between the $ab$-plane and the $c$-axis. The high field saturation moments along $a$ and $b$ are more than 20 times larger than the moment along the $c$-axis, confirming that the Dy$^{3+}$ magnetic moments lie in the $ab$-plane.
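For a well-isolated Kramers doublet the Brillouin function reduces to a hyperbolic tangent in the Ising moment. The following sketch assumes free (non-interacting) Ising moments with the 10 $\mu_{\rm{B}}$/Dy saturation moment quoted in the text and the field applied along the local easy axis; the low-field deviations seen in the data are exactly what such a free-ion model cannot capture.

```python
import numpy as np

MU_B = 9.274e-24   # Bohr magneton (J/T)
K_B = 1.381e-23    # Boltzmann constant (J/K)

def ising_doublet_magnetization(B, T, m_sat=10.0):
    """Magnetization (mu_B/Dy) of an isolated Ising doublet in a field
    along the easy axis: the two-level Brillouin function is a tanh."""
    x = m_sat * MU_B * B / (K_B * T)
    return m_sat * np.tanh(x)

# At T = 2 K the free-ion moment is essentially saturated by a few tesla
M_3T = ising_doublet_magnetization(3.0, 2.0)
```

This shows why, at $T=2$ K, the calculated curve saturates within roughly 1 T, so any remaining low-field structure must come from interactions between the moments.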
To study the details of the anisotropy further, the magnetization was measured in magnetic fields of $B=1$ T and $B=3$ T, with the field direction varied within the $ab$-plane. These measurements are shown in Fig. \[MBTA\](c). A small hump-like feature appears in the magnetization when the applied magnetic field is large enough to polarize the magnetic moments of both Dy sublattices. This indicates that the moments are tilted relative to the $a$ and $b$-axes, as already suggested by the CEF calculations. The angle dependence of the magnetization in a field of $B=3$ T is well described by: $$\begin{aligned}
M&=\dfrac{M_{\|}}{2}\left(\lvert\cos\left(\theta-\varphi\right)\rvert
+\lvert\cos\left(\theta+\varphi\right)\rvert\right) ,\end{aligned}$$ where $M_{\|}$ is the saturation moment, $\theta$ is the angle between the applied field and the $b$-axis, and $\varphi$ is the angle between the $b$-axis and the Dy$^{3+}$ moment direction. The result of this analysis is shown as the green line in Fig. \[MBTA\](c), with $M_{\|}={10}$ [$\mu_{\rm{B}}$]{}/Dy and $\varphi={28}^{\circ}$, in good agreement with ref. . Since the experimental temperature ($T=2$ K) and magnetic field ($B=3$ T) are much smaller than the energy scale of the first excited CEF level $\Delta_{1}$, it is safe to calculate the field dependent magnetization with the Brillouin function of the ground state doublet, Fig. \[MBTA\](b). While the saturation at high fields can be well described with a saturation moment $M_{\|}={10}$ [$\mu_{\rm{B}}$]{}/Dy and an angle $\varphi=28^{\circ}$, a large mismatch between the measurements and the calculations remains at low magnetic field. This points to additional AF correlations in the system, which are missing from the Brillouin-function calculation. The magnetization along the $c$-axis can be described with an effective moment $M_{\perp}\simeq{0.5}$ [$\mu_{\rm{B}}$]{}/Dy, which is consistent with our CEF calculations.
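The angular model above can be inverted for the tilt angle with a simple grid search; here synthetic data generated at $\varphi=28^{\circ}$ stand in for the 3 T scan. A grid scan is used instead of a gradient-based fit to sidestep the non-smooth absolute values in the model.

```python
import numpy as np

def M_angle(theta_deg, M_par, phi_deg):
    """Angle-dependent magnetization of two Ising sublattices tilted by
    +/-phi from the b-axis, both fully polarized by the applied field."""
    t = np.radians(theta_deg)
    p = np.radians(phi_deg)
    return 0.5 * M_par * (np.abs(np.cos(t - p)) + np.abs(np.cos(t + p)))

# Synthetic angular scan standing in for the 3 T measurement
theta = np.arange(0.0, 180.0, 1.0)
data = M_angle(theta, 10.0, 28.0)

# Grid search for the tilt angle phi
phis = np.arange(0.0, 90.0, 0.1)
sse = [np.sum((M_angle(theta, 10.0, p) - data) ** 2) for p in phis]
phi_best = phis[int(np.argmin(sse))]
```

With noisy data one would fit $M_{\|}$ and $\varphi$ jointly; the hump at intermediate angles is what pins down $\varphi$.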
The agreement between the CEF calculations and the magnetization measurements confirms the Ising character of the ground state wave function $E_{\rm{0}\pm}$. The very small magnetization along the $c$-axis also shows that the moment directions are strictly constrained. A direct consequence is that any transverse excitation (spin wave) should be strongly suppressed in [DyScO$_3$]{}. The Ising moments are only allowed to fluctuate along their local easy axis, which is reflected in the neutron scattering polarization factor (see Section \[sec\_diffuse\]).
Neutron powder diffraction and magnetic structure
-------------------------------------------------
![(a) Observed (circles) and calculated (solid line) magnetic NPD patterns for [DyScO$_3$]{}. Bars (blue) show the positions of allowed magnetic reflections. The difference curve (green) is plotted at the bottom. Inset: Raw data taken in the AF ordered state at 1.5 K (red), the paramagnetic state at 10 K (black), and the difference (blue) that shows the magnetic contribution to the scattering. (b) Schematic view of the crystal and magnetic structures ($GxAy$) of [DyScO$_3$]{}. (c) Temperature dependence of the ordered moments. The solid line is a guide to the eye. The red dashed line indicates the AF ordering temperature at $T=3.15$ K. []{data-label="WAND"}](WAND_fit_02.pdf){width="0.9\linewidth"}
Fig. \[WAND\](a) shows the NPD pattern of a crushed single crystal of [DyScO$_3$]{} in the paramagnetic ($T=10$ K) and magnetically ordered ($T=1.5$ K) states. The refined unit cell parameters and atomic positions in the $Pbnm$ unit cell agree well with previously reported data for [DyScO$_3$]{} [@Liferovich]. The AF ordering is manifested in the appearance of magnetic Bragg reflections below $T_{\mathrm{N}}$. The symmetry analysis and Rietveld refinement reveal that the magnetic group symmetry is $Pb^{\prime}n^{\prime}m^{\prime}$ and that the magnetic moments are oriented in the $ab$-plane \[($G_xA_y$) representation\]. A schematic view of the crystal and magnetic structures of [DyScO$_3$]{} is shown in Fig. \[WAND\](b). The temperature dependence of the ordered magnetic moments is presented in Fig. \[WAND\](c). A Rietveld refinement of the 1.5 K neutron diffraction dataset reveals that the ordered moments reach $m(G_x)=4.44(3)$ [$\mu_{\rm{B}}$]{} and $m(A_y)=8.36(2)$ [$\mu_{\rm{B}}$]{} (9.47(6) [$\mu_{\rm{B}}$]{} in total). This corresponds to a canting angle of $\varphi=28^{\circ}$ from the $b$-axis, in excellent agreement with our CEF calculation and magnetization measurements, and also consistent with earlier studies [@Raekers; @Bluschke].
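The internal consistency of the refined components can be checked with simple trigonometry, and the total moment compared against the free-ion saturation value $g_J J$ of Dy$^{3+}$:

```python
import math

# Refined ordered-moment components at 1.5 K (mu_B), from the Rietveld fit
m_Gx, m_Ay = 4.44, 8.36

m_total = math.hypot(m_Gx, m_Ay)            # total ordered moment (mu_B)
phi = math.degrees(math.atan2(m_Gx, m_Ay))  # canting angle from the b-axis

# Free-ion reference: Lande g-factor for Dy3+ (L=5, S=5/2, J=15/2)
L, S, J = 5.0, 2.5, 7.5
g_J = 1 + (J * (J + 1) + S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))
m_free_ion = g_J * J                        # = 10 mu_B
```

The refined total moment of about 9.47 $\mu_{\rm{B}}$ is close to, but slightly below, the free-ion value of 10 $\mu_{\rm{B}}$, consistent with a nearly pure $\lvert\pm{15}/2\rangle$ ground doublet.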
At temperatures above the ordering transition one can also observe additional diffuse scattering at low angles in the powder patterns, see Fig. \[WAND\], indicating short range magnetic correlations and fluctuations above the phase transition. This is studied in more detail with the single crystal neutron scattering data presented below.
Magnetic dipole-dipole interaction
----------------------------------
In this section we discuss the magnetic ground state in the context of the magnetic dipole-dipole interaction. We have shown that the Dy$^{3+}$ Ising moments are in the $ab$-plane along a direction with an angle $\varphi=\pm{28}^{\circ}$ relative to the $b$-axis. According to representation analysis, four different magnetic structures ($GxAy$, $AxGy$, $FxCy$ and $CxFy$) with propagation vector $k=0$ are allowed in this case. These four structures are shown in Fig. \[dipole\](a). Neutron diffraction shows that [DyScO$_3$]{} selects the $GxAy$ configuration. Since the Dy$^{3+}$-4$f$ electrons are very localized, the 4$f$ exchange interaction is relatively weak. On the other hand, the dipole-dipole interaction is expected to be large due to the extremely large moments in the ground state ($M_{\|}\simeq{10}$ [$\mu_{\rm{B}}$]{}/Dy). Since the dipole-dipole interaction is long range in nature, for each of the four magnetic structures we consider ten near neighbors in total, including eight neighbors within distance $\sim{5.7}$ [Å]{} in the $ab$-plane, and two near neighbors at a distance of $\sim{4.0}$ [Å]{} along the $c$-axis. Further neighboring atoms have little influence. The calculated dipole-dipole energies $$\begin{aligned}
E_{\rm dipole} =-\frac{\mu_{0}}{4\pi}\sum_{\rm i}
\frac{1}{|\mathbf{r}_{\rm i}|^3}
\left[3\left(\mathbf{m}_{0}\cdot\mathbf{\hat{r}}_{\rm i}\right)
\left(\mathbf{m}_{\rm i}\cdot\mathbf{\hat{r}}_{\rm i}\right)
-\left(\mathbf{m}_{0}\cdot\mathbf{m}_{\rm i}\right)\right]\end{aligned}$$ of the four magnetic structures in zero field are $GxAy=-3.61$ K, $AxGy=-0.90$ K, $CxFy=-0.26$ K, and $FxCy=2.44$ K, as presented in Fig. \[dipole\](b).
![(a) Schematic view of the four symmetry allowed magnetic structures in [DyScO$_3$]{}. The red arrows indicate the moment directions. (b) Calculated field dependence of the dipole-dipole energy for each of the four magnetic structures, see main text. In zero field, the $GxAy$ moment configuration is calculated to have the lowest energy of about -3.4 K. For a magnetic field along the $a$-axis and $b$-axis, the $FxCy$ and $CxFy$ configurations will be a new ground state, with critical field 2 T and 0.6 T, respectively.[]{data-label="dipole"}](DyScO3_di.pdf){width="0.7\linewidth"}
As we can see, the $GxAy$ spin configuration has the lowest zero field energy compared with the non-ordered paramagnetic state and the other spin structures. Thus, at zero field the system adopts the $GxAy$ ground state, and the AF ordering temperature $T_{\mathrm N}=3.2$ K is consistent with the energy scale estimated from the dipole-dipole interaction. We also note that the $GxAy$ and $AxGy$ configurations would be degenerate if there were no interactions in the $ab$-plane. This means that the in-plane interaction lifts the degeneracy and selects the ground state spin configuration. In turn, the in-plane dipolar interaction depends on the relative tilting angle $\varphi$ of the Ising moments [@Kappatsch]. A larger value of $\varphi$ would switch the ground state from $GxAy$ to $AxGy$, as in the case of the isostructural compound YbAlO$_3$ [@Radha81]. Also, if we apply a magnetic field above the critical value along either the $a$-axis ($B_{\mathrm{crit}}\sim2$ T) or the $b$-axis ($B_{\mathrm{crit}}\sim0.6$ T), either $FxCy$ or $CxFy$ becomes the new ground state, stabilized by the Zeeman energy, $$\begin{aligned}
E_{\rm{Zeeman}} =-\sum_{\rm i} \mathbf{B}\cdot \mathbf{M}_{\rm{i}}/z,\end{aligned}$$ where $z$ is the number of ions summed over in the magnetic unit cell. The total field dependent energy of the system is then the sum of the dipole and Zeeman terms: $$\begin{aligned}
E =E_{\rm{dipole}}+E_{\rm{Zeeman}},\end{aligned}$$ as seen in Fig. \[dipole\](b). Since the only difference between $GxAy$ and $CxFy$ (or $AxGy$ and $FxCy$) is the spin arrangement along the $c$-axis, the critical fields give a rough estimate of the interactions along the $c$-axis, which close the energy gap between the $GxAy$ and $CxFy$ (or $AxGy$ and $FxCy$) states.
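The critical fields follow from balancing the zero-field dipolar energy gap against the Zeeman gain of the ferromagnetic-component structures. The sketch below uses the per-ion dipolar energies quoted in the text and assumes fully polarized 10 $\mu_{\rm{B}}$ moments tilted $\pm{28}^{\circ}$ from the $b$-axis; only the net ferromagnetic component couples to the field.

```python
import math

MU_B = 9.274e-24   # Bohr magneton (J/T)
K_B = 1.381e-23    # Boltzmann constant (J/K)

# Zero-field dipole-dipole energies per ion (K), from the text
E_GxAy, E_CxFy, E_FxCy = -3.61, -0.26, 2.44

m, phi = 10.0, math.radians(28.0)   # moment (mu_B) and tilt from the b-axis

# Net ferromagnetic moment per ion of the field-polarized structures
m_b = m * math.cos(phi)             # CxFy: F component along b
m_a = m * math.sin(phi)             # FxCy: F component along a

# Field at which the Zeeman gain closes the dipolar energy gap
B_crit_b = (E_CxFy - E_GxAy) * K_B / (m_b * MU_B)
B_crit_a = (E_FxCy - E_GxAy) * K_B / (m_a * MU_B)
```

This crude estimate gives $B_{\mathrm{crit}}\approx{0.6}$ T along $b$ and $\approx{1.9}$ T along $a$, in line with the values quoted in the text.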
It is also important to note that the Dy-Dy dipolar interaction between two nearest neighbors along the $c$-direction, $E_{\mathrm{dipole}}^{c}\simeq{-0.82}$ K, is antiferromagnetic. It is about 2-4 times larger than the corresponding interactions in the $ab$-plane, which are ferromagnetic, $E_{\mathrm{dipole}}^{ab}\simeq{+0.45}$ K and $E_{\mathrm{dipole}}^{ab}\simeq{+0.21}$ K (corresponding to near neighbor distances of $3.79$ [Å]{} and $4.05$ [Å]{} in the plane).
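The scale of these pair energies can be checked directly against the dipole formula. The sketch below evaluates an idealized antiparallel pair of 10 $\mu_{\rm{B}}$ moments perpendicular to a $\sim{4.0}$ [Å]{} bond, a caricature of the $c$-axis pair that ignores the canting of the two sublattices; it reproduces the $\sim{1}$ K scale (the exact $-0.82$ K value depends on the relative canting of the neighbors).

```python
import numpy as np

MU0_OVER_4PI = 1e-7   # mu_0 / 4 pi (T*m/A)
MU_B = 9.274e-24      # Bohr magneton (J/T)
K_B = 1.381e-23       # Boltzmann constant (J/K)

def dipole_energy_K(m1, m2, r_vec):
    """Dipole-dipole energy (in K) of two moments m1, m2 (in mu_B)
    separated by the vector r_vec (in metres)."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    m1 = np.asarray(m1, dtype=float) * MU_B
    m2 = np.asarray(m2, dtype=float) * MU_B
    E = -MU0_OVER_4PI / r**3 * (3 * (m1 @ rhat) * (m2 @ rhat) - m1 @ m2)
    return E / K_B

# Antiparallel in-plane moments, bond along c (~4.0 Angstrom)
E_c = dipole_energy_K([10, 0, 0], [-10, 0, 0], np.array([0, 0, 4.0e-10]))
```

For moments perpendicular to the bond the longitudinal term vanishes and the antiparallel arrangement is favored, which is why the $c$-axis coupling comes out antiferromagnetic.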
Magnetic diffuse scattering {#sec_diffuse}
---------------------------
![(a)-(e) Contour plots of the measured magnetic neutron diffuse scattering of [DyScO$_3$]{} in the $(H,0,L)$ plane, integrated over the energy window $E=[-0.1, 0.1]$ meV, at different temperatures 1.7 K (a), 3 K (b), 4 K (c), 10 K (d) and 30 K (e). (f) Calculated magnetic scattering factor including the absorption correction, the polarization factor and the magnetic form factor of the Dy$^{3+}$ ion. Here, r. l. u. stands for reciprocal lattice units.[]{data-label="diffuse"}](Contour_01.pdf){width="1.0\linewidth"}
Contour maps of the magnetic neutron scattering in the $(H,0,L)$ plane at different temperatures, above and below $T_{\mathrm{N}}$, show broad diffuse peaks near the magnetic wave vectors $Q=(0,0,1)$ and $(\pm{1},0,1)$, which are smeared out with increasing temperature (Fig. \[diffuse\]). We have calculated the magnetic scattering factor as: $$\begin{gathered}
\label{Scattering_factor}
\int dE {\;} S(Q,E)\propto T(Q)\cdot|f(Q)|^{2}\cdot{S(Q)}\\
\times\left(\delta_{\alpha\beta}-\widetilde{Q}_{\alpha}\widetilde{Q}_{\beta}\right){\;}.\end{gathered}$$ Here, $T(Q)$ is the transmission for the [DyScO$_3$]{} sample, $|f(Q)|^{2}$ is the magnetic form factor of Dy$^{3+}$, $S(Q)$ is the magnetic structure factor, and $\left(\delta_{\alpha\beta}-\widetilde{Q}_{\alpha}\widetilde{Q}_{\beta}\right)$ is the polarization factor. As discussed earlier, the transverse moment component is very small, $M^2_{\perp}/M^2_{\|}\simeq{0.0025}$. Therefore, we can safely neglect the fluctuations along the transverse direction. Since neutrons scatter only from the component of the magnetic moment that is perpendicular to the wave vector $\mathbf{Q}$, in the present case the calculated magnetic polarization factor is $$\label{polarization_factor_equation1}
\delta_{\alpha\beta}-\widetilde{Q}_{\alpha}\widetilde{Q}_{\beta}=
1-\frac{Q^2_{\rm{h}}\cdot\sin^{2}\varphi}{Q^2_{\rm{h}}+Q^2_{\rm{l}}}=
\frac{Q^2_{\rm h}\cdot\cos^{2}\varphi+Q^2_{\rm l}}{Q^2_{\rm h}+Q^2_{\rm l}}{\;},$$ where $\varphi=\pm{28}^{\circ}$, $Q_{\rm{h}}=2\pi h/a$ and $Q_{\rm{l}}=2\pi l/c$. The neutron transmission factor was explicitly included because of the strong absorption by Dy and Sc. The calculated 1/e absorption length for a 3.32 meV neutron is only about 0.215 mm [@Patra]. The calculated magnetic scattering factor (\[Scattering\_factor\]) is shown in Fig. \[diffuse\](f). Good agreement was found with the data taken at 10 K (Fig. \[diffuse\](d)) and 30 K (Fig. \[diffuse\](e)). The dip-like feature passing through the wave vectors $Q=(\pm1,0,1)$ is due to strong neutron absorption along the plate-shaped sample. We have adopted the lattice Lorentzian function [@Igor2015] to describe the magnetic structure factor: $$\begin{gathered}
\label{lattice_lorentzian}
S(Q)\propto\frac{\sinh(c/\xi_{l})}{\cosh(c/\xi_{l})-\cos(\pi(l-1))}\\
\times\frac{\sinh(a/\xi_{h})}{\cosh(a/\xi_{h})-\cos(\pi{h})}.\end{gathered}$$ Here, $\xi_{l}$ and $\xi_{h}$ are the correlation lengths along the $c$ and $a$-axes, in units of Å.
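The polarization factor and the lattice Lorentzian can be combined into a toy model of the diffuse map. In the sketch below the lattice parameters come from the refinement and $\varphi=28^{\circ}$ from the analysis above, while the correlation lengths are illustrative values; the transmission and magnetic form factors included in the full calculation are omitted here.

```python
import numpy as np

a, c = 5.4136, 7.8515      # lattice parameters (Angstrom), 10 K refinement
phi = np.radians(28.0)     # Ising-axis tilt from the b-axis

def polarization_factor(h, l):
    """Polarization factor for longitudinal Ising fluctuations in (H,0,L)."""
    Qh2 = (2 * np.pi * h / a) ** 2
    Ql2 = (2 * np.pi * l / c) ** 2
    denom = np.where(Qh2 + Ql2 > 0, Qh2 + Ql2, 1.0)  # guard Q = 0
    return (Qh2 * np.cos(phi) ** 2 + Ql2) / denom

def lattice_lorentzian(h, l, xi_h=10.0, xi_l=20.0):
    """Short-range-order structure factor peaked at (0,0,1)-type positions."""
    Sl = np.sinh(c / xi_l) / (np.cosh(c / xi_l) - np.cos(np.pi * (l - 1)))
    Sh = np.sinh(a / xi_h) / (np.cosh(a / xi_h) - np.cos(np.pi * h))
    return Sl * Sh

# Model diffuse intensity on an (H,0,L) grid
H = np.linspace(-1.5, 1.5, 61)
Lgrid = np.linspace(-0.5, 2.5, 61)
Hm, Lm = np.meshgrid(H, Lgrid)
I_model = polarization_factor(Hm, Lm) * lattice_lorentzian(Hm, Lm)
```

The polarization factor equals one along $(0,0,L)$ and drops to $\cos^{2}\varphi$ along $(H,0,0)$, which is the anisotropic envelope visible in Fig. \[diffuse\](f).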
We show constant energy cuts along the wave vectors $Q_{\rm{h}}$ and $Q_{\rm{l}}$, integrated over $E=[-0.1,0.1]$ meV, $Q_{\rm{l}}=[0.9,1.1]$, and $Q_{\rm{k}}=[-0.2,0.2]$, in Fig. \[Correlation\](a) and (b). The fits of the lattice Lorentzian function are shown as solid lines. The temperature dependent correlation lengths from the fits are shown in Fig. \[Correlation\](c). At temperatures of 30 K and 10 K both correlation lengths, along $Q_{\rm{h}}$ and $Q_{\rm{l}}$, are short, much less than one lattice unit. However, on approaching the magnetic ordering temperature, significant correlations build up already at 4 K along both directions, and they are greatly enhanced below the ordering temperature. A slight anisotropy was observed between the wave vectors $Q_{\rm{h}}$ and $Q_{\rm{l}}$: the correlation length along $Q_{\rm{l}}$ is always larger than that along $Q_{\rm{h}}$, as expected from the estimated ratio of the dipolar interactions $E_{\mathrm{dipole}}^{c} / E_{\mathrm{dipole}}^{ab}$.
![(a) Cut along wave vector $Q_{\rm{h}}$, integrated over $E=[-0.1,0.1]$ meV, $Q_{\rm{l}}=[0.9,1.1]$, and $Q_{\rm{k}}=[-0.2,0.2]$, measured at different temperatures as indicated. (b) Cut along wave vector $Q_{\rm{l}}$, integrated over $E=[-0.1,0.1]$ meV, $Q_{\rm{h}}=[-0.1,0.1]$, and $Q_{\rm{k}}=[-0.2,0.2]$, measured at different temperatures as indicated. The instrumental resolution (gray bar) is estimated from a cut through the pure nuclear (002) peak (gray solid circles). (c) The temperature dependent correlation length from the fit of the lattice Lorentzian functions, as explained in the text. The red dashed line indicates the AF ordering temperature.[]{data-label="Correlation"}](Correlation_fit_final.pdf){width="0.7\linewidth"}
Discussion and Conclusion
=========================
In summary, the low temperature magnetic properties of [DyScO$_3$]{} have been well characterized through a combination of CEF calculations, magnetization measurements, and powder and single crystal neutron scattering. All our results are consistent with an Ising Dy$^{3+}$ ground state doublet with wave function $\lvert\pm{15}/2\rangle$, where the local easy axis makes an angle of $\varphi=28^{\circ}$ with the $b$-axis. The Dy$^{3+}$ Ising moments order magnetically at low temperatures ($T_{\rm{N}}=3.15(5)$ K), and the AF magnetic ground state $GxAy$ is selected mainly by the dipole-dipole interaction.
Strong magnetic diffuse scattering is observed at temperatures up to 30 K, which indicates the persistence of strong critical fluctuations over a wide temperature range. This is rather surprising: given that the Ising-like moments are rather large, $M_{\|}\simeq{10}$ [$\mu_{\rm{B}}$]{}, one would usually expect a clean mean-field-like transition with very weak critical fluctuations [@Wu2011]. Critical fluctuations could be enhanced by low dimensionality, as seen in other rare-earth based one-dimensional systems [@Wu2016; @Miiller].
No inelastic excitations at all have been observed in [DyScO$_3$]{} below the first excited CEF level. To understand this puzzling observation, one may go back to the ground state wave function. First, one notes that any inelastic scattering due to transverse fluctuations should be strongly suppressed: because of the strong Ising-like single-ion anisotropy, which can be quantified as $M_{\parallel}/M_{\perp}=\langle{E}_{\rm{0}\pm}|J_{\rm{z'}}|{E}_{\rm{0}\pm}\rangle/
\langle{E}_{\rm{0}\pm}|J_{\rm{y'}}|{E}_{\rm{0}\pm}\rangle\simeq{20}$, the neutron scattering intensity due to transverse fluctuations is expected to be about two orders of magnitude smaller than the scattering intensity from longitudinal fluctuations, $M^2_{\parallel}/M^{2}_{\perp}\simeq{400}$, at temperatures and fields smaller than the first excited CEF level $\Delta_{1}$. Thus, any fluctuations along the transverse direction, such as spin waves, would be extremely weak. One would then expect to see strong fluctuations along the longitudinal directions from the Dy$^{3+}$ Ising moments. In the most general case the ground state wave function can be expressed as $$\begin{gathered}
\label{E0_general}
E_{\rm{0}\pm}=\alpha\lvert\pm{15}/2\rangle
+\beta\lvert\pm{13}/2\rangle
+\gamma\lvert\pm{11}/2\rangle\\
+\delta\lvert\pm{7}/2\rangle
+\epsilon\lvert\pm{5}/2\rangle
+\zeta\lvert\pm{3}/2\rangle
+\eta\lvert\pm{1}/2\rangle\\
+\alpha'\lvert\mp{15}/2\rangle
+\beta'\lvert\mp{13}/2\rangle
+\gamma'\lvert\mp{11}/2\rangle\\
+\delta'\lvert\mp{7}/2\rangle
+\epsilon'\lvert\mp{5}/2\rangle
+\zeta'\lvert\mp{3}/2\rangle
+\eta'\lvert\mp{1}/2\rangle{\;}.\end{gathered}$$ Comparing with the calculated ground state wave function above, we see that most of these contributions vanish ($\eta=\alpha'=\beta'=\gamma'=\delta'=\epsilon'=\zeta'=\eta'=0$) due to the constraint of the point site symmetry. Therefore, any Hamiltonian matrix element that connects the ’up’ and ’down’ states of the moment, $\langle{E}_{\rm{0}\mp}|S^{+}, S^{-}|{E}_{\rm{0}\pm}\rangle=\alpha\beta'+\alpha'\beta+...=0$, vanishes. This indicates that neutrons can neither flip the Dy$^{3+}$ ground state Ising moments nor propagate them, due to the $\Delta{S}=1$ selection rule. In other words, in the case of [DyScO$_3$]{}, possible spinon excitations along the longitudinal directions are ’hidden’ from neutrons. It is interesting to note that a splitting of the ground state was observed in the optical absorption spectra of the related compound DyAlO$_3$ [@Schuchert], which was attributed to one-dimensional interactions along the $c$-axis. Further studies using other direct or indirect techniques, such as ac-susceptibility or optical absorption, for which such selection rules do not apply, would be needed to reveal the ’hidden’ low-energy dynamics.
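The vanishing of the spin-flip matrix elements can be verified explicitly with angular-momentum ladder operators for $J=15/2$, taking the ground doublet as pure $\lvert\pm{15}/2\rangle$ (its dominant component in the CEF calculation):

```python
import numpy as np

J = 15 / 2
m = np.arange(-J, J + 1)    # Jz eigenvalues, -15/2 ... +15/2
dim = m.size                # 16 states in the multiplet

# Raising operator in the |J, m> basis: J+|m> = sqrt(J(J+1) - m(m+1)) |m+1>
Jp = np.zeros((dim, dim))
for i in range(dim - 1):
    Jp[i + 1, i] = np.sqrt(J * (J + 1) - m[i] * (m[i] + 1))
Jm = Jp.T                   # lowering operator

up = np.zeros(dim)          # |+15/2>
up[-1] = 1.0
down = np.zeros(dim)        # |-15/2>
down[0] = 1.0

# Spin-flip matrix elements between the two doublet components: both vanish,
# since J+/- change m by one while the components differ by Delta m = 15
flip_p = down @ Jp @ up
flip_m = down @ Jm @ up

# An allowed CEF transition element for comparison, <+15/2|J+|+13/2>
allowed = up @ Jp @ np.eye(dim)[-2]
```

A single application of $J^{\pm}$ changes $m$ by one, so connecting $m=+15/2$ to $m=-15/2$ is impossible, whereas the ground-to-first-excited matrix element (dominated by $\lvert\pm{13}/2\rangle$) is large.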
We thank J. M. Sheng and Q. Zhang for the help with the neutron data refinement. We would like to thank A. Christianson, I. Zaliznyak, M. Mourigal, Z. T. Wang, and C. Batista for useful discussions. The research at the Spallation Neutron Source (ORNL) is supported by the Scientific User Facilities Division, Office of Basic Energy Sciences, U.S. Department of Energy (DOE). Research supported in part by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy. This work was partly supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Science and Engineering Division.
\[append\]
Point charge model calculations
===============================
A point charge model treats crystalline electric field effects as a perturbation to the appropriate free-ion $4f$ wave functions and energy levels, and the perturbing crystalline potential energy may be written as:
$$\begin{aligned}
\label{MvsB}
W_c = \sum_{i}q_{i}V_{i} = \sum_{i}\sum_{j} \frac{q_{i}q_{j}}{\lvert R_{j}-r_{i} \rvert}\end{aligned}$$
where $q_{i}$ and $q_{j}$ are the charges of the magnetic and ligand ions, $\sum_{j}$ is the sum over all neighboring charges [@Hutchings], and $\lvert R_{j}-r_{i} \rvert$ is the distance between the magnetic ion and the ligand charge. The calculated first excited state is
$$\begin{gathered}
\label{E1}
\lvert{E}_{\pm}\rangle_{1}=-0.09\lvert\pm{15}/2\rangle
\mp{0.976}\lvert\pm{13}/2\rangle\\
+0.191\lvert\pm 11/2\rangle
\pm{0.017}\lvert\pm{9}/2\rangle
+0.029\lvert\pm{7}/2\rangle\\
\mp{0.011}\lvert\pm{5}/2\rangle
+0.038\lvert\pm {3}/2\rangle
\mp{0.007}\lvert\pm {1}/2\rangle\\
+0.018\lvert\mp{1}/2\rangle
\mp{0.003}\lvert\mp{3}/2\rangle
+0.009\lvert\mp{5}/2\rangle\\
+0.003\lvert\mp{9}/2\rangle
\pm{0.002}\lvert\mp {11}/2\rangle{\;}.\end{gathered}$$
The calculated CEF parameters from the point charge model and the resulting energy levels are shown in Tables \[Blm\_table\] and \[tab:CEF\] below, respectively.
[ l @ @c ]{} $B_2^0$ & $-4.35\times10^{-1}$\
$B_2^1$ & $0.45\times10^{-1}$\
$B_2^2$ & $-4.03\times10^{-1}$\
$B_4^0$ & $-0.22\times10^{-3}$\
$B_4^1$ & $-0.14\times10^{-3}$\
$B_4^2$ & $3.2\times10^{-3}$\
$B_4^3$ & $2.5\times10^{-3}$\
$B_4^4$ & $-1.0\times10^{-3}$\
$B_6^0$ & $0.02\times10^{-5}$\
$B_6^1$ & $-0.32\times10^{-5}$\
$B_6^2$ & $0.08\times10^{-5}$\
$B_6^3$ & $-1.13\times10^{-5}$\
$B_6^4$ & $0.16\times10^{-5}$\
$B_6^5$ & $3.86\times10^{-5}$\
$B_6^6$ & $0.32\times10^{-5}$\
$E$ (meV) $\langle n|\mathrm{J}_{\perp}|m\rangle^{2}$
--------------------------------------- ----------- --------------------------------
$|E_0\rangle \rightarrow |E_1\rangle$ 22.9 8.2
$|E_0\rangle \rightarrow |E_2\rangle$ 37.6 0.1
$|E_0\rangle \rightarrow |E_3\rangle$ 43.1 0.02
$|E_0\rangle \rightarrow |E_4\rangle$ 52.2 0.05
$|E_0\rangle \rightarrow |E_5\rangle$ 63.3 0.02
$|E_0\rangle \rightarrow |E_6\rangle$ 79.7 0
$|E_0\rangle \rightarrow |E_7\rangle$ 98.1 0
: The energy levels and out of ground state transition probabilities. \[tab:CEF\]
[^1]: Corresponding author. Electronic address: [email protected]
---
author:
- |
Jason P. Byrne$^{1}$, Shane A. Maloney$^{1}$, R. T. James McAteer$^{1}$,\
    Jose M. Refojo$^{2}$ and Peter T. Gallagher$^{1}$$^{\ast}$
title: 'Propagation of an Earth-directed coronal mass ejection in three dimensions'
---
Solar coronal mass ejections (CMEs) are the most significant drivers of adverse space weather at Earth, but the physics governing their propagation through the heliosphere is not well understood. While stereoscopic imaging of CMEs with the Solar Terrestrial Relations Observatory (STEREO) has provided some insight into their three-dimensional (3D) propagation, the mechanisms governing their evolution remain unclear due to difficulties in reconstructing their true 3D structure. Here we use a new elliptical tie-pointing technique to reconstruct a full CME front in 3D, enabling us to quantify its deflected trajectory from high latitudes along the ecliptic, and measure its increasing angular width and propagation from 2–46 R$_{\odot}$ ($\sim$0.2 AU). Beyond 7 R$_{\odot}$, we show that its motion is determined by aerodynamic drag in the solar wind and, using our reconstruction as input for a 3D magnetohydrodynamic simulation, we determine an accurate arrival time at the L1 point near Earth.
{#section .unnumbered}
CMEs are spectacular eruptions of plasma and magnetic field from the surface of the Sun into the heliosphere. Travelling at speeds of up to 2,500 km s$^{-1}$ and with masses of up to 10$^{16}$ g, they are recognised as drivers of geomagnetic disturbances and adverse space weather at Earth and other planets in the solar system$^{1,2}$. Impacting our magnetosphere with average magnetic field strengths of 13 nT and energies of $\sim$10$^{25}$ J, they can cause telecommunication and GPS errors, power grid failures, and increased radiation risks to astronauts$^{3}$. It is therefore important to understand the forces that determine their evolution, in order to better forecast their arrival time and impact at Earth and throughout the heliosphere.
Identifying the specific processes that trigger the eruption of CMEs is the subject of much debate, and many different models exist to explain these$^{4-6}$. One common feature is that magnetic reconnection is responsible for the destabilisation of magnetic flux ropes on the Sun, which then erupt through the corona into the solar wind to form CMEs$^{7}$. In the low solar atmosphere, it is postulated that high-latitude CMEs undergo deflection since they are often observed at different position angles than their associated source region locations$^{8}$. It has been suggested that field lines from polar coronal holes may guide high-latitude CMEs towards the equator$^{9}$, or that the initial magnetic polarity of a flux rope relative to the background magnetic field influences its trajectory$^{10,11}$. During this early phase, CMEs are observed to expand outwards from their launch site, though plane-of-sky measurements of their increasing sizes and angular widths are ambiguous in this regard$^{12}$. This expansion has been modelled as being due to a pressure gradient between the flux rope and the background solar wind$^{13,14}$. At larger distances in their propagation, CMEs are expected to interact with the solar wind and the interplanetary magnetic field. Studies that compare in-situ CME velocity measurements with initial eruption speeds through the corona show that slow CMEs are accelerated toward the speed of the solar wind, and fast CMEs decelerated$^{15,16}$. It has been suggested that this is due to the effects of drag acting on the CME in the solar wind$^{17,18}$. However, the quantification of drag along with that of both CME expansion and non-radial motion is currently lacking, due primarily to the limits of observations from single fixed viewpoints with restricted fields-of-view. 
The projected 2D nature of these images introduces uncertainties in kinematical and morphological analyses, and therefore the true 3D geometry and dynamics of CMEs have been difficult to resolve. Efforts were made to infer 3D structure from 2D images recorded by the Large Angle Spectrometric Coronagraph (LASCO) on board the Solar and Heliospheric Observatory (SOHO), situated at the first Lagrangian L1 point. These efforts were based upon either a pre-assumed geometry of the CME$^{19,20,21}$ or a comparison of observations with in-situ and on-disk data$^{22,23}$. Of note is the polarisation technique used to reconstruct the 3D geometry of CMEs in LASCO data$^{24}$, though this is only valid for heights of up to 5 R$_{\odot}$ (1 R$_{\odot}$ = 6.95$\times$10$^{5}$ km).
Recently, new methods to track CMEs in 3D have been developed for the STEREO mission$^{25}$. Launched in 2006, STEREO comprises two near-identical spacecraft in heliocentric orbits ahead and behind the Earth, which drift away from the Sun-Earth line at a rate of $\pm$22$^{\circ}$ per year. This provides a unique twin perspective of the Sun and inner heliosphere, and enables the implementation of a variety of methods for studying CMEs in 3D$^{26}$. Many of these techniques are applied within the context of an epipolar geometry$^{27}$. One such technique consists of tie-pointing lines-of-sight across epipolar planes, and is best for resolving a single feature such as a coronal loop on-disk$^{28}$. Under the assumption that the same feature may be tracked in coronagraph images, many CME studies have also employed tie-pointing techniques with the COR1 and COR2 coronagraphs of the Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI$^{29}$) aboard STEREO$^{30-32}$. The additional use of SECCHI’s Heliospheric Imagers (HI1/2) allows a study of CMEs out to distances of 1 astronomical unit (1 AU = 149.6$\times$10$^{6}$ km); however, a 3D analysis can only be carried out if the CME propagates along a trajectory between the two spacecraft so that it is observed by both HI instruments. Otherwise, assumptions of its trajectory have to be inferred from either its association with a source region on-disk$^{33}$ or its trajectory through the COR data$^{15}$, or derived by assuming a constant velocity through the HI fields-of-view$^{34}$. Triangulation of CME features using time-stacked intensity slices at a fixed latitude, named ‘J-maps’ due to the characteristic propagation signature of a CME, has also been developed$^{35,36}$. This technique is hindered by the same limitation of standard tie-pointing techniques, namely that the curvature of the feature is not considered, and the intersection of sight-lines may not occur upon the surface of the observed feature.
Alternatively, forward modelling of a 3D flux rope based upon a graduated cylindrical shell model may be applied to STEREO observations$^{37}$. Some of the parameters governing the model shape and orientation may be changed by the user to best fit the twin observations simultaneously, though the assumed flux rope geometry is not always appropriate. We have developed a new 3D triangulation technique that overcomes the limitations of previous methods by considering the curvature of the CME front in the data. This acts as a necessary third constraint on the reconstruction of the CME front from the combined observations of the twin STEREO spacecraft. Applying this to every image in the sequence enables us to investigate the changing dynamics and morphology of the CME as it propagates from the Sun into interplanetary space.
Results {#results .unnumbered}
-------
On 12 Dec. 2008 an erupting prominence was observed by STEREO while the spacecraft were in near quadrature at 86.7$^{\circ}$ separation (Fig. 1a). The eruption is visible at 50–55$^{\circ}$ north from 03:00 UT in SECCHI/Extreme Ultraviolet Imager (EUVI) images, obtained in the 304 Å passband, in the north-east from the perspective of STEREO-(A)head and off the north-west limb from STEREO-(B)ehind. The prominence is considered to be the inner material of the CME which was first observed in COR1-B at 05:35 UT (Fig. 1b). For our analysis, we use the two coronagraphs (COR1/2) and the inner Heliospheric Imagers (HI1) (Fig. 1c). We characterise the propagation of the CME across the plane-of-sky by fitting an ellipse to the front of the CME in each image$^{38}$ (Supplementary Movie 1). This ellipse fitting is applied to the leading edges of the CME but equal weight is given to the CME flank edges as they enter the field-of-view of each instrument. The 3D reconstruction is then performed using a method of elliptical tie-pointing within epipolar planes containing the two STEREO spacecraft, illustrated in Fig. 2 (see Methods).

**Non-radial CME motion.** It is immediately evident from the reconstruction in Fig. 2c (and Supplementary Movie 2) that the CME propagates non-radially away from the Sun. The CME flanks change from an initial latitude span of 16–46$^{\circ}$ to finally span approximately $\pm$30$^{\circ}$ of the ecliptic (Fig. 3b). The mean declination, $\theta$, of the CME is well fitted by a power-law of the form $\theta(r)=\theta_{0}r^{-0.92}~(2~$R$_{\odot}<r<46~$R$_{\odot})$ as a result of this non-radial propagation. Tie-pointing the prominence apex and fitting a power-law to its declination angle results in $\theta^{prom}(r)=\theta_{0}^{prom}r^{-0.82}~(1~$R$_{\odot}<r<3~$R$_{\odot})$, implying a source latitude of $\theta_{0}^{prom}$(1 R$_{\odot}$) $\approx$ 54$^{\circ}$ N in agreement with EUVI observations.
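These power-law fits (e.g. $\theta(r)=\theta_{0}r^{-0.92}$) reduce to linear least squares in log–log space. A minimal sketch, assuming only that the fit is an ordinary least-squares regression (the paper does not state which fitting routine it used):

```python
import numpy as np

def fit_power_law(r, theta):
    """Fit theta = theta0 * r**b by linear least squares in log-log space.

    r, theta : positive arrays, e.g. heliocentric distance in R_sun and
    mean declination in degrees.
    """
    b, log_theta0 = np.polyfit(np.log(r), np.log(theta), 1)
    return np.exp(log_theta0), b
```

Evaluating the fitted law at $r=1$ R$_{\odot}$ then recovers an inferred source latitude, as done for the prominence apex above.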
Previous statistics on CME position angles have shown that, during solar minimum, they tend to be offset closer to the equator as compared to those of the associated prominence eruption$^{39}$. The non-radial motion we quantify here may be evidence of the drawn-out magnetic dipole field of the Sun, an effect predicted at solar minimum due to the influence of the solar wind pressure$^{40,41}$. Other possible influences include changes to the internal current of the magnetic flux rope$^{11}$, or the orientation of the magnetic flux rope with respect to the background field$^{10}$, whereby magnetic pressure can act asymmetrically to deflect the flux rope pole-ward or equator-ward depending on the field configurations.

**CME angular width expansion.** Over the height range 2–46 R$_{\odot}$ the CME angular width ($\Delta\theta=\theta_{max}-\theta_{min}$) increases from $\sim$30$^{\circ}$ to $\sim$60$^{\circ}$ with a power-law of the form $\Delta\theta(r)=\Delta\theta_{0}r^{0.22}$ $(2~$R$_{\odot}<r<46~$R$_{\odot})$ (Fig. 3c). This angular expansion is evidence for an initial overpressure of the CME relative to the surrounding corona (coincident with its early acceleration inset in Fig. 3a). The expansion then tends to a constant during the later drag phase of CME propagation, as it expands to maintain pressure balance with heliocentric distance. It is theorised that the expansion may be attributed to two types of kinematic evolution, namely spherical expansion due to simple convection with the ambient solar wind in a diverging geometry, and expansion due to a pressure gradient between the flux rope and solar wind$^{13}$. It is also noted that the southern portions of the CME manifest the bulk of this expansion below the ecliptic (best observed by comparing the relatively constant ‘Midtop of Front’ measurements with the more consistently decreasing ‘Midbottom of Front’ measurements in Fig. 3b).
Inspection of a Wang-Sheeley-Arge (WSA) solar wind model run$^{42}$ reveals higher speed solar wind flows ($\sim$650 km s$^{-1}$) emanating from open-field regions at high/low latitudes (approximately 30$^{\circ}$ north/south of the solar equator). Once the initial prominence/CME eruption occurs and is deflected into a non-radial trajectory, it undergoes asymmetric expansion in the solar wind. It is prevented from expanding upwards into the open-field high-speed stream at higher latitudes, and the high internal pressure of the CME relative to the slower solar wind near the ecliptic accounts for its expansion predominantly to the south. In addition, the northern portions of the CME attain greater distances from the Sun than the southern portions as a result of this propagation in varying solar wind speeds, an effect predicted to occur in previous hydrodynamic models$^{14}$.

**CME drag in the inner heliosphere.** Investigating the midpoint kinematics of the CME front, we find the velocity profile increases from approximately 100–300 km s$^{-1}$ over the first 2–5 R$_{\odot}$, before rising more gradually to a scatter between 400–550 km s$^{-1}$ as it propagates outward (Fig. 3a). The acceleration peaks at approximately 100 m s$^{-2}$ at a height of $\sim$3 R$_{\odot}$, then decreases to scatter about zero. This early phase is generally attributed to the Lorentz force whereby the dominant outward magnetic pressure overcomes the internal and/or external magnetic field tension. The subsequent increase in velocity, at heights above $\sim$7 R$_{\odot}$ for this event, is predicted by theory to result from the effects of drag$^{17}$, as the CME is influenced by the solar wind flows of $\sim$550 km s$^{-1}$ emanating from latitudes $\gtrsim$$\pm$5$^{\circ}$ of the ecliptic (again from inspection of the WSA model). At large distances from the Sun, during this postulated drag-dominated epoch of CME propagation, the equation of motion can be cast in the form: $$\begin{aligned}
\label{drag}
M_{cme} \frac{d v_{cme}}{d t}&=&-\frac{1}{2} \rho_{sw} ( v_{cme} - v_{sw} ) | v_{cme} - v_{sw} | A_{cme} C_{D}\end{aligned}$$ This describes a CME of velocity $v_{cme}$, mass $M_{cme}$, and cross-sectional area $A_{cme}$ propagating through a solar wind flow of velocity $v_{sw}$ and density $\rho_{sw}$. The drag coefficient, $C_D$, is found to be of the order of unity for typical CME geometries$^{18}$, while the density and area are expected to vary as power-law functions of distance $r$. Thus, we parameterise the density and geometric variation of the CME and solar wind using a power-law$^{43}$ to obtain: $$\begin{aligned}
\label{pdrag}
\frac{d v_{cme}}{d r} &=& -\alpha r^{-\beta} \frac{1}{v_{cme}}\left ( v_{sw} - v_{cme} \right )^\gamma\end{aligned}$$ where $\gamma$ describes the drag regime, which can be either viscous ($\gamma$ = 1) or aerodynamic ($\gamma$ = 2), and $\alpha$ and $\beta$ are constants primarily related to the cross-sectional area of the CME and the density ratio of the solar wind flow to the CME ($\rho_{sw}/\rho_{cme}$). The solar wind velocity is estimated from an empirical model$^{44}$. We determine a theoretical estimate of the CME velocity as a function of distance by numerically integrating equation (\[pdrag\]) using a 4th order Runge-Kutta scheme and fitting the result to the observed velocities from $\sim$7–46 R$_{\odot}$. The initial CME height, CME velocity, asymptotic solar wind speed, and $\alpha$, $\beta$, and $\gamma$ are obtained from a bootstrapping procedure which provides a final best-fit to the observations and confidence intervals for the parameters (see Methods). Best-fit values for $\alpha$ and $\beta$ were found to be (4.55$^{+2.30}_{-3.27}$)$\times$10$^{-5}$ and -2.02$^{+1.21}_{-0.95}$ which agree with values found in previous modelling work$^{44}$. The best-fit value for the exponent of the velocity difference between the CME and the solar wind, $\gamma$, was found to be 2.27$^{+0.23}_{-0.30}$, which is clear evidence that aerodynamic drag ($\gamma$ = 2) acts during the propagation of the CME in interplanetary space.
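The forward-integration step can be sketched as follows. The sign-preserving form of the right-hand side (drag always drives $v_{cme}$ toward $v_{sw}$ for any $\gamma$), the constant ambient wind speed, and the default parameter values are illustrative assumptions, not the paper's fitting procedure:

```python
import numpy as np

def dvdr(r, v, alpha=4.55e-5, beta=-2.02, gamma=2.0, v_sw=550.0):
    """Right-hand side of equation (2); r in R_sun, v in km/s.

    Written sign-preservingly so that drag always pushes the CME velocity
    toward the solar-wind speed, even for non-integer gamma.
    """
    dv = v_sw - v
    return alpha * r ** (-beta) * np.sign(dv) * np.abs(dv) ** gamma / v

def rk4_velocity(r0, r1, v0, n=1000, **pars):
    """4th-order Runge-Kutta integration of dv/dr from r0 to r1 (R_sun)."""
    h = (r1 - r0) / n
    r, v = r0, v0
    for _ in range(n):
        k1 = dvdr(r, v, **pars)
        k2 = dvdr(r + h / 2, v + h * k1 / 2, **pars)
        k3 = dvdr(r + h / 2, v + h * k2 / 2, **pars)
        k4 = dvdr(r + h, v + h * k3, **pars)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        r += h
    return v
```

Starting from $\sim$300 km s$^{-1}$ at 7 R$_{\odot}$, the integrated velocity rises monotonically toward the ambient wind speed without overshooting it, qualitatively matching Fig. 3a.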
The drag model provides an asymptotic CME velocity of 555$_{-42}^{+114}$ km s$^{-1}$ when extrapolated to 1 AU, which predicts the CME to arrive one day before the Advanced Composition Explorer (ACE) or WIND spacecraft detect it at the L1 point. We investigate this discrepancy by using our 3D reconstruction to simulate the continued propagation of the CME from the Alfvén radius ($\sim$21.5 R$_{\odot}$) to Earth using the ENLIL with Cone Model$^{21}$ at NASA’s Community Coordinated Modeling Center. ENLIL is a time-dependent 3D magnetohydrodynamic (MHD) code that models CME propagation through interplanetary space. We use the height, velocity, and width from our 3D reconstruction as initial conditions for the simulation, and find that the CME is actually slowed to $\sim$342 km s$^{-1}$ at 1 AU. This is a result of its interaction with an upstream slow-speed solar wind flow at distances beyond 50 R$_{\odot}$. This CME velocity is consistent with in-situ measurements of solar wind speed ($\sim$330 km s$^{-1}$) from the ACE and WIND spacecraft at L1. Tracking the peak density of the CME front from the ENLIL simulation gives an arrival time at L1 of $\sim$08:09 UT on 16 Dec. 2008. Accounting for the offset in CME front heights between our 3D reconstruction and ENLIL simulation at distances of $21.5~$R$_{\odot}<r<46~$R$_{\odot}$ gives an arrival time in the range 08:09–13:20 UT on 16 Dec. 2008. This prediction interval agrees well with the earliest derived arrival times of the CME front plasma pileup ahead of the magnetic cloud flux rope from the in-situ data of both ACE and WIND (Fig. 4) before its subsequent impact at Earth$^{34,36}$.
Discussion {#discussion .unnumbered}
----------
Since its launch, the dynamic twin viewpoints of STEREO have enabled studies of the true propagation of CMEs in 3D space. Our new elliptical tie-pointing technique uses the curvature of the CME front as a necessary third constraint on the two viewpoints to build an optimum 3D reconstruction of the front. Here the technique is applied to an Earth-directed CME, revealing the numerous forces at play throughout its propagation.
The early acceleration phase results from the rapid release of energy when the CME dynamics are dominated by outward magnetic and gas pressure forces. Different models can reproduce the early acceleration profiles of CME observations though it is difficult to distinguish between them with absolute certainty$^{45,46}$. For this event the acceleration phase coincides with a strong angular expansion of the CME in the low corona, which tends toward a constant in the later observed propagation in the solar wind. While, statistically, expansion of CMEs is a common occurrence$^{47}$, it is difficult to accurately determine the magnitude and rate of expansion across the 2D plane-of-sky images for individual events. Some studies of these single-viewpoint images of CMEs use characterisations such as the cone model$^{20,21}$ but assume the angular width to be constant (rigid cone) which is not always true early in the events$^{12,38}$. Our 3D front reconstruction overcomes the difficulties in distinguishing expansion from image projection effects, and we show that early in this event there is a non-constant, power-law, angular expansion of the CME. Theoretical models of CME expansion generally reproduce constant radial expansion, based on the suspected magnetic and gas pressure gradients between the erupting flux rope and the ambient corona and solar wind$^{14,48,49}$. To account for the angular expansion of the CME, a combination of internal overpressure relative to external gas and magnetic pressure drop-offs, along with convective evolution of the CME in the diverging solar wind geometry, must be considered$^{13}$.
During this early phase evolution the CME is deflected from a high-latitude source region into a non-radial trajectory as indicated by the changing inclination angle (Fig. 3b). While projection effects again hinder interpretations of CME position angles in single images, statistical studies show that, relative to their source region locations, CMEs have a tendency to deflect toward lower latitudes during solar minimum$^{39,50}$. It has been suggested that this results from the guiding of CMEs towards the equator by either the magnetic fields emanating from polar coronal holes$^{8,9}$ or the flow pattern of the background coronal magnetic field and solar wind/streamer influences$^{19,51}$. Other models show that the internal configuration of the erupting flux rope can have an important effect on its propagation through the corona. The orientation of the flux rope, either normal or inverse polarity, will determine where magnetic reconnection is more likely to occur, and therefore change the magnetic configuration of the system to guide the CME either equator- or pole-ward$^{10}$. Alternatively, modelling the filament as a toroidal flux rope located above a mid-latitude polarity inversion line results in non-radial motion and acceleration of the filament, due to the guiding action of the coronal magnetic field on the current motion$^{11}$. Both of these models have a dependence on the chosen background magnetic field configuration, and so the suspected drawn-out magnetic dipole field of the Sun by the solar wind$^{40,41}$ may be the dominant factor in deflecting the prominence/CME eruption into this observed non-radial trajectory.
At larger distances from the Sun ($>$7 R$_{\odot}$) the effects of drag become important as the CME velocity approaches that of the solar wind. The interaction between the moving magnetic flux rope and the ambient solar wind has been suggested to play a key role in CME propagation at large distances where the Lorentz driving force and the effects of gravity become negligible$^{4}$. Comparisons of initial CME speeds and in-situ detections of arrival times have shown that velocities converge on the solar wind speed$^{15,16}$. For this event we find that the drag force is indeed sufficient to accelerate the CME to the solar wind speed, and quantify that the kinematics are consistent with the quadratic regime of aerodynamic drag (turbulent, as opposed to viscous, effects dominate). The importance of drag becomes further apparent through the CME interaction with a slow-speed solar wind stream ahead of it, slowing it to a speed that accounts for the observed arrival time at L1 near Earth. This agrees with the conjecture that Sun-Earth transit time is more closely related to the solar wind speed than the initial CME speed$^{52}$. Other kinematic studies of this CME through the HI fields-of-view quote velocities of 411$\pm$23 km s$^{-1}$ (Ahead) and 417$\pm$15 km s$^{-1}$ (Behind) when assumed to have zero acceleration during this late phase of propagation$^{34}$, or an average of 363$\pm$43 km s$^{-1}$ when triangulated in time-elongation J-maps$^{36}$. These speeds through the HI fields-of-view, lower than those quantified through the COR1/2 fields-of-view, agree somewhat with the deceleration of the CME to match the slow-speed solar wind ahead of it in our MHD simulation. Ultimately we are able to predict a more accurate arrival time of the CME front at L1.
A cohesive physical picture for how the CME erupts, propagates, and expands in the solar atmosphere remains to be fully developed and understood from a theoretical perspective. Realistic MHD models of the Sun’s global magnetic field and solar wind are required to explain all processes at play, along with a need for adequate models of the complex flux rope geometries within CMEs. Additionally, ambitious space exploration missions, such as Solar Orbiter$^{53}$ (ESA) and Solar Probe$+$$^{54}$ (NASA), will be required to give us a better understanding of the fundamental plasma processes responsible for driving CMEs and determining their adverse effects at Earth.
Methods {#methods .unnumbered}
-------
**CME front detection and characterisation.** For the coronagraph images of COR1/2 a multiscale filter was used to determine a scale at which the signal-to-noise ratio of the CME was deemed optimal for the pixel-chaining algorithm to highlight the edges in the images$^{55}$. In order to specifically determine the CME front, running and fixed difference masks were overlaid on the multiscale edge detections of both the Ahead and Behind viewpoints simultaneously, enabling us to confidently point-and-click along the relevant CME front edges in each image. For the Heliospheric images of HI1 a modified running difference was used to enhance the faint CME features by correcting for the apparent background stellar motion between frames$^{15}$. The CME was scaled to an appropriate level for point-and-clicking along its front. Once the CME fronts were determined across each instrument plane-of-sky, an ellipse was fit to each front in order to characterise the changing morphology of the CME$^{38}$.

**Elliptical tie-pointing.** 3D information may be gleaned from two independent viewpoints of a feature using tie-pointing techniques to triangulate lines-of-sight in space$^{27}$. However, when the object is known to be a curved surface, sight-lines will be tangent to it and not necessarily intersect upon it. Consequently CMEs cannot be reconstructed by tie-pointing alone, but rather their localisation may be constrained by intersecting sight-lines tangent to the leading edges of a CME$^{56,57}$. It is possible to extract the intersection of a given epipolar plane through the ellipse fits in both the Ahead and Behind images, resulting in a quadrilateral in 3D space. Inscribing an ellipse within the quadrilateral such that it is tangent to all four sides$^{58,59}$ provides a slice through the CME that matches the observations from each spacecraft. A full reconstruction is achieved by stacking ellipses from numerous epipolar slices.
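The plane-of-sky ellipse characterisation of each front amounts to fitting a conic through the clicked points. A minimal linear least-squares sketch; this is a simplification, and the routine actually used$^{38}$ may constrain the fit differently:

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 through
    the clicked front points (x, y). Returns (a, b, c, d, e)."""
    D = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coef
```

A genuine ellipse requires the discriminant $b^{2}-4ac<0$; since the clicked points cover only the leading edge, any such fit should be checked against this condition before use.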
Since the positions and curvatures of these inscribed ellipses are constrained by the characterised curvature of the CME front in the stereoscopic image pair, the modelled CME front is considered an optimum reconstruction of the true CME front. This is repeated for every frame of the eruption to build the reconstruction as a function of time and view the changes to the CME front as it propagates in 3D. Following Horwitz$^{59}$, we inscribe an ellipse within a quadrilateral using the following steps (see Fig. 5):
1. Apply an isometry to the plane such that the quadrilateral has vertices $(0,0)$, $(A,B)$, $(0,C)$, $(s,t)$, where in the case of an affine transformation we set $A=1$, $B=0$ and $C=1$, with $s$ and $t$ variable.
2. Set the ellipse centre point $(h, k)$ by fixing $h$ somewhere along the open line segment connecting the midpoints of the diagonals of the quadrilateral and hence determine $k$ from the equation of a line, for example: $$\begin{aligned}
h = \frac{1}{2}\left(\frac{s}{2}+\frac{A}{2}\right), \quad
k = \left(h-\frac{s}{2}\right)\left(\frac{t-B-C}{s-A}\right) + \frac{t}{2}\end{aligned}$$
3. To solve for the ellipse tangent to the four sides of the quadrilateral, we can solve for the ellipse tangent to the three sides of a triangle whose vertices are the complex points $$\begin{aligned}
z_{1} = 0, \quad
z_{2} = A+Bi, \quad
z_{3} = -\frac{At-Bs}{s-A}i\end{aligned}$$ and the two ellipse foci are then the zeroes of the equation $$\begin{aligned}
p_{h}(z)&=&(s-A)z^{2}-2(s-A)(h-ik)z-(B-iA)(s-2h)C\end{aligned}$$ whose discriminant can be denoted by $r(h)=r_{1}(h)+ir_{2}(h)$ where $$\begin{aligned}
\nonumber
r_1 \;=\; &4 \left(\left(s-A\right)^{2}-\left(t-B-C\right)^{2}\right)\left(\frac{h-A}{2}\right)^{2} \\ \nonumber
&+4 \left(s-A\right)\left(A\left(s-A\right)+B\left(B-t\right)+C\left(C-t\right)\right)\left(\frac{h-A}{2}\right) \\
&+ \left(s-A\right)^{2}\left(A^{2}-\left(C-B\right)^{2}\right) \\ \nonumber
r_2 \;=\; &8\left(t-B-C\right)\left(s-A\right)\left(\frac{h-A}{2}\right)^{2} \\ \nonumber
&+ 4\left(s-A\right)\left(At+Cs+Bs-2AB\right)\left(\frac{h-A}{2}\right) \\
&+ 2A\left(s-A\right)^{2}\left(B-C\right)\end{aligned}$$Thus we need to determine the quartic polynomial $u(h)=|r(h)|^{2}={r_1(h)}^{2}+{r_2(h)}^{2}$ and we can then solve for the ellipse semimajor axis, $a$, and semiminor axis, $b$, from the equations $$\begin{aligned}
a^{2}-b^{2} \;=\; \sqrt{ \frac{1}{\left(16\left(s-A\right)^{4}\right)}u(h)} \end{aligned}$$ $$\begin{aligned}
a^{2}b^{2} \;=\; \frac{1}{4}\left(\frac{C}{\left(s-A\right)^{2}}\right)\left(2\left(Bs-A\left(t-C\right)\right)h - ACs\right)\left(2h-A\right)\left(2h-s\right) \end{aligned}$$ by parameterising $R=a^{2}-b^{2}$ and $W=a^{2}b^{2}$ to obtain $$\begin{aligned}
a \;=\; \sqrt{ \frac{1}{2}\left(\sqrt{R^{2}+4W}+R\right)}, \quad
b \;=\; \sqrt{ \frac{1}{2}\left(\sqrt{R^{2}+4W}-R\right)}\end{aligned}$$
4. Knowing the axes we can generate the ellipse and float its tilt angle $\delta$ until it sits tangent to each side of the quadrilateral, using the inclined ellipse equation $$\begin{aligned}
\rho^{2} \;=\; \frac{a^{2}b^{2}}{\left(\frac{a^{2}+b^{2}}{2}\right)-\left(\frac{a^{2}-b^{2}}{2}\right)\cos\left(2\omega'-2\delta\right)}\end{aligned}$$ where $\omega'=\omega+\delta$ and $\omega$ is the angle from the semimajor axis to a radial line $\rho$ on the ellipse.
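Two pieces of the recipe above are easy to verify in isolation: the centre computation of step 2, and the recovery of the semi-axes from $R=a^{2}-b^{2}$ and $W=a^{2}b^{2}$ in step 3. A sketch of just those steps, assuming $s \neq A$:

```python
import math

def ellipse_centre(A, B, C, s, t, h):
    """Step 2: centre (h, k) on the line joining the diagonal midpoints
    of the quadrilateral (0,0), (A,B), (s,t), (0,C); requires s != A."""
    k = (h - s / 2) * ((t - B - C) / (s - A)) + t / 2
    return h, k

def semi_axes(R, W):
    """Final part of step 3: recover a and b from R = a^2 - b^2 and
    W = a^2 * b^2 (valid for R >= 0, W > 0)."""
    d = math.sqrt(R * R + 4 * W)
    return math.sqrt((d + R) / 2), math.sqrt((d - R) / 2)
```

For example, $R=3$, $W=4$ gives $a=2$, $b=1$, which indeed satisfy both defining equations.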
**Drag modelling.** The evolution of CMEs as they propagate from the Sun through the heliosphere is a complex process, simplified by using a parameterised drag model. Comparing equation (\[drag\]) and equation (\[pdrag\]): $$\begin{aligned}
\label{drag3}
\alpha r^{-\beta}&=&\frac{1}{2}\frac{A_{cme} C_{D} \rho_{sw}}{ M_{cme} }\end{aligned}$$ where $C_{D}$ and $M_{cme}$ are approximately constant, and $A_{cme}$ and $\rho_{sw}$ are functions of distance expected to have a power-law form. We can therefore represent their combined behaviour as a single power law, as in equation (\[drag3\]). For example, if we assume a density profile of $\rho_{sw}(r)=\rho_{0}r^{-2}$, and a cylindrical CME of area $A_{cme}(r)=A_{0}r$, then from equation (\[drag3\]) we expect $\beta=1$. The $\alpha$ parameter, representative of the strength of the interaction, is then determined by the constants $A_{0}$, $M_{cme}$ and $C_{D}$, such that high-mass, small-volume CMEs are less affected by drag than low-mass, large-volume CMEs. This method of parameterisation has been shown to reproduce the kinematic profiles of a large number of events$^{44}$. We assume an additional parameter, $\gamma$, to indicate the type of drag, suggested to be either linear ($\gamma=1$) or quadratic ($\gamma=2$). While this parameterisation may obscure some of the complex interplay between the various quantities, it does not affect the most crucial question that we are trying to test: is aerodynamic drag an appropriate model and, if so, which regime (linear or quadratic) best characterises the kinematics?
A bootstrapping technique$^{60}$ was used to obtain statistically significant parameter ranges from the drag model of equation (\[pdrag\]). This technique involves the following steps:
1. An initial fit to the data $y$ is obtained, yielding the model fit $\hat{y}$ with parameters $\vec{p}$.
2. The residuals of the fit are calculated: $\epsilon = y - \hat{y}$.
3. The residuals are randomly resampled to give $\epsilon^{*}$.
4. The model is then fit to a new data vector $y^{*} = y + \epsilon^{*}$ and the parameters $\vec{p}$ stored.
5. Steps 3–4 are repeated many times (10,000).
6. Confidence intervals on the parameters are determined from the resulting distributions.
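The resampling loop above can be sketched generically. The straight-line model here is a placeholder for the numerically integrated drag model of equation (\[pdrag\]), and the 95% percentile interval is an assumed choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_model(x, y):
    """Placeholder model fit (straight line); in the paper this is the
    RK4-integrated drag model of equation (2)."""
    return np.polyfit(x, y, 1)

def bootstrap_ci(x, y, n_boot=10000):
    """Residual-resampling bootstrap following steps 1-6 above."""
    p0 = np.asarray(fit_model(x, y))           # step 1: initial fit
    resid = y - np.polyval(p0, x)              # step 2: residuals
    params = np.empty((n_boot, p0.size))
    for i in range(n_boot):                    # step 5: repeat many times
        eps = rng.choice(resid, size=resid.size, replace=True)  # step 3
        params[i] = fit_model(x, y + eps)      # step 4: refit to y + eps*
    lo, hi = np.percentile(params, [2.5, 97.5], axis=0)  # step 6
    return p0, lo, hi
```

The spread of the stored parameter vectors directly yields the confidence intervals quoted for $\alpha$, $\beta$, and $\gamma$ in the Results.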
In our case the model parameters were: the initial height $h_{cme}$ of the CME at the start of the modelling; the speed $v_{sw}$ of the solar wind at 1 AU; the velocity $v_{cme}$ of the CME at the start of the modelling; and the drag parameters $\alpha$, $\beta$, and $\gamma$. In order to test for self-consistency we allowed the observationally known parameters of initial CME height and velocity to vary in the bootstrapping procedure, and recovered comparable values. The parameters $\alpha$ and $\beta$ were in reasonable agreement with values from previous studies$^{44}$.
References {#references .unnumbered}
----------
1. Schwenn, R., Dal Lago, A., Huttunen, E., Gonzalez, W. D. The association of coronal mass ejections with their effects near the Earth. [*Ann. Geophys.*]{} [**23,**]{} 1033–1059 (2005).
2. Prang[é]{}, R. [*et al.*]{} An interplanetary shock traced by planetary auroral storms from the Sun to Saturn. [*Nature*]{} [**432,**]{} 78–81 (2004).
3. [*Severe Space Weather Events – Understanding Societal and Economic Impacts Workshop Report, National Research Council.*]{} (National Academies Press, 2008).
4. Chen, J. Theory of prominence eruption and propagation: interplanetary consequences. [*J. Geophys. Res.*]{} [**101,**]{} 27499–27520 (1996).
5. Antiochos, S. K., DeVore, C. R., Klimchuk, J. A. A model for solar coronal mass ejections. [*Astrophys. J.*]{} [**510,**]{} 485–493 (1999).
6. Kliem, B., T[ö]{}r[ö]{}k, T. Torus instability. [*Phys. Rev. Lett.*]{} [**96,**]{} 255002 (2006).
7. Moore, R. L., Sterling, A. C. in [*Solar Eruptions and Energetic Particles*]{}, Gopalswamy, N., Mewaldt, R., Torsti, J. Eds. (American Geophysical Union, Washington, DC, 2006), [**165,**]{} 43–57.
8. Xie, H. [*et al.*]{}, On the origin, 3D structure and dynamic evolution of CMEs near solar minimum. [*Sol. Phys.*]{} [**259,**]{} 143–161 (2009).
9. Kilpua, E. K. J. [*et al.*]{} STEREO observations of interplanetary coronal mass ejections and prominence deflection during solar minimum period. [*Ann. Geophys.*]{} [**27,**]{} 4491–4503 (2009).
10. Chan[é]{}, E., Jacobs, C., van der Holst, B., Poedts, S., Kimpe, D. On the effect of the initial magnetic polarity and of the background wind on the evolution of CME shocks. [*Astron. Astrophys.*]{} [**432,**]{} 331–339 (2005).
11. Filippov, B. P., Gopalswamy, N., Lozhechkin, A. V. Non-radial motion of eruptive filaments. [*Sol. Phys.*]{} [**203,**]{} 119–130 (2001).
12. Gopalswamy, N., Dal Lago, A., Yashiro, S., Akiyama, S. The expansion and radial speeds of coronal mass ejections. [*Cen. Eur. Astrophys. Bull.*]{} [**33,**]{} 115–124 (2009).
13. Riley, P., Crooker, N. U. Kinematic treatment of coronal mass ejection evolution in the solar wind. [*Astrophys. J.*]{} [**600,**]{} 1035–1042 (2004).
14. Odstr[č]{}il, D., Pizzo, V. J. Three-dimensional propagation of coronal mass ejections in a structured solar wind flow 2. CME launched adjacent to the streamer belt. [*J. Geophys. Res.*]{} [**104,**]{} 493–504 (1999).
15. Maloney, S. A., Gallagher, P. T., McAteer, R. T. J. Reconstructing the 3-D trajectories of CMEs in the inner Heliosphere. [*Sol. Phys.*]{} [**256,**]{} 149–166 (2009).
16. Gonz[á]{}lez-Esparza, J. A., Lara, A., P[é]{}rez-Tijerina, E., Santill[á]{}n, A., Gopalswamy, N. A numerical study on the acceleration and transit time of coronal mass ejections in the interplanetary medium. [*J. Geophys. Res. (Space Physics)*]{} [**108,**]{} 1039 (2003).
17. Tappin, S. J. The deceleration of an interplanetary transient from the Sun to 5 AU. [*Sol. Phys.*]{} [**233,**]{} 233–248 (2006).
18. Cargill, P. J. On the aerodynamic drag force acting on interplanetary coronal mass ejections. [*Sol. Phys.*]{} [**221,**]{} 135–149 (2004).
19. Cremades, H., Bothmer, V. On the three-dimensional configurations of coronal mass ejections. [*Astron. Astrophys.*]{} [**422,**]{} 307–322 (2004).
20. Zhao, X. P., Plunkett, S. P., Liu, W. Determination of geometrical and kinematical properties of halo coronal mass ejections using the cone model. [*J. Geophys. Res.*]{} [**107,**]{} 1223 (2002).
21. Xie, H., Ofman, L., Lawrence, G. Cone model for halo CMEs: application to space weather forecasting. [*J. Geophys. Res.*]{} [**109,**]{} 3109 (2004).
22. D[é]{}moulin, P., Nakwacki, M. S., Dasso, S., Mandrini, C. H. Expected in situ velocities from a hierarchical model for expanding interplanetary coronal mass ejections. [*Sol. Phys.*]{} [**250,**]{} 347–374 (2008).
23. Howard, T. A., Nandy, D., Koepke, A. C. Kinematics properties of solar coronal mass ejections: correction for projection effects in spacecraft coronagraph measurements. [*J. Geophys. Res. (Space Physics)*]{} [**113,**]{} 1104 (2008).
24. Moran, T. G., Davila, J. M. Three-dimensional polarimetric imaging of coronal mass ejections. [*Science*]{} [**305,**]{} 66–71 (2004).
25. Kaiser, M. L. [*et al.*]{} The STEREO mission: an introduction. [*Space Sci. Rev.*]{} [**136,**]{} 5–16 (2008).
26. Mierla, M. [*et al.*]{} On the 3-D reconstruction of coronal mass ejections using coronagraph data. [*Ann. Geophys.*]{} [**28,**]{} 203–215 (2010).
27. Inhester, B. Stereoscopy basics for the STEREO mission. (arXiv: astro-ph/0612649, 2006).
28. Aschwanden, M. J., W[ü]{}lser, J. P., Nitta, N. V., Lemen, J. R. First three-dimensional reconstruction of coronal loops with the STEREO A and B spacecraft. I. Geometry. [*Astrophys. J.*]{} [**679,**]{} 827–842 (2008).
29. Howard, R. A. [*et al.*]{} Sun earth connection coronal and heliospheric investigation (SECCHI). [*Space Sci. Rev.*]{} [**136,**]{} 67–115 (2008).
30. Liewer, P. C. [*et al.*]{} Stereoscopic analysis of the 19 May 2007 erupting filament. [*Sol. Phys.*]{} [**256,**]{} 57–72 (2009).
31. Srivastava, N., Inhester, B., Mierla, M., Podlipnik, B. 3D reconstruction of the leading edge of the 20 May 2007 partial halo CME. [*Sol. Phys.*]{} [**259,**]{} 213–225 (2009).
32. Wood, B. E., Howard, R. A., Thernisien, A., Plunkett, S. P., Socker, D. G. Reconstructing the 3D morphology of the 17 May 2008 CME. [*Sol. Phys.*]{} [**259,**]{} 163–178 (2009).
33. Howard, T. A., Tappin, S. J. Three-dimensional reconstruction of two solar coronal mass ejections using the STEREO spacecraft. [*Sol. Phys.*]{} [**252,**]{} 373–383 (2008).
34. Davis, C. J. [*et al.*]{}, Stereoscopic imaging of an Earth-impacting solar coronal mass ejection: a major milestone for the STEREO mission. [*Geophys. Res. Lett.*]{} [**36,**]{} 8102 (2009).
35. Davis, C. J., Kennedy, J., Davies, J. A. Assessing the accuracy of CME speed and trajectory estimates from STEREO observations through a comparison of independent methods. [*Sol. Phys.*]{} [**263,**]{} 209–222 (2010).
36. Liu, Y. [*et al.*]{} Geometric triangulations of imaging observations to track coronal mass ejections continuously out to 1 AU. [*Astrophys. J. Lett.*]{} [**710,**]{} L82–L87 (2010).
37. Thernisien, A., Vourlidas, A., Howard, R. A. Forward modeling of coronal mass ejections using STEREO/SECCHI data. [*Sol. Phys.*]{} [**256,**]{} 111–130 (2009).
38. Byrne, J. P., Gallagher, P. T., McAteer, R. T. J., Young, C. A. The kinematics of coronal mass ejections using multiscale methods. [*Astron. Astrophys.*]{} [**495,**]{} 325–334 (2009).
39. Gopalswamy, N. [*et al.*]{} Prominence eruptions and coronal mass ejection: a statistical study using microwave observations. [*Astrophys. J.*]{} [**586,**]{} 562–578 (2003).
40. Pneuman, G. W., Kopp, R. A. Gas-magnetic field interactions in the solar corona. [*Sol. Phys.*]{} [**18,**]{} 258–270 (1971).
41. Banaszkiewicz, M., Axford, W. T., McKenzie, J. F. An analytic solar magnetic field model. [*Astron. Astrophys.*]{} [**337,**]{} 940–944 (1998).
42. Arge, C. N., Pizzo, V. J. Improvement in the prediction of solar wind conditions using near-real time solar magnetic field updates. [*J. Geophys. Res.*]{} [**105,**]{} 10465–10480 (2000).
43. Vr[š]{}nak, B., Gopalswamy, N. Influence of the aerodynamic drag on the motion of interplanetary ejecta. [*J. Geophys. Res.*]{} [**107,**]{} 1019 (2002).
44. Vr[š]{}nak, B. Deceleration of coronal mass ejections. [*Sol. Phys.*]{} [**202,**]{} 173–189 (2001).
45. Schrijver, C. J., Elmore, C., Kliem, B., T[ö]{}r[ö]{}k, T., Title, A. M. Observations and modelling of the early acceleration phase of erupting filaments involved in coronal mass ejections. [*Astrophys. J.*]{} [**674,**]{} 586–595 (2008).
46. Lin, C.-H., Gallagher, P. T., Raftery, C. L. Investigating the driving mechanisms of coronal mass ejections. [*Astron. Astrophys.*]{} in press (2010).
47. Bothmer, V., Schwenn, R. Eruptive prominences as sources of magnetic clouds in the solar wind. [*Space Sci. Rev.*]{} [**70,**]{} 215–220 (1994).
48. Berdichevsky, D. B., Lepping, R. P., Farrugia, C. J. Geometric considerations of the evolution of magnetic flux ropes. [*Phys. Rev. E*]{} [**67,**]{} 036405 (2003).
49. Cargill, P. J., Schmidt, J., Spicer, D. S., Zalesak, S. T. Magnetic structure of overexpanding coronal mass ejections: numerical models. [*J. Geophys. Res.*]{} [**105,**]{} 7509–7520 (2000).
50. Yashiro, S. [*et al.*]{} A catalog of white light coronal mass ejections observed by the SOHO spacecraft. [*J. Geophys. Res. (Space Physics)*]{} [**109,**]{} 7105 (2004).
51. MacQueen, R. M., Hundhausen, A. J., Conover, C. W. The propagation of coronal mass ejection transients. [*J. Geophys. Res.*]{} [**91,**]{} 31–38 (1986).
52. Vr[š]{}nak, B., Vrbanec, D., [Č]{}alogovi[ć]{}, J., [Ž]{}ic, T. The role of aerodynamic drag in dynamics of coronal mass ejections. [*IAU Symposium*]{}, [**257,**]{} 271–277 (2009).
53. McComas, D. J. [*et al.*]{} Solar Probe Plus: Report of the Science and Technology Definition Team (STDT). NASA/TM–2008–214161, NASA, 2008.
54. Hassler, D. [*et al.*]{} Solar Orbiter: Exploring the Sun-heliosphere Connection. ESA/SRE(2009)5, 2009.
55. Young, C. A., Gallagher, P. T. Multiscale edge detection in the corona. [*Sol. Phys.*]{} [**248,**]{} 457–469 (2008).
56. Pizzo, V. J., Biesecker, D. A. Geometric localization of STEREO CMEs. [*Geophys. Res. Lett.*]{} [**31,**]{} 21802 (2004).
57. deKoning, C. A., Pizzo, V. J., Biesecker, D. A. Geometric localization of CMEs in 3D space using STEREO beacon data: first results. [*Sol. Phys.*]{} [**256,**]{} 167–181 (2009).
58. Horwitz, A. Finding ellipses and hyperbolas tangent to two, three, or four given lines. [*Southwest J. Pure Appl. Math.*]{} [**1,**]{} 6–32 (2002).
59. Horwitz, A. Ellipses of maximal area and of minimal eccentricity inscribed in a convex quadrilateral. [*Austral. J. Math. Anal. Appl.*]{} [**2,**]{} 1–12 (2005).
60. Efron, B., Tibshirani, R. J. [*An Introduction To The Bootstrap.*]{} (Chapman & Hall/CRC, 1993).
Acknowledgements {#acknowledgements .unnumbered}
----------------
This work is supported by the Science Foundation Ireland under Grants No. 07-RFP-PHYF399 and 729S0DAZ. R.T.J.M.A. was a Marie Curie Fellow at TCD. The STEREO/SECCHI project is an international consortium of the Naval Research Laboratory (USA), Lockheed Martin Solar and Astrophysics Lab (USA), NASA Goddard Space Flight Center (USA), Rutherford Appleton Laboratory (UK), University of Birmingham (UK), Max-Planck-Institut für Sonnen-systemforschung (Germany), Centre Spatial de Liege (Belgium), Institut d’Optique Théorique et Appliquée (France), and Institut d’Astrophysique Spatiale (France). Simulation results have been provided by the Community Coordinated Modeling Center at Goddard Space Flight Center through their public Runs on Request system (http://ccmc.gsfc.nasa.gov). The CCMC is a multi-agency partnership between NASA, AFMC, AFOSR, AFRL, AFWA, NOAA, NSF and ONR. The ENLIL with Cone Model was developed by D. Odstrcil at the University of Colorado at Boulder. We acknowledge the use of WIND data.
Author Contributions {#author-contributions .unnumbered}
--------------------
J.P.B. developed the method and performed the analysis. S.A.M. carried out the drag modelling and bootstrapping procedure and contributed to the analysis. J.M.R. developed the visualisation suite. R.T.J.M.A. guided the data preparation. P.T.G. supervised the research. J.P.B., S.A.M., R.T.J.M.A., and P.T.G. discussed the results and implications and contributed to the manuscript at all stages.
Additional information {#additional-information .unnumbered}
----------------------
[**Supplementary Information**]{} accompanies this paper on\
http://www.nature.com/ncomms/journal/v1/n6/full/ncomms1077.html

The authors declare no competing financial interests. Information is available online at\
http://www.nature.com/naturecommunications

Byrne, J.P. [*et al.*]{} Propagation of an Earth-directed coronal mass ejection in three dimensions. [*Nat. Commun.*]{} 1:74 doi: 10.1038/ncomms1077 (2010).

Figure legends {#figure-legends .unnumbered}
--------------

Panel [**a**]{} indicates the STEREO spacecraft locations, separated by an angle of 86.7$^{\circ}$ at the time of the event. Panel [**b**]{} shows the prominence eruption observed in EUVI-B off the north-west limb from approximately 03:00 UT, which is considered to be the inner material of the CME. The multiscale edge detection and corresponding ellipse characterisation are overplotted in COR1. Panel [**c**]{} shows that the CME is Earth-directed, being observed off the east limb in STEREO-A and the west limb in STEREO-B.

The reconstruction is performed using an elliptical tie-pointing technique within epipolar planes containing the two STEREO spacecraft$^{27}$. For example, one of any number of planes will intersect the ellipse characterisation of the CME at two points in each image from STEREO-A and B. Panel [**a**]{} illustrates how the resulting four sight-lines intersect in 3D space to define a quadrilateral that constrains the CME front in that plane$^{56,57}$. Inscribing an ellipse within the quadrilateral such that it is tangent to each sight-line$^{58,59}$ provides a slice through the CME that matches the observations from each spacecraft. Panel [**b**]{} illustrates how a full reconstruction is achieved by stacking multiple ellipses from the epipolar slices. Since the positions and curvatures of these inscribed ellipses are constrained by the characterised curvature of the CME fronts in the stereoscopic image pair, the modelled CME front is considered an optimum reconstruction of the true CME front. Panel [**c**]{} illustrates how this is repeated for every frame of the eruption to build the reconstruction as a function of time and view the changes to the CME front as it propagates in 3D.

While the ellipse characterisation applies to both the leading edges and, when observable, the flanks of the CME, only the outermost part of the reconstructed front is shown here for clarity, and illustrated in Supplementary Movie 2. Panel [**a**]{} shows the velocity of the middle of the CME front with the corresponding drag model and, inset, the early acceleration peak. Measurement uncertainties are indicated by one-standard-deviation error bars. Panel [**b**]{} shows the declinations from the ecliptic (0$^{\circ}$) of an angular spread across the front between the CME flanks, with a power-law fit indicative of non-radial propagation. It should be noted that the positions of the flanks are subject to large scatter: as the CME enters each field-of-view, the location of a tangent to its flanks is prone to moving back along the reconstruction in cases where the epipolar slices completely constrain the flanks. Hence the ‘Midtop/Midbottom of Front’ measurements better convey the southward-dominated expansion. Panel [**c**]{} shows the angular width of the CME with a power-law expansion. For each instrument the first three points of angular width measurement were removed, since the CME was still predominantly obscured by each instrument’s occulter.

From top to bottom the panels show proton density, bulk flow speed, proton temperature, and magnetic field strength and components. The red dashed lines indicate the predicted window of CME arrival time from our ENLIL with Cone Model run (08:09–13:20 UT on 16 Dec. 2008). We observe a magnetic cloud flux rope signature behind the front, highlighted by the blue dash-dotted lines.

An isometry of the plane is applied such that the quadrilateral has vertices $(0,0)$, $(A,B)$, $(0,C)$, $(s,t)$. The ellipse has center $(h,k)$, semimajor axis $a$, semiminor axis $b$, tilt angle $\delta$, and is tangent to each side of the quadrilateral.
---
abstract: 'The weighted $k$-nearest neighbors algorithm is one of the most fundamental non-parametric methods in pattern recognition and machine learning. The question of setting the optimal number of neighbors as well as the optimal weights has received much attention throughout the years; nevertheless, this problem seems to have remained unsettled. In this paper we offer a simple approach to locally weighted regression/classification, where we make the bias-variance tradeoff explicit. Our formulation enables us to phrase a notion of optimal weights, and to find these weights as well as the optimal number of neighbors *efficiently and adaptively, for each data point whose value we wish to estimate*. The applicability of our approach is demonstrated on several datasets, showing superior performance over standard locally weighted methods.'
author:
- 'Oren Anava[^1]'
- 'Kfir Y. Levy[^2]'
bibliography:
- 'bib.bib'
title: '$k^*$-Nearest Neighbors: From Global to Local'
---
Introduction
============
The $k$-nearest neighbors ($k$-NN) algorithm [@cover; @hodges] and Nadaraya-Watson estimation [@nadaraya; @watson] are the cornerstones of non-parametric learning. Owing to their simplicity and flexibility, these procedures have become the methods of choice in many scenarios [@top10], especially in settings where the underlying model is complex. Modern applications of the $k$-NN algorithm include recommendation systems [@recommend], text categorization [@text], heart disease classification [@heart], and financial market prediction [@markets], amongst others.
A successful application of the weighted $k$-NN algorithm requires a careful choice of three ingredients: the number of nearest neighbors $k$, the weight vector $\balpha$, and the distance metric. The latter requires domain knowledge and is thus henceforth assumed to be set and known in advance to the learner. Surprisingly, even under this assumption, the problem of choosing the optimal $k$ and $\balpha$ is not fully understood, and has been studied extensively since the $1950$’s under many different regimes. Most of the theoretic work focuses on the asymptotic regime in which the number of samples $n$ goes to infinity [@devroye2013probabilistic; @samworth; @stone], and ignores the practical regime in which $n$ is finite. More importantly, the vast majority of $k$-NN studies aim at finding an optimal value of $k$ per dataset, which seems to overlook the specific structure of the dataset and the properties of the data points whose labels we wish to estimate. While kernel based methods such as Nadaraya-Watson enable an adaptive choice of the weight vector $\balpha$, there still remains the question of how to choose the *kernel’s bandwidth* $\sigma$, which could be thought of as the parallel of the number of neighbors $k$ in $k$-NN. Moreover, there is no principled approach towards choosing the kernel function in practice.
In this paper we offer a coherent and principled approach to *adaptively* choosing the number of neighbors $k$ and the corresponding weight vector $\balpha \in \reals^k$ per decision point. Given a new decision point, we aim to find the best locally weighted predictor, in the sense of minimizing the distance between our prediction and the ground truth. In addition to yielding predictions, our approach enables us to provide a *per decision point* guarantee for the confidence of our predictions. Fig. \[figs\] illustrates the importance of choosing $k$ adaptively. In contrast to previous works on non-parametric regression/classification, we do not assume that the data $\{(x_i,y_i)\}_{i=1}^n$ arrives from some (unknown) underlying distribution, but rather make a weaker assumption that the labels $\{y_i\}_{i=1}^n$ are independent given the data points $\{x_i\}_{i=1}^n$, allowing the latter to be chosen arbitrarily. Alongside providing a theoretical basis for our approach, we conduct an empirical study that demonstrates its superiority with respect to the state-of-the-art.
This paper is organized as follows. In Section \[sec:def\] we introduce our setting and assumptions, and derive the locally optimal prediction problem. In Section \[sec:alg\] we analyze the solution of the above prediction problem, and introduce a greedy algorithm designed to *efficiently* find the *exact* solution. Section \[sec:Experiments\] presents our experimental study, and Section \[sec:Conclusion\] concludes.
[0.3]{}
![Three different scenarios. In all three scenarios, the same data points $x_1, \ldots , x_n \in \reals^2$ are given (represented by black dots). The red dot in each of the scenarios represents the new data point whose value we need to estimate. Intuitively, in the first scenario it would be beneficial to consider only the nearest neighbor for the estimation task, whereas in the other two scenarios we might profit by considering more neighbors.[]{data-label="figs"}](fig1.png){width="\textwidth"}
[0.3]{}
![Three different scenarios. In all three scenarios, the same data points $x_1, \ldots , x_n \in \reals^2$ are given (represented by black dots). The red dot in each of the scenarios represents the new data point whose value we need to estimate. Intuitively, in the first scenario it would be beneficial to consider only the nearest neighbor for the estimation task, whereas in the other two scenarios we might profit by considering more neighbors.[]{data-label="figs"}](fig2.png){width="\textwidth"}
[0.3]{}
![Three different scenarios. In all three scenarios, the same data points $x_1, \ldots , x_n \in \reals^2$ are given (represented by black dots). The red dot in each of the scenarios represents the new data point whose value we need to estimate. Intuitively, in the first scenario it would be beneficial to consider only the nearest neighbor for the estimation task, whereas in the other two scenarios we might profit by considering more neighbors.[]{data-label="figs"}](fig3.png){width="\textwidth"}
Related Work
------------
Asymptotic universal consistency is the most widely known theoretical guarantee for $k$-NN. This powerful guarantee implies that as the number of samples $n$ goes to infinity, and also $k\to \infty$, $k/n\to 0$, then the risk of the $k$-NN rule converges to the risk of the Bayes classifier for any underlying data distribution. Similar guarantees hold for weighted $k$-NN rules, with the additional assumptions that $\sum_{i=1}^k\alpha_i=1$ and $\max_{i\leq n}\alpha_i \to 0$, [@stone; @devroye2013probabilistic]. In the regime of practical interest where the number of samples $n$ is finite, using $k=\lfloor \sqrt{n}\rfloor$ neighbors is a widely mentioned rule of thumb [@devroye2013probabilistic]. Nevertheless, this rule often yields poor results, and in the regime of finite samples it is usually advised to choose $k$ using cross-validation. Similar consistency results apply to kernel based local methods [@devroye1980distribution; @gyorfi2006distribution].
A novel study of $k$-NN by Samworth [@samworth] derives a closed form expression for the optimal weight vector, and extracts the optimal number of neighbors. However, this result is only optimal under several restrictive assumptions, and only holds for the asymptotic regime where $n\to \infty$. Furthermore, the above optimal number of neighbors/weights do not adapt, but are rather fixed over all decision points given the dataset. In the context of kernel based methods, it is possible to extract an expression for the optimal kernel’s bandwidth $\sigma$ [@gyorfi2006distribution; @fan1996local]. Nevertheless, this bandwidth is fixed over all decision points, and is only optimal under several restrictive assumptions.
There exist several heuristics to adaptively choosing the number of neighbors and weights separately for each decision point. In [@wettschereck1994locally; @sun2010adaptive] it is suggested to use local cross-validation in order to adapt the value of $k$ to different decision points. Conversely, Ghosh [@ghosh] takes a Bayesian approach towards choosing $k$ adaptively. Focusing on the multiclass classification setup, it is suggested in [@baoli2004adaptive] to consider different values of $k$ for each class, choosing $k$ proportionally to the class populations. Similarly, there exist several attitudes towards adaptively choosing the kernel’s bandwidth $\sigma$, for kernel based methods [@abramson1982bandwidth; @silverman1986density; @demir2010adaptive; @aljuhani2014modification].
Learning the distance metric for $k$-NN was extensively studied throughout the last decade. There are several approaches towards metric learning, which roughly divide into linear/non-linear learning methods. It was found that metric learning may significantly affect the performance of $k$-NN in numerous applications, including computer vision, text analysis, program analysis and more. A comprehensive survey by Kulis [@metric] provides a review of the metric learning literature. Throughout this work we assume that the distance metric is fixed, and thus the focus is on finding the best (in a sense) values of $k$ and $\balpha$ for each new data point.
Two comprehensive monographs, [@devroye2013probabilistic] and [@devroye2015Lectures], provide an extensive survey of the existing literature regarding $k$-NN rules, including theoretical guarantees, useful practices, limitations and more.
Problem Definition {#sec:def}
==================
In this section we present our setting and assumptions, and formulate the locally weighted optimal estimation problem. Recall we seek to find the best local prediction in a sense of minimizing the distance between this prediction and the ground truth. The problem at hand is thus defined as follows: We are given $n$ data points $x_1, \ldots , x_n\in \reals^d$, and $n$ corresponding labels[^3] $y_1, \ldots , y_n\in \reals $. Assume that for any $i \in \{1,\ldots,n\} = [n]$ it holds that $y_i = f( x_i ) + \epsilon_i$, where $f(\cdot)$ and $\epsilon_i$ are such that:
1. **$\mathbf{f(\cdot)}$ is a Lipschitz continuous function:** For any $x,y \in \reals^d$ it holds that $ \left| f(x) - f(y) \right| \leq L \cdot d (x,y) $, where the distance function $d(\cdot,\cdot)$ is set and known in advance. This assumption is rather standard when considering nearest neighbors-based algorithms, and is required in our analysis to bound the so-called *bias* term (to be later defined). In the *binary classification* setup we assume that $f:\reals^d \mapsto [0,1]$, and that given $x$ its label $y\in\{0,1\}$ is distributed $ \text{Bernoulli}(f(x))$.
2. **$\mathbf{\epsilon_i}$’s are noise terms:** For any $i \in [n]$ it holds that $\mathbb{E} \left[ \epsilon_i | x_i \right] = 0 $ and $ | \epsilon_i | \leq b$ for some given $b>0$. In addition, it is assumed that given the data points $\{x_i\}_{i=1}^n$ then the noise terms $\{\epsilon_i\}_{i=1}^n$ are independent. This assumption is later used in our analysis to apply Hoeffding’s inequality and bound the so-called *variance* term (to be later defined). Alternatively, we could assume that $ \mathbb{E} \left[ \epsilon_i^2\vert x_i \right] \leq b$ (instead of $ | \epsilon_i | \leq b$), and apply Bernstein inequalities. The results and analysis remain qualitatively similar.
Given a new data point $x_0$, our task is to estimate $f(x_0)$, where we restrict the estimator $\hat{f}(x_0)$ to be of the form $ \hat{f}(x_0)= \sum_{i=1}^n \alpha_i y_i $. That is, the estimator is a weighted average of the given noisy labels. Formally, we aim at minimizing the absolute distance between our prediction and the ground truth $f(x_0)$, which translates into $$\min_{\balpha \in \Delta_n} \left| \sum_{i=1}^n \alpha_i y_i - f(x_0) \right| \qquad \mathbf{(P1)} ,$$ where we minimize over the simplex, $\Delta_n = \{ \balpha \in \reals^n | \sum_{i=1}^n \alpha_i = 1 \text{ and } \alpha_i \geq 0,\;\forall i \}$. Decomposing the objective of $\mathbf{(P1)}$ into a sum of bias and variance terms, we arrive at the following relaxed objective: $$\begin{aligned}
\left| \sum_{i=1}^n \alpha_i y_i - f(x_0) \right| & = \left| \sum_{i=1}^n \alpha_i \left( y_i - f(x_i) + f(x_i) \right) - f(x_0) \right| \\
& = \left| \sum_{i=1}^n \alpha_i \epsilon_i + \sum_{i=1}^n \alpha_i \left( f(x_i) - f(x_0) \right) \right| \\
& \leq \left| \sum_{i=1}^n \alpha_i \epsilon_i \right| + \left| \sum_{i=1}^n \alpha_i \left( f(x_i) - f(x_0) \right) \right| \\
& \leq \left| \sum_{i=1}^n \alpha_i \epsilon_i \right| + L \sum_{i=1}^n \alpha_i d ( x_i , x_0 ) .\end{aligned}$$ By Hoeffding’s inequality (see supplementary material) it follows that $\left| \sum_{i=1}^n \alpha_i \epsilon_i \right| \leq C\| \balpha \|_2$ for $C = b \sqrt{2 \log \left( \frac{2}{\delta} \right) }$, w.p. at least $1-\delta$. We thus arrive at a new optimization problem $\mathbf{(P2)}$, such that solving it would yield a guarantee for $\mathbf{(P1)}$ with high probability: $$\min_{\balpha \in \Delta_n} C\| \balpha \|_2 + L \sum_{i=1}^n \alpha_i d ( x_i , x_0 ) \qquad \mathbf{(P2)}.$$ The first term in $\mathbf{(P2)}$ corresponds to the noise in the labels and is therefore denoted as the *variance* term, whereas the second term corresponds to the distance between $f(x_0)$ and $\{f(x_i)\}_{i=1}^n$ and is thus denoted as the *bias* term.
Algorithm and Analysis {#sec:alg}
======================
In this section we discuss the properties of the optimal solution for $\mathbf{(P2)}$, and present a greedy algorithm designed to efficiently find the exact solution of the latter objective (see Section \[sec:algEfficeint\]). Given a decision point $x_0$, Theorem \[thm:Main\] demonstrates that the optimal weight $\alpha_i$ of the data point $x_i$ is proportional to $-d(x_i,x_0)$ (closer points are given more weight). Interestingly, this weight decay is quite slow compared to popular weight kernels, which utilize sharper decay schemes, e.g., exponential/inversely-proportional. Theorem \[thm:Main\] also implies a cutoff effect, meaning that there exists $k^*\in[n]$, such that only the $k^*$ nearest neighbors of $x_0$ contribute to the prediction of its label. Note that both $\balpha$ and $k^*$ may adapt from one $x_0$ to another. Also notice that the optimal weights depend on a single parameter $L/C$, namely the Lipschitz to noise ratio. As $L/C$ grows, $k^*$ tends to be smaller, which is quite intuitive.
Without loss of generality, assume that the points are ordered in ascending order according to their distance from $x_0$, i.e., $d(x_1,x_0)\leq d(x_2,x_0)\leq\ldots\leq d(x_n,x_0)$. Also, let $\bbeta\in \reals^n$ be such that $\beta_i = {L d(x_i,x_0)}/{C} $. Then, the following is our main theorem:
\[thm:Main\] There exists $\lambda>0$ such that the optimal solution of $\mathbf{(P2)}$ is of the form $$\begin{aligned}
\label{eq:alphaStar}
\alpha^*_i = \frac{\left( \lambda-\beta_i \right) \cdot \mathbf{1} \left\{ \beta_i<\lambda \right\} }{\sum_{i=1}^n \left( \lambda-\beta_i \right) \cdot \mathbf{1} \left\{ \beta_i<\lambda \right\} } .\end{aligned}$$ Furthermore, the value of $\mathbf{(P2)}$ at the optimum is $C\lambda$.
Following is a direct corollary of the above Theorem:
\[cor:Main\] There exists $1\leq k^*\leq n$ such that for the optimal solution of $\mathbf{(P2)}$ the following applies: $$\begin{aligned}
\alpha_i^* >0; \; \forall i\leq k^* \quad \text{ and } \quad \alpha_i^* =0;\; \forall i> k^* .\end{aligned}$$
Notice that $\mathbf{(P2)}$ may be written as follows: $$\min_{\balpha \in \Delta_n} C \left( \| \balpha \|_2 + \balpha^\top \bbeta \right) \qquad \mathbf{(P2)}.$$ We henceforth ignore the parameter $C$. In order to find the solution of $\mathbf{(P2)}$, let us first consider its Lagrangian: $$L(\balpha,\lambda,\btheta) = \| \balpha \|_2 + \balpha^\top \bbeta + \lambda \left( 1-\sum_{i=1}^n\alpha_i \right) - \sum_{i=1}^n \theta_i \alpha_i ,$$ where $\lambda\in\reals$ is the multiplier of the equality constraint $\sum_i\alpha_i=1$, and $\theta_1,\ldots,\theta_n\geq 0$ are the multipliers of the inequality constraints $\alpha_i\geq 0,\; \forall i\in[n]$. Since $\mathbf{(P2)}$ is convex, any solution satisfying the KKT conditions is a global minimum. Deriving the Lagrangian with respect to $\balpha$, we get that for any $i\in[n]$: $$\begin{aligned}
\frac{\alpha_i}{\| \balpha\|_2} = \lambda-\beta_i +\theta_i .\end{aligned}$$ Denote by $\balpha^*$ the optimal solution of $\mathbf{(P2)}$. By the KKT conditions, for any $\alpha^*_i>0$ it follows that $\theta_i=0$. Otherwise, for any $i$ such that $\alpha^*_i=0$ it follows that $\theta_i\geq0$, which implies $\lambda\leq\beta_i$. Thus, for any nonzero weight $\alpha^*_i>0$ the following holds: $$\begin{aligned}
\label{eq:KKTNonz}
\frac{\alpha^*_i}{\| \balpha^*\|_2} = \lambda-\beta_i .\end{aligned}$$ Squaring and summing Equation over all the nonzero entries of $\balpha$, we arrive at the following equation for $\lambda$: $$\begin{aligned}
\label{eq:LambdaEq}
1 = \sum_{\alpha^*_i>0}\frac{\left( \alpha^*_i \right) ^2}{\| \balpha^* \|_2^2} =\sum_{\alpha^*_i>0} (\lambda-\beta_i)^2 .\end{aligned}$$
Next, we show that the value of the objective at the optimum is $C \lambda$. Indeed, note that by Equation and the equality constraint $\sum_i\alpha^*_i=1$, any $\alpha^*_i>0$ satisfies $$\begin{aligned}
\label{eq:solution}
\alpha^*_i = \frac{\lambda-\beta_i}{A},\quad \text{ where }\quad A=\sum_{\alpha^*_i>0} (\lambda-\beta_i) .\end{aligned}$$ Plugging the above into the objective of $\mathbf{(P2)}$ yields $$\begin{aligned}
C \left( \| \balpha^* \|_2 + \balpha^{*\top} \bbeta \right) &=\frac{C}{A}\sqrt{\sum_{\alpha^*_i>0}(\lambda-\beta_i)^2}+\frac{C}{A}\sum_{\alpha^*_i>0} (\lambda-\beta_i)(\beta_i-\lambda+\lambda)\\
& =\frac{C}{A} -\frac{C}{A}\sum_{\alpha^*_i>0} (\lambda-\beta_i)^2+\frac{C\lambda}{A}\sum_{\alpha^*_i>0} (\lambda-\beta_i)\\
& = C \lambda ,\end{aligned}$$ where in the last equality we used Equation , and substituted $A = \sum_{\alpha^*_i>0}(\lambda-\beta_i)$.
Solving $\mathbf{(P2)}$ Efficiently {#sec:algEfficeint}
-----------------------------------
Note that $\mathbf{(P2)}$ is a convex optimization problem, and it can therefore be (*approximately*) solved efficiently, e.g., via any first-order algorithm. Concretely, given an accuracy $\epsilon>0$, any off-the-shelf convex optimization method would require a running time of $\poly(n,\frac{1}{\epsilon})$ in order to find an $\epsilon$-optimal solution to $\mathbf{(P2)}$[^4]. Note that the calculation of (the unsorted) $\bbeta$ requires an additional computational cost of $O(nd)$.
Here we present an efficient method that computes the *exact* solution of $\mathbf{(P2)}$. In addition to the $O(nd)$ cost for calculating $\bbeta$, our algorithm requires an $O(n\log n)$ cost for sorting the entries of $\bbeta$, as well as an additional running time of $O(k^*)$, where $k^*$ is the number of non-zero elements at the optimum. Thus, the running time of our method is independent of any accuracy $\epsilon$, and may be significantly better compared to any off-the-shelf optimization method. Note that in some cases [@indyk1998approximate], using advanced data structures may decrease the cost of finding the nearest neighbors (i.e., the sorted $\bbeta$), yielding a running time substantially smaller than $O(nd+n \log n)$.
Our method is depicted in Algorithm \[algorithm:KstarNN\]. Quite intuitively, the core idea is to greedily add neighbors according to their distance from $x_0$ until a stopping condition is fulfilled (indicating that we have found the optimal solution). Letting $\mathcal{C}_{\text{sortNN}}$ be the computational cost of calculating the sorted vector $\bbeta$, the following theorem presents our guarantees.\
**Input**: vector of ordered distances $\bbeta\in \reals^n$, noisy labels $y_1,\ldots,y_n \in \reals$

**Initialize**: $\lambda_0 =\beta_1+1$, $k=0$

**While** $\lambda_k > \beta_{k+1}$:

-   $k\gets k+1$

-   $ \lambda_k = \frac{1}{k}\left( \sum_{i=1}^k \beta_i + \sqrt{ k + \left( \sum_{i=1}^k \beta_i \right)^2 - k \sum_{i=1}^k \beta_i^2 } \right) $

**Return**: estimation $\hat{f}(x_0)=\sum_i \alpha_i y_i$, where $\balpha\in \Delta_n$ is a weight vector such that $
\alpha_i = \frac{\left( \lambda_k-\beta_i \right) \cdot \mathbf{1} \left\{ \beta_i<\lambda_k \right\} }{\sum_{i=1}^n \left( \lambda_k-\beta_i \right) \cdot \mathbf{1} \left\{ \beta_i<\lambda_k \right\} } $
\[thm:alg\] Algorithm \[algorithm:KstarNN\] finds the exact solution of $\mathbf{(P2)}$ within $k^*$ iterations, with an $O(k^{*}+\mathcal{C}_{\text{sortNN}})$ running time.
Denote by $\balpha^*$ the optimal solution of $\mathbf{(P2)}$, and by $k^*$ the corresponding number of nonzero weights. By Corollary \[cor:Main\], these $k^*$ nonzero weights correspond to the $k^*$ smallest values of $\bbeta$. Thus, we are left to show that (1) the optimal $\lambda$ is of the form calculated by the algorithm; and (2) the algorithm halts after exactly $k^*$ iterations and outputs the optimal solution.
Let us first find the optimal $\lambda$. Since the non-zero elements of the optimal solution correspond to the $k^*$ smallest values of $\bbeta$, then Equation is equivalent to the following quadratic equation in $\lambda$: $$\begin{aligned}
k^*\lambda^2 - 2\lambda\sum_{i=1}^{k^*}\beta_i + \left( \sum_{i=1}^{k^*}\beta_i^2-1 \right) =0 .\end{aligned}$$ Solving for $\lambda$ and neglecting the solution that does not agree with $\alpha_i\geq 0,\;\forall i\in[n]$, we get $$\begin{aligned}
\label{eq:lambda}
\lambda = \frac{1}{k^*}\left( \sum_{i=1}^{k^*} \beta_i + \sqrt{ k^* + \left( \sum_{i=1}^{k^*} \beta_i \right)^2 - k^* \sum_{i=1}^{k^*} \beta_i^2 } \right)~.\end{aligned}$$ The above implies that given $k^*$, the optimal solution (satisfying KKT) can be directly derived by a calculation of $\lambda$ according to Equation and computing the $\alpha_i$’s according to Equation . Since Algorithm \[algorithm:KstarNN\] calculates $\lambda$ and $\balpha$ in the form appearing in Equations and respectively, it is therefore sufficient to show that it halts after exactly $k^*$ iterations in order to prove its optimality. The latter is a direct consequence of the following conditions:
1. Upon reaching iteration $k^*$ Algorithm \[algorithm:KstarNN\] necessarily halts.
2. For any $k\leq k^*$ it holds that $\lambda_k \in \reals$.
3. For any $k<k^*$ Algorithm \[algorithm:KstarNN\] does not halt.
Note that the first condition together with the second condition imply that $\lambda_k$ is well defined until the algorithm halts (in the sense that the “$>$” operation in the **while** condition is meaningful). The first condition together with the third condition imply that the algorithm halts after exactly $k^*$ iterations, which concludes the proof. We are now left to show that the above three conditions hold:
**Condition (1):** Note that upon reaching $k^*$, Algorithm \[algorithm:KstarNN\] necessarily calculates the optimal $\lambda=\lambda_{k^*}$. Moreover, the entries of $\balpha^*$ whose indices are greater than $k^*$ are necessarily zero, and in particular, $\alpha_{k^*+1}^*=0$. By Equation , this implies that $\lambda_{k^*}\leq \beta_{k^*+1}$, and therefore the algorithm halts upon reaching $k^*$.
In order to establish conditions (2) and (3) we require the following lemma:
\[lem:lambda\_kOpt\] Let $\lambda_k$ be as calculated by Algorithm \[algorithm:KstarNN\] at iteration $k$. Then, for any $k\leq k^*$ the following holds: $$\begin{aligned}
\lambda_k = \min_{\balpha\in \Delta_n^{(k)} }\left( \| \balpha \|_2 + \balpha^\top \bbeta\right),\;
\text{ where } \Delta_n^{(k)} = \{ \balpha\in \Delta_n : \alpha_i = 0,\; \forall i>k\} \end{aligned}$$
The proof of Lemma \[lem:lambda\_kOpt\] appears in Appendix \[sec:Proof\_lem:lambda\_kOpt\]. We are now ready to prove the remaining conditions.
**Condition (2):** Lemma \[lem:lambda\_kOpt\] states that $\lambda_k$ is the solution of a convex program over a nonempty set, therefore $\lambda_k\in\reals$.
**Condition (3):** By definition $\Delta_{n}^{(k)}\subset \Delta_n^{(k+1)}$ for any $k < n$. Therefore, Lemma \[lem:lambda\_kOpt\] implies that $\lambda_{k}\geq \lambda_{k+1}$ for any $k<k^*$ (minimizing the same objective with stricter constraints yields a higher optimal value). Now assume by contradiction that Algorithm \[algorithm:KstarNN\] halts at some $k_0<k^*$, then the stopping condition of the algorithm implies that $\lambda_{k_0}\leq \beta_{k_0+1}$. Combining the latter with $\lambda_{k} \geq \lambda_{k+1},\; \forall k\leq k^*$, and using $\beta_k\leq \beta_{k+1},\; \forall k\leq n$, we conclude that: $$\begin{aligned}
\lambda_{k^*}\leq \lambda_{k_{0}+1}\leq \lambda_{k_0}\leq \beta_{k_0+1}\leq \beta_{k^*}~.\end{aligned}$$ The above implies that $\alpha_{k^*}=0$ (see Equation ), which contradicts Corollary \[cor:Main\] and the definition of $k^*$.
#### Running time:
Note that the main running time burden of Algorithm \[algorithm:KstarNN\] is the calculation of $\lambda_k$ for any $k\leq k^*$. A naive calculation of $\lambda_k$ requires an $O(k)$ running time. However, note that $\lambda_k$ depends only on $\sum_{i=1}^k\beta_i$ and $\sum_{i=1}^k \beta_i^2$. Updating these sums incrementally implies that we require only $O(1)$ running time per iteration, yielding a total running time of $O(k^*)$. The remaining $O(\mathcal{C}_{\text{sortNN}})$ running time is required in order to calculate the (sorted) $\bbeta$.
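Concretely, this incremental scheme can be sketched in Python (a minimal illustration with $C=1$, so that the optimal objective value equals $\lambda$; function and variable names are ours, not from the paper):

```python
import numpy as np

def k_star_nn_weights(beta):
    """Exact solution of (P2) for a sorted (ascending) distance vector beta.

    Returns the simplex weight vector alpha and the optimal lambda, using
    O(1) incremental updates of sum(beta_i) and sum(beta_i^2) per iteration,
    for a total of O(k*) work after sorting.
    """
    n = len(beta)
    s1 = 0.0  # running sum of beta_1, ..., beta_k
    s2 = 0.0  # running sum of beta_1^2, ..., beta_k^2
    k = 0
    lam = beta[0] + 1.0  # lambda_0, chosen so the loop runs at least once
    while k < n and lam > beta[k]:  # beta[k] plays the role of beta_{k+1}
        k += 1
        s1 += beta[k - 1]
        s2 += beta[k - 1] ** 2
        lam = (s1 + np.sqrt(k + s1 ** 2 - k * s2)) / k
    # alpha_i proportional to (lambda - beta_i)_+, normalized to sum to 1
    w = np.maximum(lam - beta, 0.0)
    return w / w.sum(), lam
```

With all distances equal to zero the sketch reproduces uniform weights and $\lambda = 1/\sqrt{n}$, matching the Hoeffding-type special case discussed below; the prediction itself is then `alpha @ y`.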
Special Cases
-------------
The aim of this section is to discuss two special cases in which the bound of our algorithm coincides with familiar bounds in the literature, thus justifying the relaxed objective of $\mathbf{(P2)}$. We present here only a high-level description of both cases, and defer the formal details to the full version of the paper.
The solution of $\mathbf{(P2)}$ is a high probability upper-bound on the true prediction error $ \left| \sum_{i=1}^n \alpha_i y_i - f(x_0) \right| $. Two interesting cases to consider in this context are $\beta_i = 0$ for all $i \in [n] $, and $\beta_1 = \ldots = \beta_n = \beta > 0$. In the first case, our algorithm includes all labels in the computation of $\lambda$, thus yielding a confidence bound of $2 C \lambda = 2 b \sqrt{ (2 / n) \log \left( 2 / \delta \right) }$ for the prediction error (with probability $1-\delta$). Not surprisingly, this bound coincides with the standard Hoeffding bound for the task of estimating the mean value of a given distribution based on noisy observations drawn from this distribution. Since the latter is known to be tight (in general), so is the confidence bound obtained by our algorithm. In the second case as well, our algorithm will use all data points to arrive at the confidence bound $2 C \lambda = 2 L d + 2 b \sqrt{ (2 / n) \log \left( 2 / \delta \right) }$, where we denote $d(x_1,x_0)= \ldots = d(x_n,x_0) = d$. The second term is again tight by concentration arguments, whereas the first term cannot be improved due to the Lipschitz property of $f(\cdot)$, thus yielding an overall tight confidence bound for our prediction in this case.
Experimental Results {#sec:Experiments}
====================
The following experiments demonstrate the effectiveness of the proposed algorithm on several datasets. We start by presenting the baselines used for the comparison.
Baselines
---------
#### The standard $\mathbf{k}$-NN:
Given $k$, the standard ${k}$-NN finds the $k$ nearest data points to $x_0$ (assume without loss of generality that these data points are $x_1,\ldots,x_k$), and then estimates $\hat{f}(x_0) = \frac{1}{k} \sum_{i=1}^k y_i $.
#### The Nadaraya-Watson estimator:
This estimator assigns the data points weights proportional to a given similarity kernel $K:\reals^d \times \reals^d \mapsto \reals_{+}$. That is, $$\begin{aligned}
\hat{f}(x_0) =\frac{ \sum_{i=1}^n K(x_i,x_0) y_i}{\sum_{i=1}^n K(x_i,x_0)} .\end{aligned}$$ Popular choices of kernel functions include the Gaussian kernel $K(x_i,x_j) = \frac{1}{\sigma} e^{-\frac{\|x_i-x_j \|^2}{2\sigma^2}}$; Epanechnikov Kernel $K(x_i,x_j) = \frac{3}{4} \left(1-\frac{\|x_i-x_j \|^2}{\sigma^2}\right)\1_{\left\{\|x_i-x_j \|\leq \sigma \right\}}$; and the triangular kernel $K(x_i,x_j) = \left(1-\frac{\|x_i-x_j \|}{\sigma}\right)\1_{\left\{\|x_i-x_j \|\leq \sigma \right\}}$. Due to lack of space, we present here only the best performing kernel function among the three listed above (on the tested datasets), which is the Gaussian kernel.
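For concreteness, both baselines can be sketched in a few lines of Python (a minimal illustration; function names are ours):

```python
import numpy as np

def knn_estimate(X, y, x0, k):
    """Standard k-NN: average the labels of the k points nearest to x0."""
    idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
    return float(np.mean(y[idx]))

def nadaraya_watson(X, y, x0, sigma):
    """Nadaraya-Watson estimate with a Gaussian kernel.

    The 1/sigma normalization of the kernel cancels in the ratio,
    so only the exponential factor is needed.
    """
    d2 = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.dot(w, y) / np.sum(w))
```

As $\sigma\to 0$ the Nadaraya-Watson estimate approaches the $1$-NN prediction, and as $\sigma\to\infty$ it approaches the global label mean, mirroring the role of $k$ in the standard $k$-NN.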
Datasets
--------
In our experiments we use 8 real-world datasets, all available in the UCI repository (<https://archive.ics.uci.edu/ml/>). In each dataset, the feature vectors consist of real values only, whereas the labels take different forms: in the first 6 datasets (QSAR, Diabetes, PopFailures, Sonar, Ionosphere, and Fertility), the labels are binary $y_i \in \{0,1\}$. In the last two datasets (Slump and Yacht), the labels are real-valued. Note that our algorithm (as well as the two baselines) applies to all datasets without requiring any adjustment. The number of samples $n$ and the dimension of each sample $d$ are given in Table \[t1\] for each dataset.
Experimental Setup
------------------
We randomly divide each dataset into two halves (one used for validation and the other for test). On the first half (the validation set), we run the two baselines and our algorithm with different values of $k$, $\sigma$ and $L/C$ (respectively), using $5$-fold cross validation. Specifically, we consider values of $k$ in $\{1,2,\ldots,10\}$ and values of $\sigma$ and $L/C$ in $\{ 0.001 , 0.005 , 0.01 , 0.05 , 0.1 , 0.5 , 1 , 5 , 10\}$. The best values of $k$, $\sigma$ and $L/C$ are then used on the second half of the dataset (the test set) to obtain the results presented in Table \[t1\]. For our algorithm, the range of $k$ that corresponds to the selection of $L/C$ is also given. Notice that we report the average absolute error of the prediction, since our theoretical guarantees bound the absolute prediction error.
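The tuning protocol above can be sketched as follows (a simplified illustration; the function names and the constant-predictor example are ours):

```python
import numpy as np

def five_fold_cv_error(X, y, predict, param, seed=0):
    """Average absolute validation error of a predictor under 5-fold CV.

    `predict(X_train, y_train, x0, param)` is any point-wise estimator;
    the parameter value with the smallest returned error would be kept.
    """
    n = len(y)
    folds = np.array_split(np.random.default_rng(seed).permutation(n), 5)
    errs = []
    for fold in folds:
        mask = np.ones(n, dtype=bool)
        mask[fold] = False  # hold out this fold for validation
        for j in fold:
            errs.append(abs(predict(X[mask], y[mask], X[j], param) - y[j]))
    return float(np.mean(errs))
```

In the experiments this routine would be called once per candidate value of $k$, $\sigma$, or $L/C$ on the validation half, and the minimizer re-evaluated on the held-out test half.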
Results and Discussion
----------------------
As evidenced by Table \[t1\], our algorithm outperforms the baselines on $7$ (out of $8$) datasets, and on $3$ of these the improvement is significant. It can also be seen that whereas the standard $k$-NN is restricted to choose one value of $k$ per dataset, our algorithm fully utilizes the ability to choose $k$ adaptively per data point. This validates our theoretical findings, and highlights the advantage of adaptive selection of $k$.
Conclusions and Future Directions {#sec:Conclusion}
=================================
We have introduced a principled approach to locally weighted optimal estimation. By explicitly formulating the bias-variance tradeoff, we defined the notion of optimal weights and optimal number of neighbors per decision point, and consequently devised an efficient method to extract them. Note that our approach could be extended to handle multiclass classification, as well as scenarios in which predictions of different data points correlate (and we have an estimate of their correlations). Due to lack of space we leave these extensions to the full version of the paper.
A shortcoming of current non-parametric methods, including our $k^*$-NN algorithm, is their limited geometrical perspective. Concretely, all of these methods only consider the distances between the decision point and dataset points, i.e., $\{ d(x_0,x_i)\}_{i=1}^n$, and *ignore* the geometrical relation between the dataset points, i.e., $\{ d(x_i,x_j)\}_{i,j=1}^n$. We believe that our approach opens an avenue for taking advantage of this additional geometrical information, which may have a great effect on the quality of our predictions.
Hoeffding’s Inequality
======================
Let $\epsilon_1,\ldots,\epsilon_n$ be independent random variables such that $\epsilon_i \in [L_i,U_i]$ and $\mathbb{E} \left[ \epsilon_i \right] = \mu_i$. Then, it holds that $$\mathbb{P} \left( \left| \sum_{i=1}^n \epsilon_i - \sum_{i=1}^n \mu_i \right| \geq \varepsilon \right) \leq 2e^{-\frac{2 \varepsilon^2}{\sum_{i=1}^n (U_i - L_i)^2} } .$$
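As a quick numerical sanity check (ours, not part of the paper), the bound can be verified by simulation for uniform variables:

```python
import numpy as np

# Monte Carlo check of Hoeffding's inequality for epsilon_i ~ Uniform[0,1]:
# here L_i = 0, U_i = 1, mu_i = 1/2, so the bound reads 2*exp(-2*eps^2/n).
rng = np.random.default_rng(1)
n, trials, eps = 50, 20000, 10.0
samples = rng.uniform(0.0, 1.0, size=(trials, n))
deviation = np.abs(samples.sum(axis=1) - 0.5 * n)
empirical_tail = float(np.mean(deviation >= eps))
hoeffding_bound = 2.0 * np.exp(-2.0 * eps ** 2 / n)  # = 2*exp(-4), about 0.037
assert empirical_tail <= hoeffding_bound
```

For these parameters the empirical tail probability is far below the bound, as expected for a deviation of several standard deviations of the sum.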
Proof of Lemma \[lem:lambda\_kOpt\] {#sec:Proof_lem:lambda_kOpt}
===================================
First note that for $k=k^*$ the lemma holds immediately by Theorem \[thm:Main\]. In what follows, we establish the lemma for $k<k^*$. Thus, set $k$, let $\Delta_n^{(k)} = \{ \balpha\in \Delta_n : \alpha_i = 0,\; \forall i>k\}$, and consider the following optimization problem: $$\begin{aligned}
\min_{\balpha\in \Delta_n^{(k)} }\left( \| \balpha \|_2 + \balpha^\top \bbeta\right)~ \qquad \mathbf{(P2_k)}.\end{aligned}$$ Similarly to the proof of Theorem \[thm:Main\] and Corollary \[cor:Main\], it can be shown that there exists $\bar{k}\leq k$ such that the optimal solution of $\mathbf{(P2_k)}$ is of the form $(\alpha_1, \ldots,\alpha_{\bar{k}},0\ldots,0)$, where $\alpha_i>0, \; \forall i\leq \bar{k}$. Moreover, given $\bar{k}$ it can be shown that the value of $\mathbf{(P2_k)}$ at the optimum equals $\lambda$, where $$\begin{aligned}
\lambda = \frac{1}{\bar{k} }\left( \sum_{i=1}^{\bar{k}} \beta_i + \sqrt{ \bar{k} + \left( \sum_{i=1}^{\bar{k}} \beta_i \right)^2 - \bar{k} \sum_{i=1}^{\bar{k}} \beta_i^2 } \right) ~,\end{aligned}$$ which is of the form calculated in Algorithm \[algorithm:KstarNN\]. The above implies that showing $\bar{k}=k$ concludes the proof. Now, assume by contradiction that $\bar{k}<k$, then it is immediate to show that the resulting solution of $\mathbf{(P2_k)}$ also satisfies the KKT conditions of the original problem $\mathbf{(P2)}$, and is therefore an optimal solution to $\mathbf{(P2)}$. However, this stands in contradiction to the fact that $\bar{k}< k^*$, and thus it must hold that $\bar{k}=k$, which establishes the lemma.
[^1]: The Voleon Group. Email: `[email protected]`.
[^2]: Department of Computer Science, ETH Zürich. Email: `[email protected]`.
[^3]: Note that our analysis holds for both setups of classification/regression. For brevity we use a *classification* task terminology, relating to the $y_i$’s as *labels*. Our analysis extends directly to the regression setup.
[^4]: Note that $\mathbf{(P2)}$ is not strongly convex, hence the polynomial dependence on $1/\epsilon$ (rather than $\log(1/\epsilon)$) for first-order methods. Other methods, such as the Ellipsoid method, depend logarithmically on $1/\epsilon$, but suffer a worse dependence on $n$ compared to first-order methods.
---
author:
- |
Dmitri Antonov [^1]\
[*INFN-Sezione di Pisa, Universitá degli studi di Pisa,*]{}\
[*Dipartimento di Fisica, Via Buonarroti, 2 - Ed. B - 56127 Pisa, Italy*]{}\
and\
[*Institute of Theoretical and Experimental Physics,*]{}\
[*B. Cheremushkinskaya 25, RU-117 218 Moscow, Russia*]{}
title: ' **String representation of the SU(N)-inspired dual Abelian-Higgs–type theory with the $\Theta$-term**'
---
**[Abstract]{}**
String representation of the $[U(1)]^{N-1}$ gauge-invariant dual Abelian-Higgs–type theory, which is relevant to the $SU(N)$-QCD with the $\Theta$-term and provides confinement of quarks, is derived. The $N$-dependence of the Higgs vacuum expectation value is found, at which the tension of the string joining quarks becomes $N$-independent, similarly to the real QCD. Contrary to that, the inverse coupling constant of the rigidity term of this string always behaves approximately as $1/N$. A long-range Aharonov-Bohm–type interaction of a dyon (i.e., a quark which acquired a magnetic charge due to the $\Theta$-term) with a closed electric string becomes nontrivial at $\Theta\ne N\pi\times{\,}{\rm integer}$. On the contrary, at these critical values of $\Theta$, the scattering of dyons over strings is absent.
PACS: 11.27.+d; 11.15.Tk; 14.80.Hv
Keywords: confinement, SU(N) gauge field theory, effective action, duality transformation, string model, Theta parameter, Aharonov-Bohm effect
Introduction. The model.
========================
During the last years, the method of Abelian projections [@th] has been extensively used both analytically and numerically to describe confinement in QCD by the monopole mechanism (for recent reviews see [@digiacomo] and refs. therein). In particular, several attempts have been made to address the case of an arbitrary number of colors [@suN; @suNN]. When using the method of Abelian projections, it is reasonable to base the respective 3D [*continuum*]{} models on the assumption that monopoles form a dilute plasma (see e.g. ref. [@dw] for the $SU(2)$-case). This is because such a monopole configuration is an approximate stationary point of the action of the $SU(N)$ 3D Georgi-Glashow model, and the confining mechanism of the latter is supposed to be similar to that of Abelian-projected theories [@th]. In the present letter, we shall work in 4D and explore another $SU(N)$-inspired theory describing Abelian-projected monopoles, which provides confinement of quarks. It is based on the alternative assumption [@tHM] that monopoles form a magnetic Higgs condensate, rather than a plasma. This assumption looks more appropriate in 4D, where Abelian-projected monopoles are known to proliferate [@pb], and therefore cannot be treated in the approximation of a dilute plasma. The model we are going to deal with is a straightforward generalization of the respective $SU(3)$-one [@maedan], whose string representation has been explored in refs. [@su3; @theta] (see also [@moresu3], where the collective effects of vortex loops in this model have been studied). Similarly to ref. [@theta], we shall consider the general case of a theory extended by the $\Theta$-term, owing to which quarks acquire a nonvanishing magnetic charge (i.e., become dyons) and scatter over the dual electric Abrikosov-Nielsen-Olesen strings [@ano].
Note that the simplest model of this type, corresponding to the Abelian-projected $SU(2)$-QCD with the $\Theta$-term, was first considered in ref. [@emil]. As one of the results of the present letter, we shall obtain the critical values of $\Theta$ in the $SU(N)$-case, at which the long-range topological interaction of dual strings with dyons disappears. These values in particular reproduce the respective $SU(2)$- and $SU(3)$-ones obtained in the above-mentioned papers.
The partition function of the effective $[U(1)]^{N-1}$ gauge-invariant Abelian-projected theory we are going to explore [^2] reads
$${\cal Z}_\alpha=\int\left(\prod\limits_{i}^{} \left|\Phi_i\right| {\cal D}\left|\Phi_i\right|
{\cal D}\theta_i\right) {\cal D}{\bf B}_\mu
\delta\left(\sum\limits_{i}^{}
\theta_i\right)\exp\Biggl\{-\int d^4x\Biggl[\frac14\left({\bf F}_{\mu\nu}+{\bf F}_{\mu\nu}^{(\alpha)}\right)^2+$$
$$\label{et6}
+\sum\limits_{i}^{}\left[\left|\left(\partial_\mu-
ig_m{\bf q}_i{\bf B}_\mu\right)\Phi_i\right|^2+
\lambda\left(|\Phi_i|^2-\eta^2\right)^2\right]-\frac{i\Theta g_m^2}{16\pi^2}
\left({\bf F}_{\mu\nu}+{\bf F}_{\mu\nu}^{(\alpha)}\right)
\left(\tilde{\bf F}_{\mu\nu}+\tilde{\bf F}_{\mu\nu}^{(\alpha)}\right)
\Biggr]\Biggr\}.$$
Here, the index $i$ runs from 1 to the number of positive roots ${\bf q}_i$’s of the $SU(N)$-group, that is $N(N-1)/2$. Next, $g_m$ is the magnetic coupling constant related to the electric one, $g$, by means of the topological quantization condition $g_mg=4\pi n$. In what follows, we shall for simplicity restrict ourselves to the monopoles possessing the minimal charge only, i.e., set $n=1$, although the generalization to an arbitrary $n$ is straightforward. Note that the origin of root vectors in eq. (\[et6\]) is the fact that monopole charges are distributed along them. Further, $\Phi_i=\left|\Phi_i\right|{\rm e}^{i\theta_i}$ are the dual Higgs fields, which describe the condensates of monopoles, and ${\bf F}_{\mu\nu}=\partial_\mu{\bf B}_\nu-\partial_\nu{\bf B}_\mu$ is the field-strength tensor of the $(N-1)$-component “magnetic” potential ${\bf B}_\mu$. The latter is dual to the “electric” potential, whose components are diagonal gluons. Since the $SU(N)$-group is special, the phases $\theta_i$’s of the dual Higgs fields are related to each other by the constraint $\sum\limits_{i}^{}\theta_i=0$, which is imposed by introducing the corresponding $\delta$-function into the r.h.s. of eq. (\[et6\]). Next, the index $\alpha$ runs from 1 to $N$ and denotes a certain quark color. Finally, $\tilde {\cal O}_{\mu\nu}\equiv\frac12\varepsilon_{\mu\nu\lambda\rho}
{\cal O}_{\lambda\rho}$, and ${\bf F}_{\mu\nu}^{(\alpha)}$ is the field-strength tensor of a test quark of the color $\alpha$, which moves along a certain contour $C$. This tensor obeys the equation $\partial_\mu\tilde {\bf F}_{\mu\nu}^{(\alpha)}=g{\bf m}_\alpha j_\nu$, where $j_\mu(x)=\oint\limits_{C}^{}dx_\mu(\tau)\delta(x-x(\tau))$, and ${\bf m}_\alpha$ is a weight vector of the group $SU(N)$. One thus has ${\bf F}_{\mu\nu}^{(\alpha)}=g{\bf m}_\alpha\tilde{\cal F}_{\mu\nu}$, where ${\cal F}_{\mu\nu}$ can be chosen e.g. in the form ${\cal F}_{\mu\nu}=-\Sigma_{\mu\nu}$. Here, $\Sigma_{\mu\nu}(x)=\int\limits_{\Sigma}^{}d\sigma_{\mu\nu}
(x(\xi))\delta(x-x(\xi))$ is the vorticity tensor current associated with the world sheet $\Sigma$ of the open electric string, bounded by the contour $C$ [^3]. From now on, we shall omit the normalization constant in front of all the functional integrals implying for every color $\alpha$ the normalization condition ${\cal Z}_\alpha\left[C=0\right]=1$.
Note that the $\Theta$-term can be rewritten as
$$\label{ch}
-\frac{i\Theta g_m^2}{16\pi^2}
\left({\bf F}_{\mu\nu}+{\bf F}_{\mu\nu}^{(\alpha)}\right)
\left(\tilde{\bf F}_{\mu\nu}+\tilde{\bf F}_{\mu\nu}^{(\alpha)}\right)=\frac{i\Theta g_m}{\pi}{\bf m}_\alpha
\int d^4x{\bf B}_\mu j_\mu,$$
which means that by virtue of this term quarks start interacting with the magnetic gauge field ${\bf B}_\mu$ [@witten]. This is only possible provided they acquire some magnetic charge, i.e., become dyons. According to eq. (\[ch\]), this charge is indeed nonvanishing and equals $\Theta g_m/\pi$.
Expanding for a moment $|\Phi_i|$ around the Higgs v.e.v. $\eta$, one gets the mass of the dual vector boson, $m=g_m\eta\sqrt{N}$. In what follows, we shall work in the London limit of the model (\[et6\]), which admits the construction of a string representation. This is the limit when $m$ is much smaller than the mass of any of the Higgs fields, $m_H=\eta\sqrt{2\lambda}$. Since we would like the model under study to be consistent with QCD, we must have $g\sim\sqrt{\bar\lambda/N}$, where $\bar\lambda$ is the ’t Hooft coupling constant, which remains finite in the large-$N$ limit. Therefore, in the London limit, the Higgs coupling $\lambda$ should grow with $N$ faster than ${\cal O}\left(N^2\right)$, namely it should obey the inequality $\lambda\gg 8\pi^2N^2/\bar\lambda$.
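For completeness, the last inequality can be traced in one line, combining the London-limit condition $m_H\gg m$ with $g_m=4\pi/g$ and $g^2\sim\bar\lambda/N$:

$$\eta\sqrt{2\lambda}\gg g_m\eta\sqrt{N}
\quad\Longrightarrow\quad
2\lambda\gg g_m^2N=\frac{16\pi^2N}{g^2}\sim\frac{16\pi^2N^2}{\bar\lambda}
\quad\Longrightarrow\quad
\lambda\gg\frac{8\pi^2N^2}{\bar\lambda}.$$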
Integrating $|\Phi_i|$’s out, we arrive at the following expression for the partition function (\[et6\]) in the London limit:
$${\cal Z}_\alpha=\int\left(\prod\limits_{i}^{}
{\cal D}\theta_i^{\rm sing}{\cal D}\theta_i^{\rm reg}\right) {\cal D}{\bf B}_\mu{\cal D}k
\delta\left(\sum\limits_{i}^{}
\theta_i^{\rm sing}\right)\exp\Biggl\{-\int d^4x\Biggl[\frac14\left({\bf F}_{\mu\nu}+{\bf F}_{\mu\nu}^{(\alpha)}\right)^2+$$
$$\label{et7}
+\eta^2\sum\limits_{i}^{}\left(\partial_\mu\theta_i-
g_m{\bf q}_i{\bf B}_\mu\right)^2-ik\sum\limits_{i}^{}\theta_i^{\rm reg}-\frac{i\Theta g_m^2}{16\pi^2}
\left({\bf F}_{\mu\nu}+{\bf F}_{\mu\nu}^{(\alpha)}\right)
\left(\tilde{\bf F}_{\mu\nu}+\tilde{\bf F}_{\mu\nu}^{(\alpha)}\right)
\Biggr]\Biggr\}.$$
Here, we have decomposed the total phases of the dual Higgs fields into multivalued and single-valued (also often called singular and regular, respectively) parts, $\theta_i=\theta_i^{\rm sing}+\theta_i^{\rm reg}$, and imposed the constraint of vanishing of the sum of the regular parts by introducing the integration over the Lagrange multiplier $k(x)$. The fields $\theta_i^{\rm sing}$’s describing a certain configuration of closed dual strings are related to the world sheets $\Sigma_i$’s of these strings by means of the equation
$$\label{suz3}
\varepsilon_{\mu\nu\lambda\rho}\partial_\lambda\partial_\rho
\theta_i^{\rm sing}(x)=2\pi\Sigma_{\mu\nu}^i(x)\equiv
2\pi\int\limits_{\Sigma_i}^{}d\sigma_{\mu\nu}\left(x^{(i)}(\xi)\right)
\delta\left(x-x^{(i)}(\xi)\right).$$
This equation is the covariant formulation of the 4D analogue of the Stokes’ theorem for the gradient of the field $\theta_i$, written in the local form. In eq. (\[suz3\]), $x^{(i)}(\xi)\equiv x_\mu^{(i)}(\xi)$ is a vector, which parametrizes the world sheet $\Sigma_i$ with $\xi=(\xi^1, \xi^2)$ standing for the 2D coordinate. As far as the regular parts of the phases, $\theta_i^{\rm reg}$’s, are concerned, those describe single-valued fluctuations around the string configuration described by $\theta_i^{\rm sing}$’s. Note that owing to the one-to-one correspondence between $\theta_i^{\rm sing}$’s and $\Sigma_i$’s, established by eq. (\[suz3\]), the integration over $\theta_i^{\rm sing}$’s is implied in the sense of a certain prescription of the summation over string world sheets. For the $SU(3)$-inspired model, one of the possible concrete forms of such a prescription, corresponding to the summation over the grand canonical ensemble of virtual pairs of strings with opposite winding numbers, has been considered in ref. [@moresu3]. It is also worth noting that by virtue of eq. (\[suz3\]) it is possible to demonstrate that the integration measure ${\cal D}\theta_i$ becomes factorized into the product ${\cal D}\theta_i^{\rm sing}{\cal D}\theta_i^{\rm reg}$.
String representation.
======================
Let us now construct the string representation of the model (\[et7\]). First, similarly to the $SU(3)$-case [@su3; @theta], one can show that due to the equality $\sum\limits_{i}^{}{\bf q}_i=0$, the integration over $k$ yields only an inessential constant factor, and we get
$$\int\left(\prod\limits_{i}^{}
{\cal D}\theta_i^{\rm sing}
{\cal D}\theta_i^{\rm reg}\right) {\cal D}k\delta\left(\sum\limits_{i}^{}
\theta_i^{\rm sing}\right)
\exp\Biggl\{-\int d^4x\Biggl[\eta^2
\sum\limits_{i}^{}\left(\partial_\mu\theta_i-
g_m{\bf q}_i{\bf B}_\mu\right)^2-ik\sum\limits_{i}^{}\theta_i^{\rm reg}\Biggr]\Biggr\}=$$
$$=\int\left(\prod\limits_{i}^{}{\cal D}x^{(i)}(\xi){\cal D}h_{\mu\nu}^i\right)
\delta\left(\sum\limits_{i}^{}\Sigma_{\mu\nu}^i\right)
\exp\Biggl\{-\int d^4x\Biggl[\frac{1}{24\eta^2}
\left(H_{\mu\nu\lambda}^i\right)^2-i\pi h_{\mu\nu}^i\Sigma_{\mu\nu}^i+
ig_m{\bf q}_i
{\bf B}_\mu\partial_\nu \tilde h_{\mu\nu}^i\Biggr]\Biggr\}.$$ Here, the Kalb-Ramond field $h_{\mu\nu}^i$ is dual to $\theta_i^{\rm reg}$, and $H_{\mu\nu\lambda}^i=\partial_\mu h_{\nu\lambda}^i+
\partial_\lambda h_{\mu\nu}^i+\partial_\nu h_{\lambda\mu}^i$ stands for the strength tensor of this field. We have also used the relation (\[suz3\]) and referred the Jacobians [@polikarp], emerging in the course of the change of variables $\theta_i^{\rm sing}\to x^{(i)}$, to the integration measures ${\cal D}x^{(i)}(\xi)$’s.
The action of the dual-gauge-field sector of the model can then be written as follows:
$$\int d^4x\Biggl[
\frac14{\bf F}_{\mu\nu}^2+\frac14\left({\bf F}_{\mu\nu}^{(\alpha)}\right)^2+
{\bf B}_\mu\partial_\nu\left(
ig_m{\bf q}_i\tilde h_{\mu\nu}^i-
g{\bf m}_\alpha\tilde\Sigma_{\mu\nu}-
\frac{i\Theta g_m^2}{4\pi^2}\tilde{\bf F}_{\mu\nu}^{(\alpha)}\right)\Biggr].$$ The ${\bf B}_\mu$-fields can then be integrated out as Lagrange multipliers by passing to the new fields $B_\mu^i={\bf q}_i{\bf B}_\mu$, using the formula [@group] [^4] $\left(B_\mu^i\right)^2=\frac{N}{2}{\bf B}_\mu^2$, and introducing the numbers $s_i^{(\alpha)}$’s according to the definition ${\bf m}_\alpha={\bf q}_is_i^{(\alpha)}$. The resulting partition function reads as follows:
$${\cal Z}_\alpha=\int\left(\prod\limits_{i}^{}{\cal D}x^{(i)}(\xi){\cal D}h_{\mu\nu}^i\right)
\delta\left(\sum\limits_{i}^{}\Sigma_{\mu\nu}^i\right)
\exp\Biggl\{-\int d^4x\Biggl[\frac{1}{24\eta^2}
\left(H_{\mu\nu\lambda}^i\right)^2-i\pi h_{\mu\nu}^i\Sigma_{\mu\nu}^i+$$
$$\label{zA}
+\frac{N}{8}\left(
g_mh_{\mu\nu}^i+igs_i^{(\alpha)}
\Sigma_{\mu\nu}-\frac{\Theta g_m}{\pi} s_i^{(\alpha)}
\tilde{\cal F}_{\mu\nu}\right)^2
+\frac14\left({\bf F}_{\mu\nu}^{(\alpha)}
\right)^2\Biggr]\Biggr\}.$$
To proceed with the analysis of this expression, we obviously need to know the possible values of $s_i^{(\alpha)}$’s, as well as $\left(s_i^{(\alpha)}\right)^2$ for a fixed $\alpha$. First of all, it is straightforward to see that for a given $\alpha$, only $(N-1)$ numbers $s_i^{(\alpha)}$’s are different from zero. This is simply because only $(N-1)$ ${\bf q}_i$’s out of $N(N-1)/2$ positive roots are such that ${\bf m}_\alpha{\bf q}_i=1/2$, while the others are orthogonal to ${\bf m}_\alpha$. Next, by noting that every root vector can be represented as a difference of two weight vectors and by using the normalization condition ${\bf m}_\alpha{\bf m}_\beta=\left(\delta_{\alpha\beta}-N^{-1}\right)/2$, these nonvanishing $s_i^{(\alpha)}$’s can be found to be $\pm N^{-1}$ (with $\sum\limits_{i}^{}s_i^{(\alpha)}=0$), so that $\left(s_i^{(\alpha)}\right)^2=(N-1)/N^2$. Owing to this result, the singular term $\frac14\left({\bf F}_{\mu\nu}^{(\alpha)}\right)^2$ in eq. (\[zA\]) cancels out, and we get the following intermediate expression for the partition function:
$${\cal Z}_\alpha=\exp\left[-\frac{N-1}{8N}\left(\frac{\Theta g_m}{\pi}\right)^2\int d^4x
{\cal F}_{\mu\nu}^2
-\frac{2i\Theta(N-1)}{N}\hat L(\Sigma, C)\right]\times$$
$$\label{result}
\times\int\left(\prod\limits_{i}^{}{\cal D}x^{(i)}(\xi){\cal D}h_{\mu\nu}^i\right)
\delta\left(\sum\limits_{i}^{}\Sigma_{\mu\nu}^i\right)\exp\Biggl\{-\int d^4x\Biggl[\frac{1}{24\eta^2}
\left(H_{\mu\nu\lambda}^i\right)^2+\frac{Ng_m^2}{8}\left(h_{\mu\nu}^i\right)^2
-i\pi h_{\mu\nu}^i\Sigma_{\mu\nu}^{i{\,}(\alpha)}\Biggr]\Biggr\}.$$
Here, $\hat L(\Sigma, C)\equiv\int d^4xd^4y\tilde\Sigma_{\mu\nu}(x)j_\nu(y)\partial_\mu^xD_0(x-y)$ is the (formal expression for the) 4D Gauss’ linking number of the surface $\Sigma$ with its boundary $C$, which eventually becomes cancelled from the final expression for ${\cal Z}_\alpha$, and $\Sigma_{\mu\nu}^{i{\,}(\alpha)}\equiv\Sigma_{\mu\nu}^i-
Ns_i^{(\alpha)}\Sigma_{\mu\nu}-
\frac{i\Theta Ng_m^2}{4\pi^2}s_i^{(\alpha)}\tilde{\cal F}_{\mu\nu}$, so that $\partial_\mu\Sigma_{\mu\nu}^{i{\,}(\alpha)}=Ns_i^{(\alpha)}j_\nu$.
Further integration over the Kalb-Ramond fields is straightforward and yields
$$\int\left(\prod\limits_{i}^{}{\cal D}h_{\mu\nu}^i\right)
\exp\Biggl\{-\int d^4x\Biggl[\frac{1}{24\eta^2}
\left(H_{\mu\nu\lambda}^i\right)^2+\frac{Ng_m^2}{8}\left(h_{\mu\nu}^i\right)^2
-i\pi h_{\mu\nu}^i\Sigma_{\mu\nu}^{i{\,}(\alpha)}\Biggr]\Biggr\}=$$
$$=\exp\left\{-2\pi^2\int d^4xd^4yD_m(x-y)\left[\eta^2\Sigma_{\mu\nu}^{i{\,}(\alpha)}(x)\Sigma_{\mu\nu}^{i{\,}(\alpha)}(y)
+\frac{2}{g_m^2}\frac{N-1}{N}j_\mu(x)j_\mu(y)\right]\right\},$$ where $D_m(x)=mK_1(m|x|)/(4\pi^2|x|)$ is the massive propagator with $K_1$ standing for the modified Bessel function. Simplifying the integral $\int d^4xd^4y\Sigma_{\mu\nu}^{i{\,}(\alpha)}(x)D_m(x-y)\Sigma_{\mu\nu}^{i{\,}(\alpha)}(y)$ (see ref. [@theta] for the analogous transformations in the $SU(3)$-case) we eventually arrive at the following final expression for the partition function:
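As a side check (an illustration, not used in the derivation), one can verify numerically that the massive propagator $D_m(x)=mK_1(m|x|)/(4\pi^2|x|)$ reduces to the massless one $D_0(x)=1/(4\pi^2x^2)$ in the limit $m\to0$, since $zK_1(z)\to1$. The quadrature parameters below are ad hoc:

```python
# Illustration only: the massive propagator
#   D_m(x) = m K_1(m|x|) / (4 pi^2 |x|)
# reduces to the massless D_0(x) = 1/(4 pi^2 x^2) as m -> 0, since z K_1(z) -> 1.
# K_1 is evaluated from the standard integral representation
#   K_1(z) = int_0^inf exp(-z cosh t) cosh t dt   (crude trapezoidal quadrature).
from math import cosh, exp, pi

def K1(z, tmax=30.0, steps=100000):
    h = tmax / steps
    total = 0.5 * (exp(-z) + exp(-z * cosh(tmax)) * cosh(tmax))
    for k in range(1, steps):
        ct = cosh(k * h)
        total += exp(-z * ct) * ct
    return h * total

def D_massive(m, x):
    return m * K1(m * x) / (4.0 * pi**2 * x)

def D_massless(x):
    return 1.0 / (4.0 * pi**2 * x**2)

assert abs(K1(1.0) - 0.6019072) < 1e-4            # tabulated value of K_1(1)
assert abs(D_massive(0.01, 1.0) / D_massless(1.0) - 1.0) < 1e-2
```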
$${\cal Z}_\alpha=
\exp\left\{-\frac{N-1}{4N}\left[g^2+\left(\frac{\Theta g_m}{\pi}\right)^2\right]
\int d^4x d^4y j_\mu(x)D_m(x-y)j_\mu(y)\right\}
\int\left(\prod\limits_{i}^{}{\cal D}x^{(i)}(\xi)\right)
\times$$
$$\times\delta\left(\sum\limits_{i}^{}\Sigma_{\mu\nu}^i\right)
\exp\Biggl[
-2(\pi\eta)^2\int d^4x d^4y\hat\Sigma_{\mu\nu}^i(x)
D_m(x-y)\hat\Sigma_{\mu\nu}^i(y)-2i\Theta s_i^{(\alpha)}\hat L\left(\Sigma_i,C\right)+$$
$$\label{main}
+2i\Theta\int d^4xd^4y\left(\frac{N-1}{N}\tilde\Sigma_{\mu\nu}(x)-s_i^{(\alpha)}\tilde\Sigma_{\mu\nu}^i(x)\right)j_\mu(y)
\partial_\nu^xD_m(x-y)\Biggr],$$
where $\hat\Sigma_{\mu\nu}^i\equiv\Sigma_{\mu\nu}^i-Ns_i^{(\alpha)}\Sigma_{\mu\nu}$. This formula is the main result of the present letter. Note that for every color $\alpha$, it is straightforward to integrate out one of the world sheets $\Sigma_i$’s by resolving the constraint imposed by the $\delta$-function.
The first exponent on the r.h.s. of eq. (\[main\]) represents the short-ranged interaction of quarks via dual vector bosons. Noting that for any $\alpha$, ${\bf m}_\alpha^2=(N-1)/(2N)$, we immediately read from this term the total charge of the quark, $\sqrt{g^2+(\Theta g_m/\pi)^2}$. The magnetic part of this charge coincides with the one following from eq. (\[ch\]). Further, the first term in the second exponent on the r.h.s. of eq. (\[main\]) is the short-ranged (self-)interaction of closed world sheets $\Sigma_i$’s and an open one $\Sigma$. In particular, by virtue of the general formulae obtained in ref. [@mpla], one can get from the $\Sigma\times\Sigma$-interaction the following values of the string tension and of the inverse coupling constant of the rigidity term, corresponding to the confining-string world sheet $\Sigma$:
$$\sigma=2\pi(N-1)\eta^2\ln\frac{m_H}{m},~~\alpha^{-1}=-\frac{\pi(N-1)}{4g_m^2N}={\cal O}\left(\frac{1}{N}\right).$$ Here, in the derivation of $\sigma$, we have, in the standard way [@ano], taken the ratio $m/m_H$ as the characteristic small dimensionless quantity of the model and adopted the logarithmic accuracy, i.e., assumed that not only $\frac{m_H}{m}\gg1$, but also $\ln\frac{m_H}{m}\gg1$. While the $1/N$ behavior of $\alpha^{-1}$ is fixed by the requirement that $g_m^2\sim N$, the $N$-dependence of $\sigma$ is determined by that of $\eta$. In QCD, to the leading order in the parameter of the strong-coupling expansion, $\beta=2N/g^2$, the string tension for the rectangular loop is known to be $N$-independent: $\sigma_{\rm QCD}=\frac{1}{a^2}\ln\frac{2N^2}{\beta}=\frac{1}{a^2}\ln\bar\lambda$, where $a$ is the lattice spacing [^5]. Thus, if we adjust the $N$-dependence of $\eta$ as $\eta\sim\left[(N-1)\ln\frac{\sqrt{\lambda}}{N}\right]^{-1/2}$, where the $N$-dependence of $\lambda$ was discussed in the paragraph following eq. (\[ch\]), then the resulting string tension will be $N$-independent, as it is in QCD.
Next, the last term on the r.h.s. of eq. (\[main\]) describes the short-range interactions of dyons with both closed and open strings (obviously, the latter confine these very dyons themselves). Finally, the term $-2i\Theta s_i^{(\alpha)}\hat L\left(\Sigma_i,C\right)$ in eq. (\[main\]) describes the long-range interaction of dyons with closed world sheets, which is the 4D analogue of the Aharonov-Bohm effect [@four]. Since the nonvanishing values of $s_i^{(\alpha)}$’s were found to be $\pm N^{-1}$, at $\Theta\ne N\pi\times{\,}{\rm integer}$, dyons (due to their magnetic charge) do interact through this term with the closed dual strings. By contrast, at these critical values of $\Theta$, the relation between the magnetic charge of a dyon and the electric flux inside the string is such that the scattering of dyons off strings is absent. Finally, note once more that these critical values of $\Theta$ generalize the $SU(2)$- and $SU(3)$-ones obtained in refs. [@emil] and [@theta], respectively.
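The location of these critical values of $\Theta$ can be made explicit by a one-line check consistent with the statements above. For an integer linking number $\hat L=n$ and $s_i^{(\alpha)}=\pm N^{-1}$, the Aharonov-Bohm factor reads

```latex
e^{-2i\Theta s_i^{(\alpha)}\hat L}=e^{\mp 2i\Theta n/N},
\qquad
e^{\mp 2i\Theta n/N}=1~~\forall\, n\in\mathbb{Z}
\;\Longleftrightarrow\;
\frac{2\Theta}{N}\in 2\pi\mathbb{Z}
\;\Longleftrightarrow\;
\Theta=N\pi\times{\rm integer},
```

so dyons decouple from the closed dual strings precisely at $\Theta=N\pi\times{\rm integer}$, as stated.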
Acknowledgments
===============
The author is grateful for useful discussions to Prof. Adriano Di Giacomo and Drs. Cristina Diamantini and Luigi Del Debbio. Besides that, he is grateful to Prof. Adriano Di Giacomo and to the whole staff of the Physics Department of the University of Pisa for cordial hospitality. The work has been supported by INFN and partially by the INTAS grant Open Call 2000, Project No. 110.
[99]{}
G. ’t Hooft, Nucl. Phys. [**B 190**]{} (1981) 455. J.M. Carmona, M. D’Elia, L. Del Debbio, A. Di Giacomo, B. Lucini, and G. Paffuti, [*Color confinement and dual superconductivity in full QCD*]{}, hep-lat/0205025; A. Di Giacomo, [*Color confinement and dual superconductivity: an update*]{}, hep-lat/0204032. L. Del Debbio, A. Di Giacomo, B. Lucini, and G. Paffuti, [*Abelian projection in SU(N) gauge theories*]{}, hep-lat/0203023; A. Di Giacomo, [*Independence on the Abelian projection of monopole condensation in QCD*]{}, hep-lat/0206018. M.C. Diamantini and C.A. Trugenberger, Phys. Rev. Lett. [**88**]{} (2002) 251601; JHEP [**04**]{} (2002) 032; U. Ellwanger and N. Wschebor, JHEP [**10**]{} (2001) 023; D. Antonov, Mod. Phys. Lett. [**A 17**]{} (2002) 279. S.R. Das and S.R. Wadia, Phys. Rev. [**D 53**]{} (1996) 5856. S. Mandelstam, Phys. Lett. [**B 53**]{} (1975) 476; Phys. Rep. [**23**]{} (1976) 245; G. ’t Hooft, in: [*High Energy Physics*]{}, Ed. A. Zichichi (Editrice Compositori, Bologna, 1976). A.M. Polyakov, [*Gauge fields and strings*]{} (Harwood Academic Publishers, Chur, 1987).
S. Maedan and T. Suzuki, Prog. Theor. Phys. [**81**]{} (1989) 229. D. Antonov and D. Ebert, Phys. Lett. [**B 444**]{} (1998) 208; D.A. Komarov and M.N. Chernodub, JETP Lett. [**68**]{} (1998) 117. D. Antonov, Phys. Lett. [**B 475**]{} (2000) 81. D. Antonov, Mod. Phys. Lett. [**A 14**]{} (1999) 1829; JHEP [**07**]{} (2000) 055; Nucl. Phys. (Proc. Suppl.) [**B 96**]{} (2000) 491. A.A. Abrikosov, Sov. Phys. JETP [**5**]{} (1957) 1174; H.B. Nielsen and P. Olesen, Nucl. Phys. [**B 61**]{} (1973) 45; for a review see e.g.: E.M. Lifshitz and L.P. Pitaevski, [*Statistical Physics, Vol. 2*]{} (Pergamon, New York, 1987). E.T. Akhmedov, JETP Lett. [**64**]{} (1996) 82. E. Witten, Phys. Lett. [**B 86**]{} (1979) 283. E.T. Akhmedov, M.N. Chernodub, M.I. Polikarpov, and M.A. Zubkov, Phys. Rev. [**D 53**]{} (1996) 2087. R. Gilmore, [*Lie groups, Lie algebras, and some of their applications*]{} (J. Wiley & Sons, New York, 1974).
D.V. Antonov, D. Ebert, and Yu.A. Simonov, Mod. Phys. Lett. [**A 11**]{} (1996) 1905. M.G. Alford, J. March-Russel, and F. Wilczek, Nucl. Phys. [**B 337**]{} (1990) 695; J. Preskill and L.M. Krauss, Nucl. Phys. [**B 341**]{} (1990) 50.
[^1]: E-mail: [[email protected]]{}
[^2]: Throughout the present letter, all the investigations will be performed in the Euclidean space-time.
[^3]: Another possible choice of ${\cal F}_{\mu\nu}$ is ${\cal F}_{\mu\nu}(x)=\partial_\nu^x\int d^4yD_0(x-y)j_\mu(y)-(\mu\leftrightarrow\nu)$, where $D_0(x)=1/(4\pi^2x^2)$ is the massless propagator. The obvious difference between these two choices is the dimensionality of the support of ${\cal F}_{\mu\nu}$ – either it is a 2D Dirac sheet $\Sigma$, or the whole 4D space-time. It is known, however, that this ambiguity in the choice of the solution to the equation $\partial_\mu{\cal F}_{\mu\nu}=j_\nu$ does not affect physical results.
[^4]: See also the last paper in ref. [@suNN] for the discussion of this formula.
[^5]: This fact stems also from the natural conjecture that the linear term in the quark-antiquark potential should have the same $N$-dependence as the Coulomb term, that is $V_{\rm Coul}(R)=-\frac{g_{\rm QCD}^2}{4\pi R}\frac{N^2-1}{2N}={\cal O}\left(N^0\right)$.
---
abstract: 'Phase transition of the classical Ising model on the Sierpiński carpet, which has the fractal dimension $\log_3^{~} 8 \approx 1.8927$, is studied by an adapted variant of the higher-order tensor renormalization group method. The second-order phase transition is observed at the critical temperature $T_{\rm c}^{~} = 1.4783(1)$. Position dependence of local functions is studied by means of impurity tensors, which are inserted at different locations on the fractal lattice. The critical exponent $\beta$ associated with the local magnetization varies by two orders of magnitude, depending on lattice locations, whereas $T_{\rm c}^{~}$ is not affected.'
author:
- 'Jozef <span style="font-variant:small-caps;">Genzor</span>$^{1}$'
- 'Andrej <span style="font-variant:small-caps;">Gendiar</span>$^{2}$'
- 'Tomotoshi <span style="font-variant:small-caps;">Nishino</span>$^{1}$'
title: Measurements of magnetization on the Sierpiński carpet
---
Introduction
============
Understanding of phase transitions and critical phenomena plays an important role in condensed matter physics[@phase_trans]. Systems on regular lattices are the major target of such studies, where elementary models exhibit translationally invariant states, which are scale invariant at criticality. It has been known that critical behavior is controlled by global properties, such as dimensionality and symmetries. This is the concept of *universality*.
If we focus our attention on inhomogeneous lattices, there is a group of fractal lattices, which are self-similar and exhibit non-integer Hausdorff dimensions. Geometrical details, such as lacunarity and connectivity, could thus modify the properties of their critical phenomena. An important aspect of the fractal lattices is the ramification, which is the smallest number of bonds that have to be cut in order to isolate an arbitrarily large bounded subset surrounding a point. In the early studies by Gefen [*et al.*]{}[@Gefen1; @Gefen2; @Gefen3; @Gefen4], it was shown that the short-range classical spin models on finitely ramified lattices exhibit no phase transition at nonzero temperature.
The ferromagnetic Ising model on the fractal lattice that corresponds to the Sierpiński carpet is one of the most extensively studied models with fractal lattice geometry. Monte Carlo studies combined with the finite-size scaling method have been performed[@Carmona; @Monceau1; @Monceau2; @Pruessner; @Bab], including the Monte Carlo renormalization group method[@MCRG]. The critical temperature $T_{\rm c}^{~}$ is relatively well estimated within the narrow range $1.47 \lesssim T_{\rm c}^{~} \lesssim 1.50$, where one of the most recent estimates is $T_{\rm c}^{~} = 1.4945(50)$ by Bab [*et al.*]{}[@Bab]. On the other hand, estimates of the critical exponents still fluctuate, since it is rather hard to collect sufficient numerical data for a precise finite-size scaling analysis[@RSRG]. This is partly because an elementary lattice unit can contain too many sites, and because there is a variety of choices of boundary conditions. This situation persists even in a recent study by means of a path-counting approach[@Perreau]. Yet, a number of issues remain unresolved concerning the uniformity of fractal systems in the thermodynamic limit[@Bab].
Recently, we showed that the higher-order tensor renormalization group (HOTRG) method[@HOTRG] can be used as an appropriate numerical tool for studies of certain types of fractal systems[@2dising; @gasket]. The method is based on the real-space renormalization group, and, therefore, the self-similar property of fractal lattices can be treated in a natural manner. In this article, we apply the HOTRG method to the Ising model on the fractal lattice that corresponds to the Sierpiński carpet. The method enables us to estimate $T_{\rm c}^{~}$ from the temperature dependence of the entanglement entropy $s( T )$. In order to check the uniformity of the thermodynamic functions, we choose three distinct locations on the lattice, and calculate the local magnetization $m( T )$ and the bond energy $u( T )$. As trivially expected, these local functions, $m( T )$ and $u( T )$, yield the identical $T_{\rm c}^{~}$. Contrary to naive intuition, the critical exponent $\beta$, which is associated with the local magnetization $m( T )\propto(T_{\rm c}^{~}-T)^{\beta}$, strongly depends on the location of the measurement: the estimated exponent $\beta$ varies by two orders of magnitude among the three different locations on the fractal lattice where the local functions are calculated.
The structure of this article is as follows. In the next section, we explain the recursive construction of the fractal lattice and express the partition function of the system in terms of contractions among tensors. In Sec. III we introduce the HOTRG method, which keeps the numerical cost realistic, and explain the procedure for measuring the local functions $m( T )$ and $u( T )$. Numerical results are shown in Sec. IV, where the position dependence of the local functions is observed. In the last section, we summarize the obtained results and discuss the reason for the pathological behavior of the fractal system.
Model representation
====================
There are several different types of discrete lattices that can be identified as the Sierpiński carpet. Among them, we choose the one constructed by the extension process shown in Fig. \[fig:Fig\_1\]. In the first step ($n = 1$), there are eight spins in the unit, as shown on the left. The Ising spins $\sigma = \pm 1$ are represented by the circles, and the ferromagnetic nearest-neighbor interactions are denoted by the horizontal and vertical lines. In the second step ($n = 2$), eight such units are grouped to form a new extended unit, as shown in the middle. Now, there are $64$ spins on the $9 \times 9$ square lattice grid. On the right side, we show the third step ($n = 3$). Generally, in the $n$-th step, an extended unit contains $8^n_{~}$ spins on the $3^n_{~} \times 3^n_{~}$ lattice. The Hausdorff dimension of this lattice is $d_{\rm H}^{~} = \log_3^{~} 8 \approx 1.8927$ in the thermodynamic limit $n\to\infty$.
In the series of the extended units we have thus constructed, there is another type of recursive structure. In Fig. \[fig:Fig\_1\], at the bottom of each unit, we have drawn a pyramid-like area with thick lines. One can identify four such pyramid-like areas within each unit (enumerated by $n$), and each area can be called a [*corner*]{} $C^{(n)}_{~}$. The corners are labeled $C^{(1)}_{~}$, $C^{(2)}_{~}$, and $C^{(3)}_{~}$ from left to right therein. It should be noted that two adjacent corners have only $2^{n-1}_{~}$ spin sites in common, where they meet.
In the case $n = 2$, drawn in the middle, we shaded a region on the left, which contains six sites, and label it $X^{(1)}_{~}$. Observing the corner $C^{(2)}_{~}$ at the bottom, one finds that it consists of two rotated pieces of $X^{(1)}_{~}$ and four pieces of $C^{(1)}_{~}$. For $n = 3$, we shaded a larger region $X^{(2)}_{~}$ (in a similar manner to $X^{(1)}_{~}$), which now contains 36 sites. One can recognize that $X^{(2)}_{~}$ consists of seven pieces of $X^{(1)}_{~}$ and two pieces of $C^{(1)}_{~}$. We have thus identified the following recursive relations, which build up the fractal:
- Each $n$-th unit contains 4 pieces of $C^{(n)}_{~}$,
- $\!C^{(n+1)}_{~}\!$ contains 2 pieces of $X^{(n)}_{~}\!$ and 4 pieces of $C^{(n)}_{~}\!$,
- $\!X^{(n+1)}_{~}\!$ contains 7 pieces of $X^{(n)}_{~}\!$ and 2 pieces of $C^{(n)}_{~}\!$.
![ Build-up process of a discrete analog of the Sierpiński carpet. The circles represent the lattice points, where the Ising spins are located. The vertical and horizontal links denote the interacting pairs. The first three units $n = 1$, $2$, and $3$ are shown. For each unit $n$, we draw the corners $C^{(n)}_{~}$ by the thick lines. We label the shaded regions $X^{(1)}_{~}$ and $X^{(2)}_{~}$.[]{data-label="fig:Fig_1"}](Fig1.pdf){width="48.00000%"}
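The build-up process lends itself to a direct check. The following sketch (one possible encoding of the lattice in Fig. \[fig:Fig\_1\], assuming the unit is replicated into the eight non-central cells of a $3\times3$ block) verifies the site count $8^n_{~}$ on the $3^n_{~}\times3^n_{~}$ grid and the resulting fractal dimension:

```python
# Sketch (one possible encoding of Fig. 1): generate the n-th unit as the set
# of occupied sites on a 3^n x 3^n grid by replicating the previous unit into
# the eight non-central cells, then verify the site count and the dimension.
from math import isclose, log

def carpet_sites(n):
    """Occupied (x, y) sites of the n-th Sierpinski-carpet unit."""
    sites = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}
    size = 3
    for _ in range(n - 1):
        sites = {(x + dx * size, y + dy * size)
                 for (x, y) in sites
                 for dx in range(3) for dy in range(3) if (dx, dy) != (1, 1)}
        size *= 3
    return sites

for n in (1, 2, 3, 4):
    sites = carpet_sites(n)
    assert len(sites) == 8**n                       # 8, 64, 512, 4096 spins
    # box-counting dimension at linear size 3^n: log(8^n)/log(3^n) = log 8/log 3
    assert isclose(log(len(sites)) / log(3**n), log(8) / log(3))
```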
The Hamiltonian of the Ising model, which is constructed on the series of finite-size systems $n = 1, 2, 3, \cdots$, has the form $$H^{(n)}_{~} = - J \sum_{\left< a b \right>} \sigma_a^{~} \sigma_b^{~} \, .
\label{Eq_1}$$ The summation runs over all pairs of the nearest-neighbor Ising spins, as shown by the circles in Fig. \[fig:Fig\_1\]. The spin positions are labeled by the lattice indices $a$ and $b$. They are connected by the lines, which correspond to the ferromagnetic interaction $J > 0$, and no external magnetic field is imposed. First we calculate the partition function (expressed in arbitrary step $n$) $$Z^{(n)}_{~} = \sum \exp\biggl[ - \frac{~ H^{(n)}_{~} }{ {k_{\rm B}^{~}T} } \biggr]
\label{Eq_2}$$ as a function of temperature $T$, where the summation is taken over all spin configurations, and where $k_{\rm B}^{~}$ denotes the Boltzmann constant. At initial step $n=1$, we define the [*corner*]{} matrix $$C_{ij}^{(1)} = \sum\limits_{\xi = \pm1}^{~}
\exp\bigl[ K \xi \left(\sigma_a^{~} + \sigma_b^{~} \right) \bigr] \, ,
\label{Eq_3}$$ where $K = J / k_{\rm B}^{~} T$, and the matrix indices $i = ( \sigma_a^{~} + 1 ) / 2$ and $j = ( \sigma_b^{~} + 1 ) / 2$ take the value either $0$ or $1$. The structure on the right-hand side is graphically shown in Fig. \[fig:Fig\_2\] (top), and the summation taken over the spin $\xi$ is denoted by the filled circle. We have chosen the ordering of the indices $i$ and $j$, which is opposite if comparing $C_{ij}^{(1)}$ with the corresponding graph. The partition function of the smallest unit ($n = 1$), which contains 8-spins, is then expressed as $$Z^{(1)}_{~} = \sum_{ijkl}^{~} C_{ij}^{(1)} \, C_{jk}^{(1)} \, C_{kl}^{(1)} \, C_{li}^{(1)} \, ,
\label{Eq_4}$$ and can be abbreviated to ${\rm Tr} \, \bigl[ C^{(1)}_{~} \bigr]^4_{~}$. We will express $Z^{(n)}_{~}$ for arbitrary $n > 1$ in the same trace form $$Z^{(n)}_{~} = {\rm Tr} \, \bigl[ C^{(n)}_{~} \bigr]^4_{~}
\label{Eq_5}$$ by means of the corner matrix $C^{(n)}_{ij}$, where each one undergoes [*extensions*]{}, as we define in the following.
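For the smallest unit, the trace formula $Z^{(1)}_{~}={\rm Tr}\,[C^{(1)}_{~}]^4_{~}$ can be verified against a brute-force enumeration. The sketch below (with $J=k_{\rm B}^{~}=1$, and the eight spins of the $n=1$ unit ordered around its ring of nearest-neighbor bonds) performs this check:

```python
# Sketch: check Z^(1) = Tr [C^(1)]^4 against a brute-force sum over the 2^8
# configurations of the n = 1 unit, an 8-spin ring of nearest-neighbor bonds
# (J = k_B = 1 assumed).
from math import exp, isclose

def corner_matrix(K):
    """C^(1)_{ij} = sum_{xi = +-1} exp[K xi (sigma_a + sigma_b)]."""
    spin = (-1, 1)
    return [[sum(exp(K * xi * (spin[i] + spin[j])) for xi in (-1, 1))
             for j in range(2)] for i in range(2)]

def Z_trace(K):
    C = corner_matrix(K)
    return sum(C[i][j] * C[j][k] * C[k][l] * C[l][i]
               for i in range(2) for j in range(2)
               for k in range(2) for l in range(2))

def Z_brute(K):
    Z = 0.0
    for conf in range(2**8):
        s = [2 * ((conf >> b) & 1) - 1 for b in range(8)]
        Z += exp(K * sum(s[b] * s[(b + 1) % 8] for b in range(8)))
    return Z

assert isclose(Z_trace(0.4), Z_brute(0.4), rel_tol=1e-12)
```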
Let us notice that the region $X^{(1)}_{~}$ appears from the step $n = 2$. The Boltzmann weight corresponding to this region $X^{(1)}_{~}$ can be expressed by the 4-leg (order-4) tensor $$\begin{split}
X_{ijkl}^{(1)} = \sum_{\xi \eta}^{~} & \exp\bigl[ K \left( \sigma_a^{~} \sigma_b^{~} +
\sigma_c^{~} \sigma_d^{~} + \xi \eta \right) \bigr] \\
\times & \exp \bigl[ K \xi \left( \sigma_a^{~} + \sigma_d^{~} \right) +
K \eta \left( \sigma_b^{~} + \sigma_c^{~} \right) \bigr] \, ,
\end{split}
\label{Eq_6}$$ where the spin locations are depicted in Fig. \[fig:Fig\_2\] (bottom). We have additionally introduced new indices $k = ( \sigma_c^{~} + 1 ) / 2$ and $l = ( \sigma_d^{~} + 1 ) / 2$. Now we can mathematically represent the recursive relations in terms of contractions among the matrices $C^{(n)}_{~}$ and tensors $X^{(n)}_{~}$. Figure \[fig:Fig\_3\] shows the graphical representation of the extension processes. Taking the contraction among the two tensors $X^{(n)}_{~}$ and the four matrices $C^{(n)}_{~}$, as shown in Fig. \[fig:Fig\_3\] (left), we obtain the [*extended*]{} corner matrix $C^{(n+1)}_{~}$ through the corresponding formula $$\begin{split}
C_{ij}^{(n+1)} &= C_{( i_1^{~} i_2^{~}) (j_1^{~} j_2^{~})}^{(n+1)} \\
& = \sum_{a b c d e f}^{~}
C_{a j_2^{~}}^{(n)} X_{a b c j_1^{~}}^{(n)} C_{f c}^{(n)} C_{d b}^{(n)} X_{d e i_1^{~} f}^{(n)} C_{i_2^{~} e}^{(n)} \, ,
\end{split}
\label{Eq_7}$$ where the new indices $i$ and $j$, respectively, represent the grouped indices $( i_1^{~} i_2^{~})$ and $(j_1^{~} j_2^{~})$. Apparently, the diagram in Fig. \[fig:Fig\_3\] (left) is more convenient than Eq. for the better understanding of the contraction geometry. This relation can be easily checked for the case $n = 1$ after comparing Figs. \[fig:Fig\_1\], \[fig:Fig\_2\], and \[fig:Fig\_3\].
Similarly, the extension process from $X^{(n)}_{~}$ to $X^{(n+1)}_{~}$ shown in Fig. \[fig:Fig\_3\] (right) can be expressed by the formula $$\begin{split}
X_{ijkl}^{(n+1)} & = X_{( i_1^{~} i_2^{~}) (j_1^{~} j_2^{~}) ( k_1^{~} k_2^{~}) (l_1^{~} l_2^{~})}^{(n+1)} \\
& =\sum\limits_{{a b c d e f}\atop{g h p r q s}} X_{a b l_1^{~} p}^{(n)} X_{b c k_2^{~} l_2^{~}}^{(n)}
X_{c d q k_1^{~}}^{(n)} X_{f g d a}^{(n)} \\
& \hspace{1.4cm} X_{e f r i_1^{~}}^{(n)} X_{g h j_1^{~} s}^{(n)}
X_{i_2^{~} j_2^{~} h e}^{(n)} C_{r p}^{(n)} C_{s q}^{(n)} \, , \end{split}
\label{Eq_8}$$ where we have again abbreviated the grouped indices to $i = (i_1^{~} i_2^{~})$, $j = (j_1^{~} j_2^{~})$, $k = (k_1^{~} k_2^{~})$, and $l = (l_1^{~} l_2^{~})$. This relation can be checked for the case $n = 1$ by comparing the area $X^{(1)}_{~}$ and $X^{(2)}_{~}$ in Fig. 1.
Through the iterative extension of the tensors, we can [*formally*]{} obtain the corner matrix $C^{(n)}_{ij}$ for arbitrary $n$, and express $Z^{(n)}_{~}$ by Eq. . The free energy per spin is then $$f^{(n)}_{~} = - \frac{1}{8^n} \, k_{\rm B}^{~} T \ln Z^{(n)}_{~}
\label{Eq_9}$$ since the $n$-th unit contains $8^n_{~}$ spins. This function converges to a value $f^{(\infty)}_{~}$ in the thermodynamic limit $n \rightarrow \infty$, where convergence with respect to $n$ is rapid, and $n = 35$ is sufficient in the numerical analyses. The specific heat per site can be evaluated by taking the second derivative of the free energy $c_f^{~}( T ) = - T \frac{\partial^2_{~}}{\partial T^2_{~}}f^{(\infty)}_{~}$.
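The finite-difference evaluation of $c_f^{~}( T ) = -T\,\partial^2 f/\partial T^2$ can be sketched as follows; for illustration, the exactly summable $n=1$ unit stands in for the converged $f^{(\infty)}_{~}$ (an assumption of this toy example; $J=k_{\rm B}^{~}=1$):

```python
# Sketch: c(T) = -T d^2 f / dT^2 by central finite differences.  The exactly
# summable n = 1 unit (an 8-spin ring) stands in for f^(infty) here; in the
# paper f is obtained from the converged HOTRG data (J = k_B = 1 assumed).
from math import exp, log

def f1(T):
    """Free energy per spin of the n = 1 unit (8 spins, 8 bonds)."""
    K = 1.0 / T
    Z = 0.0
    for conf in range(2**8):
        s = [2 * ((conf >> b) & 1) - 1 for b in range(8)]
        Z += exp(K * sum(s[b] * s[(b + 1) % 8] for b in range(8)))
    return -T * log(Z) / 8.0

def specific_heat(T, h=1e-3):
    return -T * (f1(T + h) - 2.0 * f1(T) + f1(T - h)) / h**2

assert specific_heat(1.5) > 0.0    # positive, as required of a specific heat
```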
Renormalization Group Transformation
====================================
The matrix index of $C^{(n)}_{~}$ represents $2^{n-1}_{~}$ spins by definition, so that the matrix dimension grows as $2^{2^{n-1}}_{~}$. Therefore, it is impossible to keep all the matrix elements faithfully in a numerical analysis when $n$ is large. The situation is even more severe for $X^{(n)}_{~}$, which has four such indices. By means of the HOTRG method [@HOTRG], it is possible to reduce the tensor-leg dimension, i.e., the number of degrees of freedom, to a realistic size. The reduction is performed by the renormalization group transformation $U$, which is created from the higher-order singular value decomposition (SVD) [@hosvd] applied to the extended tensor $X^{(n+1)}_{ijkl}$.
Suppose that the tensor-leg dimension in $X^{(n)}_{ijkl}$ is $D$ for each index, i.e., $i,j,k,l=0,1,\dots,D-1$. As we have shown in Eq. , the dimension of the grouped index $i = ( i_1^{~} i_2^{~} )$ in $X_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~}) (k_1^{~} k_2^{~}) (l_1^{~} l_2^{~}) }^{(n+1)}$ is equal to $D^2_{~}$. We reshape the four tensor indices to form a rectangular matrix with the grouped index $(i_1^{~} i_2^{~})$ and the remaining grouped index $(j_1^{~} j_2^{~} k_1^{~} k_2^{~} l_1^{~} l_2^{~})$ with the dimension $D^6$. Applying the singular value decomposition to the reshaped tensor, we obtain $$X_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~} k_1^{~} k_2^{~} l_1^{~} l_2^{~}) }^{(n+1)} =
\sum_{\xi}^{~} U_{(i_1^{~} i_2^{~}) \, \xi}^{~} \, \omega_{\xi}^{~} \,
V^{~}_{(j_1^{~} j_2^{~} k_1^{~} k_2^{~} l_1^{~} l_2^{~}) \, \xi } \, ,
\label{Eq_10}$$ where $U$ and $V$ are generalized unitary, i.e. orthonormal, matrices $U^T_{~}U=V^T_{~}V=\mathbb{1}$. We assume the decreasing order for the singular values $\omega_{\xi}^{~}$ by convention. Keeping at most $D$ dominant degrees of freedom for the index $\xi$, we regard the matrix $U_{(i_1^{~} i_2^{~}) \, \xi}^{~}$ as the renormalization group (RG) transformation from $(i_1^{~} i_2^{~})$ to the renormalized index $\xi$. For the purpose of clarifying the relation between the original pair of indices $(i_1^{~} i_2^{~})$ and the renormalized index $\xi$, we [*rename*]{} $\xi$ to $i$ and write the RG transformation as $U_{(i_1^{~} i_2^{~}) \, i}^{~}$. In the same manner, we obtain $U_{(j_1^{~} j_2^{~}) \, j}^{~}$, $U_{(k_1^{~} k_2^{~}) \, k}^{~}$, and $U_{(l_1^{~} l_2^{~}) \, l}^{~}$, where we have distinguished the transformation matrices by their indices.
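The reshape-and-SVD truncation step can be illustrated on a generic order-4 tensor (numpy is assumed; the random tensor below merely stands in for $X^{(n+1)}_{~}$):

```python
# Sketch of the reshape-and-SVD truncation described above; numpy is assumed,
# and a random order-4 tensor stands in for the extended tensor X^(n+1).
import numpy as np

rng = np.random.default_rng(0)
D = 3                                                # leg dimension of X^(n)
X = rng.standard_normal((D * D,) * 4)                # grouped legs of dim D^2

# reshape to a (D^2) x (D^6) matrix and decompose
M = X.reshape(D * D, -1)
U, w, Vt = np.linalg.svd(M, full_matrices=False)

# keep the D dominant directions: U[:, :D] plays the role of U_{(i1 i2) i}
U_trunc = U[:, :D]
assert np.allclose(U_trunc.T @ U_trunc, np.eye(D))   # orthonormal columns

# the Frobenius truncation error equals the norm of the discarded singular values
M_approx = U_trunc @ (U_trunc.T @ M)
assert np.isclose(np.linalg.norm(M - M_approx), np.sqrt(np.sum(w[D:] ** 2)))
```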
The RG transformation is then performed as $$\begin{aligned}
\begin{split}
X_{i j k l }^{(n+1)} \leftarrow \sum_{{i_1 i_2 j_1 j_2}\atop{k_1 k_2 l_1 l_2}}
U_{(i_1^{~} i_2^{~}) \, i}^{~} & \, U_{(j_1^{~} j_2^{~}) \, j}^{~} \,
U_{(k_1^{~} k_2^{~}) \, k}^{~} \, U_{(l_1^{~} l_2^{~}) \, l}^{~} \\
& X_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~}) (k_1^{~} k_2^{~}) (l_1^{~} l_2^{~}) }^{(n+1)} \, ,
\label{Eq_11}
\end{split}\end{aligned}$$ where the sum is taken over the indices on the connected lines in Fig. \[fig:Fig\_4\] (left). The left arrow used in Eq. represents the replacement of the expanded tensor $X_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~}) (k_1^{~} k_2^{~}) (l_1^{~} l_2^{~}) }^{(n+1)}$ for the renormalized one $X_{i j k l }^{(n+1)}$. Since the RG transformation matrices $U$ are obtained from SVD applied to $X_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~}) (k_1^{~} k_2^{~}) (l_1^{~} l_2^{~}) }^{(n+1)}$, there is no guarantee that the RG transformation can be straightforwardly applied to $C_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~})}^{(n+1)}$, as we have defined in Eq. . It has been confirmed that the transformation $$C_{i j}^{(n+1)} \leftarrow \sum\limits_{{i_1^{~} i_2^{~}}\atop{j_1^{~} j_2^{~}}} U_{(i_1^{~} i_2^{~}) \, i}^{~} \, U_{(j_1^{~} j_2^{~}) \, j}^{~} \,
C_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~})}^{(n+1)}
\label{Eq_12}$$ is of use in the actual numerical calculation. The corresponding diagram is shown in Fig. \[fig:Fig\_4\] (right).
We add a remark on the choice of the transformation matrix $U$. In a trial calculation, we also tried to create $U$ from the corner matrix $C_{i j}^{(n+1)}$, by both SVD and diagonalization. However, we encountered numerical instabilities, in which the singular values (or eigenvalues) decayed to zero too rapidly, especially when $n$ was large. Thus, we always create $U$ from the SVD applied to $X^{(n+1)}_{ijkl}$ only.
With the use of these RG transformations, it is possible to repeat the extension processes in Eq. and , and to obtain a good numerical estimate for $Z^{(n)}_{~}$ and $f^{(n)}_{~}$ in Eq. . The actual numerical calculations in this work were performed by a slightly modified procedure: we split $X^{(n)}_{ijkl}$ into two halves and represent each part by a $3$-leg tensor. This computational trick allowed us to increase the leg dimension up to $D = 28$, or even larger.
Impurity tensors
----------------
In the framework of the HOTRG method, thermodynamic functions, such as the magnetization per site $m( T )$ and the internal energy per bond $u( T )$, can be calculated from the free energy per site $f^{(\infty)}_{~}$. Alternatively, these functions are obtained by inserting impurity tensors (separately derived from $C^{(n)}_{~}$ and $X^{(n)}_{~}$) into the tensor network of the entire system. Since the fractal lattice under consideration is inhomogeneous, these thermodynamic functions can depend on the position at which the impurity tensors are placed. In order to check this dependence, we choose three typical locations on the fractal lattice, $A$, $B$, and $Y$, as shown in Fig. \[fig:Fig\_5\].
As an example of such a single site function, let us consider a tensor representation of the local magnetization. Looking at the position of site $A$ in Fig. \[fig:Fig\_5\], one finds that it is located on the corner matrix $C^{(1)}_{~}$. Thus, the initial impurity tensor on that location is expressed as $$A_{ij}^{(1)} = \sum_{\xi = \pm 1}^{~} \xi \, \exp{\bigl[ K \, \xi \left(\sigma_i^{~} + \sigma_j^{~} \right) \bigr]} \, ,
\label{Eq_13}$$ similar to Eq. . It is also easy to check that the initial impurity tensor $B^{(1)}_{~}$, which is placed on a position different from $A$, is expressed by the identical equation, so that we have $A_{ij}^{(1)} = B_{ij}^{(1)}$. The site $Y$ lies inside the area $X^{(1)}_{~}$ and we define the corresponding initial tensor for local magnetization as $$\begin{aligned}
Y_{ijkl}^{(1)} = & \sum_{\xi\eta}^{~} \frac{\xi + \eta}{2}
\exp{\bigl[K \, ( \sigma_i^{~} \sigma_j^{~} + \sigma_k^{~} \sigma_l^{~} + \xi\eta) \bigr]}
\nonumber\\
&\times \exp{\bigl[K \, \xi \left( \sigma_j^{~} + \sigma_k^{~} \right)
+ K \, \eta \left( \sigma_i^{~} + \sigma_l^{~} \right) \bigr]} \, ,
\label{Eq_14}\end{aligned}$$ similarly to Eq. .
We can thus build up analogous extension processes of tensors, each of which contains an impurity tensor we have defined. The extension process of the impurity corner matrix that contains $A^{(1)}_{ij}$ is then written as $$A_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~})}^{(n+1)} = \sum\limits_{abcdef}
C_{a j_2^{~}}^{(n)} X_{a b c j_1^{~}}^{(n)} A_{f c}^{(n)} C_{d b}^{(n)} X_{d e i_1^{~} f}^{(n)} C_{i_2^{~} e}^{(n)} \, ,
\label{Eq_15}$$ which is graphically shown in Fig. \[fig:Fig\_6\] (top left). Therein, the RG transformation $A_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~})}^{(n+1)} \to A_{ij}^{(n+1)}$ is depicted by the green lines with the open circles, which stand for $U$ in accord with Eq. . The impurity tensor placed around the site $B$ obeys the extension procedure $$B_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~})}^{(n+1)} = \sum\limits_{abcdef}
C_{a j_2^{~}}^{(n)} X_{a b c j_1^{~}}^{(n)} C_{f c}^{(n)} B_{d b}^{(n)} X_{d e i_1^{~} f}^{(n)} C_{i_2^{~} e}^{(n)} \, ,
\label{Eq_16}$$ as shown on the top right of Fig. \[fig:Fig\_6\] (top right). For the location $Y$ shown in Fig. \[fig:Fig\_5\], we take the contraction $$\begin{split}
Y_{(i_1^{~} i_2^{~}) (j_1^{~} j_2^{~}) (k_1^{~} k_2^{~}) (l_1^{~} l_2^{~}) }^{(n+1)} = \sum\limits_{{a b c d e f}\atop{g h p r q s}}
X_{a b l_1^{~} p}^{(n)} X_{b c k_2^{~} l_2}^{(n)} X_{c d q k_1^{~}}^{(n)} \\
Y_{f g d a}^{(n)} X_{e f r i_1^{~}}^{(n)} X_{g h j_1^{~} s}^{(n)}
X_{i_2^{~} j_2^{~} h e}^{(n)} C_{r p}^{(n)} C_{s q}^{(n)} \, ,
\end{split}
\label{Eq_17}$$ which is depicted in Fig. \[fig:Fig\_6\] (bottom), where the graph is rotated by the right angle for book keeping.
In the calculation of the local bond energy $u( T )$, the initial tensors satisfy the equations $$\begin{aligned}
A_{ij}^{(1)} &= - \frac{J}{2} \left( \sigma_i^{~} + \sigma_j^{~} \right)
\sum_{\xi}^{~} \xi \, \exp{\bigl[ K \xi \left( \sigma_i^{~} + \sigma_j^{~} \right) \bigr]} \, , \\
\begin{split}
Y_{ijkl}^{(1)} &= \sum_{\xi\eta} - J \xi\eta \,
\exp{\bigl[ K( \sigma_i^{~} \sigma_j^{~} + \sigma_k^{~} \sigma_l^{~} + \xi\eta) \bigr]} \\
&\times \exp{\bigl[ K \xi \left(\sigma_j^{~} + \sigma_k^{~} \right)
+ \eta \left(\sigma_i^{~} + \sigma_l^{~} \right) \bigr]} \, ,
\end{split}\end{aligned}$$ recalling that $B_{ij}^{(1)} = A_{ij}^{(1)}$. Starting the extension processes with these initial tensors, we can calculate the expectation value of the bond energy around the site $A$ by means of the ratio $$u_{\rm A}^{~}( T ) = \lim_{n \rightarrow \infty}^{~}
\frac{{\rm Tr} \, \Bigl( A^{(n)}_{~} \left[ C^{(n)}_{~} \right]^3 \Bigr)}{{\rm Tr}\, \Bigl( \left[ C^{(n)}_{~} \right]^4 \Bigr)} \, .$$ The convergence with respect to $n$ is fast because of the fractal geometry. It is straightforward to obtain the local energy $u_{\rm B}^{~}( T )$ and $u_{\rm Y}^{~}( T )$, as well as the local magnetization $m_{\rm A}^{~}( T )$, $m_{\rm B}^{~}( T )$, and $m_{\rm Y}^{~}( T )$ in the same manner.
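As an illustration, the trace ratio above is straightforward to evaluate once the impurity tensor $A^{(n)}$ and the corner matrix $C^{(n)}$ are available at a given extension step. The sketch below is a minimal example in which random symmetric matrices stand in for the actual HOTRG tensors.

```python
import numpy as np

def bond_energy_ratio(A, C):
    """u_A = Tr(A C^3) / Tr(C^4), evaluated at a fixed extension step n."""
    C3 = C @ C @ C
    return np.trace(A @ C3) / np.trace(C @ C3)

# random symmetric stand-ins for the converged HOTRG tensors (D = 4)
rng = np.random.default_rng(0)
D = 4
C = rng.standard_normal((D, D)); C = C + C.T
A = rng.standard_normal((D, D)); A = A + A.T
u_A = bond_energy_ratio(A, C)
```

In the actual calculation this ratio is monitored as $n$ grows; convergence is fast owing to the fractal geometry.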
Numerical Results
=================
For simplicity, we use the temperature scale with $k_{\rm B}^{~} = 1$ and fix the ferromagnetic interaction strength to $J = 1$. All data shown are obtained after a sufficiently large number of system extensions, ensuring that convergence with respect to $n$ has been reached. The number of degrees of freedom $D$ for each tensor leg is at most $D = 28$. Away from the critical (phase-transition) region, where $D$ needs to be largest, we used $D = 18$, which sufficed to obtain precise, converged data for all the graphs shown.
An analog of the entanglement entropy $s( T )$ can be calculated within the HOTRG method. After applying the SVD to the extended tensor, $s( T )$ is naturally obtained from the singular values $\omega_{\xi}^{~}$ in Eq. through the formula $$\label{eq:entropy}
s( T ) = -\sum_{\xi}^{~} \frac{\omega_{\xi}^2}{\Omega} \, \ln \frac{\omega_{\xi}^2}{\Omega} \, ,$$ where $\Omega = \sum_{\xi}^{~} \omega_{\xi}^2$ normalizes the probability. The entanglement entropy $s( T )$ always exhibits stable convergence with respect to $n$. Figure \[fig:Fig\_7\] shows the temperature dependence of $s( T )$, which is obtained with $D = 18$. There is a sharp peak at the critical temperature $T_{\rm c}^{~}$, which can be roughly determined as $1.48$ from the data shown.
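Given the singular values $\omega_{\xi}^{~}$ of the extended tensor, Eq. \[eq:entropy\] amounts to the Shannon entropy of the normalized squared singular values. A minimal numerical sketch:

```python
import numpy as np

def entanglement_entropy(omega):
    """s = -sum_xi (w_xi^2/Omega) ln(w_xi^2/Omega), with Omega = sum_xi w_xi^2."""
    w2 = np.asarray(omega, dtype=float) ** 2
    p = w2 / w2.sum()          # normalized probabilities
    p = p[p > 0]               # drop exact zeros to avoid log(0)
    return float(-np.sum(p * np.log(p)))
```

For $D$ equal singular values this returns $\ln D$, the maximal value; a single dominant singular value gives zero.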
Taking the numerical derivative with respect to $T$ of the calculated local energies $u_{\rm A}^{~}( T )$, $u_{\rm B}^{~}( T )$, and $u_{\rm Y}^{~}( T )$, we obtain the specific heats $c_{\rm A}^{~}( T )$, $c_{\rm B}^{~}( T )$, and $c_{\rm Y}^{~}( T )$, as shown in Fig. \[fig:Fig\_8\]. We observe a sharp peak in $c_{\rm Y}^{~}( T )$ at $T_{\rm c}^{~}$, whereas there is only a rounded maximum in $c_{\rm A}^{~}( T )$ and $c_{\rm B}^{~}( T )$, and their peak positions do not coincide with the $T_{\rm c}^{~}$ associated with the position $Y$. The specific heat per site $c_f^{~}( T )$ defined in Sec. II, as well as $c_{\rm A}^{~}( T )$ and $c_{\rm B}^{~}( T )$, exhibits a weak singularity at $T_{\rm c}^{~}$. This fact can be confirmed by taking the derivative with respect to $T$, i.e., $\frac{\partial c}{\partial T}$, which exhibits an identical singularity at $T_{\rm c}^{~}$, as shown in the inset of Fig. \[fig:Fig\_8\]. The result clearly shows that the critical behavior strongly depends on the location where the bond energy is measured.
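The numerical differentiation step can be carried out, for instance, with centered finite differences; a minimal sketch, where `T` and `u` are hypothetical arrays of the temperature grid and the corresponding local energies:

```python
import numpy as np

def specific_heat(T, u):
    """c(T) = du/dT via centered finite differences on the sampled grid."""
    return np.gradient(u, T)
```

`np.gradient` uses second-order differences in the interior, so smooth regions of $u( T )$ are differentiated accurately, while any singular structure near $T_{\rm c}^{~}$ calls for a fine temperature grid.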
Figure \[fig:Fig\_9\] shows the local magnetizations $m_{\rm A}^{~}( T )$, $m_{\rm B}^{~}( T )$, and $m_{\rm Y}^{~}( T )$ as functions of temperature $T$, calculated with $D = 18$. They fall to zero simultaneously at the same $T_{\rm c}^{~}$, while the critical exponent $\beta$ in $m( T ) \propto ( T_{\rm c}^{~} - T )^{\beta}_{~}$ is significantly different in each case. From the plotted $m_{\rm A}^{~}( T )$ we obtain $\beta = 0.52$, and from $m_{\rm B}^{~}( T )$ we obtain $\beta = 0.78$. In both cases we use the rough estimate $T_{\rm c}^{~} = 1.478$, and the data in the range $| T_{\rm c}^{~} - T | < 0.015$ are used for the numerical fitting. Since the variation in $m_{\rm Y}^{~}( T )$ is too rapid to capture $\beta$ with $D = 18$, we increase the tensor-leg freedom up to $D = 28$. Figure \[fig:Fig\_10\] shows $m_{\rm Y}^{~}( T )$ zoomed in around $T \sim 1.478$. It should be noted that a small numerical error is strongly amplified in the temperature region $| T - T_{\rm c}^{~} | \lesssim 10^{-5}_{~}$; therefore, the data points in this narrow region were excluded from the fitting analysis. We then obtain $T_{\rm c}^{~} = 1.4783(1)$. The estimated critical exponent $\beta = 0.0048(1)$ is roughly two orders of magnitude smaller than the $\beta$ obtained from $m_{\rm A}^{~}( T )$ and $m_{\rm B}^{~}( T )$. As we observed for the specific heat, the critical behavior of the model strongly depends on the location of the impurity tensors $A$, $B$, and $Y$ on the Sierpiński carpet.
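The fitting procedure for $\beta$ reduces to a linear regression of $\ln m$ against $\ln( T_{\rm c}^{~} - T )$ below the transition. A minimal sketch of such a fit; treating $T_{\rm c}^{~}$ as a fixed input and selecting the window beforehand are simplifications of the analysis described above:

```python
import numpy as np

def fit_beta(T, m, Tc):
    """Fit m(T) ~ a (Tc - T)^beta below Tc by linear regression in
    log-log space. Returns (beta, ln a)."""
    mask = (T < Tc) & (m > 0)
    x = np.log(Tc - T[mask])
    y = np.log(m[mask])
    beta, loga = np.polyfit(x, y, 1)   # slope = beta, intercept = ln a
    return beta, loga
```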
Conclusions and Discussions
===========================
We have investigated the phase transition of the ferromagnetic Ising model on the Sierpiński carpet. The numerical procedures in the HOTRG method are modified, so that they fit the recursive structure in the fractal lattice. We have confirmed the presence of the second order phase transition, which is located around $T_{\rm c}^{~} =1.4783(1)$, in accordance with the previous studies [@Carmona; @Monceau1; @Monceau2; @Pruessner; @Bab]. The global behavior of the entire system captured by the free energy per site $f^{(\infty)}_{~}$ exhibits the presence of a very weak singularity at $T_{\rm c}^{~}$, as we observed in Ref. .
What is characteristic of this fractal lattice is the position dependence of the local magnetization $m( T )$ and local energy $u( T )$. For example, we find that the critical exponent $\beta$ differs by a couple of orders of magnitude depending on where the impurity tensor is placed on the fractal lattice. A key feature appears in the local energy $u_{\rm Y}^{~}( T )$, whose temperature derivative $c_{\rm Y}^{~}( T )$ exhibits a sharp peak, in contrast to the smooth behavior of the averaged specific heat $c_f^{~}( T )$. Intuitively, such position dependence can be explained by the density of sites around the chosen location: around the site Y, the spins are interconnected more densely than around the boundary sites A and B in Fig. \[fig:Fig\_5\]. One might find a similarity with the critical behavior on the Bethe lattice[@Baxter; @p4hyper], where the singular behavior is only visible deep inside the system, whereas the free energy of the entire lattice is an analytic function of $T$.
The current study can be extended to other fractal lattices, e.g., variants of the Sierpiński carpet, or to the fractal lattice we studied earlier[@2dising], where the positional dependence of the impurities has not yet been examined. Another direction is to investigate a wider variety of locations on the fractal lattice in order to analyze the mechanism behind the non-trivial position dependence observed here.
This work was partially funded by Agentúra pre Podporu Výskumu a Vývoja (APVV-16-0186 EXSES) and Vedecká Grantová Agentúra MŠVVaŠ SR a SAV (VEGA Grant No. 2/0123/19). T.N. and A.G. acknowledge the support of Ministry of Education, Culture, Sports, Science and Technology (Grant-in-Aid for Scientific Research JSPS KAKENHI 17K05578 and 17F17750). J.G. was supported by Japan Society for the Promotion of Science (P17750). T.N. thanks the funding of the Grant-in-Aid for Scientific Research MEXT “Exploratory Challenge on Post-K computer” (Frontiers of Basic Science: Challenging the Limits).
[99]{} *Phase transitions and critical phenomena*, vol. 1-20, ed. C. Domb, M.S. Green, and J. Lebowitz (Academic Press, 1972-2001). Y. Gefen, B.B. Mandelbrot, and A. Aharony, Phys. Rev. Lett. [**45**]{}, 855-858 (1980). Y. Gefen, Y. Meir, B.B. Mandelbrot, and A. Aharony, Phys. Rev. Lett. [**50**]{}, 145-148 (1983). Y. Gefen, A. Aharony, and B.B. Mandelbrot, J. Phys. A: Math. Gen. [**16**]{}, 1267-1278 (1983). Y. Gefen, A. Aharony, and B.B. Mandelbrot, J. Phys. A: Math. Gen. [**17**]{}, 1277-1289 (1984). J.M. Carmona, U.M.B. Marconi, J.J. Ruiz-Lorenzo, A. Tarancón, Phys. Rev. B [**58**]{}, 14387 (1998). P. Monceau, M. Perreau, F. Hébert, Phys. Rev. B [**58**]{}, 6386 (1998). P. Monceau, M. Perreau, Phys. Rev. B [**63**]{}, 184420 (2001). G. Pruessner, D. Loison, and K.D. Schotte, Phys. Rev. B [**64**]{}, 134414 (2001). M.A. Bab, G. Fabricus, and E. Albano, Phys. Rev. E [**71**]{}, 036139 (2005). Pai-Yi Hsiao, P. Monceau, Phys. Rev. B [**67**]{}, 064411 (2003). T.W. Burkhardt and J.M.J. van Leeuwen, Real-space renormalization, Topics in Current Physics 30 (Springer, Berlin, 1982), and references therein. M. Perreau, Phys. Rev. B [**96**]{}, 174407 (2017), and references therein. J. Genzor, A. Gendiar, and T. Nishino, Phys. Rev. E [**93**]{}, 012141 (2016). R. Krcmar, J. Genzor, Y. Lee, H. Čenčarikovǎ, T. Nishino, and A. Gendiar, Phys. Rev. E [**98**]{}, 062114 (2018). Z.Y. Xie, J. Chen, M.P. Qin, J.W. Zhu, L.P. Yang, and T. Xiang, Phys. Rev. B [**86**]{}, 045139 (2012). L. de Lathauwer, B. de Moor, J. Vandewalle, SIAM J. Matrix Anal. Appl. [**21**]{}, 1324 (2000). R. J. Baxter, Exactly Solved Models in Statistical Mechanics (Academic Press, London, 1982). R. Krcmar, A. Gendiar, K. Ueda, and T. Nishino, J. Phys. A: Math. Theor. [**41**]{} 125001 (2008).
---
abstract: 'Electron-phonon coupling plays a central role in the transport properties and photophysics of organic crystals. Successful models describing charge- and energy-transport in these systems routinely include these effects. Most models for describing photophysics, on the other hand, only incorporate local electron-phonon coupling to intramolecular vibrational modes, while nonlocal electron-phonon coupling is neglected. One might expect nonlocal coupling to have an important effect on the photophysics of organic crystals, because it gives rise to large fluctuations in the charge-transfer couplings, and charge-transfer couplings play an important role in the spectroscopy of many organic crystals. Here, we study the effects of nonlocal coupling on the absorption spectrum of crystalline pentacene and 7,8,15,16-tetraazaterrylene. To this end, we develop a new mixed quantum-classical approach for including nonlocal coupling in spectroscopic and transport models for organic crystals. Importantly, our approach does not assume that the nonlocal coupling is linear, in contrast to most modern charge-transport models. We find that the nonlocal coupling broadens the absorption spectrum non-uniformly across the absorption line shape. In pentacene, for example, our model predicts that the lower Davydov component broadens considerably more than the upper Davydov component, explaining the origin of this experimental observation for the first time. By studying a simple dimer model, we are able to attribute this selective broadening to correlations between the fluctuations of the charge-transfer couplings. Overall, our method incorporates nonlocal electron-phonon coupling into spectroscopic and transport models with computational efficiency, generalizability to a wide range of organic crystals, and without any assumption of linearity.'
author:
- 'Steven E. Strong'
- 'Nicholas J. Hestand'
bibliography:
- 'lib.bib'
title: 'Modeling Nonlocal Electron-Phonon Coupling in Organic Crystals Using Interpolative Maps: The Spectroscopy of Crystalline Pentacene and 7,8,15,16-Tetraazaterrylene'
---
[^1]
[^2]
Introduction\[sec:intro\]
=========================
Absorption and photoluminescence spectroscopies are powerful techniques for interrogating the electronic structure of conjugated organic materials. Spectral shifts, vibronic peak ratio changes, and line broadening all provide information about the sign and magnitude of the exciton coupling, the curvature and width of the exciton band, the exciton coherence length, and the disorder within the system.[@kasha_energy_1963; @kasha_exciton_1965; @mcrae_enhancement_1958; @knapp_lineshapes_1984; @fidder_optical_1991; @spano_absorption_2006; @spano_vibronic_2011; @spano_spectral_2010; @hestand_expanded_2018] Over the past several decades, significant effort has been devoted to developing a comprehensive understanding of these spectroscopic signatures, with a great deal of success.[@tretiak_density_2002; @spano_spectral_2010; @spano_h_2014; @koehler_electronic_2015; @bredas_wspc_2016; @hestand_molecular_2017; @hestand_expanded_2018; @nelson_non_2020]
One of the main challenges in modeling conjugated organic systems is the importance of electron-phonon coupling,[@holstein_studies_1959; @holstein_studies_1959-1; @su_solitons_1979; @munn_theory_1985; @munn_theory_1985-1; @hannewald_note_2004; @hannewald_theory_2004; @zhao_munnsilbey_1994; @stojanovic_nonlocal_2004; @yi_nonlocal_2012; @troisi_charge-transport_2006; @coropceanu_charge_2007; @troisi_dynamics_2006; @landi_rapid_2018; @arago_dynamics_2015; @xie_nonlocal_2018; @landi_explaining_2019; @lee_vibronic_2017; @fetherolf_unification_2020; @duan_ultrafast_2019; @yonehara_role_2020; @terenziani_charge_2006; @wang_multiscale_2010; @wang_mixed_2011; @ciuchi_transient_2011; @fratini_transient_2016; @alvertis_impact_2020; @sanchez_interaction_2010] which is commonly divided into two types: local and nonlocal.[@coropceanu_charge_2007] Local electron-phonon coupling is the modulation of the electronic Hamiltonian by predominately intramolecular phonons, while nonlocal (Peierls) electron-phonon coupling is the modulation of the electronic Hamiltonian by predominately lattice phonons. Both types are important for accurate descriptions of excited states in organic systems, and several model Hamiltonians have been devised to account for these effects: Holstein models describe local electron-phonon coupling,[@holstein_studies_1959; @holstein_studies_1959-1] Su-Schrieffer-Heeger models describe nonlocal electron-phonon couplings,[@su_solitons_1979] and extended Holstein models describe both simultaneously.[@lee_vibronic_2017; @fetherolf_unification_2020; @duan_ultrafast_2019] In terms of spectroscopy, local vibronic coupling is responsible for the pronounced $\sim$1400 [$\mathrm{cm}^{-1}$]{} vibronic progression observed in the optical response of many conjugated organic systems and is routinely incorporated in spectroscopic models. However, far less attention has been paid to the role that nonlocal electron-phonon coupling plays in the optical response.
Several works have shown that nonlocal electron-phonon coupling gives rise to large fluctuations in the charge-transfer (CT) interactions within organic systems, on the order of their average values.[@troisi_electronic_2005; @troisi_dynamics_2006; @troisi_prediction_2007; @wang_multiscale_2010; @landi_rapid_2018; @arago_dynamics_2015; @arago_regimes_2016; @fornari_exciton_2016] This phenomenon arises from the facts that 1) optical lattice phonons have energies well below $k_B T$ at room temperature[@dellavalle_intramolecular_2004] (in contrast with inorganic systems[@kulda_inelastic_1994]), and 2) the CT couplings are very sensitive to molecular packing; displacements on the order of the carbon-carbon bond length can dramatically alter the magnitude *and* sign of these quantities.[@kazmaier_theoretical_1994; @bredas_charge_2004; @coropceanu_charge_2007; @gisslen_crystallochromy_2009; @hestand_expanded_2018]
The majority of work considering nonlocal electron-phonon coupling has focused on its important role in charge transport, where it results in charge carrier localization and limits carrier mobility.[@troisi_charge-transport_2006; @troisi_dynamics_2006; @coropceanu_charge_2007; @wang_multiscale_2010; @troisi_charge_2011; @wang_mixed_2011; @ciuchi_transient_2011; @fratini_transient_2016] The same CT couplings that are perturbed by the nonlocal electron-phonon coupling, however, also play an important role in the spectroscopic response of many organic systems. In closely packed organic crystals, for example, the optically bright Frenkel excitons couple through a short-ranged, CT mediated “superexchange” mechanism.[@harcourt_rate_1994; @scholes_rate_1995; @hestand_expanded_2018] This implies that the spectroscopy of organic crystals should also be sensitive to fluctuations in the CT couplings.[@klugkist_scaling_2008; @fidder_optical_1991; @arago_dynamics_2015; @arago_regimes_2016; @fornari_exciton_2016] Surprisingly, however, absorption spectra of many organic crystals can be successfully modeled without including nonlocal electron-phonon coupling.[@hestand_expanded_2018; @hennessy_vibronic_1999; @hoffmann_lowest_2000; @hoffmann_optical_2002; @heinemeyer_exciton-phonon_2008; @gisslen_crystallochromy_2009; @gisslen_crystallochromy_2011; @gao_vibronic_2011; @lalov_vibronic_2007; @lalov_model_2008; @stradomska_intermediate_2011; @yamagata_nature_2011; @beljonne_charge_2013; @yamagata_hj_2014; @hestand_interference_2015; @hestand_polarized_2015; @austin_enhanced_2017; @oleson_perylene_2019; @lewis_ab_2020; @tempelaar_vibronic_2017]
To resolve this apparent discrepancy, we develop a new method for modeling the spectroscopy of organic crystals that incorporates nonlocal electron-phonon coupling, in addition to the other common ingredients of spectroscopic models: Frenkel excitons, CT excitons, and local electron-phonon coupling. The method is based on a mixed quantum-classical approach that treats the low-frequency phonon modes classically via molecular dynamics (MD) simulations while the high-frequency intramolecular vibrations and electronic degrees of freedom are treated quantum-mechanically using a Holstein-style Hamiltonian. We parameterize the Hamiltonian at each time step according to the MD trajectory, using a mapping approach to make repeated evaluation of the CT couplings computationally tractable. Both the mixed quantum-classical approach and the map are similar in spirit to approaches used to model the infrared spectroscopy of water and the amide I band of peptides.[@sceats_intramolecular_1979; @belch_oh_1983; @ojamae_simulation_1992; @buck_structure_1998; @buch_molecular_2005; @corcelli_combined_2004; @corcelli_infrared_2005; @gruenbaum_robustness_2013; @skinner_vibrational_2009; @yang_signatures_2010; @bakker_vibrational_2010; @kwac_molecular_2003; @schmidt_ultrafast_2004; @la_cour_jansen_modeling_2006; @courjansen_transferable_2006; @wang_development_2011; @carr_assessment_2014; @cunha_assessing_2016; @feng_refinement_2018] Importantly, our approach allows for the treatment of nonlocal electron-phonon couplings very generically, and can account for arbitrarily complex dependence of the couplings on the intermolecular structure. In particular, it is not necessary to make the common assumption that the nonlocal electron-phonon coupling is linear.
We apply our method to understand the effects of nonlocal electron-phonon coupling on the absorption spectroscopy of organic crystals. We focus on two specific systems that exhibit different packing motifs: pentacene, which packs in a herringbone structure,[@holmes_nature_1999] and 7,8,15,16-tetraazaterrylene (TAT), which exhibits a slipped $\pi$-stacking structure (Fig. \[fig:schematic\]).[@fan_synthesis_2012; @wise_spectroscopy_2014]

We chose these systems for three reasons: First, to evaluate the effects of nonlocal coupling between systems with different packing motifs and therefore different nonlocal coupling forms and strengths.[@landi_rapid_2018] Second, the absorption spectroscopy of both systems has previously been modeled in the absence of nonlocal coupling, so significant parameterization efforts are not required.[@yamagata_hj_2014; @hestand_polarized_2015] And finally, because these and related systems have received considerable attention as promising organic semiconductors.[@koehler_electronic_2015; @bredas_wspc_2016; @ostroverkhova_organic_2016; @schweicher_molecular_2020] We find that both systems exhibit significant fluctuations in the CT and total excitonic coupling due to nonlocal electron-phonon coupling, in agreement with previous results.[@troisi_electronic_2005; @troisi_charge-transport_2006; @troisi_dynamics_2006; @arago_dynamics_2015; @arago_regimes_2016; @fornari_exciton_2016; @landi_rapid_2018] These fluctuations broaden the absorption spectrum, but interestingly, the broadening is not uniform. We find that this nonuniformity is due at least in part to correlations in the electron- and hole-transfer couplings. This is most obvious for pentacene where the $||\mathbf{a}$-polarized lower Davydov component broadens six times more than the ${\perp}\mathbf{a}$-polarized upper Davydov component.
Methods
=======
Model Hamiltonian\[sec:model\]
------------------------------
Our model is motivated by a separation in energy scales. In organic crystals, the intermolecular degrees of freedom oscillate at much lower frequencies than the intramolecular and electronic degrees of freedom due to the weak van der Waals forces that hold the crystals together. With this in mind, we separate the Hamiltonian into a classical part $H_\mathrm{cls}$ that accounts for low-frequency intermolecular vibrations, and a quantum mechanical part $\hat H_\mathrm{qm}$ that accounts for the high-frequency degrees of freedom.[@skinner_vibrational_2009; @wang_mixed_2011; @shi_modeling_2018; @cerezo_adiabatic-molecular_2019]
The quantum-mechanical part of the Hamiltonian is Holstein-like and includes Frenkel excitons, CT excitons, and vibronic coupling $$\hat{H}_\mathrm{qm}=\hat{H}_\mathrm{FE}+\hat{H}_\mathrm{CT}+\hat{H}_\mathrm{vib}.
\label{eq:ham}$$ Here $\hat{H}_\mathrm{FE}$ represents the Frenkel exciton Hamiltonian $$\label{eq:ham_fe}
\hat{H}_\mathrm{FE}=\sum_{i}\left(E_{S_1}+\Delta_{0-0}\right)B_{i}^{\dagger}B_{i}
+\sum_{i\ne j} J\left(\mathbf{q}^{N}_{i},\mathbf{q}^{N}_{j}\right)B_{i}^{\dagger}B_{j}$$ where $i$ and $j$ label the molecules in the crystal and the operator $B_i^{\dagger}$ ($B_{i}$) creates (annihilates) a Frenkel exciton on molecule $i$. The first sum in $\hat{H}_\mathrm{FE}$ accounts for the energy of a localized Frenkel exciton; $E_{S_{1}}$ is the excitation energy for the $\mathrm{S}_{1}\leftarrow \mathrm{S}_{0}$ transition for a molecule in dilute solution and $\Delta_{0-0}$ is the solution-to-crystal shift that accounts for non-resonant interactions within the crystal. The second term in $\hat{H}_\mathrm{FE}$ accounts for long-range Coulombic coupling between excitons at sites $i$ and $j$, $J(\mathbf{q}^{N}_i,\mathbf{q}^{N}_j)$, where $\mathbf{q}^{N}_{i}$ is the $3N$ dimensional vector of the atomic coordinates of molecule $i$, and $N$ is the number of atoms in molecule $i$.
The second term in Eq. \[eq:ham\] is the CT Hamiltonian $$\label{eq:ham_ct}
\begin{split}
\hat{H}_\mathrm{CT}&=\sum_{i\ne j}E_{CT}\left(\mathbf{q}^{N}_{i},\mathbf{q}^{N}_{j}\right)c_i^{\dagger}c_i d_j^{\dagger}d_j\\
&+\sum_{i\ne j}t_{e}\left(\mathbf{q}^{N}_{i},\mathbf{q}^{N}_{j}\right)c_i^{\dagger}c_j
+\sum_{i\ne j}t_{h}\left(\mathbf{q}^{N}_{i},\mathbf{q}^{N}_{j}\right)d_i^{\dagger}d_j,
\end{split}$$ where the operator $c_{i}^{\dagger}$ ($c_{i}$) creates (annihilates) an electron on molecule $i$ and the operator $d_{i}^{\dagger}$ ($d_{i}$) creates (annihilates) a hole on molecule $i$. The energy of a CT exciton with the electron localized to molecule $i$ and the hole localized to molecule $j$ is represented by $$\label{eq:ect}
E_{CT}\left(\mathbf{q}^{N}_i,\mathbf{q}^{N}_j\right)=I_{P}-E_{A}+P+V\left(\mathbf{q}^{N}_i,\mathbf{q}^{N}_j\right).$$ This energy depends on the atomic positions of the host chromophores $i$ and $j$ through the Coulomb binding energy $V\left(\mathbf{q}^{N}_i,\mathbf{q}^{N}_j\right)$. The ionization potential $I_P$, electron affinity $E_A$, and polarization energy $P$ also enter into the expression for $E_{CT}$ but we approximate these terms to be independent of the coordinates of the host molecules. The second and third sums in $\hat{H}_\mathrm{CT}$ account for short-range CT coupling with $t_e\left(\mathbf{q}^{N}_i,\mathbf{q}^{N}_j\right)$ and $t_h\left(\mathbf{q}^{N}_i,\mathbf{q}^{N}_j\right)$ representing the electron- and hole-transfer couplings between sites $i$ and $j$. These terms couple the Frenkel and CT states, and control charge hopping within the CT manifold.
The third term in Eq. \[eq:ham\] accounts for the high-frequency intramolecular vibrations responsible for the vibronic progression observed in the absorption spectrum
$$\label{eq:ham_vib}
\begin{split}
\hat{H}_\mathrm{vib}&=\hbar\omega_\mathrm{vib}\sum_{i}b_{i}^{\dagger}b_{i}
+\hbar\omega_\mathrm{vib}\sum_{i}\left\{\lambda\left(b_{i}^{\dagger}+b_{i}\right)+\lambda^2\right\}B_{i}^{\dagger}B_{i}\\
&+\hbar\omega_\mathrm{vib}\sum_{i\ne j}\left\{\lambda_{e}\left(b_{i}^{\dagger}+b_{i}\right)+\lambda_{h}\left(b_{j}^{\dagger}+b_{j}\right)+\lambda_{h}^{2}+\lambda_{e}^{2}\right\}c_i^{\dagger}c_i d_j^{\dagger}d_j.
\end{split}$$
Here the operator $b_{i}^{\dagger}$ ($b_{i}$) creates (annihilates) a vibrational quantum on chromophore $i$ with energy $\hbar\omega_\mathrm{vib}$. The first summation therefore describes the vibrational energy of each molecule. The final three terms account for the local electron-phonon coupling of the exciton, electron, and hole to the intramolecular vibration. The Huang-Rhys factors $\lambda^2$, $\lambda_e^2$, and $\lambda_h^2$, describe the shift in the nuclear potential relative to the ground state when the molecule hosts a Frenkel exciton, electron, or hole, respectively. In principle, all intramolecular vibrational modes can be accounted for, but this is prohibitively expensive for all but the simplest molecules. Instead, we treat the numerous closely spaced modes that contribute to the vibronic progression empirically using a line width that depends on the number of vibrational quanta in the absorbing state, as discussed by Yamagata et al.[@yamagata_hj_2014]
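To make the structure of Eqs. \[eq:ham\_fe\]–\[eq:ham\_ct\] concrete, the sketch below assembles the purely electronic part of the Hamiltonian for a dimer, with the vibrational terms of Eq. \[eq:ham\_vib\] omitted. The basis ordering and the sign conventions for how $t_e$ and $t_h$ couple the Frenkel and CT states are illustrative assumptions, not the parameterization used in this work.

```python
import numpy as np

def dimer_hamiltonian(E_S1, E_CT, J, t_e, t_h):
    """Electronic dimer Hamiltonian in the basis
    {|FE_1>, |FE_2>, |CT: e on 2, h on 1>, |CT: e on 1, h on 2>}.
    Frenkel states couple to CT states through t_e (electron transfer)
    and t_h (hole transfer); J is the Coulombic exciton coupling."""
    return np.array([
        [E_S1, J,    t_e,  t_h],
        [J,    E_S1, t_h,  t_e],
        [t_e,  t_h,  E_CT, 0.0],
        [t_h,  t_e,  0.0,  E_CT],
    ])

# toy parameters (eV, hypothetical): diagonalize to get the exciton levels
evals = np.linalg.eigvalsh(dimer_hamiltonian(2.1, 2.4, 0.05, 0.1, -0.08))
```

Diagonalizing such a matrix shows how the CT couplings mix Frenkel and CT character into the eigenstates, which is the origin of the CT-mediated superexchange coupling discussed above.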
Nonlocal electron-phonon coupling enters the Hamiltonian classically through the time-dependent fluctuations in the atomic coordinates. That is, in Eqs. \[eq:ham\]–\[eq:ham\_vib\], $\mathbf{q}^{N}_{i}\rightarrow\mathbf{q}^{N}_{i}(t)$. We model these fluctuations using MD simulations of the crystal. As discussed above, this corresponds to a separation of the Hamiltonian into the quantum mechanical one given in Eq. \[eq:ham\] and the classical Hamiltonian $$\label{eq:classicalHam}
H_\mathrm{cls} = \sum_\alpha \frac{\mathbf{p}_\alpha^2}{2M_\alpha} + U(\{\mathbf{q}\})$$ where $\mathbf{p}_\alpha$ is the momentum conjugate to atomic coordinate $\mathbf{q}_\alpha$, $M_\alpha$ is the mass of particle $\alpha$, and $U(\{\mathbf{q}\})$ is the potential used in the MD simulation (Sec. \[sec:md\]), which is, in principle, a function of the set of all the coordinates $\{\mathbf{q}\}$. Here the sum over $\alpha$ goes over atoms, not molecules. We then use various ab-initio techniques and the mapping approach developed in Section \[sec:TATmap\] to compute the $\mathbf{q}^{N}_{i}$-dependent quantities in the Hamiltonian (Eq. \[eq:ham\]). If the coordinates $\{\mathbf{q}\}$ are instead static and taken from an experimental crystal structure, the quantum mechanical Hamiltonian reduces to the time-independent one used in previous work to model the optical properties of many different organic crystals.[@hestand_expanded_2018; @hennessy_vibronic_1999; @hoffmann_lowest_2000; @hoffmann_optical_2002; @heinemeyer_exciton-phonon_2008; @gisslen_crystallochromy_2009; @gisslen_crystallochromy_2011; @gao_vibronic_2011; @lalov_vibronic_2007; @lalov_model_2008; @stradomska_intermediate_2011; @yamagata_nature_2011; @beljonne_charge_2013; @yamagata_hj_2014; @hestand_interference_2015; @hestand_polarized_2015; @austin_enhanced_2017; @oleson_perylene_2019]
In reality, all of the quantities appearing in the quantum mechanical Hamiltonian depend on atomic coordinates and therefore fluctuate in time. However, we neglect the time dependence of the terms not explicitly expressed as a function of $\mathbf{q}^{N}_{i}$ in Eqs. \[eq:ham\]–\[eq:ham\_vib\] ($E_{S_{1}}$, $\Delta_{0-0}$, $\lambda$, $\lambda_{e}$, $\lambda_{h}$, $\omega_\mathrm{vib}$) under the assumption that their fluctuations are small and the spectroscopy can be adequately described using average values.
While many authors have included nonlocal electron-phonon couplings in tight-binding Hamiltonians like Eq. \[eq:ham\],[@su_solitons_1979; @munn_theory_1985; @munn_theory_1985-1; @hannewald_note_2004; @hannewald_theory_2004; @zhao_munnsilbey_1994; @stojanovic_nonlocal_2004; @yi_nonlocal_2012; @lee_vibronic_2017; @xie_nonlocal_2018; @landi_rapid_2018; @landi_explaining_2019; @fetherolf_unification_2020; @duan_ultrafast_2019; @yonehara_role_2020; @sanchez_interaction_2010] our approach is unique because it is able to handle nonlocal electron-phonon couplings of arbitrary form. Typically, these couplings are treated in the linear response regime, assuming that lattice distortions are small enough that the electronic Hamiltonian is perturbed linearly. Recent work has relaxed this linear approximation in the case of local electron-phonon couplings,[@adolphs_going_2013; @li_effects_2015; @li_quasiparticle_2015] but to our knowledge, nonlocal electron-phonon couplings have not been treated outside the linear regime. In organic crystals, nonlinear effects in nonlocal electron-phonon couplings may be important due to the sensitivity of the CT couplings to small geometric displacements. Our method includes nonlocal electron-phonon coupling explicitly without the usual assumption of linearity.
To calculate the absorption spectrum, we represent the time-dependent Hamiltonian $\hat H_\mathrm{qm}(t)$ using a two-particle basis set,[@philpott_theory_1971; @spano_excitons_2006; @spano_spectral_2010; @hestand_expanded_2018] truncated to include only states with less than $v_\mathrm{max}$ vibrational quanta. We then compute the polarized absorption spectrum using the Fourier transform of the transition-dipole autocorrelation function $C_\mu(t)$
$$\label{eq:absorption}
\begin{split}
A(\omega)&\propto \omega~\mathrm{Re} \int_{0}^{\infty} dt~e^{-i\omega t} C_\mu(t)\\
C_\mu(t)&=\left\langle \boldsymbol\varepsilon\boldsymbol\cdot\boldsymbol\mu(t+t_0)
\exp_+ \left\{-\frac{i}{\hbar}\int_{t_{0}}^{t} \, d\tau\,\hat{H}_\mathrm{qm}(\tau+t_{0})-\hat{\Gamma}(\tau+t_{0})\right\}\boldsymbol\varepsilon\boldsymbol\cdot\boldsymbol\mu\left(t_0\right)\right\rangle_{t_0}.
\end{split}$$
Here $\exp_+$ is a time-ordered exponential, $\boldsymbol\mu(t)$ contains the $\mathrm{S}_{1}\leftarrow \mathrm{S}_{0}$ transition dipole moments of the basis states at time $t$, $\boldsymbol\varepsilon$ is the polarization vector of the incoming light, $\hat{\Gamma}\left(t\right)$ is a matrix of eigenstate-dependent line broadening parameters that depends on the number of vibrational quanta in each eigenstate of $H_\mathrm{qm}(t)$ \[see Supplementary Material (SM)\],[@yamagata_hj_2014] and $\left\langle\ldots\right\rangle_{t_0}$ is an average over initial time points of the MD trajectory. Each spectrum is averaged over $N_\mathrm{samp}$ initial time points $t_0$ separated by $T_\mathrm{samp}$ (see SM). The time-dependence of the transition dipole moments arises due to fluctuations in the molecular orientations according to $H_\mathrm{cls}$. We solve Eq. \[eq:absorption\] using the Numerical Integration of the Schrödinger Equation approach.[@jansen_nonadiabatic_2006; @jansen_waiting_2009]
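The essential numerical step in Eq. \[eq:absorption\] is the time-ordered propagation under the fluctuating Hamiltonian. The sketch below approximates the time-ordered exponential as a product of short-time propagators for a toy trajectory; the lifetime-broadening matrix $\hat\Gamma$ and the average over initial times are omitted for brevity.

```python
import numpy as np

def dipole_correlation(H_traj, mu_traj, dt, hbar=1.0):
    """C(t_k) = mu(t_k) . U(t_k, 0) mu(0) for one trajectory segment.
    H_traj: (n_t, n_s, n_s) Hamiltonians; mu_traj: (n_t, n_s) projected
    transition dipoles. U is accumulated as a time-ordered product of
    short-time propagators exp(-i H(t_j) dt / hbar)."""
    n_t, n_s = mu_traj.shape
    U = np.eye(n_s, dtype=complex)
    C = np.empty(n_t, dtype=complex)
    psi0 = mu_traj[0].astype(complex)     # dipole operator on the ground state
    for k in range(n_t):
        C[k] = mu_traj[k] @ (U @ psi0)
        # short-time propagator for step k -> k+1 (time ordering by left-mult.)
        w, v = np.linalg.eigh(H_traj[k])
        U = (v * np.exp(-1j * w * dt / hbar)) @ v.conj().T @ U
    return C
```

For a static Hamiltonian this reduces to the familiar result $C(t) \propto e^{-i\omega_0 t}$, whose Fourier transform is a single line at the transition frequency.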
Molecular Dynamics Simulations\[sec:md\]
----------------------------------------
We perform MD simulations of TAT and pentacene using the LAMMPS, VMD, and TopoTools packages.[@plimpton_fast_1995; @humphrey_vmd_1996; @kohlmeyer_topotools_2017] We model the intra- and intermolecular potentials using the DREIDING force field[@mayo_dreiding_1990] with atomic charges taken from electronic structure calculations (see SM).[@della_valle_computed_2008; @strong_tetracene_2015; @deng_predictions_2004] This force field reproduces experimental crystal structures in similar systems.[@mattheus_modeling_2003]
The MD simulations are performed at constant number of particles, volume, and energy (NVE) and use a rRESPA multi-timescale integrator[@tuckerman_reversible_1992] to accommodate the fast intramolecular motions (see SM). The TAT simulations are initialized in the crystal structure of Fan et al.[@fan_synthesis_2012] and the pentacene simulations are initialized in the crystal structure of Holmes et al.[@holmes_nature_1999] We simulate a 10$\times$2$\times$2 supercell of TAT (80 molecules) and a 4$\times$4$\times$2 supercell of pentacene (64 molecules). Each simulation is equilibrated for 10 ps during which velocities are rescaled every 1 ps to maintain a temperature of 298 K. To compute the distribution of intermolecular geometries and couplings, we average over 1 ns of simulation time, collecting data every 1 ps. To compute the spectra, we evaluate the Hamiltonian every 2 fs for 100 ps.
Results and Discussion
======================
While the time-dependent quantities in equations \[eq:ham\]–\[eq:ham\_vib\] may be calculated directly from the atomic coordinates derived from MD simulations using standard *ab initio* approaches, modeling the spectroscopy of organic crystals in this way would require millions of such calculations because they must be repeated for each molecule, or pair of molecules, at every time step. These requirements make such calculations computationally prohibitive for most applications.[@troisi_dynamics_2006; @arago_dynamics_2015; @nematiaram_practical_2019; @begusic_on-the-fly_2020] To render this approach practical and generalizable, we develop a map to estimate the values of the time-dependent quantities in the Hamiltonian ($E_{CT}$, $J$, $t_e$, and $t_h$) for any (realistic) set of atomic coordinates, without the need for repeated electronic structure calculations. This approach is motivated by similar methods that are used to model the infrared spectroscopy of water and the amide I stretch in peptides.[@sceats_intramolecular_1979; @belch_oh_1983; @ojamae_simulation_1992; @buck_structure_1998; @buch_molecular_2005; @corcelli_combined_2004; @corcelli_infrared_2005; @gruenbaum_robustness_2013; @skinner_vibrational_2009; @yang_signatures_2010; @bakker_vibrational_2010; @kwac_molecular_2003; @schmidt_ultrafast_2004; @la_cour_jansen_modeling_2006; @courjansen_transferable_2006; @wang_development_2011; @carr_assessment_2014; @cunha_assessing_2016; @feng_refinement_2018]
For the CT energies $E_{CT}$ and the Coulomb couplings $J$, relatively simple relationships between the atomic coordinates and the electronic properties have already been developed. The CT couplings, on the other hand, are complex functions of the overlap between the frontier molecular orbitals of both chromophores involved, which are often oscillatory structures with many nodal surfaces.[@coropceanu_charge_2007; @kazmaier_theoretical_1994; @hestand_expanded_2018] While a similar mapping approach was recently used to account for diagonal disorder in organic semiconductors,[@shi_modeling_2018] the concept has not, to our knowledge, been applied to the CT couplings. The main challenge of the problem is its dimensionality: assuming that the coupling between two molecules is independent of the surrounding molecules, the map still must be parameterized in a $3\cdot2N-6\sim200$ dimensional space ($N=42$ for TAT and $N=36$ for pentacene). To overcome this problem, we make the rigid-body approximation[@day_atomistic_2003; @coropceanu_charge_2007; @girlando_peierls_2010; @landi_rapid_2018] when evaluating $\hat{H}_\mathrm{qm}$. Specifically, before computing the time-dependent quantities in the Hamiltonian (Eq. \[eq:ham\]), we replace each molecule in the MD simulations with the geometry-optimized monomer structure, translated and rotated to have the same center of mass and principal axes of inertia. In some respects, it would be simpler to perform an MD simulation with rigid molecules. Instead, we use the DREIDING force field, which calls for flexible molecules, because it is well tested for organic crystals like these and is easily transferable to new systems without extensive parameterization efforts.[@mayo_dreiding_1990; @strong_tetracene_2015; @mattheus_modeling_2003] Simulating rigid monomers would require reparameterizing the MD force field in this work, as well as in future studies of different organic crystals.
The rigid-body approximation decouples the intra- and intermolecular phonons such that the nonlocal electron-phonon coupling depends only on the intermolecular modes and contributions from the intramolecular modes are neglected. Note, however, that the important high-frequency intramolecular modes are still treated through the local electron-phonon coupling (Eq. \[eq:ham\_vib\]). Importantly, the rigid-body approximation reduces the dimensionality of the problem from $\sim$200 to 6. Since each monomer is exactly the same, the relative coordinates of any pair can be specified by 3 translational and 3 rotational degrees of freedom. In this reduced dimension, it might be possible to perform an explicit interpolation on a 6-dimensional grid that maps nearest-neighbor intermolecular geometries observed in the simulations to precomputed couplings at sparse grid points. We first test this idea with TAT because it is simpler than pentacene in regard to modeling optical properties; TAT can be modeled as a collection of non-interacting one-dimensional $\pi$-stacks[@yamagata_hj_2014] while pentacene must be treated as a collection of two-dimensional layers (Fig. \[fig:schematic\]).[@hestand_exciton_2015]
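The rigid-body replacement described above can be sketched concretely. The snippet below is our own hedged illustration (hypothetical names and coordinates, not the production workflow): it replaces a distorted MD monomer with a reference geometry that shares its center of mass and principal axes of inertia. A real implementation must additionally resolve the sign and ordering ambiguities of the inertia eigenvectors, which we gloss over here.

```python
import numpy as np

def principal_axes(coords, masses):
    """Center of mass and principal axes of inertia of one molecule."""
    com = np.average(coords, axis=0, weights=masses)
    r = coords - com
    inertia = np.zeros((3, 3))
    for m, x in zip(masses, r):
        inertia += m * (np.dot(x, x) * np.eye(3) - np.outer(x, x))
    _, axes = np.linalg.eigh(inertia)      # columns are principal axes
    return com, axes

def rigidify(md_coords, ref_coords, masses):
    """Replace a distorted MD monomer with the geometry-optimized reference,
    translated and rotated to match the MD center of mass and principal axes.
    Caveat: eigh leaves the axis signs/ordering ambiguous; a production code
    must fix them to obtain a proper, continuous rotation."""
    com_md, ax_md = principal_axes(md_coords, masses)
    com_ref, ax_ref = principal_axes(ref_coords, masses)
    R = ax_md @ ax_ref.T          # orthogonal map: reference frame -> MD frame
    return (ref_coords - com_ref) @ R.T + com_md
```

Because `R` is orthogonal, the output is always an exact rigid copy of the reference geometry, regardless of how distorted the MD snapshot is.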
TAT
---
### Developing the Map\[sec:TATmap\]
To assess the feasibility of an interpolative map, we quantify the fluctuations of the 6 intermolecular degrees of freedom for all nearest-neighbor pairs of molecules in TAT (Fig. \[fig:TATdimers\]).
![Distributions of nearest-neighbor pair geometries in TAT. The molecular coordinate system is defined in Fig. \[fig:schematic\]a. (a) Distributions of the fluctuations of the translational slips. (b) Distribution of the fluctuations of the rotational axes on a unit sphere, depicted as a Lambert azimuthal equal-area projection in which the $\delta\Theta_z=1$ pole of the unit sphere is mapped to the origin, the $\delta\Theta_z=0$ equator is shown as the thin black line, and the $\delta\Theta_z=-1$ pole is mapped to the thick black circle at the perimeter. This projection conserves area, which is an important property for a histogram. (c) Distribution of the cosine of the fluctuations of the rotational angles. It is important to plot the cosine instead of the angle itself to avoid singularities in the Jacobian at $\delta\theta=0^\circ$.[@strong_tetracene_2015][]{data-label="fig:TATdimers"}](slipAndAxisDists_TAT_1col.pdf)
The 3 translational degrees of freedom are described as “slips” $\mathbf{s}_{ij}$ of molecule $j$’s center-of-mass along the principal axes of molecule $i$. The 3 orientational degrees of freedom are described by an angle of rotation $\theta_{ij}$ (1 degree of freedom) and the rotation axis $\mathbf\Theta_{ij}$ about which to rotate (3 degrees of freedom minus 1 for normalization). The rotation defined by $\theta_{ij}$ and $\mathbf\Theta_{ij}$ superimposes the principal axes of molecule $j$ onto those of molecule $i$.
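In practice the six degrees of freedom can be extracted from the center-of-mass positions and principal-axis frames of the two molecules. A minimal sketch (our own illustration, with hypothetical names) follows; the axis-angle pair is recovered from the relative rotation matrix.

```python
import numpy as np

def pair_geometry(com_i, axes_i, com_j, axes_j):
    """Six intermolecular degrees of freedom for a dimer.

    axes_i, axes_j : 3x3 matrices whose columns are the principal axes.
    Returns the slips s (center of mass of j in the frame of i), the
    rotation angle theta in degrees, and the unit rotation axis Theta of
    the rotation taking j's axes onto i's.
    """
    s = axes_i.T @ (com_j - com_i)                 # translational slips
    R = axes_i @ axes_j.T                          # superimposes j onto i
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.degrees(np.arccos(cos_t))
    # axis from the antisymmetric part of R; ill-defined when theta = 0
    ax = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    n = np.linalg.norm(ax)
    Theta = ax / n if n > 1e-12 else np.array([0.0, 0.0, 1.0])
    return s, theta, Theta
```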
We are interested in the fluctuations of the intermolecular geometries about their averages, which we define as $\delta\mathbf{s}$, $\delta\theta$, and $\delta\mathbf\Theta$. For the translational slips, $\delta\mathbf{s}$ is simply defined by $\delta\mathbf{s} = \mathbf{s} - \langle \mathbf{s}\rangle$. The definitions of $\delta\theta$ and $\delta\mathbf\Theta$ are more complex, and are discussed in the SM. In the case of TAT, the molecules are $\pi$-stacked, so their equilibrium structure has no rotation ($\langle\theta\rangle=0^\circ$, Table \[tab:TATgrid\]). In this special case, $\theta=\delta\theta$ and $\mathbf\Theta=\delta\mathbf{\Theta}$ so these distinctions are unimportant, but as we will see, this is not the case for pentacene (Sec. \[sec:pen\]). Several other details regarding the intermolecular degrees of freedom are discussed in the SM.
We find that the distributions of slips and orientations are localized (Fig. \[fig:TATdimers\]), making this system amenable to an interpolation map as described above. It is not surprising that the slip distributions and rotation angle distribution are localized, since the system is crystalline so mobility is limited, but it was not clear to us *a priori* that the distribution of rotation axes would likewise be localized. Specifically, we find that the observed rotation axes $\delta\mathbf{\Theta}$ cluster around the $xz$-equator of the unit sphere. This important observation means that instead of covering the entire surface of the unit sphere with an interpolation grid of rotational axes, we only need to include rotational axes along the $xz$-equator.
Based on the distributions in Fig. \[fig:TATdimers\], we construct a grid that encompasses most of the observed configurations (Table \[tab:TATgrid\]).
|                      | Expt.[@fan_synthesis_2012] | $\langle \cdot \rangle_\mathrm{sim}$ | Extent | Spacing | Grid Size |
|:---------------------|---------------------------:|-------------------------------------:|-------:|--------:|----------:|
| $s_x$ (Å)            | 1.00 | 1.01 | 0.6          | 0.2        | 7 |
| $s_y$ (Å)            | 1.28 | 1.12 | 0.6          | 0.2        | 7 |
| $s_z$ (Å)            | 3.37 | 3.42 | 0.2          | 0.2        | 3 |
| $\theta$ ($^\circ$)  | 0    | 0    | 10           | 5          | 3 |
| $\mathbf\Theta$      | –    | –    | $xz$-equator | 60$^\circ$ | 6 |

  : Interpolation grid for the TAT map. []{data-label="tab:TATgrid"}
The sparsity of the grid is informed by calculations of CT couplings for TAT and similar $\pi$-conjugated systems,[@kazmaier_theoretical_1994; @hestand_interference_2015; @hestand_exciton_2015; @bredas_charge_2004; @coropceanu_charge_2007] which show that the couplings vary approximately linearly on the scale of the grid spacings we use. This grid results in a total of 2646 grid points, or 2646 electronic structure calculations of dimers that must be completed to compute the CT couplings at each grid point. This number can be reduced to 1911 points by recognizing that for grid points with $\delta\theta=0$ the rotational axis is irrelevant and that dimension of the grid can be ignored. The size of the grid could be further reduced by accounting for the symmetry of the molecule and the cross-correlation between the distributions of intermolecular geometries. That is, all the slips do not take their maximum values at the same time. For the level of electronic structure calculations we use, and for the number of atoms in TAT, these optimizations are not necessary so we do not make them. They may become necessary for larger molecules or more expensive electronic structure basis sets.
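The grid-point counts quoted above can be checked with a few lines of bookkeeping; this sketch (our own, not the authors' code) enumerates the axes of Table \[tab:TATgrid\].

```python
import numpy as np

# grid of Table [tab:TATgrid]: slips are fluctuations about the experimental
# geometry; the angle is non-negative because the axis supplies the sense
s_x = np.arange(-0.6, 0.61, 0.2)        # 7 points
s_y = np.arange(-0.6, 0.61, 0.2)        # 7 points
s_z = np.arange(-0.2, 0.21, 0.2)        # 3 points
theta = np.array([0.0, 5.0, 10.0])      # 3 points
axes = np.arange(0.0, 360.0, 60.0)      # 6 axes along the xz-equator

full = len(s_x) * len(s_y) * len(s_z) * len(theta) * len(axes)
# when theta = 0 the rotation axis is irrelevant: one point replaces six
reduced = len(s_x) * len(s_y) * len(s_z) * ((len(theta) - 1) * len(axes) + 1)
print(full, reduced)                    # 2646 1911
```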
The CT couplings are calculated at each grid point of the map following the methods discussed in Refs. using the Gaussian software package[@frisch_gaussian_2010] and the B3LYP/3-21G level of theory. The map is provided as a supplementary data file (see SM). Several other basis sets and functionals were also considered, but we found that all methods gave qualitatively similar results (see SM). The couplings for non-nearest-neighbors are assumed to be zero, which is generally a good approximation given the short-range, exponentially decaying nature of these interactions.[@coropceanu_charge_2007]
The average nearest-neighbor pair geometry we observe in simulation is quite close to the experimental crystal structure; the largest difference in slip is 0.16 Å (Table \[tab:TATgrid\]).[@fan_synthesis_2012] Even over such small displacements, however, the computed CT couplings can change sign.[@hestand_exciton_2015] To account for this sensitivity, we treat the fluctuations about the equilibrium geometry in the MD simulation as fluctuations about the experimental crystal structure. That is, in the MD simulation we measure the slips $\mathbf{s}=\langle \mathbf{s}\rangle + \delta \mathbf{s}$, but we interpolate the CT couplings using $\mathbf{s}=\mathbf{s}_\mathrm{expt} + \delta \mathbf{s}$. This provides the important benefit that the interpolation map only needs to be computed once, and can then be applied to any MD simulation, regardless of the equilibrium intermolecular geometries that are realized by a particular force field. At most, one might have to extend the map to accommodate larger fluctuations in one MD simulation relative to another.
With the completed grid of CT couplings, we perform a linear interpolation to map a dimer configuration from the MD simulation to a pair of CT couplings $t_e$ and $t_h$. We interpolate the rotational axis by first projecting the axis to the $xz$-equator and then interpolating along the equator. For the 0.1% of dimer configurations that are outside the grid, we linearly extrapolate the couplings.
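A multilinear interpolation of this kind could be sketched with an off-the-shelf routine; the paper does not specify the implementation, so the snippet below is only an assumed stand-in. The coupling grid here is a smooth dummy function, not the actual B3LYP data, and two subtleties of the real map are omitted: projecting the axis onto the equator, and wrapping the interpolation periodically along the equatorial angle.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# grid axes as in Table [tab:TATgrid]; ph parameterizes the rotation axis
# along the xz-equator (degrees)
sx = np.linspace(-0.6, 0.6, 7)
sy = np.linspace(-0.6, 0.6, 7)
sz = np.linspace(-0.2, 0.2, 3)
th = np.linspace(0.0, 10.0, 3)
ph = np.linspace(0.0, 300.0, 6)

# stand-in for the precomputed couplings at the grid points (arbitrary units)
SX, SY, SZ, TH, PH = np.meshgrid(sx, sy, sz, th, ph, indexing='ij')
t_e_grid = 50.0 * np.cos(SX) + 20.0 * SY - 30.0 * SZ + 0.5 * TH + 0.1 * PH

# fill_value=None lets the interpolator extrapolate linearly outside the
# grid, as done for the ~0.1% of out-of-grid configurations
interp_te = RegularGridInterpolator((sx, sy, sz, th, ph), t_e_grid,
                                    bounds_error=False, fill_value=None)

t_e = interp_te([0.13, -0.05, 0.02, 3.7, 110.0])[0]   # one MD geometry
```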
We now turn to the calculation of the time-dependent CT energies ($E_{CT}$) and Coulomb couplings ($J$). To compute the CT energies, we treat each electron and hole as a point charge located at the center-of-mass of its chromophore. This is a good approximation for large electron-hole separations, and has been applied successfully to compute time-independent CT energies in previous work.[@merrifield_ionized_1961; @hestand_expanded_2018; @yamagata_hj_2014; @hestand_polarized_2015; @austin_enhanced_2017] Moreover, it allows us to replace the expression for the Coulomb potential $V\left(\mathbf{q}^{N}_i(t),\mathbf{q}^{N}_j(t)\right)$ in Eq. \[eq:ect\] with a simple expression $$\label{eq:ct_scaling}
V\left(r_{ij}^\mathrm{com}(t)\right)=\frac{-e^2}{4\pi \varepsilon_{0}\varepsilon_{s}}\frac{1}{r_{ij}^\mathrm{com}(t)}.$$ Here $r_{ij}^\mathrm{com}(t)=|\mathbf{q}_j^{\mathrm{com}}(t) - \mathbf{q}_i^{\mathrm{com}}(t)|$ is the distance between the center of mass coordinates of molecules $i$ and $j$, $\varepsilon_s$ is the static dielectric constant, and the remaining terms take their usual meanings. The CT energy therefore fluctuates with the center of mass fluctuations, $E_{CT}\left(r_{ij}^\mathrm{com}(t)\right)$. It is convenient to express $E_{CT}\left(r_{ij}^\mathrm{com}(t)\right)$ in terms of the static nearest-neighbor CT energy $E_{CT}(\langle r_{n.n.} \rangle)=I_{P}-E_{A}+P+V\left(\langle r_{n.n.} \rangle \right)$, so that $$\label{eq:ct_neighbor}
E_{CT}\left(r_{ij}^\mathrm{com}(t)\right)=E_{CT}(\langle r_{n.n.} \rangle)-V\left(\langle r_{n.n.}\rangle \right)+V\left(r_{ij}^\mathrm{com}(t)\right).$$ While $E_{CT}(\langle r_{n.n.}\rangle)$ could be computed from first principles, previous work has either treated it as a fitting parameter, whose value is extracted by fitting calculated spectra to experimental spectra,[@yamagata_hj_2014; @hestand_exciton_2015] or derived it from experiment.[@yamagata_nature_2011; @hestand_polarized_2015] We take the nearest-neighbor CT energy for TAT from Yamagata et al. (see SM).[@yamagata_hj_2014]
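Eqs. \[eq:ct\_scaling\] and \[eq:ct\_neighbor\] amount to a two-line calculation; the sketch below uses illustrative parameter values (not TAT's actual parameters) in eV and Å.

```python
# point-charge estimate of the fluctuating CT energy; units: eV and Angstrom
KE2 = 14.3996   # e^2 / (4 pi eps0) in eV*Angstrom

def V(r, eps_s):
    """Screened electron-hole attraction of Eq. [eq:ct_scaling] (eV)."""
    return -KE2 / (eps_s * r)

def E_CT(r, E_CT_nn, r_nn, eps_s):
    """Distance-dependent CT energy of Eq. [eq:ct_neighbor], referenced to
    the nearest-neighbor value E_CT_nn at separation r_nn."""
    return E_CT_nn - V(r_nn, eps_s) + V(r, eps_s)

# illustrative numbers: E_CT_nn = 2.5 eV at r_nn = 3.4 A with eps_s = 3;
# increasing the electron-hole separation raises the CT energy
print(E_CT(6.8, 2.5, 3.4, 3.0))
```

At $r=r_{n.n.}$ the potential terms cancel and the function returns $E_{CT}(\langle r_{n.n.}\rangle)$ exactly, as required by the referencing in Eq. \[eq:ct\_neighbor\].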
The Coulomb coupling may be calculated using a variety of approaches, including the point dipole approximation, transition charges,[@chang_monopole_1977] and the density-cube method.[@Krueger1998] Here we use the transition charge method as it provides a good compromise between accuracy and speed.[@kistler_benchmark_2013] Because the transition charges depend on the intramolecular geometry and the intramolecular geometry of each molecule fluctuates during the MD simulations, computing the Coulombic coupling for these structures would necessitate an expensive calculation of the transition charges of each molecule in the crystal at each time step. As discussed in the context of the CT couplings above, we make the rigid-body approximation[@day_atomistic_2003; @coropceanu_charge_2007; @landi_rapid_2018] and replace each molecule in the MD simulation with a geometry-optimized molecule. This means that we need only compute the transition charges once, for the geometry-optimized monomer (see SM). Within this framework, it is straightforward to apply the transition charges of the geometry-optimized molecule at each time step and compute the Coulomb coupling between each pair. The raw Coulombic couplings are then scaled by an optical dielectric constant $\varepsilon$.
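The transition charge method evaluates the coupling as a double sum of screened Coulomb interactions between the atomic transition charges of the two molecules. A self-contained sketch (hypothetical inputs; the actual charges come from the geometry-optimized monomer, see SM) is:

```python
import numpy as np

KE2 = 14.3996   # e^2 / (4 pi eps0) in eV*Angstrom

def coulomb_coupling(q, coords_i, coords_j, eps=1.0):
    """Coulomb coupling between two molecules from atomic transition
    charges q (in units of e), screened by the optical dielectric constant
    eps. Under the rigid-body approximation both molecules carry the same
    monomer charges, so q is computed once and reused at every time step."""
    d = np.linalg.norm(coords_i[:, None, :] - coords_j[None, :, :], axis=-1)
    return KE2 * np.sum(np.outer(q, q) / d) / eps
```

Only the atomic coordinates change along the trajectory, which is why this evaluation is cheap enough to repeat every 2 fs.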
The values of all time-independent parameters are given in the SM.
### Spectroscopy
We first compute the distributions of the couplings in TAT. We find that the fluctuations in the couplings are on the same order of magnitude as their means, in line with observations of Troisi and coworkers (Fig. \[fig:TATspec\]a and SM).[@troisi_electronic_2005; @troisi_dynamics_2006; @troisi_prediction_2007; @arago_dynamics_2015; @arago_regimes_2016; @fornari_exciton_2016; @landi_rapid_2018] To make contact with their work on fluctuations in the exciton coupling,[@arago_dynamics_2015; @arago_regimes_2016; @fornari_exciton_2016] we also compute the total exciton coupling $J_\mathrm{tot} = J_\mathrm{SR}+J$, where $J$ is the Coulomb coupling in Eq. \[eq:ham\_fe\] and $J_\mathrm{SR}$ is the short-ranged, charge-transfer mediated coupling[@harcourt_rate_1994; @scholes_rate_1995; @hestand_expanded_2018] $$J_\mathrm{SR}(\mathbf{q}^{N}_i,\mathbf{q}^{N}_j) = \frac{-2t_e(\mathbf{q}^{N}_i,\mathbf{q}^{N}_j) t_h(\mathbf{q}^{N}_i,\mathbf{q}^{N}_j)}{E_{CT}(\mathbf{q}^{N}_i,\mathbf{q}^{N}_j) - E_{S_1} - \Delta_{0-0}}.$$ Fig. \[fig:TATspec\]a shows the distribution of $J_\mathrm{tot}$ for nearest-neighbors in TAT.
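The short-range coupling above is a one-line function of the fluctuating quantities; the sketch below applies it to hypothetical coupling trajectories (the numbers are arbitrary, chosen only to illustrate the bookkeeping, not TAT's fitted parameters).

```python
import numpy as np

def j_short_range(t_e, t_h, E_CT, E_S1, delta_00):
    """Perturbative CT-mediated coupling J_SR (vectorizes over snapshots)."""
    return -2.0 * t_e * t_h / (E_CT - E_S1 - delta_00)

# hypothetical coupling trajectories (eV) standing in for the mapped values
rng = np.random.default_rng(0)
t_e = rng.normal(-0.05, 0.04, 10_000)
t_h = rng.normal(0.06, 0.04, 10_000)
J = rng.normal(0.01, 0.005, 10_000)                  # Coulomb part
J_tot = J + j_short_range(t_e, t_h, E_CT=2.6, E_S1=2.2, delta_00=0.1)
print(J_tot.mean(), J_tot.std())
```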
![(a) The distributions of the nearest-neighbor couplings in TAT. (b) The experimental absorption spectrum of TAT (blue),[@yamagata_hj_2014] compared to the theoretical spectra with (purple) and without (orange) nonlocal electron-phonon coupling. The spectra are normalized by the maximum peak height to allow comparison between the experiment and the theory. (c) The effects of the nonlocal electron-phonon coupling cannot be fully captured by simply broadening the $\langle\hat{H}\rangle$ spectrum, even though the change in the 0-0/0-1 peak area ratio can (see SM). The purple curve here is the same as the purple curve in panel (b).[]{data-label="fig:TATspec"}](tatSpec_3panel.pdf)
Note that $J_\mathrm{SR}$ and $J_\mathrm{tot}$ do not enter into the calculation of the spectrum, but are simply used to make a comparison with previous results. As discussed in Sec. \[sec:intro\], one might expect these large fluctuations to have important effects on the absorption spectroscopy of TAT, but this expectation is at odds with the successful modeling of the absorption of TAT without nonlocal electron-phonon coupling.[@yamagata_hj_2014] Further, the coupling distributions are strongly non-Gaussian, indicating that nonlinear models of nonlocal electron-phonon coupling, like the one presented here, may be needed to accurately capture photophysical and charge transport properties. A direct comparison between linear and nonlinear models of nonlocal electron-phonon coupling is beyond the scope of this work, but merits future investigation.
The effects of these fluctuating couplings on the theoretical absorption spectrum of TAT are shown in Fig. \[fig:TATspec\]b. To isolate the effects of nonlocal electron-phonon coupling, we compare the spectrum computed with the time-averaged Hamiltonian $\langle \hat H \rangle$ to the time-dependent Hamiltonian $\hat H(t)$ (Eq. \[eq:ham\]). The spectrum is calculated for only one of the eight stacks of molecules in the MD simulation (Fig. \[fig:schematic\]c), because the couplings between adjacent stacks are negligible, so the stacks can be treated as independent, quasi-one-dimensional systems.[@yamagata_hj_2014]
The theoretical spectra (Fig. \[fig:TATspec\]b), both with and without nonlocal electron-phonon coupling, match the experimental spectrum quite well, as has been previously demonstrated for the case without nonlocal electron-phonon coupling.[@yamagata_hj_2014] Note that the $\langle \hat{H}(t)\rangle$ spectrum is calculated using the same parameter set and model as in reference , with only minor changes in the couplings. The nonlocal electron-phonon coupling has two noticeable effects on the spectrum: the spectrum broadens and the relative peak heights change. To quantify these changes, we fit the spectra with a set of four Lorentzian functions, one for each of the four observable vibronic peaks (see SM). We find that the full widths at half-maximum of the vibronic peaks increase by an average of $\sim200$ [$\mathrm{cm}^{-1}$]{} due to the nonlocal electron-phonon coupling, and that the change in the relative peak heights is mainly a byproduct of the broadening, not a separate effect (see Fig. \[fig:TATspec\]c and SM). Note, however, that the entire spectral line shape of the $\hat{H}(t)$ spectrum cannot be reproduced by uniformly broadening the $\langle \hat{H} \rangle$ spectrum (Fig. \[fig:TATspec\]c), indicating that nonlocal electron-phonon coupling has a more nuanced effect on the spectrum. This is discussed in detail in Sec. \[sec:penSpec\] when considering the pentacene spectrum.
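A peak-width analysis of this kind can be reproduced with a standard least-squares fit; the following is a generic sketch (our illustration, not the SM procedure) using a variadic sum of Lorentzians, each parameterized by amplitude, center, and full width at half-maximum.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzians(w, *p):
    """Sum of Lorentzians; p holds (amplitude, center, fwhm) triples."""
    out = np.zeros_like(w, dtype=float)
    for A, w0, fw in zip(p[0::3], p[1::3], p[2::3]):
        out += A * (fw / 2.0) ** 2 / ((w - w0) ** 2 + (fw / 2.0) ** 2)
    return out

# fit a synthetic single peak as a sanity check; for the spectra in the
# text one would pass four (A, w0, fwhm) triples in p0
w = np.linspace(0.0, 10.0, 500)
y = lorentzians(w, 1.0, 4.0, 0.8)
popt, _ = curve_fit(lorentzians, w, y, p0=[0.9, 3.8, 1.0])
```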
These results provide proof-of-principle for our approach to incorporating nonlocal electron-phonon coupling into spectroscopic calculations. Despite the large fluctuations in the CT couplings (Fig. \[fig:TATspec\]a), the effects on the absorption spectrum are, somewhat surprisingly, relatively minimal. In order to make contact with previous work studying the fluctuations of couplings in acene systems,[@troisi_dynamics_2006; @arago_dynamics_2015; @arago_regimes_2016; @fornari_exciton_2016] and to demonstrate our mapping approach in a more complex crystal structure, we now turn to the case of pentacene.
Pentacene\[sec:pen\]
--------------------
### Developing the Map
We begin, as in the case of TAT, by quantifying the fluctuations in the nearest-neighbor intermolecular geometries observed in pentacene. In TAT, only the nearest-neighbors, which form a one-dimensional stack of molecules along the crystalline $\mathbf{a}$-axis, have non-negligible couplings. In pentacene, there are non-negligible couplings between neighbors in both the $\mathbf{a}$ and $\mathbf{b}$ directions, resulting in an effective two-dimensional system that must be considered to accurately model spectroscopy or charge transport. This means that we can no longer consider only nearest-neighbors, but must consider all neighbors within the crystalline $\mathbf{ab}$-plane (Fig. \[fig:schematic\]). In pentacene, these neighbors can be classified by their spatial relationship in the crystal structure. We label them using their fractional coordinates in the unit cell relative to a reference molecule at $(0,0,0)$. In this notation, the two nearest-neighbors are those at $\pm({\ensuremath{\frac{1}{2}}},-{\ensuremath{\frac{1}{2}}},0)$ and $\pm({\ensuremath{\frac{1}{2}}},{\ensuremath{\frac{1}{2}}},0)$, with center-of-mass distances of 4.7 and 5.2 Å, respectively.[@holmes_nature_1999] These pairs exhibit the herringbone stacking motif that characterizes crystalline acenes. The next nearest-neighbors are the $\pm(1,0,0)$ and $\pm(0,1,0)$ dimers at 6.3 and 7.7 Å.[@holmes_nature_1999] These pairs are co-planar, not herringbone stacked. The CT couplings ($t_e$ and $t_h$) between the $\pm(0,1,0)$ dimers are negligible,[@yamagata_nature_2011; @hestand_polarized_2015] so we ignore them here, but this still leaves three distinct types of dimers, compared to a single type in TAT. This means that we must parameterize the interpolation map in three disjoint regions of the 6-dimensional intermolecular geometry space. 
Note that the labelling scheme described above is the same as that of Hestand et al.[@hestand_polarized_2015], except that we define the lattice vectors in the same way as Holmes et al.,[@holmes_nature_1999] while Hestand et al. swap the $\mathbf{a}$ and $\mathbf{b}$ lattice vectors.
The distributions of intermolecular geometries for the $\pm({\ensuremath{\frac{1}{2}}},-{\ensuremath{\frac{1}{2}}},0)$ dimer are shown in Fig. \[fig:PENTdimers\].
![Distributions of the geometries of the $\pm({\ensuremath{\frac{1}{2}}},-{\ensuremath{\frac{1}{2}}},0)$ dimer in pentacene. The distributions for the $\pm({\ensuremath{\frac{1}{2}}},{\ensuremath{\frac{1}{2}}},0)$ and $\pm(1,0,0)$ dimers are shown in the SM. The molecular coordinate system is defined in Fig. \[fig:schematic\]d. (a) Distributions of the fluctuations of the translational slips. (b) Distribution of the fluctuations of the rotational axes on a unit sphere, depicted as a Lambert azimuthal equal-area projection. (c) Distribution of the cosine of the fluctuations of the rotational angles. See the caption of Fig. \[fig:TATdimers\] for details.[]{data-label="fig:PENTdimers"}](slipAndAxisDists_PENT_1col.pdf)
The distributions look qualitatively similar to those in TAT with one exception: the fluctuations in the rotational axis are localized around $(\pm 1,0,0)$. This means that the majority of rotations are about the molecular long axis of pentacene. The distributions for the $\pm({\ensuremath{\frac{1}{2}}},{\ensuremath{\frac{1}{2}}},0)$ and $\pm(1,0,0)$ dimers are qualitatively similar (see SM). Based on these distributions, we construct a grid for each type of dimer, similar to the one for TAT shown in Table \[tab:TATgrid\] (see SM). Instead of interpolating the rotational axes along the $xz$-equator, we assign all rotational axes with $\delta\Theta_x\leq0$ to $(-1,0,0)$ and all rotational axes with $\delta\Theta_x>0$ to $(1,0,0)$. The total number of grid points is 11,277, much larger than for TAT due to the three disjoint regions of the map that must be considered. As before, we shift the origin of the grid to the experimental dimer configurations, making the grid generalizable to any MD force field. Previous work on pentacene has scaled the CT couplings obtained from DFT calculations by a factor of 1.1 to obtain better agreement with experiment.[@hestand_polarized_2015] Here, for consistency with the TAT calculations, we do not scale the CT couplings.
As before, the fluctuating Coulomb couplings are computed based on the transition charge method, with transition charges calculated for the geometry-optimized monomer (see SM). The CT energies are also computed as before (Eq. \[eq:ct\_neighbor\]), except that the nearest-neighbor CT energy is parameterized separately for each dimer type (see SM), in accordance with previous theoretical[@yamagata_nature_2011; @beljonne_charge_2013; @hestand_polarized_2015] and experimental works.[@sebastian_charge_1981] For all electron-hole pairs further than the $\pm(0,1,0)$ dimer, the CT energy is calculated using the $\pm({\ensuremath{\frac{1}{2}}},-{\ensuremath{\frac{1}{2}}},0)$ energy and Eq. \[eq:ct\_neighbor\].[@yamagata_nature_2011; @beljonne_charge_2013; @hestand_polarized_2015; @hestand_polarized_note_axes]
### Spectroscopy\[sec:penSpec\]
The calculated $||\mathbf{a}$ and ${\perp}\mathbf{a}$ polarized spectra with and without nonlocal electron-phonon couplings are shown in Fig. \[fig:PENTspec\].
![The experimental absorption spectrum of pentacene, polarized along the crystalline $\mathbf{a}$-axis (top) and perpendicular to the $\mathbf{a}$-axis (bottom). We compare the experimental spectra (blue), to the theoretical spectra with (purple) and without (orange) nonlocal electron-phonon coupling. The spectra are normalized to the maximum peak height to allow comparison between the experiment and the theory.[]{data-label="fig:PENTspec"}](compareFlucts_2panel.pdf)
Both theoretical spectra agree qualitatively with the experiment.[@hestand_polarized_2015; @hestand_polarized_note_axes] As in the case of TAT, the main effect of nonlocal electron-phonon coupling is to broaden the spectra. Interestingly, however, the $||\mathbf{a}$ spectrum (lower Davydov component) broadens considerably more than the ${\perp} \mathbf{a}$ spectrum (upper Davydov component). Fitting the spectra with Lorentzian functions shows that the full-width half-maximum of the lowest energy vibronic peak increases by about 300 [$\mathrm{cm}^{-1}$]{} in the $||\mathbf{a}$ spectrum when nonlocal electron-phonon coupling is included, but only by about 50 [$\mathrm{cm}^{-1}$]{} in the ${\perp}\mathbf{a}$ spectrum (see SM). Previous theoretical modeling of the absorption spectrum of pentacene has used a larger line width parameter for the $||\mathbf{a}$ spectrum than the ${\perp}\mathbf{a}$ spectrum in order to capture the observed experimental line widths.[@hestand_polarized_2015; @a_hestand_polarized_note] Here, we use the same line width parameter for both components (see SM) and find that the nonlocal electron-phonon coupling selectively increases the line width of the $||\mathbf{a}$ spectrum.
The disparate line width broadening of the two polarization components can be understood by considering a simple dimer model that incorporates nonlocal electron-phonon coupling through fluctuations in $t_{e}(t)$ and $t_{h}(t)$. To focus on the effect of nonlocal electron-phonon coupling, we neglect local vibronic coupling and Coulomb coupling. The Hamiltonian for this simplified system can be expressed in the basis $\left|h_{1}, e_{1} \right\rangle$, $\left|h_{2}, e_{2} \right\rangle$, $\left|h_{1}, e_{2} \right\rangle$, $\left|h_{2}, e_{1} \right\rangle$ as $$\hat{H}_\mathrm{dimer}(t) = \begin{bmatrix}
E_\mathrm{S_{1}} & 0 & t_{e}(t) & t_{h}(t) \\
0 & E_\mathrm{S_{1}} & t_{h}(t) & t_{e}(t) \\
t_{e}(t) & t_{h}(t) & E_\mathrm{CT} & 0 \\
t_{h}(t) & t_{e}(t) & 0 & E_\mathrm{CT} \\
\end{bmatrix}.$$ Here $\left|h_{n},e_{m} \right\rangle$ represents a state with a hole on molecule $n$ and an electron on molecule $m$. When $n=m$, the state is a Frenkel exciton and when $n\ne m$ the state is a CT exciton.
It is convenient to transform the Hamiltonian to a symmetrized basis $\left|\psi_\mathrm{FE+}\right\rangle$, $\left|\psi_\mathrm{CT+}\right\rangle$, $\left|\psi_\mathrm{FE-}\right\rangle$, $\left|\psi_\mathrm{CT-}\right\rangle$ (see SM). In this basis, only states with the same symmetry ($+$ or $-$) are coupled, and the Hamiltonian is $$\hat{H}_\mathrm{dimer}(t) = \begin{bmatrix}
E_\mathrm{S_{1}} & t_{+}(t) & 0 & 0 \\
t_{+}(t) & E_\mathrm{CT} & 0 & 0 \\
0 & 0 & E_\mathrm{S_{1}} & t_{-}(t) \\
0 & 0 & t_{-}(t) & E_\mathrm{CT}\\
\end{bmatrix},$$ where $t_\pm(t) = t_{e}(t) \pm t_{h}(t)$. In this representation, the coupling between $\left|\psi_\mathrm{FE+}\right\rangle$ and $\left|\psi_\mathrm{CT+}\right\rangle$ depends on the sum of the CT couplings $t_+(t)=t_{e}(t)+t_{h}(t)$ while the coupling between $\left|\psi_\mathrm{FE-}\right\rangle$ and $\left|\psi_\mathrm{CT-}\right\rangle$ depends on the difference $t_-(t)=t_{e}(t)-t_{h}(t)$. While this distinction is rather subtle, it has profound effects on how nonlocal electron-phonon coupling can influence the eigenstates of the system and the accompanying absorption spectrum.
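This block structure is easy to verify numerically. The sketch below builds the site-basis Hamiltonian, applies the symmetrizing transformation, and recovers couplings $t_\pm = t_e \pm t_h$ between states of like symmetry; the energies and couplings are placeholder values, not pentacene's parameters.

```python
import numpy as np

def dimer_h(E_s1, E_ct, t_e, t_h):
    """Dimer Hamiltonian in the |h1,e1>, |h2,e2>, |h1,e2>, |h2,e1> basis."""
    return np.array([[E_s1, 0.0,  t_e,  t_h],
                     [0.0,  E_s1, t_h,  t_e],
                     [t_e,  t_h,  E_ct, 0.0],
                     [t_h,  t_e,  0.0,  E_ct]])

s = 1.0 / np.sqrt(2.0)
# rows (bras): |FE+>, |CT+>, |FE->, |CT->
U = np.array([[s,    s,   0.0, 0.0],
              [0.0,  0.0, s,   s  ],
              [s,   -s,   0.0, 0.0],
              [0.0,  0.0, s,  -s  ]])

H = dimer_h(E_s1=2.2, E_ct=2.6, t_e=-0.05, t_h=0.06)
Hs = U @ H @ U.T   # block diagonal: (+) block couples via t_e + t_h,
                   # (-) block via t_e - t_h
```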
Previous studies based on Frenkel exciton models have shown that off-diagonal disorder broadens the line width according to the variance of the disorder distribution; broader disorder distributions give rise to broader line widths.[@klugkist_scaling_2008; @fidder_optical_1991] In our model, then, the line width of transitions to the symmetric states is broadened by the breadth of the distribution of $t_{+}(t)$ while the line width of transitions to the antisymmetric states is broadened by the breadth of the distribution of $t_{-}(t)$. To understand how disparate broadening between the two states might arise, we consider the case where the fluctuations in $t_e(t)$ and $t_h(t)$ are Gaussian with means $\langle t_e\rangle$ and $\langle t_h\rangle$ and variances $\sigma_e^2$ and $\sigma_h^2$.[@akimov_stochastic_2017] We need not specify anything about the temporal correlations of $t_e(t)$ and $t_h(t)$, except that they are shorter-lived than the timescale of the spectroscopic measurement, so that the Gaussian statistics are sufficiently sampled. If the fluctuations in $t_{e}(t)$ and $t_{h}(t)$ are uncorrelated, then the variances of the distributions of $t_{+}(t)$ and $t_{-}(t)$ are both $\sigma_\pm^2 = \sigma_e^2+\sigma_h^2$. In this case, the absorption peaks arising from the symmetric and antisymmetric states broaden to the same extent. In real systems, however, $t_{e}(t)$ and $t_{h}(t)$ *are* correlated, so $\sigma_\pm^2 = \sigma_e^2+\sigma_h^2\pm 2\rho\sigma_e^{}\sigma_h^{}$, where $\rho$ is the correlation coefficient between the distributions of $t_{e}(t)$ and $t_{h}(t)$. Thus, when $\rho$ is positive, $\sigma_+^2>\sigma_-^2$, and vice versa for $\rho<0$. This means that the absorption peaks arising from the symmetric states are broadened more ($\rho>0$) or less ($\rho<0$) by the nonlocal electron-phonon coupling than those arising from the antisymmetric states.
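The variance algebra above is quickly checked by sampling; the snippet below draws correlated Gaussian fluctuations (the parameter values are arbitrary) and compares the sampled variances of $t_\pm$ with $\sigma_e^2+\sigma_h^2\pm2\rho\sigma_e\sigma_h$.

```python
import numpy as np

def pm_variances(sigma_e, sigma_h, rho, n=200_000, seed=1):
    """Sampled variances of t_+ = t_e + t_h and t_- = t_e - t_h for
    correlated Gaussian fluctuations of t_e and t_h."""
    rng = np.random.default_rng(seed)
    cov = [[sigma_e**2, rho * sigma_e * sigma_h],
           [rho * sigma_e * sigma_h, sigma_h**2]]
    te, th = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return np.var(te + th), np.var(te - th)

v_plus, v_minus = pm_variances(0.04, 0.04, rho=0.8)
# analytic values: sigma_+^2 = 0.00576, sigma_-^2 = 0.00064; with rho > 0
# the symmetric combination fluctuates far more, hence selective broadening
```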
In the limiting case where $t_e(t)$ and $t_h(t)$ are perfectly correlated ($\rho=1$) and identically distributed ($\sigma_e=\sigma_h$), $\sigma_{+}^2=4\sigma_e^2$ while $\sigma_-^2=0$. In this extreme, the nonlocal electron-phonon coupling *only* broadens the symmetric absorption peak. Likewise when $t_e(t)$ and $t_h(t)$ are perfectly anticorrelated ($\rho=-1$) and identically distributed, only the antisymmetric absorption peak is broadened.
Thus, correlations between the fluctuations of $t_e(t)$ and $t_h(t)$ can lead to selective broadening like that seen in pentacene.[@hestand_polarized_2015; @hestand_polarized_note_axes] In pentacene, the $||\mathbf{a}$ polarized lower Davydov component arises due to absorption to the symmetric state while the ${\perp}\mathbf{a}$ polarized upper Davydov component spectrum arises mainly due to absorption to the antisymmetric state.[@yamagata_nature_2011; @hestand_polarized_2015; @hestand_polarized_note_axes] Our simple model therefore demonstrates that the selective broadening of the lowest energy peak in the $||\mathbf{a}$ polarized spectrum can be attributed to positive correlations between fluctuations in $t_{e}$ and $t_{h}$. Indeed, our mixed quantum-classical mapping approach predicts that the fluctuations in $t_{e}$ and $t_{h}$ are in fact positively correlated (see SM).
We note that the simple dimer model described above cannot explain the nonuniform broadening observed in the TAT spectrum, or the differences in broadening of different peaks within the $||\mathbf{a}$-polarized or ${\perp} \mathbf{a}$-polarized pentacene spectrum. Additional nuances arise when local vibronic coupling is considered, as the phonons allow electronic states of different symmetry to mix[@spano_vibronic_2011; @spano_reclassifying_2007]. Moreover, disorder in the system breaks the periodicity of the lattice, which also allows electronic states of different symmetry to mix. A complete description of nonuniform broadening due to nonlocal electron-phonon coupling requires an in-depth analysis of the Hamiltonian in Eq. \[eq:ham\]. This should be the subject of future investigation, but is beyond the scope of the current work.
While selective line broadening represents an interesting effect of nonlocal electron-phonon coupling, the overall effect on the absorption spectrum is relatively minor (Fig. \[fig:PENTspec\]). As was the case for TAT (Fig. \[fig:TATspec\]a), we find that for pentacene the widths of the coupling distributions are indeed quite large, on the same order as their means (see SM), in agreement with the results of Troisi and coworkers.[@troisi_dynamics_2006; @arago_dynamics_2015] Nevertheless, the spectral line shape is relatively unaffected by these wide distributions.
Conclusion
==========
We describe an approach that, for the first time, incorporates nonlocal electron-phonon coupling of arbitrary form into model Hamiltonians for studying organic crystals. Our approach is inspired by mixed quantum-classical approaches that are common in theoretical vibrational spectroscopy, and also relies on an interpolation map for fast look-up of precomputed electronic couplings. We apply this approach to study the absorption spectroscopy of two organic crystals with important applications in semiconductor devices. We find that, even though the electronic couplings fluctuate over several hundred [$\mathrm{cm}^{-1}$]{}, the effect on the absorption spectrum is minimal and largely manifests as an increase in the absorption line width. This explains how previous work, which often uses phenomenological line broadening parameters fit to experiment, has been able to accurately model the absorption spectra of organic crystals without accounting for nonlocal electron-phonon coupling. The effects of nonlocal electron-phonon coupling cannot be entirely captured through an increased line broadening parameter, however, as we find that the line broadening is not uniform across the spectrum and that different peaks broaden to different extents. Importantly, this explains, for the first time, the different line widths observed in the upper and lower Davydov components of the pentacene spectrum. Using a model dimer Hamiltonian, we attribute the different line widths in pentacene to correlations between the fluctuations in CT couplings induced by nonlocal electron-phonon coupling. There are several possibilities for extending the approach presented here. For example, maps of the time-independent quantities of Eq. \[eq:ham\] could be generated to include fluctuations in those quantities in the model. Other possibilities include generating the coupling maps on-the-fly over the course of the MD simulation until sufficient coverage of the sampled space has been obtained.
Machine learning approaches may also provide an avenue towards more accurate maps and towards relaxing the rigid-body approximation.[@kananenka_machine_2019; @jackson_efficient_2019; @jackson_electronic_2019; @kraemer_charge_2020]
While spectroscopy is an important experimental probe of organic crystals, and can elucidate the nature of the electronic and vibronic interactions in a system, these systems hold the most promise in the semiconductor industry, where charge transport properties like carrier mobility are of utmost importance. The time-dependent Hamiltonian that our method computes can be used with any of the standard approaches that currently exist to evaluate the charge transport properties of a material.[@bondarenko_comparison_2020] We expect that this will permit a comprehensive understanding of the effects of nonlinearities in the nonlocal electron-phonon coupling on the electronic properties of organic crystals.
Supplementary Material {#supplementary-material .unnumbered}
======================
The supplementary material is available upon request. It contains details about the MD simulations, the mapping procedure, the charge-transfer integral maps, probability distributions for the pentacene fluctuations, average nearest-neighbor CT energies, statistics for the coupling fluctuations, the complete parameter set for the spectroscopic calculations, a comparison of different ab initio methods for calculating the CT couplings, information about the phenomenological line broadening parameter, details about the spectral analysis, and details of the dimer model.
This work was completed with resources provided by the Pritzker School of Molecular Engineering and Research Computing Center at the University of Chicago.
Author Information {#author-information .unnumbered}
==================
Corresponding Authors {#corresponding-authors .unnumbered}
---------------------
Steven E. Strong: [email protected]\
Nicholas J. Hestand: [email protected]\
[^1]: S.E.S. and N.J.H. contributed equally to this work
[^2]: S.E.S. and N.J.H. contributed equally to this work
---
abstract: 'Operator entanglement of two-qubit joint unitary operations is revisited. The Schmidt number is an important attribute of a two-qubit unitary operation and may be connected with the entanglement measure of the unitary operator. We find that the entanglement measure of two-qubit unitary operators is classified by their Schmidt number. The exact relation between the operator entanglement and the parameters of the unitary operator is also clarified.'
author:
- 'Hui-Zhi Xia'
- Chao Li
- Qing Yang
- 'Ming Yang [^1]'
- 'Zhuo-Liang Cao'
title: 'Operator entanglement of two-qubit joint unitary operations revisited: Schmidt number approach'
---
Introduction
============
Unitary operations play a central role in quantum communication and entanglement manipulation, for example in quantum cryptography[@cryp], teleportation[@tele], entanglement swapping[@swapping], quantum state purification\[4\], entanglement production[@generation], and so on. In quantum teleportation, to transfer an unknown quantum state to a remote user, the sender must apply a joint unitary operator to the unknown-state particle and one of the entangled particles. In quantum entanglement swapping, a joint unitary transformation on two particles (taken from two different entangled pairs) entangles two remote particles without direct interaction. In the entanglement purification process, joint unitary operations and measurements can transfer the entanglement from many partially entangled pairs to a few nearly perfectly entangled pairs. In entanglement generation, joint unitary operations together with single-qubit operations can entangle particles that are initially in a product state. From the above applications we see that it is the nonlocal attribute of the bipartite joint unitary transformation that plays the most important role. The nonlocal attribute of a bipartite joint unitary operator has been studied from different aspects, such as entangling power[@Zanardi1], operator entanglement[@Zanardi2; @wangxg1], and entanglement-changing power[@Yemy]. The entangling power is the mean entanglement (linear entropy) produced by acting with $U$ on a given distribution of pure product states[@Zanardi1]. Because a quantum operator belongs to a Hilbert-Schmidt space, one can consider the entanglement of the operator itself, which is termed operator entanglement[@Zanardi2]. It is a natural extension of the entanglement measures of quantum states[@Nielsen1; @Bose; @Wootters; @Hill] to the level of general quantum evolutions.
Up to now, several methods have been proposed to quantify the entanglement of a bipartite unitary operator, such as linear entropy[@Zanardi2], von Neumann entropy[@wangxg1], concurrence[@wangxg2] and Schmidt strength[@Nielsen2] etc. The relations between the entangling power and these operator entanglement measures have also been discussed recently[@wangxg1; @Zanardi2; @Balakrishnan].
In general, the entangling power of a unitary operator is related to these operator entanglement measures in complicated or indirect ways, as are the different operator entanglement measures to one another. Clarifying the exact relation between the entangling power and the different operator entanglement measures, and the relations among the measures themselves, will be very helpful for understanding the nonlocal attributes and entanglement capacity of a joint unitary operator. Once the full set of nonlocal features of joint unitary operators is known, we can choose the optimal unitary operator to produce a desired entangled state, and quantum communication protocols (such as teleportation, entanglement swapping, etc.) can be realized in an optimal way by introducing the appropriate joint operations[@future]. For two-qubit unitary operators, the two operator entanglement measures, Schmidt strength and linear entropy, are known to have a one-to-one relation for the Schmidt number $2$ case, but no such relation exists for the Schmidt number $4$ case[@Balakrishnan]. This result also shows that the Schmidt number is a very important parameter of a unitary operator when its entangling power and operator entanglement are concerned. In this paper, we study the operator entanglement of joint two-qubit unitary operators with different Schmidt numbers. We use the linear entropy as the entanglement measure of a joint two-qubit unitary operator[@Zanardi2; @wangxg1], and study the Schmidt number and the entanglement measure of any unitary operator in the four-dimensional Hilbert-Schmidt space. The Schmidt number of a two-qubit unitary operator has three possible values: $1$, $2$ or $4$[@Nielsen2]. We will show that the entanglement measure of two-qubit unitary operators is classified by their Schmidt number.
Based on a numerical analysis, we obtain the extreme values of the operator entanglement for two-qubit unitary operators. Further, the relation between the operator entanglement and the parameters of the unitary operator is clarified.
Operator entanglement and Schmidt number of two-qubit joint unitary operations
==============================================================================
There exist local unitary operators $U_{A}$, $U_{B}$, $V_{A}$, $V_{B}$ and a two-qubit unitary operator $U_{d}$ such that an arbitrary two-qubit unitary operator $U_{AB}$ can be canonically decomposed as[@Kraus; @Zhang]: $$U_{AB}=(U_{A}\otimes U_{B})\cdot U_{d}\cdot(V_{A}\otimes V_{B}),
\label{decompose}$$ where $U_{d}=\exp[-i\vec{\sigma}_{A}^{T}d \vec{\sigma}_{B}]$, and $d$ is a diagonal matrix. In light of this theorem, any two-qubit unitary operator can be decomposed in the form above. Moreover, the entanglement measure of a unitary operator must be invariant under local unitary transformations[@wangxg2]. So the entanglement measure of any two-qubit unitary operator reduces to the entanglement measure of the operator $U_{d}$. In the standard computational basis, we have[@Rezakhani]: $$U_{d}=\left(
\begin{array}{cccc}
e^{-i{c}_{3}}c^{-} & 0 & 0 & {-i}e^{-i{c}_{3}}s^{-} \\
0 & e^{i{c}_{3}}c^{+} & {-i}e^{i{c}_{3}}s^{+} & 0 \\
0 & {-i}e^{i{c}_{3}}s^{+} & e^{i{c}_{3}}c^{+} & 0 \\
{-i}e^{{-i}{c}_{3}}s^{-} & 0 & 0 & e^{{-i}{c}_{3}}c^{-} \\
\end{array}
\right),
\label{decomatrix}$$ where $c^{\pm}=\cos({c}_{1}\pm{c}_{2})$, $s^{\pm}=\sin({c}_{1}\pm{c}_{2})$, and one can always restrict oneself to the region $\frac{\pi}{4}\geq{c}_{1}\geq{c}_{2}\geq|{c}_{3}|$, which is the so-called Weyl chamber[@Zhang].
Any operator $U$ acting on the systems $A$ and $B$ can be written in the operator-Schmidt decomposition[@Horodecki]: $$U=\sum_{l}{s}_{l}{A}_{l}\otimes {B}_{l},
\label{opeschmidt}$$ where ${s}_{l}$ are the positive Schmidt coefficients and ${A}_{l}$, ${B}_{l}$ are orthonormal operator bases for $A$ and $B$, respectively. To calculate the operator entanglement of the unitary operator $U_{AB}$, we only need the Schmidt decomposition of the operator ${U}_{d}$. Following Ref.[@wangxg1], the entanglement measure of a unitary operator can be expressed as: $$E(U)=1-\sum_{l}\frac{s_{l}^{4}}{d_{1}^{2}d_{2}^{2}},
\label{opentangle}$$ where ${d}_{1}$ and ${d}_{2}$ are the dimensions of $A$ and $B$, respectively. So we obtain the entanglement measure for the unitary operator ${U}_{d}$ as follows: $$\begin{aligned}
E({U}_{d})&=&1-\frac{1}{4}\{1-\sin^{2}({c}_{1}+{c}_{2})\cos^{2}({c}_{1}+{c}_{2}) -\sin^{2}({c}_{1}-{c}_{2})\cos^{2}({c}_{1}-{c}_{2})\\ \nonumber
&&+[1+2\cos^{2}(2{c}_{3})]\sin^{2}({c}_{1}+{c}_{2})\sin^{2}({c}_{1} -{c}_{2})\\ \nonumber &&+[1+2\cos^{2}(2{c}_{3})]\cos^{2}({c}_{1}+{c}_{2})\cos^{2}({c}_{1}-{c}_{2})\}.
\label{EUd}\end{aligned}$$
The Schmidt number[@Nielsen1; @Nielsen2] is the number of non-zero coefficients $s_{l}$. For the unitary operator $U_{d}$, the Schmidt coefficients $s_{l}$ are as follows:
$${s}_{1}=[\cos^{2}(c_{1}+c_{2})+\cos^{2}(c_{1}-c_{2})+2\cos(2c_{3})\cos(c_{1}+c_{2})\cos(c_{1}-c_{2})]^{1/2},$$
$${s}_{2}=[\sin^{2}(c_{1}+c_{2})+\sin^{2}(c_{1}-c_{2})+2\cos(2c_{3})\sin(c_{1}+c_{2})\sin(c_{1}-c_{2})]^{1/2},$$
$${s}_{3}=[\sin^{2}(c_{1}+c_{2})+\sin^{2}(c_{1}-c_{2})-2\cos(2c_{3})\sin(c_{1}+c_{2})\sin(c_{1}-c_{2})]^{1/2},$$
$${s}_{4}=[\cos^{2}(c_{1}+c_{2})+\cos^{2}(c_{1}-c_{2})-2\cos(2c_{3})\cos(c_{1}+c_{2})\cos(c_{1}-c_{2})]^{1/2}.$$
\[Schmidtcoes\]
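As a numerical cross-check that we add here for illustration (a sketch, not part of the original derivation), the Schmidt coefficients of $U_{d}$ can be computed by a singular value decomposition of the realigned matrix and compared against the closed-form expressions above:

```python
import numpy as np

def U_d(c1, c2, c3):
    """The canonical two-qubit operator of Eq. (\\ref{decomatrix})."""
    cm, cp = np.cos(c1 - c2), np.cos(c1 + c2)
    sm, sp = np.sin(c1 - c2), np.sin(c1 + c2)
    em, ep = np.exp(-1j * c3), np.exp(1j * c3)
    return np.array([[em*cm, 0, 0, -1j*em*sm],
                     [0, ep*cp, -1j*ep*sp, 0],
                     [0, -1j*ep*sp, ep*cp, 0],
                     [-1j*em*sm, 0, 0, em*cm]])

def schmidt_coefficients(U):
    # Realign U_{(ik),(jl)} -> M_{(ij),(kl)}; the singular values of M are
    # the operator-Schmidt coefficients s_l, normalized so sum s_l^2 = 4.
    M = U.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    return np.linalg.svd(M, compute_uv=False)

def E_op(U):
    # Linear-entropy operator entanglement with d1 = d2 = 2.
    s = schmidt_coefficients(U)
    return 1.0 - np.sum(s**4) / 16.0

print(E_op(U_d(0, 0, 0)))              # identity: E = 0, Schmidt number 1
print(E_op(U_d(np.pi/4, np.pi/4, 0)))  # SWAP gate: E = 3/4, Schmidt number 4
```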
We performed a numerical analysis of the Schmidt number and the entanglement measure of the unitary operator, and obtained the relation between them (shown in Table \[Table1\]).
Schmidt number of ${U}_{d}$ Operator entanglement of ${U}_{d}$
----------------------------- ------------------------------------
$Sch=1$ $E({U}_{d})=0$
$Sch=2$ $0<E({U}_{d})\leq\frac{1}{2}$
$Sch=4$ $0<E({U}_{d})\leq\frac{3}{4}$
: The Schmidt number versus the entanglement measure of the unitary operator ${U}_{d}$ \[Table1\].
For the Schmidt number $4$ case, the first plot in Fig.\[fig1\] shows how the entanglement measure of $U_{d}$ depends on the parameters $c_{1}$ and $c_{2}$ for $c_{3}=0$. As the parameters $c_{1}$ and $c_{2}$ approach $0$, so that $U_{d}$ approaches the identity matrix, the entanglement measure of the unitary operator approaches $0$. When the parameters $c_{1}$ and $c_{2}$ are both equal to $\frac{\pi}{4}$, which represents the SWAP gate, the entanglement measure of the unitary operator reaches the maximum value $\frac{3}{4}$. As the parameters ${c}_{1}$ and ${c}_{2}$ increase, the entanglement measure of the unitary operator $U_{d}$ increases too. If $c_{3}\neq 0$, the behavior of the operator entanglement is similar to that of the $c_{3}=0$ case. From the other three plots in Fig.\[fig1\], we can see that the minimum entanglement for the Schmidt-$4$ operator oscillates with $c_{3}$, with maximum value $0.5$ and period $\frac{\pi}{2}$. When $c_{3}=\frac{\pi}{4}$, the minimum entanglement reaches its maximum $0.5$. The maximum value of the operator entanglement remains $\frac{3}{4}$.
![\[fig1\]Entanglement measure of unitary operator $U_{d}$ versus the parameters $c_{1}$, $c_{2}$ for parameter $c_{3}=0,\pi/16, \pi/8,\pi/4,$ respectively, when the Schmidt number is $4$.](fig1.eps){width="\textwidth"}
For the Schmidt number $2$ case, if $c_{3}\neq0$, then $c_{1}$ and $c_{2}$ must be zero, so the operator entanglement takes the very simple form $E({U}_{d})=\frac{1}{2}\sin^{2}(2c_{3})$. Fig.\[fig2\] shows how the entanglement measure of $U_{d}$ depends on the parameters $c_{1}$ and $c_{2}$ for $c_{3}=0$ when the Schmidt number of $U_{d}$ is $2$. This curve is the boundary line of the first plot in Fig.\[fig1\]. As the parameters $c_{1}$ and $c_{2}$ approach $0$, the entanglement measure of the unitary operator $U_{d}$ approaches $0$. The entanglement measure of the unitary operator increases as the parameter $c_{1}$ or $c_{2}$ increases. The entanglement measure of a unitary operator with Schmidt number $2$ can reach the extreme value $\frac{1}{2}$.
![\[fig2\]Entanglement measure of unitary operator $U_{d}$ versus the parameters $c_{1}$, $c_{2}$ for parameter ${c}_{3}=0$ when the Schmidt number is $2$.](fig2.eps){width="\textwidth"}
From the two figures we can see that, if we want to design an operation with a specific operator entanglement (or entangling power), there are infinitely many design schemes (i.e., choices of $c_{1},c_{2},c_{3}$) for the Schmidt-number-$4$ type operations, but only two design schemes for the Schmidt-number-$2$ type operations. That is to say, the Schmidt-number-$4$ type operations admit a variety of design schemes rather than only two, so they are superior to the Schmidt-number-$2$ type operations. In addition, the maximum operator entanglement of the Schmidt-number-$4$ type operations can reach $\frac{3}{4}$, while that of the Schmidt-number-$2$ type operations only reaches $\frac{1}{2}$. So, in practice, we will prefer the Schmidt-number-$4$ type operations.
An example in Cavity QED system
===============================
To illustrate the abstract relation between the operator entanglement and the parameters of the unitary operator, we consider the following detailed example. Two two-level atoms ($1$, $2$) are trapped in a single-mode optical cavity, both coupled to the cavity mode with the same coupling constant $g$. The excited state $|e\rangle_{i}$ and the ground state $|g\rangle_{i}$ $(i=1,2)$ are the two levels used to encode quantum information. The two atoms have different transition frequencies, $\omega_{1}\neq \omega_{2}$, and the frequency of the cavity mode is denoted by $\omega_{0}$. Atom $1$ is resonantly driven by an external classical field with coupling constant $\Omega$. Suppose the cavity mode is initially prepared in the vacuum state; under the large-detuning conditions $\delta_{1}=\omega_{1}-\omega_{0}\gg g$, $\delta_{2}=\omega_{2}-\omega_{0}\gg g$ and in the strong-driving regime $\Omega\gg \frac{g^{2}}{\delta_{1}}$, the effective Hamiltonian of the total system can be expressed as[@Song2010]:
$$\label{Heff}
H_{eff}=\frac{\lambda}{2}\sigma_{1}^{x}\sigma_{2}^{x},$$
where $\lambda=\frac{g^{2}}{\delta_{1}}$ is the effective coupling constant between atoms $1$ and $2$, and $\sigma_{i}^{x}$ is the Pauli operator of the $i$th atom. The unitary transformation induced by this effective Hamiltonian can be expressed as: $$U_{eff}=\left(
\begin{array}{cccc}
\cos(\frac{\lambda t}{2}) & 0 & 0 & {-i}\sin(\frac{\lambda t}{2}) \\
0 & \cos(\frac{\lambda t}{2}) & {-i}\sin(\frac{\lambda t}{2}) & 0 \\
0 & {-i}\sin(\frac{\lambda t}{2}) & \cos(\frac{\lambda t}{2}) & 0 \\
{-i}\sin(\frac{\lambda t}{2}) & 0 & 0 & \cos(\frac{\lambda t}{2}) \\
\end{array}
\right).
\label{Ueff}$$ If we set $c_{1}=\frac{\lambda t}{2}$, ${c}_{2}=0$, ${c}_{3}=0$ in Eq.(\[decomatrix\]), we obtain exactly the joint unitary operator in Eq.(\[Ueff\]). That is to say, the physical process described above is a physical realization of the joint unitary operation (\[decomatrix\]). The Schmidt number of the operator (\[Ueff\]) is $2$, and its operator entanglement measure can be expressed as $E(U_{eff})=\frac{1}{2}\sin^{2}(\lambda t)$. The relationship between the operator entanglement measure and the effective interaction time $\lambda t$ between the two atoms is depicted in Fig.\[fig3\]. From this figure we can easily see that the maximum operator entanglement is $\frac{1}{2}$ with $Sch=2$.
![\[fig3\]Entanglement measure of unitary operator $U_{eff}$ versus the effective interaction time $\lambda t$ between the two atoms. Here the Schmidt number of $U_{eff}$ is $2$.](fig3.eps){width="\textwidth"}
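The closed-form result $E(U_{eff})=\frac{1}{2}\sin^{2}(\lambda t)$ can be verified with a self-contained numerical sketch (added here for illustration):

```python
import numpy as np

def U_eff(phase):
    # The cavity-QED gate of Eq. (\ref{Ueff}); phase = lambda * t.
    c, s = np.cos(phase / 2), np.sin(phase / 2)
    return np.array([[c, 0, 0, -1j*s],
                     [0, c, -1j*s, 0],
                     [0, -1j*s, c, 0],
                     [-1j*s, 0, 0, c]])

def E_op(U):
    # Operator entanglement via SVD of the realigned matrix (d1 = d2 = 2).
    M = U.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    s = np.linalg.svd(M, compute_uv=False)
    return 1.0 - np.sum(s**4) / 16.0

for phase in np.linspace(0.0, np.pi, 7):
    print(phase, E_op(U_eff(phase)), 0.5 * np.sin(phase)**2)  # last two agree
```

The maximum $E=\frac{1}{2}$ is reached at $\lambda t=\frac{\pi}{2}$, consistent with the Schmidt-number-$2$ bound.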
Conclusion
==========
In this paper, the linear entropy and the Schmidt number of an arbitrary two-qubit unitary operator are discussed. The results show that the Schmidt number is closely related to the entanglement measure of unitary operators. For the same operator entanglement within the range $(0,\frac{1}{2}]$, there exist infinitely many unitary operators with Schmidt number $4$ but only $2$ unitary operators with Schmidt number $2$. In this sense, we can say that unitary operators with Schmidt number $4$ can be realized more easily than unitary operators with Schmidt number $2$ if the same operator entanglement is required. In addition, for unitary operators with Schmidt number $4$, the range of the operator entanglement is $(0,\frac{3}{4}]$, but for unitary operators with Schmidt number $2$ it shrinks to $(0,\frac{1}{2}]$. Thus there are entanglement requirements that can be met only by unitary operators with Schmidt number $4$.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work is supported by National Natural Science Foundation of China (NSFC) under Grants No. 10704001, No. 61073048, No.10905024 and 11005029, the Specialized Research Fund for the Doctoral Program of Higher Education(20113401110002), the Key Project of Chinese Ministry of Education.(No.210092), the China Postdoctoral Science Foundation under Grant No. 20110490825, Anhui Provincial Natural Science Foundation under Grants No. 11040606M16 and 10040606Q51, the Key Program of the Education Department of Anhui Province under Grants No. KJ2012A020, No. KJ2012A244, No. KJ2010A287, No. KJ2012B075 and No. 2010SQRL153ZD, the ‘211’ Project of Anhui University, the Talent Foundation of Anhui University under Grant No.33190019, the personnel department of Anhui province, and Anhui Key Laboratory of Information Materials and Devices (Anhui University).
[99]{} N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. 74, 145(2002). C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. 70, 1895 (1993). M. Zukowski, A. Zeilinger, M. A. Horne, and A. K. Ekert, Phys. Rev. Lett. 71, 4287 (1993). C. H. Bennett, G. Brassard, S. Popescu, B. Schumacher, J. A. Smolin, and W. K. Wootters, Phys. Rev. Lett. 76, 722 (1996). A. G. White, D. F. V. James, P. H. Eberhard, and P. G. Kwiat, Phys. Rev. Lett. 83, 3103 (1999). P. Zanardi, C. Zalka, and L. Faoro, Phys. Rev. A 62, 030301(R)(2000). P. Zanardi, Phys. Rev. A 63, 040304(R)(2001). X.-G. Wang, B. C. Sanders, and D. W. Berry. Phys. Rev. A 67, 042323 (2003). M.-Y. Ye, D. Sun, Y.-S. Zhang,and G.-C. Guo, Phys. Rev. A 70, 022326 (2004). M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, England,2000). S. Bose, V. Vedral, P. L. Knight. Phys.Rev. A 60, 194 (1999). W. K. Wootters, Phys. Rev. Lett. 80, 2245 (1998). S. Hill and W. K. Wootters, Phys. Rev. Lett. 78, 5022 (1997). X.-G. Wang and P. Zanardi, Phys. Rev. A 66, 044303 (2002). M. A. Nielsen, C. M. Dawson, J. L. Dodd, A. Gilchrist, D. Mortimer, T. J. Osborne, M. J. Bremner, A. W. Harrow, A. Hines, Phys. Rev. A 67, 052301(2003). S. Balakrishnan and R. Sankaranarayanan, Phys. Rev. A 83, 062320 (2011). The optimality of these quanutm communication protocols by introducing the appropriate joint unitary operations will be discussed in another work which will be published soon. B. Kraus and J. I. Cirac. Phys. Rev. A 63, 062309 (2001). J. Zhang, J. Vala, S. Sastry, K. B. Whaley. Phys. Rev. A 67, 042313 (2003). A. T. Rezakhani. Phys. Rev. A 70, 052313 (2004). M. Horodecki, P. Horodecki, and R. Horodecki. Phys. Rev. A 60, 1888 (1999). J.-S. Jin, C.-S. Yu, P Pei, and H.-S. Song, Phys. Rev. A 82, 042112 (2010).
[^1]: Corresponding Author: [email protected]
[*Dedicated to the memory of Vadim Kuznetsov*]{}
*Shortly after I had moved from CWI, Amsterdam to a professorship at the University of Amsterdam in 1992, Vadim Kuznetsov contacted me about the possibility to come to Amsterdam as a postdoc. We successfully applied for a grant. He arrived with his wife Olga and his son Simon in Amsterdam for a two-years stay during 1993–1995. I vividly remember picking them up at the airport and going in the taxi with all their stuff to their first apartment in Amsterdam, at the edge of the red light quarter. These were two interesting years, where we learnt a lot from each other. We wrote one joint paper, but Vadim wrote many further papers alone or with other coauthors during this period. We should have written more together, but our temperaments were too different for that. Vadim was always speeding up, while I wanted to ponder and to look for further extensions and relations with other work.*
After his Amsterdam years Vadim had a marvelous career which led to prestigious UK grants, tenure in Leeds, and a lot of organizing of conferences and proceedings. We met several times afterwards. I visited for instance Leeds for one week, and Vadim was an invited speaker at the conference in Amsterdam in 2003 on the occasion of my sixtieth birthday.
Introduction
============
Zhedanov [@1] introduced in 1991 an algebra $AW(3)$ with three generators $K_0$, $K_1$, $K_2$ and three relations in the form of $q$-commutators, which describes deeper symmetries of the Askey–Wilson polynomials. In fact, for suitable choices of the structure constants of the algebra, the Askey–Wilson polynomial $p_n(x)$ is the kernel of an intertwining operator between a representation of $AW(3)$ by $q$-difference operators on the space of polynomials in $x$ and a representation by tridiagonal operators on the space of infinite sequences $(c_n)_{n=1,2,\ldots}$. In the first representation $K_1$ is multiplication by $x$ and $K_0$ is the second order $q$-difference operator for which the Askey–Wilson polynomials are eigenfunctions with explicit eigenvalues ${\lambda}_n$. In the second representation $K_0$ is the diagonal operator with diagonal elements ${\lambda}_n$ and $K_1$ is the tridiagonal operator corresponding to the three-term recurrence relation for the Askey–Wilson polynomials. The formula for $p_n(x)$ expressing the intertwining property with respect to $K_2$ is the so-called [*$q$-structure relation*]{} for the Askey–Wilson polynomials (see [@6]) and the relation for $AW(3)$ involving the $q$-commutator of $K_1$ and $K_2$ is the so-called [*$q$-string equation*]{} (see [@7]). Terwilliger & Vidunas [@16] showed that every Leonard pair satisfies the $AW(3)$ relations for a suitable choice of the structure constants.
In 1992, one year after Zhedanov’s paper [@1], Cherednik [@8] introduced double affine Hecke algebras associated with root systems (DAHA’s). This was the first of an important series of papers by the same author, where a representation of the DAHA was given in terms of $q$-difference-reflection operators ($q$-analogues of Dunkl operators), joint eigenfunctions of such operators were identified as non-symmetric Macdonald polynomials, and Macdonald’s conjectures for ordinary (symmetric) Macdonald polynomials associated with root systems could be proved. For a nice exposition of this theory see Macdonald’s recent book [@5]. In particular, the DAHA approach to Macdonald–Koornwinder polynomials, due to several authors (see Sahi [@9; @10], Stokman [@14] and references given there) is also presented in [@5]. The last chapter of [@5] discusses the rank one specialization of these general results. For the DAHA of type $A_1$ (one parameter) this yields non-symmetric $q$-ultraspherical polynomials. For the DAHA of type $(C_1^\vee,C_1)$ (four parameters) the non-symmetric Askey–Wilson polynomials are obtained. These were earlier treated by Sahi [@10] and by Noumi & Stokman [@11]. See also Sahi’s recent paper [@12].
Comparison of Zhedanov’s $AW(3)$ with the DAHA of type $(C_1^\vee,C_1)$, denoted by ${\tilde{\mathfrak{H}}}$, suggests some relationship. Both algebras are presented by generators and relations, the first has a representation by $q$-difference operators on the space of symmetric Laurent polynomials in $z$ and the second has a representation by $q$-difference-reflection operators on the space of general Laurent polynomials in $z$. Since this representation of the DAHA is called the [*basic representation*]{} of ${\tilde{\mathfrak{H}}}$, I will also call the just mentioned representation of $AW(3)$ the [*basic representation*]{}. In the basic representation of $AW(3)$ the operator $K_0$ is equal to some operator $D$ occurring in the basic representation of ${\tilde{\mathfrak{H}}}$ and involving reflections, provided $D$ is restricted in its action to symmetric Laurent polynomials. This suggests that the basic representation of $AW(3)$ may remain valid if we represent $K_0$ by $D$, so that it involves reflection terms. It will turn out in this paper that this conjecture is correct in the $A_1$ case, i.e., when the Askey–Wilson parameters are restricted to the continuous $q$-ultraspherical case. In the general case the conjecture is true for a rather harmless central extension of $AW(3)$ involving a generator $T_1$, which will be identified with the familiar $T_1$ in ${\tilde{\mathfrak{H}}}$ which has in the basic representation of ${\tilde{\mathfrak{H}}}$ the symmetric Laurent polynomials as one of its two eigenspaces.
This paper does not suppose any knowledge about the general theory of double affine Hecke algebras and about Macdonald and related polynomials in higher rank. The contents of the paper are as follows. Section 2 presents $AW(3)$ and its relationship with Askey–Wilson polynomials. We add to $AW(3)$ one more relation expressing that the Casimir operator $Q$ is equal to a special constant $Q_0$ (of course precisely the constant occurring for $Q$ in the basic representation), and we denote the resulting quotient algebra by $AW(3,Q_0)$. Then it is shown that the basic representation of $AW(3,Q_0)$ is faithful. Section 3 discusses ${\tilde{\mathfrak{H}}}$ (the DAHA of type $(C_1^\vee,C_1)$), its basic representation, and the basis vectors for the 2-dimensional eigenspaces of the operator $D$ in terms of Askey–Wilson polynomials. Section 4 gives an explicit expression for the non-symmetric Askey–Wilson polynomials which is in somewhat different terms than the explicit expression in [@5 § 6.6]. Two presentations of ${\tilde{\mathfrak{H}}}$ by generators and relations of PBW-type are given in Section 5. The faithfulness of the basic representation is proved (a result which of course is also a special case of the known result in the case of general rank, see Sahi [@9]). The main result of the present paper, the embedding of a central extension of $AW(3,Q_0)$ in ${\tilde{\mathfrak{H}}}$, is stated and proved in Section 6.
For the computations in this paper I made heavy use of computer algebra performed in [*Mathematica*]{}${}^{\mbox{\footnotesize\textregistered}}$. For reductions of expressions in non-commuting variables subject to relations I used the package [*NCAlgebra*]{} [@13] within [*Mathematica*]{}${}^{\mbox{\footnotesize\textregistered}}$. [*Mathematica*]{} notebooks containing these computations will be available for downloading at <http://www.science.uva.nl/~thk/art/>.
[**Conventions**]{}\
Throughout assume that $q$ and $a$, $b$, $c$, $d$ are complex constants such that $$\begin{gathered}
q\ne0,\qquad
q^m\ne1\ (m=1,2,\ldots),\qquad
a,b,c,d\ne0,\qquad
abcd\ne q^{-m}\ (m=0,1,2,\ldots).\!\!
\label{21}\end{gathered}$$ Let $e_1$, $e_2$, $e_3$, $e_4$ be the elementary symmetric polynomials in $a$, $b$, $c$, $d$: $$\begin{gathered}
e_1:=a+b+c+d,\qquad
e_2:=ab+ac+bc+ad+bd+cd,\nonumber\\
e_3:=abc+abd+acd+bcd,\qquad
e_4:=abcd.
\label{58}\end{gathered}$$ For $q$-shifted factorials and $q$-hypergeometric series we use the notation of [@2]. In particular, $$\begin{gathered}
(a;q)_k:=\prod_{j=0}^{k-1}(1-aq^j),\qquad
(a_1,\ldots,a_r;q)_k:=(a_1;q)_k\cdots(a_r;q)_k,
\\
{\,\mbox{}_{r}\phi_{r-1}\left(
\genfrac{}{}{0pt}{}{q^{-n},a_2,\ldots,a_r}{b_1,\ldots,b_{r-1}};q,z\right)}:=
\sum_{k=0}^n\frac{(q^{-n},a_2,\ldots,a_r;q)_k}
{(b_1,\ldots,b_{r-1},q;q)_k}\,z^k.\end{gathered}$$ For Laurent polynomials $f$ in $z$ the $z$-dependence will be written as $f[z]$. Symmetric Laurent polynomials $f[z]=\sum\limits_{k=-n}^n c_k z^k$ (where $c_k=c_{-k}$) are related to ordinary polynomials $f(x)$ in $x={\tfrac12}(z+z^{-1})$ by $f({\tfrac12}(z+z^{-1}))=f[z]$.
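As a numeric illustration of these conventions one may sketch the $q$-shifted factorials and the terminating ${}_r\phi_{r-1}$ sum in Python (a sketch, not part of the paper; the helper names `qpoch`, `qpoch_multi` and `rphi` are my own):

```python
# A sketch, not from the paper: qpoch, qpoch_multi, rphi are hypothetical
# helper names for the q-shifted factorial and the terminating sum above.

def qpoch(a, q, k):
    """(a;q)_k = prod_{j=0}^{k-1} (1 - a*q^j)."""
    p = 1.0
    for j in range(k):
        p *= 1 - a * q**j
    return p

def qpoch_multi(alist, q, k):
    """(a_1,...,a_r;q)_k = (a_1;q)_k * ... * (a_r;q)_k."""
    p = 1.0
    for a in alist:
        p *= qpoch(a, q, k)
    return p

def rphi(num, den, q, z, n):
    """Terminating r phi_{r-1} as defined above; num[0] should be q^(-n)."""
    return sum(qpoch_multi(num, q, k) / qpoch_multi(list(den) + [q], q, k) * z**k
               for k in range(n + 1))
```

For instance, the terminating case of the $q$-binomial theorem, ${}_1\phi_0(q^{-n};-;q,z)=(q^{-n}z;q)_n$, can then be checked in double precision.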
Zhedanov’s algebra $\boldsymbol{AW(3)}$ {#5}
=======================================
Zhedanov [@1] introduced an algebra $AW(3)$ with three generators $K_0$, $K_1$, $K_2$ and with three relations $$\begin{gathered}
[K_0,K_1]_q=K_2,\\
[K_1,K_2]_q=B\,K_1+ C_0\,K_0+D_0,\\
[K_2,K_0]_q=B\,K_0+C_1\,K_1+D_1,\end{gathered}$$ where $$[X,Y]_q:=q^{\frac12}XY-q^{-{\frac12}} YX$$ is the $q$-commutator and where the structure constants $B$, $C_0$, $C_1$, $D_0$, $D_1$ are fixed complex constants. He also gave a [*Casimir operator*]{} $$\begin{gathered}
Q:=\big(q^{-{\frac12}}-q^{\frac32}\big)K_0K_1K_2+qK_2^2+B(K_0K_1+K_1K_0)+qC_0K_0^2
+q^{-1}C_1K_1^2\\
\phantom{Q:=}{}+(1+q)D_0K_0+(1+q^{-1})D_1K_1,\end{gathered}$$ which commutes with the generators.
Clearly, $AW(3)$ can equivalently be described as an algebra with two generators $K_0$, $K_1$ and with two relations $$\begin{gathered}
\label{1}
(q+q^{-1})K_1K_0K_1-K_1^2K_0-K_0K_1^2=B\,K_1+ C_0\,K_0+D_0,\\
\label{2}
(q+q^{-1})K_0K_1K_0-K_0^2K_1-K_1K_0^2=B\,K_0+C_1\,K_1+D_1.\end{gathered}$$ Then the Casimir operator $Q$ can be written as $$\begin{gathered}
Q=(K_1K_0)^2\!-(q^2+1+q^{-2})K_0(K_1K_0)K_1+(q+q^{-1})K_0^2K_1^2\!
+(q+q^{-1})(C_0K_0^2+C_1K_1^2)\nonumber\\
\phantom{Q=}{}+B\bigl((q+1+q^{-1})K_0K_1+K_1K_0\bigr)
+(q+1+q^{-1})(D_0K_0+D_1K_1).\label{80}\end{gathered}$$
Let the structure constants be expressed in terms of $a$, $b$, $c$, $d$ by means of $e_1$, $e_2$, $e_3$, $e_4$ (see ) as follows: $$\begin{gathered}
B:=(1-q^{-1})^2(e_3+qe_1),\nonumber\\
C_0:=(q-q^{-1})^2,\nonumber\\
C_1:=q^{-1}(q-q^{-1})^2 e_4,\label{42}\\
D_0:=-q^{-3}(1-q)^2(1+q)(e_4+qe_2+q^2),\nonumber\\
D_1:=-q^{-3}(1-q)^2(1+q)(e_1e_4+qe_3).\nonumber\end{gathered}$$ Then there is a representation (the [*basic representation*]{}) of the algebra $AW(3)$ with structure constants on the space ${{{\cal A}}_{\rm sym}}$ of symmetric Laurent polynomials $f[z]=f[z^{-1}]$ as follows: $$\begin{gathered}
(K_0f)[z]=({D_{\rm sym}}f)[z],\qquad
(K_1f)[z]=((Z+Z^{-1})f)[z]:=(z+z^{-1})f[z],
\label{4}\end{gathered}$$ where ${D_{\rm sym}}$, given by $$\begin{gathered}
({D_{\rm sym}}f)[z]:=\frac{(1-az)(1-bz)(1-cz)(1-dz)}{(1-z^2)(1-qz^2)}\,
\bigl(f[qz]-f[z]\bigr)\nonumber\\
\phantom{({D_{\rm sym}}f)[z]:=}{}+\frac{(a-z)(b-z)(c-z)(d-z)}{(1-z^2)(q-z^2)}\,
\bigl(f[q^{-1}z]-f[z]\bigr)
+(1+q^{-1}abcd)f[z],
\label{3}\end{gathered}$$ is the second order operator having the [*Askey–Wilson polynomials*]{} (see [@3], [@2 § 7.5], [@4 § 3.1]) as eigenfunctions. It can indeed be verified that the operators $K_0$, $K_1$ given by satisfy relations , with structure constants , and that the Casimir operator $Q$ becomes the following constant in this representation: $$\begin{gathered}
(Qf)[z]=Q_0\,f[z],
\label{82}\end{gathered}$$ where $$\begin{gathered}
Q_0:=q^{-4}(1-q)^2\Bigl(q^4(e_4-e_2)+q^3(e_1^2-e_1e_3-2e_2)\nonumber\\
\phantom{Q_0:=}{}-q^2(e_2e_4+2e_4+e_2)
+q(e_3^2-2e_2e_4-e_1e_3)+e_4(1-e_2)\Bigr).
\label{81}\end{gathered}$$
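That the Casimir operator acts as the constant $Q_0$ in the basic representation can be spot-checked numerically on a symmetric test Laurent polynomial (a Python sketch, not from the paper; the helper names and the generic parameter values are my own choices):

```python
# Numeric spot-check (mine, not the paper's Mathematica verification) that
# the Casimir Q acts as the constant Q_0 on symmetric Laurent polynomials.

q, a, b, c, d = 0.7, 0.3, -0.45, 1.7, -2.1   # generic test values
e1 = a+b+c+d
e2 = a*b+a*c+b*c+a*d+b*d+c*d
e3 = a*b*c+a*b*d+a*c*d+b*c*d
e4 = a*b*c*d

B  = (1-1/q)**2 * (e3 + q*e1)
C0 = (q-1/q)**2
C1 = (q-1/q)**2 * e4 / q
D0 = -(1-q)**2*(1+q)*(e4 + q*e2 + q**2) / q**3
D1 = -(1-q)**2*(1+q)*(e1*e4 + q*e3) / q**3

Q0 = (1-q)**2/q**4 * (q**4*(e4-e2) + q**3*(e1**2 - e1*e3 - 2*e2)
                      - q**2*(e2*e4 + 2*e4 + e2)
                      + q*(e3**2 - 2*e2*e4 - e1*e3) + e4*(1-e2))

def K0(f):  # the Askey-Wilson operator D_sym, applied pointwise
    def g(z):
        A  = (1-a*z)*(1-b*z)*(1-c*z)*(1-d*z) / ((1-z**2)*(1-q*z**2))
        Bz = (a-z)*(b-z)*(c-z)*(d-z) / ((1-z**2)*(q-z**2))
        return A*(f(q*z)-f(z)) + Bz*(f(z/q)-f(z)) + (1+e4/q)*f(z)
    return g

def K1(f):  # multiplication by z + 1/z
    return lambda z: (z + 1/z) * f(z)

def Q(f):   # the Casimir, written out as in the two-generator form above
    return lambda z: (K1(K0(K1(K0(f))))(z)
                      - (q**2 + 1 + q**-2)*K0(K1(K0(K1(f))))(z)
                      + (q + 1/q)*K0(K0(K1(K1(f))))(z)
                      + (q + 1/q)*(C0*K0(K0(f))(z) + C1*K1(K1(f))(z))
                      + B*((q + 1 + 1/q)*K0(K1(f))(z) + K1(K0(f))(z))
                      + (q + 1 + 1/q)*(D0*K0(f)(z) + D1*K1(f)(z)))
```

Applying `Q` to any symmetric test Laurent polynomial and comparing with `Q0` times the same polynomial confirms the constancy at a generic point.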
Let $AW(3,Q_0)$ be the algebra generated by $K_0$, $K_1$ with relations , and $$\begin{gathered}
Q=Q_0,
\label{83}\end{gathered}$$ assuming the structure constants . Then the basic representation of $AW(3)$ is also a representation of $AW(3,Q_0)$.
The Askey–Wilson polynomials are given by $$\begin{gathered}
p_n\bigl({\tfrac12}(z+z^{-1});a,b,c,d\mid q\bigr)
:=\frac{(ab,ac,ad;q)_n}{a^n}\,
{\,\mbox{}_{4}\phi_{3}\left(
\genfrac{}{}{0pt}{}{q^{-n},q^{n-1}abcd,az,az^{-1}}{ab,ac,ad};q,q\right)}.
\label{11}\end{gathered}$$ These polynomials are symmetric in $a$, $b$, $c$, $d$ (although this cannot be read off from ). We will work with the renormalized version which is [*monic*]{} as a Laurent polynomial in $z$ (i.e., the coefficient of $z^n$ equals 1): $$\begin{gathered}
P_n[z]=P_n[z;a,b,c,d\mid q]:=
\frac1{(abcdq^{n-1};q)_n}\,
p_n\bigl({\tfrac12}(z+z^{-1});a,b,c,d\mid q\bigr)
\nonumber\\
\phantom{P_n[z]}{}=a^{-n}\sum_{k=0}^n
\frac{(q^{-n};q)_k\,(az,az^{-1};q)_k\,(abq^k,acq^k,adq^k;q)_{n-k}\,q^k}
{(q;q)_k\,(abcdq^{n+k-1};q)_{n-k}}\,.
\label{13}\end{gathered}$$ Note that the monic Askey–Wilson polynomials $P_n[z]$ are well-defined for all $n$ under condition .
The eigenvalue equation involving ${D_{\rm sym}}$ is $$\begin{gathered}
{D_{\rm sym}}P_n={\lambda}_nP_n,\qquad
{\lambda}_n:=q^{-n}+abcd q^{n-1}.
\label{12}\end{gathered}$$ Under condition all eigenvalues in are distinct.
The three-term recurrence relation for the monic Askey–Wilson polynomials (see [@4 (3.1.5)]) is as follows: $$\begin{gathered}
(z+z^{-1})P_n[z]=P_{n+1}[z]+{\beta}_n P_n[z]+{\gamma}_n P_{n-1}[z]\qquad(n\ge1),\nonumber\\
(z+z^{-1})P_0[z]=P_1[z]+{\beta}_0 P_0[z],
\label{59}\\
{\beta}_n:=q^{n-1}\,\frac{(1-q^n-q^{n+1})e_3+qe_1+q^{2n-1}e_3e_4
-q^{n-1}(1+q-q^{n+1})e_1e_4}{(1-q^{2n-2}e_4)(1-q^{2n}e_4)},
\label{60}\\
{\gamma}_n:=(1-q^{n-1}ab)(1-q^{n-1}ac)(1-q^{n-1}ad)(1-q^{n-1}bc)
(1-q^{n-1}bd)(1-q^{n-1}cd)
\nonumber\\
\phantom{{\gamma}_n:=}{}\times\frac{(1-q^n)(1-q^{n-2}e_4)}
{(1-q^{2n-3}e_4)(1-q^{2n-2}e_4)^2(1-q^{2n-1}e_4)}.
\label{61}\end{gathered}$$ From this we see that $P_n[z]$ remains well-defined if the condition $a,b,c,d\ne0$ in is omitted. It also follows from and – that the representation of $AW(3)$ is not necessarily irreducible, but that it has $1\in{{{\cal A}}_{\rm sym}}$ as a cyclic element. The representation will become irreducible if we moreover require that none of $ab$, $ac$, $ad$, $bc$, $bd$, $cd$ equals $q^{-m}$ for some $m=0,1,2,\ldots$.
We now show that $AW(3,Q_0)$ has the elements $$\begin{gathered}
K_0^n(K_1K_0)^lK_1^m\qquad(m,n=0,1,2,\ldots, \ \ l=0,1)
\label{54}\end{gathered}$$ as a basis and that the representation of $AW(3,Q_0)$ is faithful.
\[55\] Each element of $AW(3,Q_0)$ can be written as a linear combination of elements .
$AW(3,Q_0)$ is spanned by elements $K_{\alpha}=K_{{\alpha}_1}\cdots K_{{\alpha}_k}$, where ${\alpha}=({\alpha}_1,\ldots,{\alpha}_k)$, ${\alpha}_i=0$ or 1. Let $\rho({\alpha})$ be the number of pairs $(i,j)$ such that $i<j$, ${\alpha}_i=1$, ${\alpha}_j=0$. $K_{\alpha}$ has the form iff $\rho({\alpha})=0$ or 1. We will show that each $K_{\alpha}$ with $\rho({\alpha})>1$ can be written as a linear combination of elements $K_{\beta}$ with $\rho({\beta})<\rho({\alpha})$. Indeed, if $\rho({\alpha})>1$ then $K_{\alpha}$ must have a substring $K_1K_1K_0$ or $K_1K_0K_0$ or $K_1K_0K_1K_0$. By substitution of relations , or (with ), respectively, we see that each such string is a linear combination of elements $K_{\beta}$ with $\rho({\beta})<\rho({\alpha})$.
The elements form a basis of $AW(3,Q_0)$ and the representation of $AW(3,Q_0)$ is faithful. \[76\]
Because of Lemma \[55\] it is sufficient to show that the operators $$\begin{gathered}
({D_{\rm sym}})^n\,(Z+Z^{-1})^m\qquad(m,n=0,1,2,\ldots),\nonumber\\
({D_{\rm sym}})^{n-1}\,(Z+Z^{-1})\,{D_{\rm sym}}\,(Z+Z^{-1})^{m-1}\qquad(m,n=1,2,\ldots)
\label{56}\end{gathered}$$ acting on ${{{\cal A}}_{\rm sym}}$ are linearly independent. By and we have for all $j$: $$\begin{gathered}
({D_{\rm sym}})^n\,(Z+Z^{-1})^m\,P_j[z]
={\lambda}_{j+m}^n P_{j+m}[z]+\cdots,\nonumber\\
({D_{\rm sym}})^{n-1}\,(Z+Z^{-1})\,{D_{\rm sym}}\,(Z+Z^{-1})^{m-1}\,P_j[z]
={\lambda}_{j+m}^{n-1}{\lambda}_{j+m-1} P_{j+m}[z]+\cdots,
\label{84}\end{gathered}$$ where the right-hand sides give expansions in terms of $P_k[z]$ with $k$ running from $j+m$ downwards.
Suppose that the operators are not linearly independent. Then $$\begin{gathered}
\sum_{k=0}^m\sum_l a_{k,l} ({D_{\rm sym}})^l\,(Z+Z^{-1})^k\nonumber\\
\qquad{}+
\sum_{k=1}^m\sum_l b_{k,l} ({D_{\rm sym}})^{l-1}\,(Z+Z^{-1})\,{D_{\rm sym}}\,(Z+Z^{-1})^{k-1}
=0\label{85}\end{gathered}$$ for certain coefficients $a_{k,l}$, $b_{k,l}$ such that for some $l$ $a_{m,l}\ne0$ or $b_{m,l}\ne0$. Then it follows from that for all $j$, when we let the [left-hand side]{} of act on $P_j[z]$, the coefficient of $P_{j+m}[z]$ yields: $$\sum_l (a_{m,l}{\lambda}_{j+m}^l+b_{m,l}{\lambda}_{j+m}^{l-1}{\lambda}_{j+m-1})=0.\label{86}$$ By we have, writing $x=q^{j+m}$ and $u= q^{-1}abcd$, $${\lambda}_{j+m}=x^{-1}+ ux,\qquad
{\lambda}_{j+m-1}=qx^{-1}+ q^{-1}ux.$$ We can consider the identity as an identity for Laurent polynomials in $x$. Since the left-hand side vanishes for infinitely many values of $x$, it must be identically zero. Let $n$ be the maximal $l$ for which $a_{m,l}\ne0$ or $b_{m,l}\ne0$. Then, in particular, the coefficients of $x^{-n}$ and $x^n$ in the left-hand side of must be zero. This gives explicitly: $$\begin{gathered}
a_{m,n}+q b_{m,n}=0,\qquad
u^n a_{m,n}+q^{-1}u^n b_{m,n}=0.\end{gathered}$$ This implies $a_{m,n}=b_{m,n}=0$, contradicting our assumption.
Note that we have 6 structure constants $B$, $C_0$, $C_1$, $D_0$, $D_1$, $Q_0$ depending on 4 parameters $a$, $b$, $c$, $d$. However, 2 degrees of freedom in the structure coefficients are caused by scale transformations. Indeed, the scale transformations $K_0\to c_0K_0$ and $K_1\to c_1K_1$ induce the following transformations on the structure coefficients: $$\begin{gathered}
B\to c_0c_1B,\quad
C_0\to c_1^2C_0,\quad
C_1\to c_0^2C_1,\quad
D_0\to c_0c_1^2D_0,\quad
D_1\to c_0^2c_1D_1,\quad
Q_0\to c_0^2c_1^2 Q_0.\!\end{gathered}$$ But these scale transformations also affect the basic representation. This becomes $K_0=c_0{D_{\rm sym}}$, $K_1=c_1(Z+Z^{-1})$.
The double affine Hecke algebra of type $\boldsymbol{(C_1^\vee,C_1)}$
=====================================================================
Recall condition . The double affine Hecke algebra of type $(C_1^\vee,C_1)$, denoted by ${\tilde{\mathfrak{H}}}$ (see [@5 § 6.4]), is generated by $Z$, $Z^{-1}$, $T_1$, $T_0$ with relations $ZZ^{-1}=1=Z^{-1}Z$ and $$\begin{gathered}
(T_1+ab)(T_1+1)=0,
\label{6}\\
(T_0+q^{-1}cd)(T_0+1)=0,
\label{8}\\
(T_1Z+a)(T_1Z+b)=0,
\label{7}\\
(qT_0Z^{-1}+c)(qT_0Z^{-1}+d)=0.
\label{9}\end{gathered}$$ Here I have used the notation of [@12], which is slightly different from the notation in [@5 § 6.4]. Conditions on $q$, $a$, $b$, $c$, $d$ in [@5] are more strict than in . This will give no problem, as can be seen by checking all results hereafter from scratch.
From and and the non-vanishing of $a$, $b$, $c$, $d$ we see that $T_1$ and $T_0$ are invertible: $$\begin{gathered}
T_1^{-1}=-a^{-1}b^{-1}T_1-(1+a^{-1}b^{-1}),
\label{38}\\
T_0^{-1}=-qc^{-1}d^{-1}T_0-(1+qc^{-1}d^{-1}).
\label{39}\end{gathered}$$ Put $$\begin{gathered}
Y:=T_1T_0,
\label{36}\\
D:=Y+q^{-1}abcdY^{-1}=T_1T_0+q^{-1}abcdT_0^{-1}T_1^{-1},
\label{37}\\
{Z_{\rm sym}}:=Z+Z^{-1}.
\label{48}\end{gathered}$$ By and $D$ commutes with $T_1$ and $T_0$. By and ${Z_{\rm sym}}$ commutes with $T_1$.
The algebra ${\tilde{\mathfrak{H}}}$ has a faithful representation, the so-called [*basic representation*]{}, on the space ${{\cal A}}$ of Laurent polynomials $f[z]$ as follows: $$\begin{gathered}
(Zf)[z]:=z\,f[z],
\label{14}\\
(T_1f)[z]:=\frac{(a+b)z-(1+ab)}{1-z^2}\,f[z]+
\frac{(1-az)(1-bz)}{1-z^2}\,f[z^{-1}],
\label{15}\\
(T_0f)[z]:=\frac{q^{-1}z((cd+q)z-(c+d)q)}{q-z^2}\,f[z]
-\frac{(c-z)(d-z)}{q-z^2}\,f[qz^{-1}].
\label{16}\end{gathered}$$ The representation property follows from [@5 § 6.4] or by straightforward computation. The faithfulness follows from [@5 (4.7.4)] or from the independent proof later in this paper.
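The four quadratic relations can be spot-checked numerically by realizing $Z$, $T_1$, $T_0$ pointwise as maps on functions of $z$ (a Python sketch, not from the paper; the helper names, the test function and the parameter values are my own generic choices):

```python
# Pointwise realization (mine) of Z, T_1, T_0 in the basic representation
# and a numeric check of the four quadratic relations.

q, a, b, c, d = 0.7, 0.3, -0.45, 1.7, -2.1   # generic test values

def Zop(f):
    return lambda z: z * f(z)

def Zinv(f):
    return lambda z: f(z) / z

def T1(f):
    return lambda z: (((a+b)*z - (1+a*b)) / (1 - z**2) * f(z)
                      + (1-a*z)*(1-b*z) / (1 - z**2) * f(1/z))

def T0(f):
    return lambda z: (z*((c*d+q)*z - (c+d)*q) / (q*(q - z**2)) * f(z)
                      - (c-z)*(d-z) / (q - z**2) * f(q/z))
```

Since each quadratic relation reduces to a $2\times2$ matrix identity on the pair of values $(f[z],f[z^{-1}])$, respectively $(f[z],f[qz^{-1}])$, it even holds pointwise for an arbitrary test function.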
Now we can compute: $$\begin{gathered}
(Yf)[z]=
\frac{z \bigl(1+ab-(a+b)z\bigr)
\bigl((c+d)q-(cd+q)z\bigr)}{q(1-z^2)(q-z^2)}\,f[z]\nonumber\\
\phantom{(Yf)[z]=}{}+\frac{(1-az)(1-bz)(1-cz)(1-dz)}{(1-z^2)(1-q z^2)}f[qz]\nonumber\\
\phantom{(Yf)[z]=}{}+\frac{(1-a z)(1-b z) \bigl((c+d)qz-(cd+q)\bigr)}
{q(1-z^2)(1-q z^2)}\,f[z^{-1}]\nonumber\\
\phantom{(Yf)[z]=}{}+\frac{(c-z)(d-z)\bigl(1+ab-(a+b)z\bigr)}{(1-z^2)(q-z^2)}\,f[qz^{-1}],
\label{34}
\\
(Df)[z]
=
\frac{(1-q)z(1-az) (1-bz)\,\bigl((q+1)(cd+q)z-q(c+d)(1+z^2)\bigr)}
{q(1-z^2)(q-z^2)(1-q z^2)}\,f[z^{-1}]
\nonumber\\
\phantom{(Df)[z]=}{}+\frac{(1-q)z(c-z)(d-z)\,\bigl((a+b) (q+z^2)-(a b+1)(q+1)z\bigr)}
{(1-z^2)(q-z^2)(q^2-z^2)}\,f[qz^{-1}]
\nonumber\\
\phantom{(Df)[z]=}{}+\Bigl((a+b)(cd+q)(q+z^2)+q(ab+1)(c+d)(1+z^2)
\nonumber\\
\phantom{(Df)[z]=}{}-\bigl((q+1)(cd+q)(ab+1)+2q(a+b)(c+d)\bigr)z\Bigr)
\,\frac z{q(1-z^2)(q-z^2)}\,f[z]
\label{10}\\
\phantom{(Df)[z]=}{}+\frac{(c-z)(d-z)(aq-z)(bq-z)\!}{(q-z^2)(q^2-z^2)} f[q^{-1}z]\!
+\frac{(1-az) (1-bz) (1-cz)(1-dz)\!}{(1-z^2)(1-qz^2)} f[qz]
.\!\nonumber\end{gathered}$$ If we compare and then we see that $$(Df)[z]=({D_{\rm sym}}f)[z]\qquad\mbox{if}\qquad f[z]=f[z^{-1}].$$ In particular, if we apply $D$ to the Askey–Wilson polynomial $P_n[z]$ given by then we obtain from that $$\begin{gathered}
DP_n={\lambda}_nP_n.
\label{17}\end{gathered}$$
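That $D$, built directly as $T_1T_0+q^{-1}abcd\,T_0^{-1}T_1^{-1}$ from the pointwise operators rather than from the long explicit formula, reproduces the Askey–Wilson eigenvalues on $P_n$ can be spot-checked numerically (a Python sketch with my own naming and generic parameter values):

```python
# Numeric spot-check (mine) of D P_n = lambda_n P_n with D realized as
# T1 T0 + q^{-1}abcd T0^{-1} T1^{-1} acting pointwise.

q, a, b, c, d = 0.7, 0.3, -0.45, 1.7, -2.1   # generic test values
e4 = a*b*c*d

def qpoch(a0, q, k):
    p = 1.0
    for j in range(k):
        p *= 1 - a0 * q**j
    return p

def P(n, z):
    s = 0.0
    for k in range(n + 1):
        s += (qpoch(q**-n, q, k) * qpoch(a*z, q, k) * qpoch(a/z, q, k)
              * qpoch(a*b*q**k, q, n-k) * qpoch(a*c*q**k, q, n-k)
              * qpoch(a*d*q**k, q, n-k) * q**k
              / (qpoch(q, q, k) * qpoch(e4*q**(n+k-1), q, n-k)))
    return a**(-n) * s

def T1(f):
    return lambda z: (((a+b)*z - (1+a*b)) / (1 - z**2) * f(z)
                      + (1-a*z)*(1-b*z) / (1 - z**2) * f(1/z))

def T0(f):
    return lambda z: (z*((c*d+q)*z - (c+d)*q) / (q*(q - z**2)) * f(z)
                      - (c-z)*(d-z) / (q - z**2) * f(q/z))

def T1inv(f):  # from the inversion formula for T1
    return lambda z: -T1(f)(z)/(a*b) - (1 + 1/(a*b))*f(z)

def T0inv(f):  # from the inversion formula for T0
    return lambda z: -q*T0(f)(z)/(c*d) - (1 + q/(c*d))*f(z)

def D(f):
    return lambda z: T1(T0(f))(z) + (e4/q) * T0inv(T1inv(f))(z)
```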
By and the operators $T_1$ and $T_0$, acting on ${{\cal A}}$ as given by , have two eigenvalues. We can characterize the eigenspaces.
\[19\] $T_1$ given by has eigenvalues $-ab$ and $-1$. $T_1f=-ab\,f$ iff $f$ is symmetric. If $a$, $b$ are distinct from $a^{-1}$, $b^{-1}$ then $T_1f=-f$ iff $f[z]=z^{-1}(1-az)(1-bz)g[z]$ for some symmetric Laurent polynomial $g$.
We compute $$\begin{gathered}
(T_1f)[z]+ab\,f[z]=
\frac{(1-az)(1-bz)}{1-z^2}\,(f[z^{-1}]-f[z]),\end{gathered}$$ which settles the first assertion. We also compute $$\begin{gathered}
(T_1f)[z]+f[z]=
\frac{(1-az)(1-bz)}{1-z^2}\,f[z^{-1}]-
\frac{(a-z)(b-z)}{1-z^2}\,f[z].\end{gathered}$$ This equals zero if $f[z]=z^{-1}(1-az)(1-bz)g[z]$ with $g$ symmetric. On the other hand, if $(T_1f)[z]+f[z]=0$ and $a$, $b$ are distinct from $a^{-1}$, $b^{-1}$ then $$(1-az)(1-bz)f[z^{-1}]=(a-z)(b-z)f[z]$$ and hence $f[z]=z^{-1}(1-az)(1-bz)g[z]$ for some Laurent polynomial $g$ and we obtain $g[z]=g[z^{-1}]$.
\[20\] $T_0$ given by has eigenvalues $-q^{-1}cd$ and $-1$. $T_0f=-q^{-1}cd\,f$ iff $f[z]=f[qz^{-1}]$. If $c$, $d$ are distinct from $qc^{-1}$, $qd^{-1}$ then $T_0f=-f$ iff $f[z]=z^{-1}(c-z)(d-z)g[z]$ for some Laurent polynomial $g$ satisfying $g[z]=g[qz^{-1}]$.
We compute $$\begin{gathered}
(T_0f)[z]+q^{-1}cd\,f[z]=
\frac{(c-z)(d-z)}{q-z^2}\,(f[z]-f[qz^{-1}]),\end{gathered}$$ which settles the first assertion. We also compute $$\begin{gathered}
(T_0f)[z]+f[z]=
\frac{(q-cz)(q-dz)}{q(q-z^2)}\,f[z]-
\frac{q(c-z)(d-z)}{q(q-z^2)}\,f[qz^{-1}].\end{gathered}$$ Then the second assertion is proved by similar arguments as in the proof of Proposition \[19\].
We now look for further explicit solutions of the eigenvalue equation $$\begin{gathered}
Df={\lambda}_nf.
\label{18}\end{gathered}$$ Clearly, the solution $P_n$ (see ) also satisfies $T_1P_n=-ab\,P_n$. In order to find further solutions of we make an Ansatz for $f$ as suggested by Propositions \[19\] and \[20\], namely $f[z]=z^{-1}(1-az)(1-bz)g[z]$ or $f[z]=g[q^{-{\frac12}} z]$ or $f[z]=z^{-1}(c-z)(d-z)g[q^{-{\frac12}}z]$, in each case with $g$ symmetric. Then it turns out that takes the form of the Askey–Wilson second order $q$-difference equation, but with parameters and sometimes also the degree changed. We thus obtain as further solutions $f$ of for $n\ge1$: $$\begin{gathered}
Q_n[z]:=a^{-1}b^{-1}z^{-1}(1-az)(1-bz)\,
P_{n-1}[z;qa, qb,c,d\mid q],
\label{31}\\
P_n^\dagger[z]:=q^{{\frac12}n}\,
P_n\big[q^{-{\frac12}}z;q^{\frac12}a,q^{\frac12}b,q^{-{\frac12}}c,q^{-{\frac12}}d\mid q\big],
\label{32}\\
Q_n^\dagger[z]:=q^{{\frac12}(n-1)}z^{-1}(c-z)(d-z)\,
P_{n-1}\big[q^{-{\frac12}}z;q^{\frac12}a,q^{\frac12}b,q^{\frac12}c,q^{\frac12}d\mid q\big].
\label{33}\end{gathered}$$ So we have for $n\ge1$ four different eigenfunctions of $D$ at eigenvalue $q^{-n}+abcdq^{n-1}$ which are also eigenfunctions of $T_1$ or $T_0$: $$\begin{gathered}
T_1P_n=-ab\,P_n,\qquad
T_1Q_n=-Q_n,\qquad
T_0P_n^\dagger=-q^{-1}cd\,P_n^\dagger,\qquad
T_0Q_n^\dagger=-Q_n^\dagger.
\label{51}\end{gathered}$$ They all are Laurent polynomials of degree $n$ with highest term $z^n$ and lowest term ${{\rm const}\,}z^{-n}$: $$\begin{aligned}
{3}
&P_n[z]=z^n+\cdots+z^{-n},\qquad&&
Q_n[z]=z^n+\cdots+a^{-1}b^{-1}z^{-n},&\nonumber\\
&P_n^\dagger[z]=z^n+\cdots+q^nz^{-n},\qquad&&
Q_n^\dagger[z]=z^n+\cdots+q^{n-1}cdz^{-n}.&
\label{29}\end{aligned}$$ Since the eigenvalues ${\lambda}_n$ are distinct for different $n$, it follows that $D$ has a 1-dimensional eigenspace ${{\cal A}}_0$ at eigenvalue ${\lambda}_0$, consisting of the constant Laurent polynomials, and that it has a 2-dimensional eigenspace ${{\cal A}}_n$ at eigenvalue ${\lambda}_n$ if $n\ge1$, which has $P_n$ and $P_n^\dagger$ as basis vectors, but which also has any other two out of $P_n$, $Q_n$, $P_n^\dagger$, $Q_n^\dagger$ as basis vectors, provided these two functions have the coefficients of $z^{-n}$ distinct. Generically we can use any two out of these four as basis vectors. The basis consisting of $P_n$ and $P_n^\dagger$ occurs in [@5 § 6.6]. In the following sections we will work first with the basis consisting of $P_n$ and $Q_n^\dagger$, but afterwards it will be more convenient to use $P_n$ and $Q_n$.
Non-symmetric Askey–Wilson polynomials
======================================
Since $T_1$ and $T_0$ commute with $D$, the eigenspaces of $D$ in ${{\cal A}}$ are invariant under $Y=T_1T_0$. We can find explicitly the eigenvectors of $Y$ within these eigenspaces ${{\cal A}}_n$.
\[79\] The non-symmetric Askey–Wilson polynomials $$\begin{gathered}
E_{-n}[z]:=\frac1{1-q^{n-1}cd}\,(P_n[z]-Q_n^\dagger[z])\qquad(n=1,2,\ldots),
\label{23}\\
E_n[z]:=\frac{q^n(1-q^{n-1}abcd)}{1-q^{2n-1}abcd}\,P_n[z]+
\frac{1-q^n}{1-q^{2n-1}abcd}\,Q_n^\dagger[z]\qquad(n=1,2,\ldots),
\label{24}\\
E_0[z]:=1
\label{35}\end{gathered}$$ span the one-dimensional eigenspaces of $Y$ within ${{\cal A}}_n$ with the following eigenvalues: $$\begin{gathered}
YE_{-n}=q^{-n}\,E_{-n}\qquad(n=1,2,\ldots),
\label{25}\\
YE_n=q^{n-1}abcd\,E_n\qquad(n=0,1,2,\ldots).
\label{26}\end{gathered}$$ The coefficients of highest and lowest terms in $E_{-n}$ and $E_n$ are: $$\begin{gathered}
E_{-n}[z]=z^{-n}+\cdots+{{\rm const}\,}z^{n-1}\qquad(n=1,2,\ldots),
\label{27}
\\
E_n[z]=
z^n+\cdots+\left(1-\frac{(1-q^n)(1-q^{n-1}cd)}{1-q^{2n-1}abcd}\right)z^{-n}
\qquad(n=1,2,\ldots).
\label{28}\end{gathered}$$
Clearly, by their definition, $E_{-n}$ and $E_n$ are in ${{\cal A}}_n$, while , follow from . Equation for $n=0$ follows from and Propositions \[19\] and \[20\]. For the proof of , we use a $q$-difference equation for Askey–Wilson polynomials (see [@2 (7.7.7)], [@4 (3.1.8)]): $$\begin{gathered}
\frac{P_n[q^{-{\frac12}}z;a,b,c,d\mid q]-P_n[q^{\frac12}z;a,b,c,d\mid q]}
{(q^{-{\frac12}n}-q^{{\frac12}n})(z-z^{-1})}=
P_{n-1}\big[z;q^{\frac12}a,q^{\frac12}b,q^{\frac12}c,q^{\frac12}d\mid q\big].
\label{30}\end{gathered}$$ The expression $(YE_{-n})[z]-q^{-n}E_{-n}[z]$ ($n=1,2,\ldots$) only involves terms $P_n[w;a,b,c,d\mid q]$ for $w=z,qz,q^{-1}z$ and terms $P_{n-1}[w;q^{\frac12}a,q^{\frac12}b,q^{\frac12}c,q^{\frac12}d\mid q]$ for $w=q^{-{\frac12}} z, q^{\frac12}z$, as can be seen from , and . Now twice substitute in this expression with $z$ replaced by $q^{-{\frac12}} z$ and $q^{\frac12}z$, respectively. Then we arrive at an expression only involving terms $P_n[w;a,b,c,d\mid q]$ for $w=z,qz,q^{-1}z$. By it can be recognized as $(({D_{\rm sym}}P_n)[z]-(q^{-n}+abcd q^{n-1})P_n[z])/(1-q^n)$, which equals zero by . This settles . The reduction of the expression $(YE_n)[z]-q^{n-1}abcd E_n[z]$ ($n=1,2,\ldots$) can be done in a completely similar way. Here we arrive at the expression $(({D_{\rm sym}}P_n)[z]-(q^{-n}+abcd q^{n-1})P_n[z])
/(1-q^{1-2n}(abcd)^{-1})$, which equals zero.
By condition all eigenvalues of $Y$ on ${{\cal A}}$ (see , ) are distinct. So for all $n\in{\mathbb{Z}}$, $E_n[z]$ is the unique Laurent polynomial of degree $|n|$ which satisfies or and has coefficient of $z^n$ equal to 1. Moreover, for $n\ge1$, $E_{-n}$ is the unique element of ${{\cal A}}_n$ of the form , and $E_n$ is the unique element of ${{\cal A}}_n$ of the form .
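The statements above can be spot-checked numerically by building $Q_n^\dagger$, $E_{-n}$ and $E_n$ from the monic Askey–Wilson polynomials and applying $Y=T_1T_0$ pointwise (a Python sketch, not from the paper; the helper names and generic parameter values are my own):

```python
# Numeric spot-check (mine) of Y E_{-n} = q^{-n} E_{-n} and
# Y E_n = q^{n-1}abcd E_n for the non-symmetric Askey-Wilson polynomials.

q, a, b, c, d = 0.7, 0.3, -0.45, 1.7, -2.1   # generic test values
e4 = a*b*c*d
rq = q**0.5  # q^{1/2}

def qpoch(a0, q0, k):
    p = 1.0
    for j in range(k):
        p *= 1 - a0 * q0**j
    return p

def P(n, z, aa, bb, cc, dd):
    ee4 = aa*bb*cc*dd
    s = 0.0
    for k in range(n + 1):
        s += (qpoch(q**-n, q, k) * qpoch(aa*z, q, k) * qpoch(aa/z, q, k)
              * qpoch(aa*bb*q**k, q, n-k) * qpoch(aa*cc*q**k, q, n-k)
              * qpoch(aa*dd*q**k, q, n-k) * q**k
              / (qpoch(q, q, k) * qpoch(ee4*q**(n+k-1), q, n-k)))
    return aa**(-n) * s

def Qdag(n, z):  # Q_n^dagger with the parameter and argument shifts above
    return rq**(n-1) * (c-z)*(d-z)/z * P(n-1, z/rq, rq*a, rq*b, rq*c, rq*d)

def Em(n, z):    # E_{-n}
    return (P(n, z, a, b, c, d) - Qdag(n, z)) / (1 - q**(n-1)*c*d)

def Ep(n, z):    # E_n
    den = 1 - q**(2*n-1)*e4
    return (q**n*(1 - q**(n-1)*e4)/den * P(n, z, a, b, c, d)
            + (1 - q**n)/den * Qdag(n, z))

def T1(f):
    return lambda z: (((a+b)*z - (1+a*b)) / (1 - z**2) * f(z)
                      + (1-a*z)*(1-b*z) / (1 - z**2) * f(1/z))

def T0(f):
    return lambda z: (z*((c*d+q)*z - (c+d)*q) / (q*(q - z**2)) * f(z)
                      - (c-z)*(d-z) / (q - z**2) * f(q/z))

def Y(f):
    return lambda z: T1(T0(f))(z)
```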
The occurrence of the $q$-difference equation in the proof of Theorem \[79\] and the occurrence of Askey–Wilson polynomials with shifted parameters as eigenfunctions of $D$ (see –) is probably closely related to the one-variable case of the $q$-difference equations in Rains [@15 Corollary 2.4].
From , and we obtain $$\begin{gathered}
E_{-n}=\frac{ab}{ab-1}\,(P_n-Q_n)\qquad(n=1,2,\ldots),
\label{49}\\
E_n=\frac{(1-q^n ab)(1-q^{n-1}abcd)}{(1-ab)(1-q^{2n-1}abcd)}\,P_n-
\frac{ab(1-q^n)(1-q^{n-1}cd)}{(1-ab)(1-q^{2n-1}abcd)}\,
Q_n\qquad(n=1,2,\ldots).\!\!\!
\label{50}\end{gathered}$$ Next, , and yield $$\begin{gathered}
T_1 E_{-n}=-\frac{1+ab-abcdq^{n-1}-abq^n}{1-abcdq^{2n-1}}\,E_{-n}-ab\,E_n
\qquad(n=1,2,\ldots),
\label{52}\\
T_1 E_n=\frac{(1-q^n)(1-abq^n)(1-cdq^{n-1})(1-abcdq^{n-1})}
{(1-abcdq^{2n-1})^2}\,E_{-n}
\nonumber\\
\phantom{T_1 E_n=}{}
-\frac{abq^{n-1}(cd+q-cdq^n-abcdq^n)}{1-abcdq^{2n-1}}\,E_n\qquad
(n=1,2,\ldots).
\label{53}\end{gathered}$$
A PBW-type theorem for $\boldsymbol{{\tilde{\mathfrak{H}}}}$
============================================================
In this section I will give two other sets of relations for ${\tilde{\mathfrak{H}}}$, both equivalent to – and both of PBW-type form. For the second set of relations we will see that the spanning set of elements of ${\tilde{\mathfrak{H}}}$, as implied by these relations, is indeed a basis. This is done by showing that this set of elements is linearly independent in the basic representation, which also shows that this representation is faithful. The faithfulness of the basic representation was first shown, in the more general $n$ variable setting, by Sahi [@9].
${\tilde{\mathfrak{H}}}$ can equivalently be described as the algebra generated by $T_1$, $T_0$, $Z$, $Z^{-1}$ with relations $ZZ^{-1}=1=Z^{-1}Z$ and $$\begin{gathered}
T_1^2=-(ab+1)T_1-ab,
\label{64}\\
T_0^2=-(q^{-1}cd+1)T_0-q^{-1}cd,
\label{65}\\
T_1Z =Z^{-1}T_1+(ab+1) Z^{-1}-(a+b),
\label{66}\\
T_1Z^{-1}=ZT_1-(ab+1)Z^{-1}+(a+b),
\label{67}\\
T_0Z=qZ^{-1}T_0-(q^{-1}cd+1)Z+(c+d),
\label{68}\\
T_0Z^{-1}=qZT_0+q^{-1}(q^{-1}cd+1)Z-q^{-1}(c+d).
\label{69}\end{gathered}$$ ${\tilde{\mathfrak{H}}}$ is spanned by the elements $Z^mT_0^iY^nT_1^j$, where $m\in{\mathbb{Z}}$, $n=0,1,2,\ldots$, $i,j=0,1$.
, are equivalent to , , and , are equivalent to , . Furthermore, is equivalent to , and is equivalent to . Hence relations are equivalent to relations –.
For the second statement note that – imply that each word in ${\tilde{\mathfrak{H}}}$ can be written as a linear combination of words $Z^mT_0^i(T_1T_0)^nT_1^j$, where $m\in{\mathbb{Z}}$, $n=0,1,2,\ldots$, $i,j=0,1$. Then substitute $Y=T_1T_0$.
\[71\] ${\tilde{\mathfrak{H}}}$ can equivalently be described as the algebra generated by $T_1$, $Y$, $Y^{-1}$, $Z$, $Z^{-1}$ with relations $YY^{-1}=1=Y^{-1}Y$, $ZZ^{-1}=1=Z^{-1}Z$ and $$\begin{gathered}
T_1^2=-(ab+1)T_1-ab,\nonumber\\
T_1Z = Z^{-1}T_1+(ab+1)Z^{-1}-(a+b),\nonumber\\
T_1Z^{-1}= ZT_1-(ab+1)Z^{-1}+(a+b),\nonumber\\
T_1Y= q^{-1}abcd Y^{-1}T_1-(ab+1)Y+ab(1+q^{-1}cd),\nonumber\\
T_1Y^{-1}= q(abcd)^{-1}YT_1+q(abcd)^{-1}(1+ab)Y-q(cd)^{-1}(1+q^{-1}cd),\nonumber\\
YZ= qZY+(1+ab)cd\,Z^{-1}Y^{-1}T_1
-(a+b)cd\,Y^{-1}T_1
-(1+q^{-1}cd)Z^{-1}T_1\nonumber\\
\phantom{YZ=}{}
-(1-q)(1+ab)(1+q^{-1}cd)Z^{-1}
+(c+d)T_1
+(1-q)(a+b)(1+q^{-1}cd),\nonumber\\
YZ^{-1}= q^{-1}Z^{-1}Y
-q^{-2}(1+ab)cd\,Z^{-1}Y^{-1}T_1
+q^{-2}(a+b)cd\,Y^{-1}T_1\nonumber\\
\phantom{YZ^{-1}=}{}
+q^{-1}(1+q^{-1}cd)Z^{-1}T_1
-q^{-1}(c+d)T_1,\nonumber\\
Y^{-1}Z=q^{-1}ZY^{-1}-q(ab)^{-1}(1+ab)Z^{-1}Y^{-1}T_1
+(ab)^{-1}(a+b)Y^{-1}T_1\nonumber\\
\phantom{Y^{-1}Z=}{}
+q(abcd)^{-1}(1+q^{-1}cd)Z^{-1}T_1
+q(abcd)^{-1}(1-q)(1+ab)(1+q^{-1}cd)Z^{-1}\nonumber\\
\phantom{Y^{-1}Z=}{}
-(abcd)^{-1}(c+d)T_1
-(abcd)^{-1}(1-q)(1+ab)(c+d),\nonumber\\
Y^{-1}Z^{-1}= qZ^{-1}Y^{-1}+q(ab)^{-1}(1+ab)Z^{-1}Y^{-1}T_1
-(ab)^{-1}(a+b)Y^{-1}T_1\nonumber\\
\phantom{Y^{-1}Z^{-1}=}{}
-q^2(abcd)^{-1}(1+q^{-1}cd)Z^{-1}T_1
+q(abcd)^{-1}(c+d)T_1.
\label{70}\end{gathered}$$ ${\tilde{\mathfrak{H}}}$ is spanned by the elements $Z^mY^nT_1^i$, where $m,n\in{\mathbb{Z}}$, $i=0,1$.
First we start with relations –. Then , give , . Next put $Y:=T_1T_0$, $Y^{-1}:=T_0^{-1}T_1^{-1}$. Then verify relations from relations –, most conveniently with the aid of a computer algebra package, for instance by using [@13].
Conversely we start with relations . Then the first of these relations gives . Put $T_0:=T_1^{-1}Y$. Then verify relations – from relations , where again computer algebra may be used.
The last statement follows from the PBW-type structure of the relations . Observe that by the first five relations together with the trivial relations, every word in $T_1$, $Y$, $Y^{-1}$, $Z$, $Z^{-1}$ can be written as a linear combination of words with at most one occurrence of $T_1$ in each word and only on the right, and with no substrings $YY^{-1}$, $Y^{-1}Y$, $ZZ^{-1}$, $Z^{-1}Z$, and with no more occurrences of $Y$, $Y^{-1}$, $Z$, $Z^{-1}$ in each word than in the original word. If in one of these terms there are misplacements ($Y$ or $Y^{-1}$ before $Z$ or $Z^{-1}$) then apply one of the last four relations followed by the previous step in order to reduce the number of misplacements.
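Any single relation of the second PBW presentation can be spot-checked pointwise in the basic representation; here, for instance, the cross relation $T_1Y=q^{-1}abcd\,Y^{-1}T_1-(ab+1)Y+ab(1+q^{-1}cd)$ (a Python sketch with my own naming and arbitrary generic test data):

```python
# Numeric spot-check (mine) of the PBW cross relation
#   T1 Y = q^{-1}abcd Y^{-1} T1 - (ab+1) Y + ab(1+q^{-1}cd)
# in the basic representation.

q, a, b, c, d = 0.7, 0.3, -0.45, 1.7, -2.1   # generic test values

def T1(f):
    return lambda z: (((a+b)*z - (1+a*b)) / (1 - z**2) * f(z)
                      + (1-a*z)*(1-b*z) / (1 - z**2) * f(1/z))

def T0(f):
    return lambda z: (z*((c*d+q)*z - (c+d)*q) / (q*(q - z**2)) * f(z)
                      - (c-z)*(d-z) / (q - z**2) * f(q/z))

def T1inv(f):
    return lambda z: -T1(f)(z)/(a*b) - (1 + 1/(a*b))*f(z)

def T0inv(f):
    return lambda z: -q*T0(f)(z)/(c*d) - (1 + q/(c*d))*f(z)

def Y(f):
    return lambda z: T1(T0(f))(z)

def Yinv(f):
    return lambda z: T0inv(T1inv(f))(z)
```

The relation follows from the Hecke relations alone ($T_1Y=T_1^2T_0$ and $Y^{-1}T_1=T_0^{-1}$), so it holds pointwise even on an arbitrary test function.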
The basic representation – of ${\tilde{\mathfrak{H}}}$ is faithful. A basis of ${\tilde{\mathfrak{H}}}$ is provided by the elements $Z^mY^nT_1^i$, where $m,n\in{\mathbb{Z}}$, $i=0,1$.
For $j>0$ we have $$\begin{gathered}
Z^mY^n\,E_{-j}=q^{-jn}\,z^{m-j}+\cdots+{{\rm const}\,}z^{m+j-1},\nonumber\\
Z^mY^nT_1\,E_{-j}={{\rm const}\,}z^{m-j}+\cdots-ab(q^{j-1}abcd)^n\,z^{m+j},\label{72}\\
Z^mY^nT_1^{-1}\,E_{-j}={{\rm const}\,}z^{m-j}+\cdots+(q^{j-1}abcd)^n\,z^{m+j}.\nonumber\end{gathered}$$ This follows from –, and . Suppose that some linear combination $$\begin{gathered}
\sum_{m,n}a_{m,n} Z^mY^n+\sum_{m,n}b_{m,n}Z^mY^nT_1
\label{73}\end{gathered}$$ acts as the zero operator in the basic representation, while not all coefficients $a_{m,n}$, $b_{m,n}$ are zero. Then there is a maximal $r$ for which $a_{r,n}$ or $b_{r,n}$ is nonzero for some $n$. If $b_{r,n}\ne0$ for some $n$ then let the operator act on $E_{-j}$. By we have that for all $j\ge1$ $$\begin{gathered}
\sum_n b_{r,n}(q^{j-1}abcd)^n\,z^{r+j}=0,\qquad
{\rm hence}\qquad \sum_n b_{r,n}(q^{j-1}abcd)^n=0.\end{gathered}$$ By condition the values $w=q^{j-1}abcd$ ($j=1,2,\ldots$) are mutually distinct, so the Laurent polynomial $w\mapsto\sum_n b_{r,n} w^n$ vanishes at infinitely many points and hence identically. Hence $b_{r,n}=0$ for all $n$, which is a contradiction.
So $a_{r,n}\ne0$ for some $n$. Let the operator act on $T_1^{-1}E_{-j}$. By we have that for all $j\ge1$ $$\begin{gathered}
\sum_n a_{r,n}(q^{j-1}abcd)^n\,z^{r+j}=0,\qquad
{\rm hence}\qquad \sum_n a_{r,n}(q^{j-1}abcd)^n=0.\end{gathered}$$ Again we arrive at the contradiction that $a_{r,n}=0$ for all $n$.
The embedding of a central extension of $\boldsymbol{AW(3,Q_0)}$ in $\boldsymbol{{\tilde{\mathfrak{H}}}}$
=========================================================================================================
Let us now examine whether the representation of $AW(3)$ on ${{{\cal A}}_{\rm sym}}$ extends to a representation on ${{\cal A}}$ if we let $K_0$ act as $D$ instead of ${D_{\rm sym}}$. It will turn out that this is only true for certain specializations of $a$, $b$, $c$, $d$, but that a suitable central extension ${\widetilde{AW}(3)}$ of $AW(3)$ involving $T_1$ will realize what we desire.
${\widetilde{AW}(3)}$ is the algebra generated by $K_0$, $K_1$, $T_1$ with relations $$\begin{gathered}
\label{43}
T_1K_0=K_0T_1,\qquad
T_1K_1=K_1T_1,\qquad
(T_1+ab)(T_1+1)=0,\\
(q+q^{-1})K_1K_0K_1-K_1^2K_0-K_0K_1^2\nonumber\\
\qquad{}{}=B\,K_1+ C_0\,K_0+D_0+E\,K_1(T_1+ab)
+F_0(T_1+ab),\label{44}\\
(q+q^{-1})K_0K_1K_0-K_0^2K_1-K_1K_0^2\nonumber\\
\qquad{}=B\,K_0+C_1\,K_1+D_1+E\,K_0(T_1+ab)
+F_1(T_1+ab),\label{45}\end{gathered}$$ where the structure constants are given by together with $$\begin{gathered}
E:=-q^{-2}(1-q)^3(c+d),\nonumber\\
F_0:=q^{-3}(1-q)^3(1+q)(cd+q),\label{46}\\
F_1:=q^{-3}(1-q)^3(1+q)(a+b)cd.\nonumber\end{gathered}$$
It can be shown that the following adaptation of is a Casimir operator for ${\widetilde{AW}(3)}$, commuting with $K_0$, $K_1$, $T_1$: $$\begin{gathered}
{\widetilde Q}:=(K_1K_0)^2-(q^2+1+q^{-2})K_0(K_1K_0)K_1+(q+q^{-1})K_0^2K_1^2
\nonumber\\
\phantom{{\widetilde Q}:=}{}+(q+q^{-1})(C_0K_0^2+C_1K_1^2)+\bigl(B+E(T_1+ab)\bigr)\bigl((q+1+q^{-1})K_0K_1+K_1K_0\bigr)
\nonumber\\
\phantom{{\widetilde Q}:=}{}+(q+1+q^{-1})\bigl(D_0+F_0(T_1+ab)\bigr)K_0+(q+1+q^{-1})\bigl(D_1+F_1(T_1+ab)\bigr)K_1\nonumber\\
\phantom{{\widetilde Q}:=}{}+G(T_1+ab),
\label{87}\end{gathered}$$ where $$\begin{gathered}
G:=-q^{-4}(1-q)^3\Bigl((a+b)(c+d)\bigl(cd(q^2+1)+q\bigr)
-q(ab+1)\bigl((c^2+d^2)(q+1)-cd\bigr)\nonumber\\
\phantom{G:=}{}+(cd+e_4)(q^2+1)+(e_2+e_4-ab)q^3\Bigr).
\label{90}\end{gathered}$$ Let ${\widetilde{AW}(3,Q_0)}$ be the algebra generated by $K_0$, $K_1$, $T_1$ with relations – and additional relation $$\begin{gathered}
{\widetilde Q}=Q_0,
\label{91}\end{gathered}$$ where ${\widetilde Q}$ is given by and $Q_0$ by .
There is a representation of the algebra ${\widetilde{AW}(3,Q_0)}$ on the space ${{\cal A}}$ of Laurent polynomials $f[z]$ such that $K_0$ acts as $D$, $K_1$ acts by multiplication by $z+z^{-1}$, and the action of $T_1$ is given by . This representation is faithful.
It follows by straightforward computation, possibly using computer algebra, that this is a representation of ${\widetilde{AW}(3,Q_0)}$. In the same way as for Lemma \[55\] it can be shown that ${\widetilde{AW}(3,Q_0)}$ is spanned by the elements $$\begin{gathered}
K_0^n(K_1K_0)^iK_1^mT_1^j\qquad(m,n=0,1,2,\ldots,\ \ i,j=0,1).
\label{74}\end{gathered}$$ Now we will prove that the representation is faithful. Suppose that for certain coefficients $a_{k,l}$, $b_{k,l}$, $c_{k,l}$, $d_{k,l}$ we have $$\begin{gathered}
\sum_{k,l}a_{k,l}\,D^l\,(Z+Z^{-1})^k+
\sum_{k,l}b_{k,l}\,D^{l-1}\,(Z+Z^{-1})\,D\,(Z+Z^{-1})^{k-1}\nonumber\\
+\Bigg(\sum_{k,l}c_{k,l}\,D^l\,(Z+Z^{-1})^k+
\sum_{k,l}d_{k,l}\,D^{l-1}\,(Z+Z^{-1})\,D\,(Z+Z^{-1})^{k-1}\Bigg)(T_1+ab)=0
\label{75}\end{gathered}$$ while acting on ${{\cal A}}$. Then, since $T_1P_j=-ab P_j$ (see ), we have for all $j\ge0$ that $$\begin{gathered}
\sum_{k,l}a_{k,l}\,{D_{\rm sym}}^l\,(Z+Z^{-1})^k\,P_j[z]+
\sum_{k,l}b_{k,l}\,{D_{\rm sym}}^{l-1}\,(Z+Z^{-1})\,D\,(Z+Z^{-1})^{k-1}\,P_j[z]=0.\end{gathered}$$ Then by the proof of Theorem \[76\] it follows that all coefficients $a_{k,l}$, $b_{k,l}$ vanish.
It follows from and that $(T_1+ab)E_{-n}=-ab Q_n$ (also if $ab=1$). Hence, if we let , with vanishing $a_{k,l}$, $b_{k,l}$, act on $E_{-j}[z]$, and divide by $-ab$, then: $$\begin{gathered}
\Bigg(\sum_{k,l}c_{k,l}\,D^l\,(Z+Z^{-1})^k+
\sum_{k,l}d_{k,l}\,D^{l-1}\,(Z+Z^{-1})\,D\,(Z+Z^{-1})^{k-1}\Bigg)Q_j[z]=0.\end{gathered}$$ From we see that the three-term recurrence relation for $P_n[z]$ has an analogue for $Q_n[z]$: $$\begin{gathered}
(z+z^{-1})Q_n[z]=Q_{n+1}[z]+\tilde{\beta}_n Q_n[z]
+\tilde{\gamma}_n Q_{n-1}[z]\qquad(n\ge2),\end{gathered}$$ where $\tilde{\beta}_n$ and $\tilde{\gamma}_n$ are obtained from the corresponding ${\beta}_n$ and ${\gamma}_n$ ( and ) by replacing $a$, $b$, $n$ by $qa$, $qb$, $n-1$, respectively. Hence remains valid if we replace each $P$ by $Q$. Again, similarly as in the proof of Theorem \[76\], it follows that all coefficients $c_{k,l}$, $d_{k,l}$ vanish.
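The representation in the lemma above can be probed with a quick numerical sanity check (not part of the proof). In the standard normalization from the literature, the Askey–Wilson operator $D$ satisfies $DP_n=(q^{-n}+abcd\,q^{n-1})P_n$; the explicit $q$-difference formula for $D$ and the coefficient $A(z)$ written out below, as well as the sample parameter values, are assumptions of this sketch, not quotations from the paper.

```python
# Sanity check of the Askey-Wilson operator D in the normalization
# D P_n = (q^{-n} + abcd q^{n-1}) P_n.  The formula for D below is the
# standard one from the literature (an assumption here, not a quotation).

def A(z, q, a, b, c, d):
    """Coefficient A(z) of the Askey-Wilson second-order q-difference operator."""
    return ((1 - a*z)*(1 - b*z)*(1 - c*z)*(1 - d*z)) / ((1 - z**2)*(1 - q*z**2))

def D(f, z, q, a, b, c, d):
    """(Df)[z] = A(z)(f[qz]-f[z]) + A(1/z)(f[z/q]-f[z]) + (1 + q^{-1}abcd) f[z]."""
    return (A(z, q, a, b, c, d)*(f(q*z) - f(z))
            + A(1/z, q, a, b, c, d)*(f(z/q) - f(z))
            + (1 + a*b*c*d/q)*f(z))

q, a, b, c, d = 0.5, 0.1, 0.2, 0.3, 0.4
f1 = lambda z: z + 1/z          # symmetric Laurent polynomial of degree 1

# For a = b = c = d = 0 one has P_1[z] = z + z^{-1} with eigenvalue q^{-1}:
print(D(f1, 2.0, q, 0, 0, 0, 0))        # ~5.0 = q^{-1} * f1(2.0)

# For general parameters D preserves degree, so D f1 - lambda_1 f1 must be
# a constant mu independent of z:
lam1 = 1/q + a*b*c*d
mu = lambda z: D(f1, z, q, a, b, c, d) - lam1*f1(z)
print(mu(2.0), mu(3.0))                 # equal up to rounding
```

Since $D$ is triangular on symmetric Laurent polynomials, $z+z^{-1}+\mu/(\lambda_1-\lambda_0)$ is then the monic degree-one eigenfunction for these parameter values.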
\[77\] The algebra ${\widetilde{AW}(3,Q_0)}$ can be isomorphically embedded into ${\tilde{\mathfrak{H}}}$ by the mapping $$\begin{gathered}
K_0\mapsto Y+q^{-1}abcd Y^{-1},\qquad
K_1\mapsto Z+Z^{-1},\qquad
T_1\mapsto T_1.
\label{78}\end{gathered}$$
The embedding is valid for ${\widetilde{AW}(3,Q_0)}$ and ${\tilde{\mathfrak{H}}}$ acting on ${{\cal A}}$. Now use the faithfulness of the representations of ${\widetilde{AW}(3,Q_0)}$ and ${\tilde{\mathfrak{H}}}$ on ${{\cal A}}$.
By Corollary \[77\] the relations – and are valid identities in ${\tilde{\mathfrak{H}}}$ after substitution by . These identities can also be immediately verified within ${\tilde{\mathfrak{H}}}$, for instance by using the package [@13].
If $a$, $b$, $c$, $d$ are such that $E,F_0,F_1=0$ in then we have already a homomorphism of the original algebra $AW(3)$ into ${\tilde{\mathfrak{H}}}$ under the substitutions $K_0:=D$, $K_1:=Z+Z^{-1}$ in , . This is the case iff $c=-d=q^{\frac12}$ (or $-q^{\frac12}$) and $a=-b$. For these parameters the Askey–Wilson polynomials become the continuous $q$-ultraspherical polynomials (see [@2 (7.5.25), (7.5.34)]): $$\begin{gathered}
P_n\big[z;a,-a,q^{\frac12},-q^{\frac12}\mid q\big]={{\rm const}\,}C_n\big({\tfrac12}(z+z^{-1});a^2\mid q^2\big).\end{gathered}$$ However, for these specializations of $a$, $b$, $c$, $d$ we see from and that ${\widetilde Q}$ still slightly differs from $Q$: it is obtained from $Q$ by adding the term $(q^{-1}-q)^3(1-a^2)(T_1-a^2)$. So ${\widetilde{AW}(3,Q_0)}$ then still differs from $AW(3,Q_0)$.
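The continuous $q$-ultraspherical polynomials appearing here can be checked numerically against their standard three-term recurrence. Both Rogers' explicit sum and the recurrence used below are standard facts taken from the literature (e.g. [@2]), not derived in this paper; the sketch only tests their mutual consistency at arbitrary sample values.

```python
# Consistency check: Rogers' explicit sum for C_n(cos(theta); beta | q)
# versus the standard three-term recurrence
#   2x(1 - beta q^m) C_m = (1 - q^{m+1}) C_{m+1} + (1 - beta^2 q^{m-1}) C_{m-1}.
import cmath, math

def qpoch(t, q, n):
    """q-shifted factorial (t; q)_n."""
    p = 1.0
    for k in range(n):
        p *= 1 - t*q**k
    return p

def C_sum(n, theta, beta, q):
    """Rogers' sum: sum_k (beta;q)_k (beta;q)_{n-k} / ((q;q)_k (q;q)_{n-k}) e^{i(n-2k)theta}."""
    return sum(qpoch(beta, q, k)*qpoch(beta, q, n - k)
               / (qpoch(q, q, k)*qpoch(q, q, n - k))
               * cmath.exp(1j*(n - 2*k)*theta)
               for k in range(n + 1)).real

def C_rec(n, theta, beta, q):
    """C_n computed from the three-term recurrence with C_{-1} = 0, C_0 = 1."""
    x = math.cos(theta)
    prev, cur = 0.0, 1.0
    for m in range(n):
        prev, cur = cur, (2*x*(1 - beta*q**m)*cur
                          - (1 - beta**2*q**(m - 1))*prev) / (1 - q**(m + 1))
    return cur

q, beta, theta = 0.3, 0.6, 0.8
print([round(C_sum(n, theta, beta, q) - C_rec(n, theta, beta, q), 12) for n in range(6)])
```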
For such $a$, $b$, $c$, $d$ the operator $T_0$ acting on ${{\cal A}}$ (formula ) simplifies to $(T_0f)[z]=f[qz^{-1}]$. We then have the specialization of parameters in ${\tilde{\mathfrak{H}}}$ to the one-parameter double affine Hecke algebra of type $A_1$ (see [@5 § 6.1–6.3]). Explicit formulas for the non-symmetric $q$-ultraspherical polynomials become much nicer than in the general four-parameter Askey–Wilson case, see [@5 (6.2.7), (6.2.8)].
Acknowledgements {#acknowledgements .unnumbered}
----------------
I am very much indebted to an anonymous referee who pointed out errors in the proof of an earlier version of Theorem \[76\]. I also thank him for suggesting a further simplification in my proof of the corrected theorem. I thank Siddhartha Sahi for making available to me a draft of his paper [@12] in an early stage. I also thank Jasper Stokman for helpful comments.
[99]{}
Askey R., Wilson J., Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials, [*Mem. Amer. Math. Soc.*]{} (1985), no. 319.

Cherednik I., Double affine Hecke algebras, Knizhnik–Zamolodchikov equations, and Macdonald’s operators, [*Int. Math. Res. Not.*]{} (1992), no. 9, 171–180.

Gasper G., Rahman M., Basic hypergeometric series, 2nd ed., Cambridge University Press, 2004.

Grünbaum F.A., Haine L., On a $q$-analogue of the string equation and a generalization of the classical orthogonal polynomials, in Algebraic Methods and $q$-Special Functions, Editors J.F. van Diejen and L. Vinet, [*CRM Proc. Lecture Notes*]{}, Vol. 22, Amer. Math. Soc., 1999, 171–181.

Koekoek R., Swarttouw R.F., The Askey-scheme of hypergeometric orthogonal polynomials and its $q$-analogue, Report 98-17, Faculty of Technical Mathematics and Informatics, Delft University of Technology, 1998, <http://aw.twi.tudelft.nl/~koekoek/askey/>.

Koornwinder T.H., The structure relation for Askey–Wilson polynomials, [*J. Comput. Appl. Math.*]{} (2007), article in press, doi: [10.1016/j.cam.2006.10.015](http://dx.doi.org/10.1016/j.cam.2006.10.015), [math.CA/0601303](http://arxiv.org/abs/math.CA/0601303).

Macdonald I.G., Affine Hecke algebras and orthogonal polynomials, Cambridge University Press, 2003.

NCAlgebra: a “Non Commutative Algebra” package running under [*Mathematica*]{}${}^{\mbox{\footnotesize\textregistered}}$, <http://www.math.ucsd.edu/~ncalg/>.

Noumi M., Stokman J.V., Askey–Wilson polynomials: an affine Hecke algebraic approach, in Laredo Lectures on Orthogonal Polynomials and Special Functions, Nova Sci. Publ., Hauppauge, NY, 2004, 111–144, [math.QA/0001033](http://arxiv.org/abs/math.QA/0001033).

Rains E.M., A difference integral representation of Koornwinder polynomials, in Jack, Hall–Littlewood and Macdonald Polynomials, [*Contemp. Math.*]{} [**417**]{} (2006), 319–333, [math.CA/0409437](http://arxiv.org/abs/math.CA/0409437).

Sahi S., Nonsymmetric Koornwinder polynomials and duality, [*Ann. of Math. (2)*]{} [**150**]{} (1999), 267–282, [q-alg/9710032](http://arxiv.org/abs/q-alg/9710032).

Sahi S., Some properties of Koornwinder polynomials, in $q$-Series from a Contemporary Perspective, [*Contemp. Math.*]{} [**254**]{} (2000), 395–411.

Sahi S., Raising and lowering operators for Askey–Wilson polynomials, [*SIGMA*]{} [**3**]{} (2007), 002, 11 pages, [math.QA/0701134](http://arxiv.org/abs/math.QA/0701134).

Stokman J.V., Koornwinder polynomials and affine Hecke algebras, [*Int. Math. Res. Not.*]{} (2000), no. 19, 1005–1042, [math.QA/0002090](http://arxiv.org/abs/math.QA/0002090).

Terwilliger P., Vidunas R., Leonard pairs and the Askey–Wilson relations, [*J. Algebra Appl.*]{} [**3**]{} (2004), 411–426, [math.QA/0305356](http://arxiv.org/abs/math.QA/0305356).

Zhedanov A.S., “Hidden symmetry” of Askey–Wilson polynomials, [*Theoret. and Math. Phys.*]{} [**89**]{} (1991), 1146–1157.
---
abstract: |
In this paper, we study the space of translational limits ${\cal
T}(M)$ of a surface $M$ properly embedded in ${{\mbox{\bb R}}^3}$ with nonzero constant mean curvature and bounded second fundamental form. There is a natural map ${\cal T}$ which assigns to any surface $\Sigma \in
{\cal T}(M)$, the set ${\cal T}(\Sigma)\subset {\cal T}(M)$. Among various dynamics type results we prove that surfaces in minimal ${\cal T}$-invariant sets of ${\cal T}(M)$ are chord-arc. We also show that if $M$ has an infinite number of ends, then there exists a nonempty minimal ${\cal T}$-invariant set in ${\cal T}(M)$ consisting entirely of surfaces with planes of Alexandrov symmetry. Finally, when $M$ has a plane of Alexandrov symmetry, we prove the following characterization theorem: $M$ has finite topology if and only if $M$ has a finite number of ends greater than one.
[*Mathematics Subject Classification:*]{} Primary 53A10, Secondary 49Q05, 53C42
[*Key words and phrases:*]{} Minimal surface, constant mean curvature, homogeneous space, Delaunay surface, minimal invariant set, chord-arc.
bibliography:
- 'bill.bib'
nocite: '[@kk2]'
---
[The Dynamics Theorem for $CMC$ surfaces in $R^3$]{}
[William H. Meeks, III[^1] Giuseppe Tinaglia]{}
Introduction.
=============
A general problem in classical surface theory is to describe the asymptotic geometric structure of a connected, noncompact, properly embedded, nonzero constant mean curvature ($CMC$) surface $M$ in ${{\mbox{\bb R}}^3}$. In this paper, we will show that when $M$ has bounded second fundamental form, for any divergent sequence of points $p_n\in M$, a subsequence of the translated surfaces $M-p_n$ converges to a properly immersed surface of the same constant mean curvature which bounds a smooth open subdomain on its mean convex side. The collection ${\mbox{\bb T}}(M)$ of all these limit surfaces sheds light on the geometry of $M$ at infinity.
We will focus our attention on the subset ${\cal T}(M)\subset
{\mbox{\bb T}}(M)$ consisting of the connected components of surfaces in ${\mbox{\bb T}}(M)$ which pass through the origin in ${{\mbox{\bb R}}^3}$. Given a surface $\S\in {\cal T}(M)$, we will prove that ${\cal T}(\S)$ is always a subset of ${\cal T}(M)$. In particular, we can consider ${\cal T}$ to represent a function: $${\cal T}\colon {\cal T}(M) \to {\cal P}({\cal T}(M)),$$ where ${\cal P}({\cal T}(M))$ denotes the power set of ${\cal
T}(M)$. Using the fact that ${\cal T}(M)$ has a natural compact metric space topology, we obtain classical dynamics type results on ${\cal T}(M)$ with respect to the mapping ${\cal T}$. These dynamics results include the existence of nonempty minimal ${\cal
T}$-invariant sets in ${{\cal T}(M)}$ and are described in Theorem \[T\], which we refer to as the $CMC$ Dynamics Theorem in ${{\mbox{\bb R}}^3}$, or more simply as just the Dynamics Theorem.
Assume $M\subset {{\mbox{\bb R}}^3}$ is a connected, noncompact, properly embedded $CMC$ surface with bounded second fundamental form. In section 3, we demonstrate various properties of the minimal ${\cal T}$-invariant sets in ${{\cal T}(M)}$. For example, we prove:
> [*Surfaces in minimal ${\cal T}$-invariant sets in ${{\cal T}(M)}$ are chord-arc.*]{}
> [*If $M$ has an infinite number of ends, then ${{\cal T}(M)}$ contains a minimal ${\cal T}$-invariant set in which every element has a plane of Alexandrov symmetry.*]{}
> [*If $M$ has finite genus, then any element in a minimal ${\cal T}$-invariant set is a Delaunay surface[^2].*]{}
In the special case that $M$ has finite topology[^3], this last result follows from the main theorem in [@kks1]; however, the full generality of this result is needed in the applications in [@mt1; @mt2].
In section \[sc4\], we deal with $CMC$ surfaces with a plane of Alexandrov symmetry. In particular we obtain the following characterization result:
> [*If $M$ is a complete, connected, noncompact embedded $CMC$ surface with a plane of Alexandrov symmetry and bounded second fundamental form, then $M$ has finite topology if and only if it has a finite number of ends greater than one.*]{}
The collection of properly embedded $CMC$ surfaces with bounded second fundamental form is quite large and varied (see [@gb1; @kap1; @ka5; @la3; @map; @mpp1]). Many of these examples appear as doubly and singly-periodic surfaces. The techniques of Kapouleas [@kap1] and Mazzeo-Pacard [@map] can be applied to obtain many nonperiodic examples of finite and infinite topology. Some theoretical aspects of the study of these special surfaces have been developed previously in works of Meeks [@me17], Korevaar-Kusner-Solomon [@kks1] and Korevaar-Kusner [@kk2]; results from all three of these key papers are applied here. More generally, the broader theory of properly embedded $CMC$ surfaces in homogeneous three-manifolds is an active field of research with many interesting recent results [@dh1; @fm1; @hars1]. In [@mt5], we will generalize the ideas contained in this paper to obtain related theoretical results for properly embedded separating $CMC$ hypersurfaces of bounded second fundamental form in homogeneous $n$-manifolds.
In subsequent papers, [@mt1; @mt2], we apply the results contained in this manuscript. In [@mt2], we prove that the existence of a Delaunay surface in ${{\cal T}(M)}$ implies $M$ does not admit any other noncongruent isometric immersion into ${{\mbox{\bb R}}^3}$ with the same constant mean curvature (see also [@ku2; @smyt1]). In [@mt1], we show that [*any complete, embedded, noncompact, simply-connected $CMC$ surface $M$ in a fixed homogeneous three-manifold $N$ has the appearance of a suitably scaled helicoid near any point of $M$ where the second fundamental form is sufficiently large*]{} (see [@tin1] for a related result).

[Acknowledgements:]{} We thank Rob Kusner and Joaquin Perez for their helpful comments on the results and proofs contained in this paper. We also thank Joaquin Perez for making the figures that appear here.
The Dynamics Theorem for $CMC$ surfaces of bounded curvature. {#improved}
=============================================================
In this section, motivated by previous work of Meeks, Perez and Ros in [@mpr10], we prove a dynamics type result for the space ${\cal T}(M)$ of certain translational limit surfaces of a properly embedded, $CMC$ surface $M\subset {{\mbox{\bb R}}^3}$ with bounded second fundamental form. All of these limit surfaces satisfy the almost-embedded property described in the next definition.
\[def\] [Suppose $W$ is a complete flat three-manifold with boundary $\partial W=\S$ together with an isometric immersion $f\colon W \to {{\mbox{\bb R}}^3}$ such that $f$ restricted to the interior of $W$ is injective. This being the case, if $f(\S)$ is a $CMC$ surface and $f(W)$ lies on the mean convex side of $f(\S)$, we call the image surface $f(\S)$ a [*strongly Alexandrov embedded $CMC$ surface*]{}.]{}
We note that, by elementary separation properties, any properly embedded $CMC$ surface in ${{\mbox{\bb R}}^3}$ is always strongly Alexandrov embedded. Furthermore, by item [*1*]{} of Theorem \[T\] below, any strongly Alexandrov embedded $CMC$ surface in ${{\mbox{\bb R}}^3}$ with bounded second fundamental form is properly immersed in ${{\mbox{\bb R}}^3}$.
Recall that the only compact Alexandrov embedded[^4] $CMC$ surfaces in ${{\mbox{\bb R}}^3}$ are spheres by the classical result of Alexandrov [@aa1]. Hence, from this point on, we will only consider surfaces $M$ which are noncompact and connected.
Suppose $M\subset {{\mbox{\bb R}}^3}$ is a connected, noncompact, strongly Alexandrov embedded $CMC$ surface with bounded second fundamental form.
1. ${\cal T}(M)$ is the set of all connected, strongly Alexandrov embedded $CMC$ surfaces $\S \subset {{\mbox{\bb R}}^3}$, which are obtained in the following way.
There exists a sequence of points $p_n\in M$, $\lim_{n\to
\infty}|p_n|=\infty$, such that the translated surfaces $M-p_n$ converge $C^2$ on compact sets of ${{\mbox{\bb R}}^3}$ to a strongly Alexandrov embedded $CMC$ surface $\Sigma'$, and $\Sigma$ is a connected component of $\Sigma'$ passing through the origin. Actually we consider the immersed surfaces in ${\cal T}(M)$ to be [*pointed*]{} in the sense that if such a surface is not embedded at the origin, then we consider the surface to represent two different elements in ${\cal T}(M)$ depending on a choice of one of the two preimages of the origin.
2. $\Delta \subset {\cal T}(M)$ is called [*${\cal T}$-invariant*]{}, if $\S\in\Delta$ implies ${\cal T}(\S)\subset \Delta$.
3. A nonempty subset $\Delta\subset {\cal T}(M)$ is called a [*minimal*]{} ${\cal T}$-invariant set, if it is ${\cal T}$-invariant and contains no smaller nonempty ${\cal
T}$-invariant sets.
4. If $\S \in {\cal T}(M)$ and $\S$ lies in a minimal ${\cal T}$-invariant set of ${\cal T}(M)$, then $\S$ is called a [*minimal element*]{} of ${\cal T}(M)$.
Throughout the remainder of this paper, ${\mbox{\bb B}}(p,R)$ denotes the open ball in ${{\mbox{\bb R}}^3}$ of radius $R$ centered at the point $p$ and ${\mbox{\bb B}}(R)$ denotes the open ball of radius $R$ centered at the origin in ${{\mbox{\bb R}}^3}$. Furthermore, we will always orient surfaces so that their mean curvature $H$ is positive.
With these definitions in hand, we now state our Dynamics Theorem.
\[T\] Let $M\subset {{\mbox{\bb R}}^3}$ be a connected, noncompact, strongly Alexandrov embedded $CMC$ surface with bounded second fundamental form. Let $W$ be the associated complete flat three-manifold on the mean convex side of $M$. Then the following statements hold:
1. \[n1\] $M$ is properly immersed in ${{\mbox{\bb R}}^3}$.
2. \[n2\] There exist positive constants $c_1,c_2$ depending only on the mean curvature of $M$ and on an upper bound for the norm of its second fundamental form, such that for any $p\in M$ and $R\geq 1$, $$\label{eq4}c_1\leq \frac{{\rm
Area}(M\cap{\mbox{\bb B}}(p,R))}{{\rm Volume}(W\cap {\mbox{\bb B}}(p,R))}\leq
c_2.$$ In particular, for $R\geq 1$, $\mbox{\rm
Area}(M\cap{\mbox{\bb B}}(R))\leq \frac{4\pi c_2}{3} R^3$. Furthermore, $M$ has a regular neighborhood of radius $\ve$ in $W$, where $\ve>0$ only depends on the mean curvature of $M$ and on an upper bound for the norm of its second fundamental form.
3. \[n3\] $W$ is a handlebody[^5] and every point in $W$ is a distance of less than $\frac1H$ from $\partial W$, where $H$ is the mean curvature of $M$.
4. \[n4\] ${\cal T}(M)$ is nonempty and ${\cal T}$-invariant.
5. \[n5\] ${\cal T}(M)$ has a natural compact topological space structure given by a metric $d_{{\cal T}(M)}$. The metric $d_{{{\cal T}(M)}}$ is induced by the Hausdorff distance between compact subsets of ${{\mbox{\bb R}}^3}$.
6. \[n6\] If $M$ is an element of ${\cal T}(M)$, then ${\cal T}(M)$ is a connected space. In particular, if $M$ is invariant under a translation, then ${\cal T}(M)$ is connected.
7. \[n7\] A nonempty set $\Delta \subset {\cal T}(M)$ is a minimal ${\cal T}$-invariant set if and only if whenever $\S \in \Delta$, then ${\cal T}(\S)=\Delta$.
8. \[n8\] Every nonempty ${\cal T}$-invariant set of ${\cal T}(M)$ contains a nonempty minimal ${\cal
T}$-invariant set. In particular, since ${\cal T}(M)$ is itself a nonempty ${\cal T}$-invariant set, ${\cal T}(M)$ always contains nonempty minimal invariant sets.
9. \[n9\] Any minimal ${\cal T}$-invariant set in ${\cal T}(M)$ is a compact connected subspace of ${\cal T}(M)$.
For the proofs of items [*\[n1\]*]{} and [*\[n2\]*]{} see Corollary 5.2 in [@mt3] or see [@mr7]. The key idea in the proof of Corollary 5.2 is to show that the immersed surface $M$ has a fixed size regular neighborhood on its mean convex side.
We now prove item [*\[n3\]*]{}. The proof that $W$ is a handlebody is based on topological techniques used previously to study the topology of a complete, orientable flat three-manifold $X$ with minimal surfaces as boundary. These techniques were first developed by Frohman and Meeks [@fme1] and later generalized by Freedman [@fre1]. An important consequence of the results and theory developed in these papers is that if $\partial X$ is mean convex, $X$ is not a handlebody, and $X$ is not a Riemannian product of a flat surface with an interval, then $X$ contains an orientable, noncompact, embedded, stable minimal surface $\S$ with compact boundary. Suppose now that $M\subset {{\mbox{\bb R}}^3}$ is a strongly Alexandrov embedded $CMC$ surface with associated domain $W$ on its mean convex side. Since $M$ is not totally geodesic, $W$ cannot be a Riemannian product of a flat surface with an interval. Therefore, if $W$ is not a handlebody, there exists an orientable, noncompact, embedded stable minimal surface $\S\subset W$ with compact boundary. Since $\S$ is orientable and stable, a result of Fisher-Colbrie [@fi1] implies $\S$ has finite total curvature. It is well known that such a $\S$ has an end $E$ which is asymptotic to an end of a catenoid or a plane [@sc1]. We will obtain a contradiction when $E$ is a catenoidal type end; the case where $E$ is a planar type end can be treated in the same manner. After a rotation of $M$, assume that the catenoid to which $E$ is asymptotic is vertical and $E$ is a graph over the complement of a disk in the $(x_1,x_2)$-plane; assume the disk is ${\mbox{\bb B}}(R)\cap \{x_3=0\}$ for some large $R$. Let $S^2$ be a sphere in ${{\mbox{\bb R}}^3}$ with mean curvature equal to the mean curvature of $M$, which lies below $E$ and which is disjoint from the solid cylinder $\{(x_1,x_2,x_3)\mid x_1^2+x_2^2\leq R^2\}$. 
By vertically translating $S^2$ upward across the $(x_1,x_2)$-plane and applying the maximum principle for $CMC$ surfaces, we find that as $S^2$ translates across $E$, the portions of the translated sphere that lie above $E$ do not intersect $M=\partial W$. Thus, some vertical translate ${\widehat}{S}^2$ of $S^2$ lies inside $W$. Next translate ${\widehat}{S}^2$ inside $W$ so that it touches $\partial W$ a first time. The usual application of the maximum principle for $CMC$ surfaces implies that $M$ is a sphere, which is not possible since $M$ is not compact.
Note that if some point $p\in W$ had distance at least $\frac1H$ from $\partial W$, then $\partial {\mbox{\bb B}}(p,\frac1H)$ would be a sphere of mean curvature $H$ contained in $W$. The arguments in the previous paragraph show that no such sphere can exist, and this contradiction completes the proof of item [*\[n3\]*]{}.
The uniform local area estimates for $M$ given in item [*\[n2\]*]{} and the assumed bound on the second fundamental form of $M$, together with standard compactness arguments, imply that for any divergent sequence of points $\{p_n\}_n$ in $M$, a subsequence of the translated surfaces $M-p_n$ converges on compact sets of ${{\mbox{\bb R}}^3}$ to a strongly Alexandrov embedded $CMC$ surface ${\mbox{\bb M}}_{\infty}$ in ${{\mbox{\bb R}}^3}$. The component $M_{\infty}$ of ${\mbox{\bb M}}_{\infty}$ passing through the origin is a surface in ${\cal T}(M)$ (if $M_{\infty}$ is not embedded at the origin, then one obtains two elements in ${\cal
T}(M)$ depending on a choice of one of the two pointed components). Hence, ${{\cal T}(M)}$ is nonempty.
Let $\S\in {{\cal T}(M)}$ and $\S'\in {\cal T}(\S)$. By definition of ${{\cal T}(\S)}$, any compact domain of $\S'$ can be approximated arbitrarily well by translations of compact domains “at infinity” in $\S$. In turn, by definition of ${{\cal T}(M)}$, these compact domains “at infinity” in $\S$ can be approximated arbitrarily well by translated compact domains “at infinity” on $M$. Hence, a standard diagonal argument implies that $\S'\in{{\cal T}(M)}$. Thus, ${{\cal T}(M)}$ is ${\cal T}$-invariant, which proves item [*\[n4\]*]{}.
Suppose now that $\S \in {\cal T}(M)$ is embedded at the origin. In this case, there exists an $\ve>0$ depending only on the bound of the second fundamental form of $M$, so that there exists a disk $D(\S)\subset \S\cap \overline{{\mbox{\bb B}}}(\ve)$ with $\partial
D(\S)\subset\partial\overline{{\mbox{\bb B}}}(\ve)$, $\vec{0}=(0,0,0) \in D(\S)$ and such that $D(\S)$ is a graph with gradient at most 1 over its projection to the tangent plane $T_{\vec{0}}D(\S)\subset {{\mbox{\bb R}}^3}$. Given another such $\S'\in {\cal T}(M)$, define $$d_{{\cal T}(M)}(\S,\S')=d_{\cal H}(D(\S),D(\S')),$$ where $d_{\cal H}$ is the Hausdorff distance. If $\vec{0}$ is not a point where $\S$ is embedded, then since we consider $\S$ to represent one of two different pointed surfaces in ${\cal T}(M)$, we choose $D(\S)$ to be the disk in $\S\cap{\mbox{\bb B}}(\ve)$ containing the chosen base point. With this modification, the above metric is well-defined on ${\cal T}(M)$.
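The Hausdorff distance $d_{\cal H}$ used above can be made concrete on finite point samples; the following sketch (our illustration, with arbitrary point sets, not data from the paper) computes $d_{\cal H}$ for finite subsets of ${{\mbox{\bb R}}^3}$.

```python
# Hausdorff distance between finite point sets in R^3, illustrating the
# metric d_H used to define d_{T(M)}; the sample sets are arbitrary.
import math

def hausdorff(A, B):
    """d_H(A,B) = max( sup_{a in A} inf_{b in B} |a-b|, sup_{b in B} inf_{a in A} |a-b| )."""
    d_ab = max(min(math.dist(a, b) for b in B) for a in A)
    d_ba = max(min(math.dist(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)

A = [(0.0, 0.0, 0.0)]
B = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0)]
print(hausdorff(A, B))    # 5.0: the extra point of B is at distance 5 from A
```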
Using the fact that the surfaces in ${\cal T}(M)$ have uniform local area and curvature estimates (see item [*\[n2\]*]{}), we will now prove ${\cal T}(M)$ is sequentially compact and hence compact. Let $\{\S_n\}_n$ be a sequence of surfaces in ${\cal T}(M)$ and let $\{D(\S_n)\}_n$ be the related sequence of graphical disks defined in the previous paragraph. A standard compactness argument implies that a subsequence, $\{D(\S_{n_i})\}_{n_i}$ of these disks converges to a graphical $CMC$ disk $D_\infty$. Using item [*\[n2\]*]{}, it is straightforward to show that $D_\infty$ lies on a complete, strongly Alexandrov embedded surface $\S_\infty$ with the same constant mean curvature as $M$. Furthermore, $\S_\infty$ is a limit of compact domains $\Delta_{n_i}\subset \S_{n_i}$. In turn, the $\Delta_{n_i}$’s are limits of translations of compact domains in $M$, where the translations diverge to infinity. Hence, $\S_\infty$ is in ${\cal T}(M)$ and by definition of $d_{{\cal T}(M)}$, a subsequence of $\{\S_n\}_n$ converges to $\S_\infty$. Thus, ${\cal
T}(M)$ is a compact metric space with respect to the metric $d_{{\cal T}(M)}$. We remark that this compactness argument can be easily modified to prove that the topology of ${\cal T}(M)$ is independent of the sufficiently small radius $\ve$ used to define $d_{{\cal T}(M)}$. It follows that the topological structure on ${\cal T}(M)$ is determined (for $\ve$ chosen sufficiently small), and it is in this sense that the topological structure is natural. This completes the proof of item [*\[n5\]*]{}.
Suppose now that $M\in{\cal T}(M)$. Note that whenever $X\in {\cal
T}(M)$, then the path connected set of translates ${\rm
Trans}(X)=\{X-q\mid q\in X\}$ is a subset of ${\cal T}(M)$. In particular, ${\rm Trans}(M)$ is a subset of ${\cal T}(M)$. We claim that the closure of ${\rm Trans}(M)$ in ${\cal T}(M)$ is equal to ${\cal T}(M)$. By definition of closure, the closure of ${\rm
Trans}(M)$ is a subset of $ {\cal T}(M)$. Using the definition of ${\cal T}(M)$ and the metric space structure on ${\cal T}(M)$, it is straightforward to check that ${\cal T}(M)$ is contained in the closure of ${\rm Trans}(M)$; hence, $\overline{\rm Trans(M)}={{\cal T}(M)}$. Since the closure of a path connected set in a topological space is always connected, we conclude that ${\cal T}(M)$ is connected, which completes the proof of item [*\[n6\]*]{}.
We now prove item [*\[n7\]*]{}. Suppose $\Delta$ is a nonempty, minimal ${\cal T}$-invariant set and $\S \in \De$. By definition of ${\cal T}$-invariance, ${{\cal T}(\S)}\subset\De$. By item [*\[n4\]*]{}, ${{\cal T}(\S)}$ is a nonempty ${\cal T}$-invariant set. By definition of minimal ${\cal T}$-invariant set, ${{\cal T}(\S)}=\De$, which proves one of the desired implications. Suppose now that $\De\subset {{\cal T}(M)}$ is nonempty and that whenever $\S\in \De$, ${{\cal T}(\S)}=\De$; it follows that $\De$ is a ${\cal T}$-invariant set. If $\De'\subset \De$ is a nonempty ${\cal
T}$-invariant set, then there exists a $\S'\in \De'$, and thus, $\De={\cal T}(\S')\subset\De'\subset \De$. Hence, $\De'=\De$, which means $\De$ is a minimal ${\cal T}$-invariant set and item [*\[n7\]*]{} is proved.
Now we prove item [*\[n8\]*]{} through an application of Zorn’s lemma. Suppose $\Delta \subset {\cal T}(M)$ is a nonempty ${\cal
T}$-invariant set and $\S \in \Delta$. Using the definition of ${\cal T}$-invariance, an elementary argument proves ${\cal
T}(\Sigma )$ is a nonempty ${\cal T}$-invariant set in $\Delta$ which is a closed set of ${\cal T}(M)$; essentially, this is because the set of limit points of a set in a topological space forms a closed set (also see the proofs of items [*\[n4\]*]{} and [*\[n5\]*]{} for this type of argument). Next consider the set $\Lambda
$ of all nonempty ${\cal T}$-invariant subsets of $\Delta $ which are closed sets in ${\cal T}(M)$, and as we just observed, this collection is nonempty. Also, observe that $\Lambda $ has a partial ordering induced by inclusion $\subset$.
We first check that any linearly ordered set in $\Lambda $ has a lower bound, and then apply Zorn’s Lemma to obtain a minimal element of $\Lambda$ with respect to the partial ordering $\subset$. To do this, suppose $\Lambda '\subset \Lambda $ is a nonempty linearly ordered subset and we will prove that the intersection $\bigcap
_{\Delta'\in \Lambda '}\Delta '$ is an element of $\Lambda $. In our case, this means that we only need to prove that such an intersection is nonempty, because the intersection of closed (respectively ${\cal T}$-invariant) sets in a topological space is a closed set (respectively ${\cal T}$-invariant) set. Since each element of $\Lambda'$ is a closed set of ${\cal T}(M)$ and the finite intersection property holds for the collection $\Lambda'$, then the compactness of ${{\cal T}(M)}$ implies $\bigcap _{\Delta'\in \Lambda
'}\Delta ' \neq \mbox{\O}$. Thus, $\bigcap _{\Delta'\in \Lambda
'}\Delta ' \in \Lambda$ is a lower bound for $\Lambda'$. By Zorn’s lemma applied to $\Lambda$ under the partial ordering $\subset$, $\Delta$ contains a smallest, nonempty, closed ${\cal T}$-invariant set $\Omega$. We now check that $\Omega$ is a nonempty, minimal ${\cal T}$-invariant subset of $\Delta$. If $\Omega'$ is a nonempty ${\cal T}$-invariant subset of $\Omega$, then there exists a $\Sigma'\in\Omega'$. By our previous arguments, ${\cal
T}(\Sigma')\subset \Omega'\subset \Omega$ is a nonempty ${\cal
T}$-invariant set in $\Delta$ which is a closed set in ${\cal
T}(M)$, i.e., ${\cal T}(\S')\in \Lambda$. Hence, by the minimality property of $\Omega$ in $\Lambda$, we have ${\cal T}(\Sigma')=
\Omega'=\Omega$. Thus, $\Omega$ is a nonempty, minimal ${\cal
T}$-invariant subset of $\Delta$, which proves item [*\[n8\]*]{}.
Let $\Delta\subset {\cal T}(M)$ be a nonempty, minimal ${\cal
T}$-invariant set and let $\S\in \Delta$. By item [*\[n7\]*]{}, ${\cal T}(\S)=\Delta$. Since ${\cal T}(\Sigma)$ is a closed set in ${{\cal T}(M)}$ and ${\cal T}(M)$ is compact, $\Delta$ is compact. Since $\S\in {\cal T}(\S)=\Delta$, item [*\[n6\]*]{} implies $\Delta$ is also connected, which completes the proof of item [*\[n9\]*]{}.
\[rm25\] [It turns out that any complete, connected, noncompact, embedded $CMC$ surface $M\subset{{\mbox{\bb R}}^3}$ with compact boundary and bounded second fundamental form, is properly embedded in ${{\mbox{\bb R}}^3}$, has a fixed sized regular neighborhood on its mean convex side and so has cubical area growth; these properties of $M$ follow from simple modifications of the proof of these properties in the case when $M$ has empty boundary (see [@mr7; @mt3]). For such an $M$, the space ${\cal T}(M)$ also can be defined and consists of a nonempty set of strongly Alexandrov embedded $CMC$ surfaces without boundary. We will use this remark in the next section where $M$ is allowed to have compact boundary. Also we note that items [*\[n4\]*]{} - [*\[n9\]*]{} of the Dynamics Theorem make sense under small modifications and hold for properly embedded separating $CMC$ hypersurfaces $M$ with bounded second fundamental form in noncompact homogeneous $n$-manifolds $N$, where ${\cal T}(M)$ is the set of connected properly immersed surfaces that pass through a fixed base point of $N$ and which are components of limits of $M$ under a sequence of “translational" isometries of $N$ which take a divergent sequence of points in $M$ to the base point; see [@mt5] for details. ]{}
The Minimal Element Theorem.
============================
In this section, we give applications of the Dynamics Theorem to the theory of complete embedded $CMC$ surfaces $M$ in ${{\mbox{\bb R}}^3}$ with bounded second fundamental form and compact boundary. Let $R$ be the radial distance to the origin in ${{\mbox{\bb R}}^3}$. We will obtain several results concerning the geometry of minimal elements in ${\cal
T}(M)$, when the area growth of $M$ is less than cubical in $R$ or when the genus of the surfaces $M\cap {\mbox{\bb B}}(R)$ grows slower than cubically in $R$. With this in mind, we now define some growth constants for the area and genus of $M$ in ${{\mbox{\bb R}}^3}$.
For any $p\in M$, we denote by $M(p,R)$ the connected component of $M\cap {\mbox{\bb B}}(p,R)$ which contains $p$; if $M$ is not embedded at $p$ and there are two immersed components $M(p,R)$, $M'(p,R)$ corresponding to two pointed immersions, then in what follows we will consider both of these components separately.
For $n=1,2,3,$ we define: $$A_{\sup}(M,n)=\limsup_{R\to\infty} \sup_{p\in M}({\rm Area}[M(p,R)]\cdot R^{-n}),$$ $$A_{\inf}(M,n)=\liminf_{R\to\infty} \inf_{p\in M}({\rm Area}[M(p,R)]\cdot R^{-n}),$$ $$G_{\sup}(M,n)=\limsup_{R\to\infty} \sup_{p\in M}({\rm Genus}[M(p,R)]\cdot R^{-n}),$$ $$G_{\inf}(M,n)=\liminf_{R\to\infty} \inf_{p\in M}({\rm Genus}[M(p,R)]\cdot R^{-n}).$$
In the above definition, note that $\sup_{p\in M}({\rm
Area}[M(p,R)]\cdot R^{-n})$ and the other similar expressions are functions from $(0,\infty)$ to ${\mbox{\bb R}}$ and therefore they each have a $\limsup$ or a $\liminf$, respectively.
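For intuition only (a toy computation of ours, not from the paper): for the round cylinder of radius $r$, a $CMC$ surface with $H=1/(2r)$, the area of the intersection with ${\mbox{\bb B}}(p,R)$ for $p$ on the cylinder grows like $4\pi rR$; hence $A_{\sup}(M,1)=4\pi r$ while $A_{\sup}(M,3)=0$. The quadrature below checks the linear growth rate for $r=1$.

```python
# Area of {x^2 + y^2 = r^2} intersected with B(p, R), p = (r, 0, 0):
# in cylinder coordinates the band is |h| <= sqrt(R^2 - 2 r^2 (1 - cos t)),
# with area element dA = r dt dh, so Area = int_0^{2pi} 2 sqrt(...) r dt.
import math

def cyl_ball_area(r, R, N=20000):
    dt = 2*math.pi/N
    total = 0.0
    for i in range(N):
        t = (i + 0.5)*dt
        gap = R**2 - 2*r**2*(1 - math.cos(t))   # squared height bound
        if gap > 0:
            total += 2*math.sqrt(gap)*r*dt
    return total

r, R = 1.0, 1000.0
growth = cyl_ball_area(r, R) / R
print(growth, 4*math.pi*r)    # the ratio Area/R approaches 4*pi*r
```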
By item [*\[n2\]*]{} of Theorem \[T\] and Remark \[rm25\], $A_{\sup}(M,3)$ is a finite number. We now prove that $G_{\sup}(M,3)$ is also finite. Since $M$ has bounded second fundamental form, it admits a triangulation $T$ whose edges are geodesic arcs or smooth arcs in the boundary of $M$ of lengths bounded between two small positive numbers, and so that the areas of 2-simplices in $T$ also are bounded between two small positive numbers. Let $T(M(p,R))$ be the set of simplices in $T$ which intersect $M(p,R)$. Note that for $R$ large, the number of edges in $T(M(p,R)) $ which intersect $M(p,R)$ is less than some constant $K$ times the area of $M(p,R)$, where $K$ depends only on the second fundamental form of $M$. Hence, the number of generators of the first homology group $H_1(T( M(p,R)),{\mbox{\bb R}})$ is less than $K$ times the area of $M(p,R)$. Since there are at least $ {\rm Genus}[M(p,R)]$ linearly independent simplicial homology classes in $H_1(T( M(p,R)),{\mbox{\bb R}})$, then $$\label{eq5} {\rm Genus}[M(p,R)] \leq K {\rm Area}[M(p,R)]
\quad \text{for } R \text{ large}.$$ In particular, since $A_{\sup}(M,3)$ is finite, equation \[eq5\] implies that $G_{\sup}(M,3)$ is also finite.
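For the reader's convenience, the counting above can be condensed into a single chain of inequalities; here $e(p,R)$ denotes the number of edges of $T(M(p,R))$ which intersect $M(p,R)$ (a name introduced only for this summary): $${\rm Genus}[M(p,R)]\;\leq\; \dim H_1(T(M(p,R)),{\mbox{\bb R}})\;\leq\; e(p,R)\;\leq\; K\,{\rm Area}[M(p,R)] \quad \text{for } R \text{ large},$$ where the first inequality holds because there are at least ${\rm Genus}[M(p,R)]$ linearly independent classes in $H_1$, the second because $H_1$ is a subquotient of the space of simplicial 1-chains, and the third is the edge count above.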
Suppose that $M\subset {{\mbox{\bb R}}^3}$ is a complete, noncompact, connected embedded $CMC$ surface with compact boundary (possibly empty) and with bounded second fundamental form.
1. For any divergent sequence of points $p_n\in M$, a subsequence of the translated surfaces $M-p_n$ converges to a properly immersed surface of the same constant mean curvature which bounds a smooth open subdomain on its mean convex side. [*Let ${\mbox{\bb T}}(M)$ denote the collection of all such limit surfaces.*]{}
2. If there exists a constant $C>0$ such that for all $p,q\in
M$ with $d_{{\mbox{\bbsmall R}}^3}(p,q)\geq 1$, $d_M(p,q) \leq C \cdot
d_{{\mbox{\bbsmall R}}^3}(p,q)$, then we say that $M$ is [*chord-arc.*]{} (Note that the triangle inequality implies that if $M$ is chord-arc and $p,q\in M$ with $d_{{\mbox{\bbsmall R}}^3}(p,q)<1$, then $d_{M}(p,q)<6C$.)
We note that in the above definition and in Theorem \[sp2\] below, the embedded hypothesis on $M$ can be replaced by the weaker hypothesis that $M$ has a fixed size one-sided neighborhood on its mean convex side (see Remark \[rm25\]).
We now state the main theorem of this section. For the statement of this theorem, recall that a plane $P\subset {{\mbox{\bb R}}^3}$ is a [*plane of Alexandrov symmetry*]{} for a surface $M\subset {{\mbox{\bb R}}^3}$ if it is a plane of symmetry which separates $M$ into two open components $M^+,$ $M^-,$ each of which is a graph over a fixed subdomain of $P$.
\[sp2\] Let $M\subset
{{\mbox{\bb R}}^3}$ be a complete, noncompact, connected embedded $CMC$ surface with possibly empty compact boundary and bounded second fundamental form. Then the following statements hold.
1. \[oneend\] If $\S\in {{\cal T}(M)}$ is a minimal element, then either every surface in ${\mbox{\bb T}}(\S)$ is the translation of a fixed Delaunay surface or every surface in ${\mbox{\bb T}}(\S)$ has one end. In particular, if $\S\in {{\cal T}(M)}$ is a minimal element, then every surface in ${\mbox{\bb T}}(\S)$ is connected and ${{\cal T}(\S)}={\mbox{\bb T}}(\S)$.
2. \[sca\] Minimal elements of ${{\cal T}(M)}$ are chord-arc.
3. \[n10\] Let $\S$ be a minimal element of ${\cal T}(M)$. For all $D, \, \ve>0$, there exists a $d_{\ve,D}>0$ such that the following statement holds. For every compact domain $X\subset \S$ with extrinsic diameter less than $D$ and for each $q\in \S$, there exist a smooth compact domain $X_{q,\ve}\subset \S$ and a translation $\tau\colon {{\mbox{\bb R}}^3}\to {{\mbox{\bb R}}^3}$ such that $$d_{\S}(q,X_{q,\ve})<d_{\ve,D}\;\;\; \mbox{and} \;\;\; d_{\cal H}(X,
\tau(X_{q,\ve}))<\ve,$$ where $d_{\S}$ is the distance function on $\S$ and $d_{\cal H}$ is the Hausdorff distance on compact sets in ${{\mbox{\bb R}}^3}$. Furthermore, if $X$ is connected, then $X_{q,\ve}$ can be chosen to be connected.
4. \[half\] If $M$ has empty boundary and lies in the halfspace $\{x_3\geq 0\}$, then some minimal element of ${\cal T}(M)$ has the $(x_1,x_2)$-plane as a plane of Alexandrov symmetry.
5. \[balls\] If $E$ is an end representative[^6] of $M$ such that ${{\mbox{\bb R}}^3}- E$ contains balls of arbitrarily large radius, then ${\cal T}(M)$ contains a surface with a plane of Alexandrov symmetry.
6. \[inf3\] The following statements are equivalent:
1. \[A3\] $A_{\inf}(M,3)=0.$
2. \[G3\] $G_{\inf}(M,3)=0.$
3. \[sym\] ${\cal T}(M)$ contains a minimal element with a plane of Alexandrov symmetry.
4. \[Afinite2\] $A_{\inf}(M,2)$ is finite.
5. \[Gfinite2\] $G_{\inf}(M,2)$ is finite.
7. \[infty\] If $M$ has an infinite number of ends, then there exists a minimal element in ${\cal T}(M)$ with a plane of Alexandrov symmetry.
8. If ${\cal T}(M)$ does not contain an element with a plane of Alexandrov symmetry, then the following statements hold.
1. \[bal\_a\] There exists a constant $F$ such that for every end representative $E$ of a surface in ${\mbox{\bb T}}(M)$, there exists a positive number $R(E)$ such that $$[{{\mbox{\bb R}}^3}-{\mbox{\bb B}}(R(E))]\subset \{x\in {{\mbox{\bb R}}^3}\mid d_{{\mbox{\bbsmall R}}^3} (x,E)<F\}.$$ In particular, if $E_1$ and $E_2$ are end representatives of a surface in ${\mbox{\bb T}}(M)$, then for $R$ sufficiently large, for any $x\in
E_1 -{\mbox{\bb B}}(R)$, $d_{{\mbox{\bbsmall R}}^3} (x,E_2-{\mbox{\bb B}}(R))<F$.
2. \[bal\_b\] There is a uniform upper bound on the number of ends of any element in ${\mbox{\bb T}}(M)$. In particular, there is a uniform upper bound on the number of components of any element in ${\mbox{\bb T}}(M)$.
9. \[inf2\] Suppose $\Sigma$ is a minimal element of ${\cal T}(M)$. Then the following statements are equivalent.
1. \[A2\] $A_{\inf}(\Sigma, 2)=0$.
2. \[G2\] $G_{\inf}(\Sigma, 2)=0$.
3. \[D\] $\Sigma$ is a Delaunay surface.
4. \[Afinite1\] $A_{\inf}(\Sigma,1)$ is finite.
5. \[Gfinite1\] $G_{\inf}(\Sigma,1)$ is finite.
The following corollary gives some immediate consequences of Theorem \[sp2\]. The proof of this corollary appears after the proof of Theorem \[sp2\].
\[cor2\] Let $M\subset
{{\mbox{\bb R}}^3}$ be a complete, noncompact, connected, embedded $CMC$ surface with compact boundary and bounded second fundamental form. Then the following statements hold.
1. $A_{\sup}(M,3)=0\quad \implies \quad G_{\sup}(M,3)=0
\quad \implies $\
$\implies \quad$ Every minimal element in ${\cal T}(M)$ has a plane of Alexandrov symmetry.
2. $A_{\sup}(M,2)=0 \quad \implies \quad G_{\sup}(M,2)=0
\quad \implies$\
$\implies \quad$ Every minimal element in ${\cal T}(M)$ is a Delaunay surface.
We make the following conjecture related to the Minimal Element Theorem. Note that item [*\[inf2\]*]{} of Theorem \[sp2\] implies that the conjecture holds for $n=1$.
Suppose that $M\subset {{\mbox{\bb R}}^3}$ satisfies the hypotheses of Theorem \[sp2\]. Then for any minimal element $\S
\in {\cal T}(M)$ and for $n= 1, \,2,$ or $3,$ $$\lim_{R\to \infty}
{\rm Area}[\S\cap {\mbox{\bb B}}(R)]\cdot R^{-n} \; \text{ and } \; \lim_{R\to \infty}
{\rm Genus}[\S\cap {\mbox{\bb B}}(R)]\cdot R^{-n}$$ exist (possibly infinite). Furthermore, $$A_{\inf}(\S,n)=A_{\sup}(\S,n)=\lim_{R\to \infty}
{\rm Area}[\S\cap {\mbox{\bb B}}(R)]\cdot R^{-n}$$ $$G_{\inf}(\S,n)=G_{\sup}(\S,n)=\lim_{R\to \infty}
{\rm Genus}[\S\cap {\mbox{\bb B}}(R)]\cdot R^{-n}.$$
[*Proof of Theorem \[sp2\].*]{} We postpone the proofs of items [*\[oneend\], \[sca\], \[n10\]*]{} until after the proofs of items [*\[half\] - \[inf2\]*]{} of the theorem.
Assume that $M$ has empty boundary and $M\subset \{x_3\geq 0\}$. Using techniques similar to the ones discussed by Ros and Rosenberg in [@ror1], we now prove that some element of ${{\cal T}(M)}$ has a horizontal plane of Alexandrov symmetry, that is, item [*\[half\]*]{}. Let $W_M$ be the smooth open domain in ${{\mbox{\bb R}}^3}-M$ on the mean convex side of $M$. Note that $W_M \subset \{ x_3\geq 0\}$. After a vertical translation of $M$, assume that $M$ is not contained in a smaller halfspace of $\{x_3\geq 0\}.$ Since $M$ has a fixed size regular neighborhood on its mean convex side and $M$ has bounded second fundamental form, then for any generic and sufficiently small $\ve>0$, $M_{\ve}=M\cap \{x_3\leq \ve\}$ is a nonempty graph of small gradient over its projection to $P_0=\{x_3=0\}$; we let $P_t=\{x_3=t\}$. Note that the mean curvature vector to $M_{\ve}$ is upward pointing. In what follows, $R_{P_t}\colon {{\mbox{\bb R}}^3}\to {{\mbox{\bb R}}^3}$ denotes reflection in $P_t$, while $\Pi\colon {{\mbox{\bb R}}^3}\to {{\mbox{\bb R}}^3}$ denotes orthogonal projection onto $P_0$.
For any $t>0$, consider the new surface with boundary, $\widehat{M}_t$, obtained by reflecting $M_t=M\cap\{x_3\leq t\}$ across the plane $P_t$, i.e., ${\widehat}{M}_t=R_{P_t}(M_t)$. Let $
T=\sup \{t\in (0,\infty)\mid {\rm for}\;\; t'<t$, the surface $M_{t'}$ is a graph over its projection to $P_0$, $\widehat{M}_{t'}\cap M=
\partial \widehat{M}_{t'}=\partial M_{t'}$ and the infimum of the angles that the tangent spaces to $M$ along $\partial M_t$ make with vertical planes is bounded away from zero}. Recall that by height estimates for $CMC$ graphs with zero boundary values [@ror1], $\ve<T\leq \frac{1}{H}$, where $H$ is the mean curvature of $M$.
If there is a point $p\in \partial M_T$ such that the tangent plane $T_pM$ is vertical, then the classical Alexandrov reflection principle implies that the plane $P_T$ is a plane of Alexandrov symmetry. Next suppose that the angles that the tangent spaces to $M_T$ make with $(0,0,1)$ along $\partial M_T$ are not bounded away from zero. In this case, let $p_n\in \partial M_T$ be a sequence of points such that the tangent planes $T_{p_n}M$ converge to the vertical (the dot products of the normal vectors to the planes with $(0,0,1)$ are going to zero) and let $\Sigma \in {\cal T}(M)$ be a related limit of the translated surfaces $M-p_n$. One can check that $\Sigma \cap \{x_3<0\} $ is a graph over $P_0$ and that its tangent plane at the origin is vertical. Now the usual application of the boundary Hopf maximum principle at the origin, or equivalently, the Alexandrov reflection argument, implies $P_0$ is a plane of Alexandrov symmetry for $\Sigma$.
Suppose now that the tangent planes of $M$ along $\partial M_T$ are bounded away from the vertical. In this case, $P_T$ is not a plane of Alexandrov symmetry. So, by the usual application of the Alexandrov reflection principle, we conclude that $\widehat{M}_T
\cap M=\partial \widehat{M}_T=\partial M_T$. By definition of $T$, there exist $\delta_n>0$, $\delta_n\rightarrow 0$, such that $F_n=\widehat{M}_{T+\delta_n}\cap M$ is not contained in $\partial
M_{T+\delta_n}$. We first show that not only is $\Pi(F_n)$ contained in the interior of $\Pi(M_T)$, but also that for some $\eta>0$, it stays at distance at least $\eta$ from $\Pi(\partial M_T)$ for $\delta_n$ sufficiently small. In fact, since we are assuming that the tangent planes of $M$ along $\partial M_T$ are bounded away from the vertical by a fixed positive angle, if $\delta$ is small enough, then the tangent planes of $M$ along $\partial M_{T+\delta}$ are also bounded away from the vertical by a fixed positive angle. Thus, the existence of such an $\eta >0$ is a consequence of the existence of a fixed size one-sided regular neighborhood for $M$ in $W_M$.
The discussion in the previous paragraph implies that there exists a sequence of points $p_n\in M_T$ which stay at distance at least $\eta$ from $\partial M_T$ and such that the distance between $R_T(p_n)$ and $M-M_T$ goes to zero. The fact that $p_n$ stays at distance at least $\eta$ from $\partial M_T$ implies that for $n$ large there exists an $\ve>0$ such that $R_T({\mbox{\bb B}}(p_n,\ve)\cap M)$ is disjoint from $M$ and it is a graph over $\Pi({\mbox{\bb B}}(p_n,\ve)\cap
M)$. Consider the element $\Sigma\in {\cal T}(M)$ obtained as a limit of the translated surfaces $M-\Pi(p_n)$ and let $\lim_{n\to
\infty}p_n=p=(0,0,S)\in \S$. From the way $\Sigma$ is obtained, $p$ is a positive distance from $\partial \S_T$. Moreover, $R_T(p)\in
\S-\S_T$ and $\widehat{\S}_T$ is tangent to $\S-\S_T$ and lies on its mean convex side. The maximum principle implies that $P_T$ is a plane of Alexandrov symmetry, which contradicts the assumption that the tangent planes of $M$ along $\partial M_T$ are bounded away from the vertical by a fixed positive angle. This completes the proof that there exists a surface $\Sigma \in {{\cal T}(M)}$ with the $(x_1,x_2)$-plane as a plane of Alexandrov symmetry. It then follows from item [*\[n8\]*]{} of Theorem \[T\] that the nonempty ${\cal T}$-invariant set ${{\cal T}(\S)}\subset {{\cal T}(M)}$ contains a minimal element of ${{\cal T}(M)}$ with the $(x_1,x_2)$-plane as a plane of Alexandrov symmetry, which proves item [*\[half\]*]{}.
We now prove that item [*\[balls\]*]{} holds. Assume now that $M$ has possibly nonempty compact boundary and that there exists a sequence of open balls ${\mbox{\bb B}}(q_n,n)\subset {{\mbox{\bb R}}^3}-M$. Note that these balls can be chosen so that they are at distance at least $n$ from the boundary of $M$ and so that there exists a sequence of points $p_n\in\partial
{\mbox{\bb B}}(q_n,n)\cap M$ diverging in ${{\mbox{\bb R}}^3}$. After choosing a subsequence, we may assume that the translated balls ${\mbox{\bb B}}(q_n,n)-p_n$ converge to an open halfspace $K$ of ${{\mbox{\bb R}}^3}$ and a subsequence of the translated surfaces $M-p_n$ gives rise to an element $M_\infty\in{\cal T}(M)$ with $M_\infty$ contained in the halfspace ${{\mbox{\bb R}}^3}-K$ and $\partial
M_{\infty}=\mbox{\O}$. By the previous discussion when $M$ has empty boundary (item [*\[half\]*]{}), $ {\cal T}(M_\infty)\subset {\cal
T}(M)$ contains a minimal element with a plane of Alexandrov symmetry. This completes the proof of item [*\[balls\]*]{}.
We now prove item [*\[inf3\]*]{} in the theorem. First observe that ${\it \ref{Afinite2} \implies~\ref{A3}}$ and that ${\it
\ref{Gfinite2}\implies~\ref{G3}}$. Also, equation \[eq5\] implies that ${\it \ref{A3}\implies~\ref{G3}}$ and that ${\it
\ref{Afinite2}\implies~\ref{Gfinite2}}$. We now prove that ${\it
\ref{sym}\implies~\ref{Afinite2}}$. Suppose that ${\cal T}(M)$ contains a minimal element $\Sigma$ which has a plane of Alexandrov symmetry and let $W_{\Sigma}$ denote the embedded three-manifold on the mean convex side of $\Sigma$. In this case, $W_\S$ is contained in a slab, and by item [*\[n2\]*]{} of Theorem \[T\], the area growth of $\Sigma$ is comparable to the volume growth of $W_{\Sigma}$. Note that the volume of $W_{\Sigma}$ grows at most like the volume of the slab which contains it, and so the volume growth of $W_{\Sigma}$ and the area growth of $\S$ are at most quadratic in $R$. By the definitions of ${\cal T}(M)$ and $A_{\inf}(M,2)$, we see that $A_{\inf}(M,2)$ is finite, which implies [*\[Afinite2\]*]{}.
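To make the quadratic bound explicit (a routine volume comparison, with constants not optimized): if the slab is $\{|x_3|\leq a\}$, then for every $p\in {{\mbox{\bb R}}^3}$ and every $R>0$, $${\rm Volume}[W_{\Sigma}\cap {\mbox{\bb B}}(p,R)]\;\leq\; {\rm Volume}[\{|x_3|\leq a\}\cap {\mbox{\bb B}}(p,R)]\;\leq\; 2a\,\pi R^2,$$ so the area of $\Sigma\cap {\mbox{\bb B}}(p,R)$, being comparable to this volume, grows at most quadratically in $R$.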
In order to complete the proof of item [*\[inf3\]*]{}, it suffices to show ${\it \ref{G3}\implies \ref{sym}}$. However, since the proof of ${\it \ref{G3}\implies \ref{sym}}$ uses the fact that ${\it \ref{A3}\implies \ref{sym}}$, we first show that ${\it
\ref{A3}\implies \ref{sym}}$. Assume that $A_{\inf}(M,3)=0$; we will prove that ${\cal T}(M)$ contains a surface $\Sigma$ which lies in a halfspace of ${{\mbox{\bb R}}^3}$. Since $A_{\inf}(M,3)=0$, we can find a sequence of points $\{p_n\}_n\subset M$ and positive numbers $R_n$, $R_n\to \infty$, such that the connected component $M(p_n, R_n)$ of $M\cap \overline{{\mbox{\bb B}}} (p_n,R_n)$ containing $p_n$ has area less than $\frac{1}{n}R_n^3.$ Since $M$ has bounded second fundamental form, there exists an $\ve>0$ such that for any $q\in {{\mbox{\bb R}}^3}$, if ${\mbox{\bb B}}(q,r)\cap M \not =\mbox{\O}$, then ${\rm Area}({\mbox{\bb B}}(q,r+1)\cap M)\geq \ve$. Using this observation, together with the inequality ${\rm Area}(M(p_n,R_n))\leq \frac{1}{n} R_n^3$ and the equality ${\rm Volume}\, ({\mbox{\bb B}}(p_n, R_n))= \frac{4\pi}{3} R_n^3$, we can find a sequence of points $q_n\in {\mbox{\bb B}}(p_n, R_n)$ and numbers $k_n$ with $k_n\to \infty$ such that ${\mbox{\bb B}}(q_n, k_n)\subset [{\mbox{\bb B}}(p_n,
\frac{R_n}{2})-M(p_n,R_n)]$ and such that there are points $s_n\in
\partial {\mbox{\bb B}}(q_n,k_n)\cap M(p_n, R_n)$ with $|s_n|\to \infty$ (see Figure \[fig1b\]). Let $\Sigma \in {\cal T}(M)$ be a limit surface arising from the sequence of translated surfaces $M(p_n, R_n) - s_n$. Note that $\Sigma$ is disjoint from an open halfspace obtained from a limit of a subsequence of the translated balls ${\mbox{\bb B}}(q_n, k_n)-s_n$. Since $\Sigma$ lies in a halfspace of ${{\mbox{\bb R}}^3}$, item [*\[half\]*]{} in the theorem implies ${\cal T}(M)$ contains a minimal element with a plane of Alexandrov symmetry. The existence of this minimal element proves that ${\it \ref{A3} \implies \ref{sym}}$.
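The selection of the balls ${\mbox{\bb B}}(q_n,k_n)$ rests on a counting estimate of the following type (a sketch with unoptimized constants; $c_0$ denotes a universal packing constant). Pack ${\mbox{\bb B}}(p_n,\frac{R_n}{2})$ with $N_n\geq c_0(R_n/k_n)^3$ pairwise disjoint balls of radius $k_n+1$. If the concentric ball of radius $k_n$ in each of them intersected $M(p_n,R_n)$, then each ball of radius $k_n+1$ would carry area at least $\ve$ of $M(p_n,R_n)$, and hence $$\frac{1}{n}R_n^3\;\geq\;{\rm Area}[M(p_n,R_n)]\;\geq\; N_n\,\ve\;\geq\; c_0\,\ve\left(\frac{R_n}{k_n}\right)^3,$$ forcing $k_n\geq (c_0\,\ve\, n)^{1/3}$. Thus, choosing $k_n$ just below this threshold, some ball of radius $k_n$ in the packing is disjoint from $M(p_n,R_n)$, and $k_n\to \infty$.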
![Finding large balls in the complement of $M(p_n,
R_n)$[]{data-label="fig1b"}](fig1.jpg){width="2.7in"}
We now prove that ${\it \ref{G3} \implies \ref{sym}}$ and this will complete the proof of item [*\[inf3\]*]{}. Assume that $G_{\inf}(M,3)=0$. Then there exist a sequence of points $p_n\in M$ and numbers $R_n \to \infty$ such that the genus of $M(p_n, R_n) \subset \overline{{\mbox{\bb B}}}(p_n,R_n)$ is less than $\frac{1}{n} R_n^3$. Using the fact that the genus of disjoint surfaces is additive, a simple geometric argument, similar to the one that proved ${\it \ref{A3} \implies \ref{sym}}$, shows that we can find a sequence of points $q_n\in {\mbox{\bb B}}(p_n, R_n)$ diverging in ${{\mbox{\bb R}}^3}$ and numbers $k_n$, with $k_n\to \infty$, such that one of the following statements holds.
1. $\mbox{Genus}(M(q_n, k_n))=0$.
2. ${\mbox{\bb B}}(q_n, k_n)\subset [{\mbox{\bb B}}(p_n, \frac{R_n}{2})-M(p_n,R_n)]$ and, as $n$ varies, there exist points $s_n\in
\partial {\mbox{\bb B}}(q_n,k_n)\cap M(p_n, R_n)$ diverging in ${{\mbox{\bb R}}^3}$.
If statement 2 holds, then our previous arguments imply that ${\cal
T}(M)$ contains a surface $\Sigma$ which lies in a halfspace of ${{\mbox{\bb R}}^3}$ and that ${\cal T}(M)$ contains a minimal element with a plane of Alexandrov symmetry. Thus, we may assume statement 1 holds.
Since statement 1 holds, then the sequence of translated surfaces $M-q_n$ yields a limit surface $\Sigma \in {{\cal T}(M)}$ of genus zero. If $\S$ has a finite number of ends, then $\S$ has an annular end $E$. By the main theorem in [@me17], $E$ is contained in a solid cylinder in ${{\mbox{\bb R}}^3}$. Under a sequence of translations of $E$, we obtain a limit surface $D\in {{\cal T}(\S)}$ which is contained in a solid cylinder. By item [*\[half\]*]{}, there is a minimal element $D'\in {\cal T}(D)\subset {{\cal T}(M)}$ which has a plane of Alexandrov symmetry; this conclusion also follows from the main result in [@kks1].
Suppose now that $\Sigma$ has genus zero and an infinite number of ends. For each $n\in {\mbox{\bb N}}$, there exist numbers $T_n$ with $T_n\to \infty$ such that the number $k(n)$ of noncompact components, $$\{\S_1(T_n),
\S_2(T_n),\ldots,\S_{k(n)}(T_n)\},$$ in $\Sigma-{\mbox{\bb B}}(T_n)$ is at least $n$. Fix points $p_i(n)\in \S_i(T_n)\cap \partial {\mbox{\bb B}}(2T_n)$, for each $i\in \{1,2,\ldots, k(n)\}$. Note that $\sum_{i=1}^{k(n)}{\rm
Area}(\S(p_i(n),T_n))\leq {\rm Area} (\S\cap {\mbox{\bb B}}(3T_n)). $ Since $\S$ has no boundary, then ${\rm Area} (\S\cap {\mbox{\bb B}}(3T_n))\leq
\frac{4}{3}\pi c_2(3T_n)^3$ (see item [*\[n2\]*]{} of Theorem \[T\]). Therefore, we obtain that for all $n$, there exists an $i$, such that $${\rm Area }(\S(p_i(n),T_n))\leq \frac{c}{n}T_n^3,$$ for a fixed constant $c$. By definition of $A_{\inf}(\S,3)$, we conclude that $A_{\inf}(\S,3)=0.$ Since we have shown that [*\[A3\]*]{} $\implies$ [*\[sym\]*]{}, ${\cal T}(\S)$ contains a minimal element $\S'$ with a plane of Alexandrov symmetry. Since ${\cal T}(\S)\subset {\cal T}(M)$, ${\cal T}(M)$ contains a minimal element with a plane of Alexandrov symmetry. Thus ${\it \ref{G3}
\implies \ref{sym}}$, which completes the proof of item [*\[inf3\]*]{}.
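The pigeonhole step above can be recorded explicitly: since the $k(n)\geq n$ surfaces $\S(p_i(n),T_n)$ are pairwise disjoint and contained in ${\mbox{\bb B}}(3T_n)$, $$\min_{1\leq i\leq k(n)}{\rm Area}(\S(p_i(n),T_n))\;\leq\; \frac{1}{k(n)}\sum_{i=1}^{k(n)}{\rm Area}(\S(p_i(n),T_n))\;\leq\; \frac{1}{n}\cdot \frac{4\pi}{3}c_2(3T_n)^3\;=\;\frac{36\pi c_2}{n}\,T_n^3,$$ which is the stated inequality with $c=36\pi c_2$.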
We next prove item [*\[infty\]*]{}. Assume that $M$ has an infinite number of ends. If $M$ has empty boundary, then by the arguments in the previous paragraph $A_{\inf}(M,3)=0$ and thus ${{\cal T}(M)}$ contains a minimal element with a plane of Alexandrov symmetry. By Remark \[rm25\], if $M$ has nonempty, compact boundary, then it has a fixed size regular neighborhood on its mean convex side, which is sufficient for item [*2*]{} of Theorem \[T\] to hold and then to apply the arguments in the previous paragraph. This proves that item [*\[infty\]*]{} holds.
We next prove item [*\[bal\_a\]*]{}. Arguing by contrapositive, suppose that the conclusion of item [*\[bal\_a\]*]{} fails to hold and we will prove that ${{\cal T}(M)}$ contains an element with a plane of Alexandrov symmetry. Since the conclusion of [*\[bal\_a\]*]{} fails to hold, there exists a sequence of surfaces $\S(n)\in {\mbox{\bb T}}(M)$ with end representatives $E(n)$, and positive numbers $F(n)\to\infty$ as $n\to\infty$ such that for any $R(n)>0$, there exist balls $B_n$ of radius $F(n)$ such that $$B_n\subset [{{\mbox{\bb R}}^3}- ({\mbox{\bb B}}(R(n))\cup E(n))].$$
Choose $R(n)>F(n)$ sufficiently large so that $\partial E(n)\subset
{\mbox{\bb B}}(\frac{R(n)}{2})$. After rotating $B_n$ around an axis passing through the origin, we obtain a new ball $K_n\subset {{\mbox{\bb R}}^3}-
({\mbox{\bb B}}(R(n))\cup E(n))$ of radius $F(n)$ such that $\partial K_n$ intersects $E(n)$ at a point $p_n$ of extrinsic distance at least $\frac{R(n)}{2}$ from $\partial E(n)$. After choosing a subsequence, suppose that $E(n)-p_n$ converges to a surface $\S_{\infty}\in {\mbox{\bb T}}(M)$ which lies in a halfspace of ${{\mbox{\bb R}}^3}$, namely the complement of the open halfspace obtained as a limit of some subsequence of the translated balls $K_n-p_n$. By item [*\[half\]*]{}, ${\cal T}(\S_\infty)\subset {\cal T}(M)$ contains a surface with a plane of Alexandrov symmetry, which completes the proof of item [*\[bal\_a\]*]{}.
The proof of item [*\[bal\_b\]*]{} is a modification of the proof of item [*\[infty\]*]{}. In fact, if $\Sigma_n \in {\mbox{\bb T}}(M)$ is a sequence of surfaces with at least $n$ ends, with $n$ going to infinity, then $A_{\inf}(M,3)=0$, which implies that ${\cal T}(M)$ contains a minimal element with a plane of Alexandrov symmetry, contrary to the hypothesis of item [*8*]{}.
We now prove that item [*\[inf2\]*]{} holds. First observe that ${\it \ref{Afinite1} \implies \ref{A2}}$ and that ${\it
\ref{Gfinite1}\implies \ref{G2}}$. Also, equation \[eq5\] implies that ${\it \ref{A2}\implies \ref{G2}}$ and that ${\it
\ref{Afinite1}\implies \ref{Gfinite1}}$. An argument similar to the proof of ${\it \ref{sym} \implies \ref{Afinite2}}$ shows that ${\it
\ref{D} \implies \ref{Afinite1}}$. In order to complete the proof of item [*\[inf2\]*]{}, it suffices to show ${\it \ref{G2}\implies
\ref{D}}$. Let $\S$ be a minimal element of ${{\cal T}(M)}$ satisfying [*\[G2\]*]{}. By item [*\[inf3\]*]{}, there exists a minimal element $\S'\in {\cal T}(\S)$ with a plane of Alexandrov symmetry. By minimality of $\S$, $\S\in {\cal T}(\S')$, and so $\S$ also has a plane $P$ of Alexandrov symmetry (the same plane as $\S'$ up to some translation). In particular, $\S$ lies in a fixed slab in ${{\mbox{\bb R}}^3}$.
After a possible rotation of $\S$, assume that $P=\{x_3=0\}$ and so, $\Sigma \subset \{-a\leq x_3\leq a\}$ for some $a>0$. Since $G_{\inf}(\S,2)=0$, there exist a sequence of points $p_n=(x_1(n), x_2(n),0)\in \Sigma$ and numbers $R_n$ with $R_n\to \infty $ such that ${\rm Genus} (\S(p_n,R_n))<\frac{1}{n}R_n^2.$ Similar to the proof of ${\it \ref{G3} \implies \ref{sym}}$, the fact that $G_{\inf}(\S,
2)=0$ implies that we can find a sequence of points $q_n\in {\mbox{\bb B}}(p_n,
R_n)\cap P$ diverging in ${{\mbox{\bb R}}^3}$ and numbers $k_n$, with $k_n\to
\infty$, such that one of the following statements holds.
1. $\mbox{Genus}(\S(q_n, k_n))=0.$
2. ${\mbox{\bb B}}(q_n, k_n)\subset [{\mbox{\bb B}}(p_n, \frac{R_n}{2})-\S(p_n,R_n)]$ and, as $n$ varies, there exist points $s_n\in
\partial {\mbox{\bb B}}(q_n,k_n)\cap \S(p_n, R_n)\cap P$ diverging in ${{\mbox{\bb R}}^3}$.
We will consider the two cases above separately. If statement 1 holds, then a subsequence of the translated surfaces $\S-q_n$ yields a limit surface $\S_{\infty}\in {\cal T}(\S)$ of genus zero with $P$ as a plane of Alexandrov symmetry. If $\S_{\infty}$ has a finite number of ends, then it has an annular end. In this case, the end is asymptotic to a Delaunay surface [@kks1]. Therefore ${\cal
T}(\S)$ contains a Delaunay surface $\S'$ and since $\S$ is a minimal element, $\S\in {\cal T}(\S')$ which implies $\S$ itself is a Delaunay surface. Suppose $\S_\infty$ has an infinite number of ends. Note that $\S_\infty$ lies in a slab which implies that ${\rm
Area}(\S_\infty\cap {\mbox{\bb B}}(R))\leq C_2R^2$ for some constant $C_2$. In this case, a modification of the end of the proof that [*\[G3\]*]{} $\implies$ [*\[sym\]*]{} shows that for each $n\in {\mbox{\bb N}}$, there exist numbers $T_n$ with $T_n\to \infty$ such that the number $k(n)$ of components $\{\S_1(T_n), \S_2(T_n),\ldots,
\S_{k(n)}(T_n)\}$ in $\S_{\infty} - {\mbox{\bb B}}(T_n)$ is at least $n$ and, after possibly reindexing, there exist a point $p_1(n)\in \S_1(T_n)\cap \partial {\mbox{\bb B}}(2T_n)$ and a constant $C$ such that ${\rm Area}(\S_1(p_1(n), T_n))\leq \frac{C}{n} T_n^2$. This implies that one can find diverging points $q_n\in {\mbox{\bb B}}(p_1(n), T_n)\cap P$ and numbers $r_n\to \infty$ such that ${\mbox{\bb B}}(q_n,r_n)\subset [{\mbox{\bb B}}(p_1(n),
\frac{T_n}{2})-\S_1(p_1(n),T_n)]$ and there are points $s_n\in \partial{{\mbox{\bb B}}}(q_n, r_n)\cap \S_1(p_1(n), T_n)$ such that $|s_n|\to
\infty$. It follows that a subsequence of the surfaces $\S_1(p_1(n), T_n)-s_n$ converges to a surface $\S_\infty'$ which lies in a halfspace whose boundary plane is a vertical plane. Item [*\[inf3\]*]{} of Theorem \[T\] implies that ${\cal T}(\S_\infty')$ contains a surface $\S'$ with the plane $P$ as a plane of Alexandrov symmetry as well as a vertical plane of Alexandrov symmetry. Therefore, $\S'$ is cylindrically bounded and so it is a Delaunay surface [@kks1]. Since $\S \in {\cal T}(\S')$, $\S$ is a Delaunay surface [@kks1].
We now consider the case where statement 2 holds. A modification of the proof of the case where statement 1 holds (this time translating $\S$ by the points $-s_n$ instead by the points $-q_n$) then demonstrates that there is a $\S'\in {\cal T}(\S)$ with both the plane $P$ and a vertical plane as planes of Alexandrov symmetry. As before, we conclude that $\S$ is a Delaunay surface. This completes the proof of item [*9*]{}.
We now prove item [*\[oneend\]*]{}. Let $\S$ be a minimal element in ${\cal T}(M)$. If $\S$ has a plane of Alexandrov symmetry and ${\mbox{\bb T}}(\S)$ contains a surface $\S'$ with more than one end, then Theorem \[special\], which does not depend on the proof of this item, implies that $\S'$ has at least one annular end, from which it follows that ${{\cal T}(\S)}$ contains a Delaunay surface $D$. Since $\S$ and $D$ are minimal elements of ${\cal T}(\S)$, then $\S\in
{\cal T}(\S)={\cal T}(D)$, and so $\S$ is a translation of $D$. Since $\S$ is a Delaunay surface (a translation of $D$), then clearly every surface in ${\mbox{\bb T}}(\S)$ is also a translation of a Delaunay surface, which proves item [*\[oneend\]*]{} under the additional hypothesis that $\S$ has a plane of Alexandrov symmetry.
Thus, arguing by contradiction, suppose that $\S$ fails to have a plane of Alexandrov symmetry and ${\mbox{\bb T}}(\S)$ has a surface with more than one end. Since $\S$ is a minimal element, then $\S\in {\cal
T}(\widetilde{\S})$ for any $\widetilde{\S}\in {\cal T}(\S)$, and so no element of ${\cal T}(\S)$ has a plane of Alexandrov symmetry. By item [*\[bal\_b\]*]{}, there is a bound on the number of ends of any surface in ${\mbox{\bb T}}(\S)$. Let $\S'\in {\mbox{\bb T}}(\S)$ be a surface with the largest possible number $n\geq2$ of ends and let $\{E_1, E_2,
\ldots, E_n\}$ be pairwise disjoint end representatives for its $n$ ends. By item [*\[bal\_a\]*]{}, the ends $E_1, E_2,\ldots, E_n$ are uniformly close to each other. It now follows from the definition of ${\mbox{\bb T}}(\S')$ that every element of ${{\mbox{\bb T}}}(\S')$ must have at least $n$ components, each such component arising from a limit of translations of each of the ends $E_1, E_2, \ldots, E_n$.
By our choice of $n$, we find that every surface in ${\mbox{\bb T}}(\S')\subset
{\mbox{\bb T}}(\S)$ has exactly $n$ components. From the minimality of $\S$, $\S$ must be a component of some element $\S''\in {\mbox{\bb T}}(\S')$. But then our previous arguments imply that ${\mbox{\bb T}}(\S'')$ contains a surface $\Delta$ with $n-1$ components coming from translational limits of the components of $\S''$ different from $\S$ and at least two additional components (in fact $n$ components) arising from translational limits of $\S \subset \S''$. Hence, ${\mbox{\bb T}}(\S'') \subset {\mbox{\bb T}}(\S')$ contains a surface $\Delta$ with at least $n+1$ components, which contradicts the definition of $n$. This contradiction completes the proof of item [*\[oneend\]*]{}.
We are now in a position to prove item [*\[sca\]*]{} of the theorem. The first step in this proof is the following assertion.
\[assca\] Suppose $\S\in {{\mbox{\bb T}}}(M)\cup\{M\}$ and every element in ${\mbox{\bb T}}(\S)$ is connected. Then there exists a function $f\colon [1,\infty)\to [1,\infty)$ such that for every $\Omega\in{{\cal T}(\S)}$ and for all points $p,q\in \Omega$ with $1\leq d_{{\mbox{\bbsmall R}}^3}(p,q)\leq R$, we have $$d_{\Omega}(p,q)\leq f(R) d_{{\mbox{\bbsmall R}}^3}(p,q).$$ Furthermore, if no element in ${\mbox{\bb T}}(\S)$ has a plane of Alexandrov symmetry, then $\S$ is chord-arc.
Suppose $\S\in {{\mbox{\bb T}}}(M)\cup\{M\}$ and every surface in ${\mbox{\bb T}}(\S)$ is connected. If the desired function $f$ fails to exist, then there exist a positive number $R$, a sequence of surfaces $\Omega(n)\in {\mbox{\bb T}}(\S)$ and points $p_n,q_n\in \Omega (n)$ such that for $n\in {\mbox{\bb N}}$, $$1\leq d_{{\mbox{\bbsmall R}}^3}(p_n, q_n)\leq R\;\;\;{\rm
and}\;\;\; n \cdot d_{{\mbox{\bbsmall R}}^3} (p_n, q_n)\leq
d_{\Omega(n)}(p_n,q_n).$$ Since by hypothesis every surface in ${\mbox{\bb T}}(\S)$ is connected, ${\mbox{\bb T}}(\S)={\cal
T}(\S)$. As ${\cal T}(\S)$ is sequentially compact and ${{\cal T}(\S)}={\mbox{\bb T}}(\S)$, the sequence of surfaces $\Omega(n)-p_n\in {\cal
T}(\S)$ can be chosen to converge to a $\S_\infty \in {\mbox{\bb T}}(\S)={{\cal T}(\S)}$ and the points $q_n-p_n$ converge to a point $q\in\S_\infty$. Clearly $\S_\infty$ is not connected because it has a component passing through $q$ and another component passing through the origin (the intrinsic distance between $\vec{0}\in \Omega(n)-p_n$ and $q_n-p_n\in \Omega(n)-p_n$ is at least $n$). But by assumption, every surface in ${\mbox{\bb T}}(\S)$ is connected. This contradiction proves the existence of the desired function $f$.
Suppose now that ${{\cal T}(\S)}$ contains no element with a plane of Alexandrov symmetry and let $f$ be a function satisfying the first statement in the assertion. Since $\S$ is an end representative of $\S$ itself, item [*\[balls\]*]{} of the theorem implies that there exists an $R_0>0$ such that every ball in ${{\mbox{\bb R}}^3}$ of radius at least $R_0$ intersects $\S$ in some point. Let $k$ be a positive integer greater than $R_{0}+1$. Fix any two points $p,q\in \S$ of extrinsic distance at least $4k$. Let $v=\frac{q-p}{|q-p|}$, $p_0=p$ and $p_{i+1}=p_i+2kv$, where $i\in\{0,1,\ldots, n-1\} $ and $q\in
\overline{{\mbox{\bb B}}}(p_n,k)$. By our choice of $k$, an open ball of radius $k-1$ always intersects $\S$ at some point. For each $0<i<n$, let $s_i\in
\S\cap {\mbox{\bb B}}(p_i,k-1)$; we choose $s_0=p$ and $s_n=q$. Since for each $i<n$, $d_{{\mbox{\bbsmall R}}^3}(s_i, s_{i+1})\leq 4k$ and $d_{{\mbox{\bbsmall R}}^3}(s_i, s_{i+1})\geq 1$, then $d_{\S}(s_i,
s_{i+1})\leq f(4k)4k$. Using the triangle inequality and $2(n-1)k\leq d_{{\mbox{\bbsmall R}}^3}(p,q)$, we obtain $$d_{\S}(p,q)\leq \sum_{i=0}^{n-1}d_{\S}(s_i, s_{i+1})\leq
nf(4k)4k\leq 2f(4k)( d_{{\mbox{\bbsmall R}}^3}(p,q)+2k)< 4f(4k)
d_{{\mbox{\bbsmall R}}^3}(p,q).$$ Thus, $\S$ is chord-arc with constant $4f(4k)$, which completes the proof of the assertion.
We now return to the proof of item [*\[sca\]*]{}. Let $\S \in {{\cal T}(M)}$ be a minimal element. By the last statement in item [*\[oneend\]*]{}, the minimal element $\S$ satisfies ${{\cal T}(\S)}={\mbox{\bb T}}(\S)$ and so, every surface in ${\mbox{\bb T}}(\S)$ is connected. Thus, by Assertion [*\[assca\]*]{}, if $\S$ fails to have a plane $P$ of Alexandrov symmetry, then $\S$ is chord-arc. Suppose now that $\S$ has a plane $P$ of Alexandrov symmetry. If $\S$ were to fail to be chord-arc, then the proof of item [*\[inf2\]*]{} shows that either $\S$ is a Delaunay surface or else there exists an $R_0>0$ such that every ball $B$ in ${{\mbox{\bb R}}^3}$ of radius $R_0$ and centered at a point of $P$ must intersect $\S$. In the first case, $\S$ is a Delaunay surface, which is clearly chord-arc. In the second case, the existence of points in $B\cap \S$ allows one to modify the proof of Assertion [*\[assca\]*]{} to show that $\S$ is chord-arc. Thus, item [*\[sca\]*]{} of the theorem is proved.
In order to prove item [*\[n10\]*]{}, we need the following lemma.
\[lemma\] Let $\S$ be a minimal element in ${{\cal T}(M)}$. For all $D, \, \ve>0$, there exists a $d_{\ve,D}>0$ such that the following statement holds. For any $B_{\S}(p,D)\subset \S$ and for all $q\in \S$, there exists $q'\in \Sigma$ such that $B_{\S}(q',D)\subset B_\Sigma(q,d_{\ve,D})$ and $d_{\cal H}(B_\Sigma(p,D)-p,\, B_\Sigma(q',D)-q')<\ve.$ Here $B_\Sigma(p,R)$ denotes the intrinsic ball of radius $R$ centered at $p$.
Arguing by contradiction, suppose that the claim in the lemma is false. Then there exist $D, \, \ve>0$ such that the following holds. For all $n\in {\mbox{\bb N}}$, there exist intrinsic balls $B_\Sigma(p_n,D)\subset \Sigma$ and $q_n\in \Sigma$ such that for every ball $B_\Sigma(q',D)\subset B_\Sigma(q_n, n)$, we have $d_{\cal
H}(B_\Sigma(p_n,D)-p_n,B_\Sigma(q',D)-q')>\ve.$ In what follows, we further simplify the notation and we let $B_\Sigma(p)$ denote $B_\Sigma(p,D)$. After going to a subsequence, we can assume that the set of translated surfaces, $\Sigma-p_n$, converges $C^2$ to a complete, strongly Alexandrov embedded, $CMC$ surface $\Sigma_\infty$ passing through the origin $\vec{0}$. By item [*\[oneend\]*]{}, $\S_\infty$ is connected and we consider it to be pointed so that $B_\Sigma(p_n)-p_n$ converges to $B_{\Sigma_\infty}(\vec{0})$. Also, we can assume that $B_\Sigma(q_n,n)-q_n$ converges to a complete, connected, pointed, strongly Alexandrov embedded $CMC$ surface $\Sigma_\infty'$. The previous discussion implies that for any $z\in \Sigma_\infty'$, there exists a sequence $B_\Sigma(z_n)\subset B_\Sigma(q_n,n)$, such that $$\label{eq1}
d_{\cal H}(B_\Sigma(z_n)-z_n,B_{\Sigma'_\infty}(z)-z)<\frac{\ve}{4}
\quad \text{for $n$ large}.$$ Furthermore, we can also assume that $$\label{eq12}
d_{\cal
H}(B_\Sigma(p_n)-p_n,B_{\Sigma_\infty}(\vec{0}))<\frac{\ve}{4},$$ and since $B_\Sigma(z_n)\subset B_\Sigma(q_n,n)$, then $$\label{eq2}d_{\cal H}(B_\Sigma(p_n)-p_n,B_\Sigma(z_n)-z_n)>\ve.$$
Recall that since $\Sigma$ is a minimal element, item [*\[n7\]*]{} in Theorem \[T\] implies that $$\Sigma,\;\Sigma_\infty,\;\Sigma'_\infty\in {\cal T}(\Sigma)={\cal
T}(\Sigma_\infty)={\cal T}(\Sigma'_\infty).$$ In order to obtain a contradiction it suffices to show that there exists an $\alpha>0$ such that $$d_{\cal
H}(B_{\Sigma'_\infty}(z)-z,B_{\Sigma_\infty}(\vec{0}))>\alpha$$ for any $z\in\Sigma'_\infty$ because this inequality clearly implies that $\Sigma_\infty \notin{\cal T}(\Sigma'_\infty)$. Fix $z\in\Sigma'_\infty$ and let $z_n$ and $p_n$ be as given by equations \eqref{eq1} and \eqref{eq12}.
In what follows, we start with equation \eqref{eq2}, apply the triangle inequality for the Hausdorff distance between compact sets, then apply the triangle inequality and equation \eqref{eq1}, and finally apply equation \eqref{eq12}. For $n$ large, $$\begin{gathered}
\ve<d_{\cal H}(B_\Sigma(p_n)-p_n, B_\Sigma(z_n)-z_n)\leq \\
\leq d_{\cal H}(B_\Sigma(p_n)-p_n,B_{\Sigma'_\infty}(z)-z)+d_{\cal
H}(B_{\Sigma'_\infty}(z)-z,B_\Sigma(z_n)-z_n)<\\ <d_{\cal
H}(B_\Sigma(p_n)-p_n,B_{\Sigma_\infty}(\vec{0}))+d_{\cal
H}(B_{\Sigma_\infty}(\vec{0}),B_{\Sigma'_\infty}(z)-z)+\frac{\ve}{4}< \qquad\\
\qquad\qquad\qquad<\frac{\ve}{2}+d_{\cal
H}(B_{\Sigma'_\infty}(z)-z, B_{\Sigma_\infty}(\vec{0})).
\qquad\qquad\qquad\qquad\qquad\qquad\end{gathered}$$
This inequality implies $d_{\cal
H}(B_{\Sigma'_\infty}(z)-z,B_{\Sigma_\infty}(\vec{0}))>\frac{\ve}{2}$, which, after setting $\a=\frac{\ve}{2}$, completes the proof of the lemma.
Notice that if $X\subset \Sigma$ is a compact domain of intrinsic diameter less than $D$, then for any point $p\in X$, $X\subset
B_\Sigma(p,2D)$. The next lemma is a consequence of Lemma \[lemma\] and the following observation regarding the Hausdorff distance: Given three compact sets $A,B,X\subset \Sigma$ with $X\subset A$, if $d_{\cal H}(A,B)<\ve$, then there exists $X'\subset
B$ such that $d_{\cal H}(X,X')<\ve.$
Let $\S$ be a minimal element of ${\cal T}(M)$. For all $D, \,
\ve>0$, there exists a $d_{\ve,D}>0$ such that the following statement holds. For every smooth, connected compact domain $X\subset
\S$ with intrinsic diameter less than $D$ and for each $q\in \S$, there exists a smooth compact, connected domain $X_{q,\ve}\subset
\S$ and a translation, $i\colon {{\mbox{\bb R}}^3}\to {{\mbox{\bb R}}^3}$, such that $$d_{\S}(q,X_{q,\ve})<d_{\ve,D}\;\;\; \mbox{and} \;\;\; d_{\cal H}(X,
i(X_{q,\ve}))<\ve,$$ where $d_{\S}$ is the intrinsic distance function on $\S$.
In order to finish the proof of item [*\[n10\]*]{}, we remark that item [*\[sca\]*]{} implies that intrinsic and extrinsic distances are comparable when the intrinsic distance between the points is at least one. Thus, the above lemma implies the first statement in item [*\[n10\]*]{}. The second statement is an immediate consequence of the first, which completes the proof.
Theorem \[sp2\] is now proved.
[*Proof of Corollary \[cor2\].*]{} We first prove item [*1*]{} of the corollary. By equation , $A_{\sup}(M,3)=0$ implies $G_{\sup}(M,3)=0$. On the other hand, if $G_{\sup}(M,3)=0$, then for any $\Sigma\in{\cal T}(M)$, $G_{\sup}(\Sigma,3)=0$. In particular, for any minimal element $\S\in {{\cal T}(M)}$, $G_{\inf}(\Sigma,3)=0$. By item [*\[inf3\]*]{} of Theorem \[sp2\], ${\cal T}(\Sigma)$ contains a minimal element $\Sigma'$ with a plane of Alexandrov symmetry. Since $\Sigma$ is a minimal element, $\Sigma\in {\cal
T}(\Sigma')$ and therefore has a plane of Alexandrov symmetry. This proves that item [*1*]{} holds.
The proof of item [*2*]{} follows from arguments similar to the ones in the proof of item [*1*]{}, using item [*\[inf2\]*]{} of Theorem \[sp2\] instead of item [*\[inf3\]*]{}.
[Regarding item [*4*]{} of Theorem \[sp2\], it has been conjectured by Meeks [@me17] that if $M$ is a properly embedded $CMC$ surface in ${{\mbox{\bb R}}^3}$ which lies in the halfspace $\{x_3\geq 0\}$, then it has a horizontal plane of Alexandrov symmetry. This conjecture holds when $M$ has finite topology [@kks1].]{}
\[rm1\] [In $\cite{mt5}$, we give a natural generalization of Theorems \[T\] and \[sp2\] to the more general case of separating $CMC$ hypersurfaces $M$ with bounded second fundamental form in an $n$-dimensional noncompact homogeneous manifold $N$. In that paper, we obtain some interesting applications of this generalization to the classical setting where $N$ is $\mathbb{R}^n$ or hyperbolic $n$-space, $\mathbb{H}^n$, which are similar to the applications given in Theorem \[sp2\].]{}
\[rm3\] [In [@mt2], we prove that if $M\subset {{\mbox{\bb R}}^3}$ is a strongly Alexandrov embedded $CMC$ surface with bounded second fundamental form and ${{\cal T}(M)}$ contains a Delaunay surface, then $M$ is rigid. In [@smyt1], Smyth and Tinaglia show that if $M$ contains a surface with a plane of Alexandrov symmetry, then $M$ is locally rigid[^7]. In relation to these rigidity results note that Theorem \[sp2\] gives several different constraints on the geometry or the topology of $M$ that guarantee the existence of a Delaunay surface or a surface with a plane of Alexandrov symmetry in ${{\cal T}(M)}$. The first author conjectures that [*the helicoid is the only complete, embedded, constant mean curvature surface in ${{\mbox{\bb R}}^3}$ which admits more than one noncongruent, isometric, constant mean curvature immersion into ${{\mbox{\bb R}}^3}$ with the same constant mean curvature*]{}. Since intrinsic isometries of the helicoid extend to ambient isometries, this conjecture would imply that [*an intrinsic isometry of a complete, embedded, constant mean curvature surface in ${{\mbox{\bb R}}^3}$ extends to an ambient isometry of ${{\mbox{\bb R}}^3}$*]{}.]{}
Embedded $CMC$ surfaces with a plane of Alexandrov symmetry and more than one end. {#sc4}
==================================================================================
In this section we prove the following topological result, whose proof uses techniques from the proof of Theorem \[sp2\]. In the next theorem, the hypothesis that the surface $M$ be embedded can be replaced by the weaker condition that it is embedded in the complement of its plane of Alexandrov symmetry.
\[special\] Suppose $M$ is a not necessarily connected, complete embedded $CMC$ surface with bounded second fundamental form, compact (possibly empty) boundary, a plane of Alexandrov symmetry, and at least $n$ ends, and suppose that every component of $M$ is noncompact. If $n$ is at least two, then $M$ has at least $n$ annular ends. Furthermore, if $M$ has empty boundary and more than one component, then each component of $M$ is a Delaunay surface.
The following corollary is an immediate consequence of the above theorem and the result of Meeks [@me17] that a connected, noncompact, properly embedded $CMC$ surface with one end must have infinite genus.
\[corfinite\] Suppose $M$ is a connected, noncompact, complete embedded $CMC$ surface with bounded second fundamental form and a plane of Alexandrov symmetry. Then $M$ has finite topology if and only if $M$ has a finite number of ends greater than one.
Regarding Theorem \[special\] when $n=\infty$, we note that there exist connected surfaces of genus zero satisfying the hypotheses of the theorem which are singly-periodic and have an infinite number of annular ends. It is important to notice that the hypothesis in Theorem \[special\] that $M$ has bounded second fundamental form is essential; otherwise, there are counterexamples (see Remark \[r4.7\]).
We first describe some of the notation that we will use in the proof of the theorem. We will assume that $M$ has a plane $P$ of Alexandrov symmetry and $P$ is the $(x_1,x_2)$-plane. We let ${\mbox{\bb S}}^1(R)=\partial (P\cap{\mbox{\bb B}}(R))$. Assume that $M$ is a bigraph over a domain $\Delta \subset P$ and $R_0$ is chosen sufficiently large, so that $\partial M \subset {\mbox{\bb B}}(R_0)$ and $\Delta -{\mbox{\bb B}}(R_0)$ contains $n$ noncompact components $\Delta_1, \Delta_2, \ldots, \Delta_n$. Let $M_1,\, M_2\subset M$ denote the bigraphs with boundary over the respective regions $\Delta_1, \,\Delta_2$. Let $X$ be the component in $P-(\De_1 \cup \De_2)$ with exactly two boundary curves $\partial_1, \,\partial_2$, each a proper noncompact curve in $P$ and such that $\partial_1 \subset
\partial \De_1$, $\partial_2\subset \partial \De_2$. The curve $\partial_1$ separates $P$ into two closed, noncompact, simply-connected domains $P_1,\, P_2$, where $\De_1\subset P_1$ and $\De_2\subset P_2$.
Now choose an increasing unbounded sequence of numbers $\{R_n\}_{n\in {\mbox{\bbsmall N}}}$ with $R_1>R_0$ chosen large enough so that for $i=1,2,$ there exists a unique component of $P_i\cap {\mbox{\bb B}}(R_1)$ which intersects $P_i \cap {\mbox{\bb S}}^1 (R_0)$ and so has $P_i \cap
{\mbox{\bb S}}^1(R_0)$ in its boundary; we will also assume that the circles ${\mbox{\bb S}}^1 (R_n)$ are transverse to $\partial \De_1 \cup
\partial \De_2$ for each $n$. By elementary separation properties, for $i=1,2$, there exists a unique component $\s_i(n)$ of $P_i \cap
{\mbox{\bb S}}^1(R_n)$ which separates $P_i$ into two components, exactly one of which has closure a compact disk $P_i(n)$ with $P_i \cap
{\mbox{\bb S}}^1(R_0)$ in its boundary; note that the collection of domains $\{P_i(n)\}_n$ forms a compact exhaustion of $P_i$. See Figure \[fig1\].
![$P_1(1)$ is the yellow shaded region containing $\sigma_1(1)$ and an arc of $\partial_1$ in its boundary. This figure illustrates the possibility that $\Delta_1$ may equal $P_1$ while $\Delta_2$ may be strictly contained in $P_2$.[]{data-label="fig1"}](fig2.jpg){width="3in"}
Since $\s_1(n)$ is disjoint from $\s_2(n)$ and each of these sets is a connected arc in ${\mbox{\bb S}}^1(R_n)$, then, after possibly replacing the sequence $\{ R_n\}_{n\in {\mbox{\bbsmall N}}}$ by a subsequence and possibly reindexing $P_1,P_2$, for each $n\in {\mbox{\bb N}}$, the arc $\s_1(n)$ is contained in a closed halfspace $K_n$ of ${{\mbox{\bb R}}^3}$ with boundary plane $\partial K_n$ being a vertical plane passing through the origin $\vec{0}$ of ${{\mbox{\bb R}}^3}$. Let $\De_1(n)=\De_1\cap P_1 (n)$ and let $\overline{M}_1(n)\subset M_1$ be the compact bigraph over $\De_1(n)$. Let ${\widehat}{K}_n$ be the closed halfspace in ${{\mbox{\bb R}}^3}$ with $K_n\subset
{\widehat}{K}_n$ and such that the boundary plane $\partial {\widehat}{K}_n$ is a distance $\frac{2}{H}+ R_0$ from $\partial K_n$, where $H$ is the mean curvature of $M$. Note that $\partial \overline{M}_1(n)$ is contained in the union of the solid cylinder over $\overline{{\mbox{\bb B}}}(R_0)$ and the halfspace $K_n$. Thus, the distance from $\partial \overline{M}_1(n)$ to $\partial {\widehat}{K}_n$ is at least $\frac{2}{H}$. By the Alexandrov reflection principle and the $\frac{1}{H}$ height estimate for $CMC$ graphs with zero boundary values and constant mean curvature $H$, we find that $\overline{M}_1(n)\subset {\widehat}{K}_n$. After choosing a subsequence, the halfspaces ${\widehat}{K}_n$ converge on compact sets of ${{\mbox{\bb R}}^3}$ to a closed halfspace $K$. Since for all $n\in{\mbox{\bb N}}$, $\overline{M}_1(n) \subset \overline{M}_1(n+1)$ and $\bigcup_{n=1}^{\infty}\overline{M}_1(n) =M_1$, one finds that $M_1\subset K$. After a translation in the $(x_1,x_2)$-plane and a rotation of $M_1$ around the $x_3$-axis, we may assume that the new surface, which we will also denote by $M_1$, lies in $\{(x_1,x_2, x_3)\mid x_2>0 \}$ and is a bigraph over a region $\De_1 \subset \{(x_1,x_2,0)\mid
x_2>0\}.$ A straightforward application of the Alexandrov reflection principle and height estimates for $CMC$ graphs shows that, after an additional translation in the $(x_1,x_2)$-plane and a rotation around the $x_3$-axis, $\De_1$ also can be assumed to contain a divergent sequence of points $p_n=(x_1(n), x_2(n), 0)\in
\partial \De_1$ such that $\frac{x_2(n)}{x_1(n)}\to 0$ as $n$ approaches infinity. See Figure \[fig2\].
![Choosing the points $p_n$ and related data. The shaded trapezoidal region is $T(n)$.[]{data-label="fig2"}](fig3.jpg){width="6in"}
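For the reader's convenience, we recall the height estimate invoked above; the following is the standard statement for $CMC$ graphs with zero boundary values, in the normalization we believe is being used here:

```latex
% Height estimate (standard): if u \colon \Omega \subset P \to \mathbb{R} is a graph with
% constant mean curvature H > 0 and zero boundary values, u|_{\partial\Omega} = 0, then
\sup_{\Omega} |u| \;\le\; \frac{1}{H}.
% Applied to the two graphs forming a bigraph, this confines the bigraph to a slab of
% width 2/H, which is the source of the constants \frac{2}{H} and \frac{2}{H}+R_0 above.
```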
\[assertionD\] The points $p_n$ can be chosen to satisfy the following additional properties:
1. The vertical line segments $\g_n$ joining $p_n$ to $(x_1(n),0,0)$ intersect $\De_1$ only at $p_n$ and $\frac{x_1(n+1)}{x_1(n)}>n$.
2. The surfaces $M_1-p_n$ converge to a surface in ${\mbox{\bb T}}(M)$ with a related component in ${\cal T}(M)$ being a Delaunay surface $F$ with $P$ as a plane of Alexandrov symmetry and axis parallel to the $x_1$-axis.
That the points $p_n$ can be chosen to satisfy statement 1 is clear. That they can also be chosen to satisfy statement 2 can be seen as follows. Let $S_n \subset P$ be the circle passing through the points $p_n$ and $(\frac{x_1(n)}{10},0,0)$ with center on the line $\{(x_1(n),s,0)
\mid s <x_2(n)\}$ and let $E_n$ denote the closed disk with boundary $S_n$. Consider the family of translated disks $E_n(t)=E_n -(0,t,0)$, let $t_0$ be the largest $t\geq 0$ such that $E_n(t)$ intersects $\Delta_1$ at some point, and let $D_n=E_n(t_0)$. By construction, and after possibly passing to a subsequence, the points in $D_n \cap
\Delta_1$ satisfy the first statement in the assertion as well as the previous property that the ratio of their $x_2$-coordinates to their $x_1$-coordinates limits to zero as $n\to \infty$. Next replace the previous point $p_n$ by any point of $\partial D_n \cap M_1$, to obtain a new sequence of points which we also denote by $p_n$. A subsequence of certain compact regions of the translated surfaces $M-p_n$ converges to a strongly Alexandrov embedded surface ${M_{\infty}}\in {{\cal T}(M)}$ which has $P$ as a plane of Alexandrov symmetry and which lies in the halfspace $x_2\geq 0$. It follows from item [*\[half\]*]{} of Theorem \[sp2\] (and its proof) that ${\cal T}(M_\infty)$ contains a Delaunay surface $D$ with axis a bounded distance from the $x_1$-axis and which arises from a limit of translates of $M_\infty$. It is now clear how to choose the desired points described in the assertion, which again we denote by $p_n$, so that certain compact regions of the translated surfaces $M-p_n$ converge to the desired Delaunay surface $F$. This completes the proof of the assertion.
As a reference for the discussion which follows, we refer the reader to Figure \[fig2\]. By Assertion \[assertionD\], we may assume that around each point $p_n$, the surface $M_1$ is closely approximated by a translation of a fixed large compact region of the Delaunay surface $F$. Without loss of generality, we may assume that the entire line containing $\g_n$ is disjoint from the self-intersection set of $\partial \Delta_1$. Let $\G_n$ be the largest compact extension of $\g_n$ so that $\G_n-\g_n\subset \De_1$ and let ${\widehat}{\G}_n$ be a line segment extension of $\G_n$ near the end point of $\G_n$ with positive $x_2$-coordinate so that ${\widehat}{\G}_n
\cap \De_1=\G_n \cap \De_1$ and so that the length of ${\widehat}{\G}_n-\G_n$ is less than $\frac{1}{n}$. Let $q_n$ denote the end point of $\widehat{\G}_n$ which is different from the point $p_n$. Without loss of generality, we may assume that the line segments $a(n)$ in $P$ joining $q_n, q_{n+1}$ are transverse to $\partial
\De_1$ and intersect $\De_1$ in a finite collection of compact intervals. If we denote by $v(n)$ the upward pointing unit vector in the $(x_1,x_2)$-plane perpendicular to $a(n)$, then the vectors $v(n)$ converge to $(0,1,0)$ as $n$ goes to infinity.
As a reference for the discussion which follows, we refer the reader to Figure \[fig3\]. Now fix some large $n$ and consider the compact region $T(n)\subset P$ bounded by the line segments ${\widehat}{\G}_n$, ${\widehat}{\G}_{n+1}$, $a(n)$ and the line segment joining $(x_1(n),0,0)$ to $(x_1(n+1),0,0)$. Consider $T(n)$ to lie in ${\mbox{\bb R}}^2$ and let $T(n)\times {\mbox{\bb R}}\subset {{\mbox{\bb R}}^3}$ be the related convex domain in ${{\mbox{\bb R}}^3}$. Let $M_1(n)$ be the component of $M_1\cap (T(n)\times {\mbox{\bb R}})$ which contains the point $p_n$. Note that $M_1(n)$ is compact with boundary consisting of an almost circle $C(\G_n)$ which is a bigraph over an arc in $\G_n$, possibly also an almost circle $C(\G_{n+1})$ which is a bigraph over an interval in $\G_{n+1}$ and a collection of bigraph components over a collection of intervals $I_n$ in the line segment $a(n)$.
We denote by $\a(n)$ the collection of boundary curves of $M_1(n)$. Let $\a_2(n)$ be the subcollection of curves in $\a(n)$ which intersect either $\G_n$ or $\G_{n+1}$, that is, $\a_2(n)=\{C(\G_n),
C(\G_{n+1})\}$ or $\a_2(n)=\{C(\G_n)\}$. Clearly, the collection of boundary curves of $M_1(n)$ which are bigraphs over the collection of intervals $I_n=\Delta_1 \cap a(n)$ is $\a(n)-\a_2(n)$. Let $\a_3(n)$ be the subcollection of curves in $\a(n)-\a_2(n)$ which bound a compact domain $\De (\a)\subset M_1-\partial M_1$, and let $\a_4(n)=\a(n)-(\a_2(n)\cup \a_3(n))$. Note that in Figure \[fig3\], $\a_2(n)=\{C(\G_n), C(\G_{n+1})\}$, $\a_3(n)$ is empty and $\a_4(n)$ consists of the single blue curve $\partial$.
\[limit\] For $n$ sufficiently large, every boundary curve $\partial$ of $M_1(n)$ which is a bigraph over an interval in $I_n$ bounds a compact domain $\De(\partial)\subset
M_1-\partial M_1$; in other words, $\a_4(n)$ is empty.
For any $\a\in \a(n)$, let $\eta_\a$ denote the outward pointing conormal to $\a\subset \partial M_1 (n)$ and let $D(\alpha)$ be the planar disk bounded by $\a$. Consider a boundary component $\partial\in \a_4(n)$. By the “blowing a bubble” argument presented in [@kk2], there exists another disk ${\widehat}{D}(\partial)$ on the mean convex side of $M_1$ with the same constant mean curvature as $M_1$ and with $\partial {\widehat}{D}(\partial)=\partial
D(\partial)$. Moreover, ${\widehat}{D}(\partial)$ is a graph over $D(\partial)$ and ${\widehat}{D}(\partial)\cap (T(n)\times{\mbox{\bb R}})=\partial
{\widehat}{D}(\partial)=\partial$. Let $\widehat{\eta}_{\partial}$ denote the inward pointing conormal to $\partial{\widehat}{D}(\partial)$. The graphical disk ${\widehat}{D}(\partial)$ is constructed so that $\langle
\eta_{\partial}-\widehat{\eta}_{\partial}, v(n)\rangle\geq0$, see Figure \[fig3\].
![Blowing a bubble ${\widehat}{D}(\partial)$ on the mean convex side of $M_1$.[]{data-label="fig3"}](fig4.jpg){width="6in"}
The piecewise smooth surface $M_1(n)\cup(\bigcup_{\a\in\a_2(n)\cup\a_3(n)}D(\a))\cup
(\bigcup_{\a\in\a_4(n)}{\widehat}{D}(\a))$ is the boundary of a compact region $W(n)\subset {{\mbox{\bb R}}^3}$. An application of the divergence theorem given in [@kks1] to the vector field $v(n)$, considered to be a constant vector field in ${{\mbox{\bb R}}^3}$ in the region $W(n)$, gives rise to the following equation:
$$\begin{gathered}
\label{eq7}
\sum_{\a\in \a_2(n)\cup\a_3(n)}\left[\int_{\a}\langle \eta_\a,
v(n)\rangle-2H\int_{D(\a)}\langle v(n), N(n)
\rangle\right]+\\+ \sum_{\partial\in \a_4(n)}\left[\int_{\partial}\langle
\eta_\partial, v(n)\rangle-2H\int_{{\widehat}{D}(\partial)}\langle v(n), N(n)
\rangle\right]=0,\end{gathered}$$
where $H$ is the mean curvature of $M$ and $N(n)$ is the outward pointing unit normal to $\partial W(n)$. Note that $\sum_{\a\in
\a_2(n)}\left[\int_{\a}\langle\eta_{\a}, v(n) \rangle - 2H \int_{D(\a)}
\langle v(n), N(n) \rangle \right] =\ve(n)$ converges to zero as $n\to
\infty$ because $v(n)$ converges to $(0,1,0)$ and the curves $C(\G_n), C(\G_{n+1})$ converge to curves on Delaunay surfaces whose axes are perpendicular to $(0,1,0)$. Also note that this application of the divergence theorem in [@kks1] implies that for $\a\in
\a_3(n)$, $\int_{\a}\langle \eta_{\a},
v(n)\rangle-2H\int_{D(\a)}\langle v(n), N(n)\rangle=0.$ Thus, equation \eqref{eq7} reduces to the equation: $$\label{eq8}\ve(n)+\sum_{\partial\in \a_4(n)}
\left[\int_{\partial}\langle \eta_\partial,
v(n)\rangle-2H\int_{{\widehat}{D}(\partial)}\langle v(n), N(n)
\rangle\right]=0.$$
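The vanishing of the flux terms for $\a\in \a_3(n)$, used to pass from equation \eqref{eq7} to equation \eqref{eq8}, follows from the standard flux computation for the constant vector field $v=v(n)$. We sketch it, with signs fixed by the convention that $\operatorname{div}_{\De(\a)} N=-2H$ and that $\eta_\a$ is the inward pointing conormal of $\De(\a)$:

```latex
% Let W(\alpha) denote the compact region bounded by \Delta(\alpha) \cup D(\alpha), with
% outward unit normal N. The divergence theorem for the constant field v in W(\alpha) gives
\int_{\Delta(\alpha)}\langle v,N\rangle \;+\; \int_{D(\alpha)}\langle v,N\rangle \;=\; 0.
% On the CMC surface \Delta(\alpha), decompose v = v^{\top} + \langle v,N\rangle N; since
% \operatorname{div}_{\Delta(\alpha)} v = 0, we get \operatorname{div}_{\Delta(\alpha)} v^{\top}
% = 2H\langle v,N\rangle, and Stokes' theorem (\eta_\alpha inward) yields
\int_{\alpha}\langle \eta_{\alpha}, v\rangle \;=\; -\,2H\int_{\Delta(\alpha)}\langle v,N\rangle.
% Combining the two identities gives the claimed vanishing:
\int_{\alpha}\langle \eta_{\alpha}, v\rangle \;-\; 2H\int_{D(\alpha)}\langle v,N\rangle \;=\; 0.
```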
On the other hand, for each $\partial\in \a_4(n)$ $$\label{eq9} \int_{\partial}\langle \eta_{\partial}, v(n)\rangle - 2H
\int_{{\widehat}{D}(\partial)}\langle v(n), N(n)\rangle
=\int_\partial\langle \eta_{\partial}-\widehat{\eta}_{\partial},
v(n)\rangle \geq 0$$ and the length of each $\partial\in\a_4$ is uniformly bounded from below. Since $\ve(n)$ is going to zero as $n$ goes to infinity, equations and above imply that for $n$ large, the conormals $\eta_{\partial}$ and ${\widehat}{\eta}_{\partial}$ are approaching each other uniformly (see the lower left hand corner of Figure \[fig3\]). Note that the intrinsic distance of any point on the graphs ${\widehat}{D}(\partial)$ to $\partial$ is uniformly bounded (independent of $\partial$ and $n$)[^8]. The Harnack inequality, the above remark, the facts that ${\widehat}{D}(\partial)$ is simply-connected and the second fundamental form of $M$ is bounded, imply that there exists $\de>0$ such that if $\int_{\partial}\langle \eta_{\partial} -{\widehat}{\eta}_{\partial},
v(n)\rangle <\de$, then there is a disk $\De(\partial)\subset M_1 -
M_1(n)$ which can be expressed as a small graph over ${\widehat}{D}(\partial)$. The existence of $\De(\partial)$ contradicts that $\partial\in \a_4(n)$; hence $\a_4(n)=\mbox{\O}$ for $n$ sufficiently large, which proves the assertion.
We now apply Assertion \[limit\] to prove the following key partial result in the proof of Theorem \[special\].
\[final\] $M_1$ has at least one annular end.
By Assertion \[limit\], for some fixed $n$ chosen sufficiently large, every boundary curve $\a$ of $M_1(n)$ in the collection $\a(n)-\a_2(n)$ bounds a compact domain $\De(\a)\subset
M_1-\partial M_1$. By the Alexandrov reflection principle and height estimates for $CMC$ graphs, we find that the surface ${\widehat}{M}_1(n)=M_1(n)\cup \bigcup_{\a\in\a(n)-\a_2(n)}\Delta(\a)$ must have two almost circles in its boundary arising from $\a_2(n)$. Let $\S(k)=\bigcup_{j\leq k}\widehat{M}_1(n+j)$. Note that by the Alexandrov reflection argument and height estimates for $CMC$ graphs with zero boundary values, there exist half-cylinders $C(n,k)$ in ${{\mbox{\bb R}}^3}$ which contain $\S(k)$ and have fixed radius $\frac{4}{H}$. Hence there is a limit half-cylinder $C(n)\subset {{\mbox{\bb R}}^3}$ that contains $\S(\infty)=\bigcup_{k\in {\mbox{\bbsmall N}}}\S(k)\subset M$. By the main result in [@kks1], $\S(\infty)$ is asymptotic to a Delaunay surface, which proves the assertion.
It follows from the discussion at the beginning of the proof of Theorem \[special\] and Assertion \[final\] that if $M$ has at least $n$ ends, $n>1$, then it has at least $n-1$ annular ends. It remains to prove that if $M_1, M_2$ are given as in the beginning of the proof of Theorem \[special\] with $M_1$ having an annular end, then $M_2$ has an annular end as well. To see this, note that the annular end $E_1\subset M_1$ is asymptotic to the end $F$ of a Delaunay surface, and so after a rotation of $M$, $M_1$ is a bigraph over a domain $\De_1$ which contains the axis of $F$, which we can assume to be the positive $x_1$-axis. Now translate $M_2$ in the direction $(-1,0,0)$ sufficiently far so that its compact boundary has $x_1$-coordinates less than $-\frac{2}{H}$; call the translated surface $M'_2$ and let $\De'_2\subset P$ be the domain over which $M_2'$ is a bigraph. If for some $n\in {\mbox{\bb N}}$ the line $L_n=\{(n,t,0)\mid t\in {\mbox{\bb R}}\}$ is disjoint from $M_2'$, then $M_2'$ is contained in a halfplane of $P$ and our previous arguments imply $M_2'$ has an annular end. Thus without loss of generality, we may assume that every line $L_n$ intersects $\partial\De'_2$ a first time at some point $s_n$ with positive $x_2$-coordinate.
For $\theta\in (0,\frac{\pi}{2}]$, let $r(\theta)$ be the ray with base point the origin and angle $\theta$ and let $W(\theta)$ be the closed convex wedge in $P$ bounded by $r(\theta)$ and the positive $x_1$-axis. Let $\theta_0$ be the infimum of the set of $\theta \in
(0,\frac{\pi}{2}]$ such that $W(\theta)\cap\{s_n\}_{n\in {\mbox{\bbsmall N}}}$ is an infinite set. Because of our previous placement of $\partial
M'_2$, a simple application of the Alexandrov reflection principle and height estimates for $CMC$ graphs with zero boundary values implies that some further translate $M_2''$ of $M_2'$ in the direction $(-1,0,0)$ must be disjoint from $r(\theta_0)$. Finally, let ${\widehat}{M}_2$ denote the image of $M''_2$ under the clockwise rotation by angle $\theta_0$; our previous arguments prove the existence of an annular end of ${\widehat}{M}_2$ at a bounded distance from the positive $x_1$-axis. Thus, we conclude that $M_2$ also has an annular end, which completes the proof of the first statement in Theorem \[special\].
We next prove the last statement of the theorem. Suppose $M\subset
{{\mbox{\bb R}}^3}$ is a complete, properly embedded $CMC$ surface with bounded second fundamental form and with the $(x_1,x_2)$-plane $P$ as a plane of Alexandrov symmetry. Suppose that $M$ contains two noncompact components $M_1, M_2$; we will prove that each of these components is a Delaunay surface.
Consider $M_1$ and $M_2$ to be two disjoint end representatives of $M$ defined as bigraphs over two disjoint connected domains $\Delta_1, \Delta_2$ in $P$, respectively.
After possibly reindexing $\Delta_1$, $\Delta_2$ and applying a rigid motion of ${{\mbox{\bb R}}^3}$ preserving the plane $P$, we may assume that $\Delta_1\subset \{x_2\geq 0\}$.
By what we have proved so far, we know that $M_2$ has an annular end $E$ which is asymptotic to the end $D(E)$ of a Delaunay surface. Let $r_E\subset \Delta_2$ be a ray contained in the axis of $D(E)$. After a rigid motion of $M$ preserving $P$, assume $r_E$ is a ray based at the origin of $P$. The arguments used to prove the first statement of the theorem show that there are two disjoint annular ends $F,\,G$ of $M_1$ such that for $R$ large the arc $\alpha(F,G,R)$ in ${\mbox{\bb S}}^1(R)-\Delta_1$ which intersects $r_E$ has one of its endpoints in $\Delta_1\cap F$ and its other endpoint in $\Delta_1\cap G$. Let $D(F),\, D(G)$ be ends of Delaunay surfaces to which $F,\, G$ are asymptotic. Let $r_F,\, r_G \subset \Delta_1$ be rays contained in the axes of $D(F),\, D(G)$ respectively. Let $\gamma_1$ be a properly embedded arc in $\Delta_1$ consisting of $r_F,\, r_G$ and a compact arc joining their endpoints. Let $\gamma_2'$ be the proper arc in $P- {\mbox{\bb B}}(R)$ consisting of $\alpha(F,G,R)$, a boundary arc in $(F\cap \partial \Delta_1)
-{\mbox{\bb B}}(R)$ and a boundary arc in $(G\cap \partial \Delta_1)-{\mbox{\bb B}}(R)$. After a small perturbation of $\gamma_2'$ we obtain a proper arc $\gamma_2$ contained in $P-\Delta_1$ which intersects $r_E$. Note that $\gamma_1$ is contained in a halfplane and since $\gamma_2$ lies at a bounded distance from $\gamma_1$, the halfplane can be chosen to contain both $\gamma_1$ and $\gamma_2$. After a rigid motion, we may assume that this halfplane is $\{x_2\geq 0\}$. Since the region bounded by $\gamma_1$ and $\gamma_2$ is a strip by construction, by elementary separation arguments, either $\gamma_1$ lies between $\{x_2=0\}$ and $\gamma_2$ or $\g_2$ lies between $\{x_2=0\}$ and $\g_1$. If $\gamma_1$ lies between $\{x_2=0\}$ and $\gamma_2$, then $\Delta_2$ lies in the halfplane $\{x_2\geq 0\}$; otherwise $\Delta_1$ does. After possibly reindexing, this completes the proof.
In the discussion which follows, we refer the reader to Figure \[fig4\]. By the previous assertion, we may assume $\Delta_1\subset \{x_2\geq 0\}$. Previous arguments imply that after a rigid motion of $M$, we can further assume that $M_1$ contains an annular end $E^+$ with the property that for $n\in {\mbox{\bb N}}$ sufficiently large, the line segments $\{(n,t,0)\mid t>0\}$ intersect $\Delta_1$ for a first time in a point $p_n\in E^+$. Furthermore, $E^+$ is asymptotic to the end $D^+$ of a Delaunay surface. Also, we can assume that the half axis of revolution of $D^+$ lies in $P$ and is a bounded distance from the positive $x_1$-axis.
By the Alexandrov reflection principle and height estimates for $CMC$ graphs with zero boundary values, $\De_1$ cannot be contained in a convex wedge of $P$ with angle less than $\pi$. Therefore, for $n\in {\mbox{\bb N}}$ sufficiently large, the line segments $\{(-n,t,0)\mid t>0\}$ intersect $\De_1$ for a first time in points $p_{-n}$ belonging to a second annular end $E^-$ of $M_1$. In this case the annular end $E^-$ is asymptotic to the end $D^-$ of another Delaunay surface and the half axis of $D^-$ in $P$ is a bounded distance from the negative $x_1$-axis.
![A picture of $M_1$ with two bubbles blown on its mean convex side.[]{data-label="fig4"}](fig5.jpg){width="5.6in"}
Similar to our previous arguments, we define for each $n\in {\mbox{\bb Z}}$ with $|n|$ sufficiently large, curves $\g_n, \G_n, \widehat{\G}_n$ and points $q_n$ as we did before (see Figures \[fig2\] and \[fig4\]). For each $n\in {\mbox{\bb N}}$ sufficiently large, we define the line segment $a(n)\subset P$ whose end points are the points $q_{-n},\, q_n$. Now define for any sufficiently large $n$, the compact region $T(n)\subset P$ bounded by the line segments $\widehat{\G}_{-n}$, $\widehat{\G}_n$, $a(n)$ and the line segment joining $(-n, 0,0)$ to $(n,0,0)$ and let $T(n)\times {\mbox{\bb R}}\subset {{\mbox{\bb R}}^3}$ be the related convex domain in ${{\mbox{\bb R}}^3}$. Let $M_1(n)$ be the component of $M_1\cap (T(n)\times {\mbox{\bb R}})$ which contains the point $p_{-n}$. Note that $M_1(n)$ is compact with boundary consisting of an almost circle $C(\G_{-n})$ which is a bigraph over an arc on $\G_{-n}$, and an almost circle $C(\G_n)$ which is a bigraph over an arc on $\G_n$ and a collection of bigraph components over a collection of intervals $I_n$ in the line segment $a(n)$.
As in previous arguments, an assertion similar to Assertion \[limit\] holds in the new setting. With this slightly modified assertion, one finds that the almost circles $C(\G_{-n})$ and $C(\G_n)$ bound a compact domain $\widehat{M}_1(n)\subset M_1$. A slight modification of the proof of Assertion \[final\] implies $M_1$ is cylindrically bounded and so, by the main theorem in [@kks1], $M_1$ is a Delaunay surface. Note that the axis of $M_1$ is an infinite line in $\De_1$ and so $\De_2$ also lies in a halfplane of $P$. The arguments above prove that $M_2$ is also a Delaunay surface, which completes the proof of the theorem.
[\[r4.7\] Using techniques in [@kap1; @map], for every integer $n>1$, it is possible to construct a surface $M_n$ with empty boundary and $n$ ends, none of which are annular, which satisfies the hypotheses of the surface $M$ in the statement of Theorem \[special\] except for the bounded second fundamental form hypothesis. Hence, the hypothesis in the theorem that $M$ has bounded second fundamental form is a necessary one in order for the conclusion of the theorem to hold. ]{}
\
Mathematics Department, University of Massachusetts, Amherst, MA 01003
[^1]: This material is based upon work supported by the NSF under Award No. DMS-0703213. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF.
[^2]: In this manuscript, “Delaunay surfaces” refers to the embedded $CMC$ surfaces of revolution discovered by Delaunay [@de1] in 1841.
[^3]: A surface has [*finite topology*]{} if it is homeomorphic to a closed surface minus a finite number of points.
[^4]: A compact surface $\S$ immersed in ${{\mbox{\bb R}}^3}$ is [*Alexandrov embedded*]{} if $\S$ is the boundary of a compact three-manifold immersed in ${{\mbox{\bb R}}^3}$.
[^5]: A [*handlebody*]{} is a three-manifold with boundary which is homeomorphic to a closed regular neighborhood of some connected, properly embedded simplicial one-complex in ${{\mbox{\bb R}}^3}$.
[^6]: A proper noncompact domain $E\subset M$ is called an [*end representative*]{} for $M$ if it is connected and has compact boundary.
[^7]: $M$ is [*locally rigid*]{} if any one-parameter family of isometric immersions $M_t$ of $M$, $t\in [0,\ve)$, $M_0=M$, with the same mean curvature as $M$ is obtained by a family of rigid motions of $M$.
[^8]: This uniform intrinsic distance estimate holds since $CMC$ graphs are strongly stable (existence of a positive Jacobi function) and there are no strongly stable, complete $CMC$ surfaces in ${{\mbox{\bb R}}^3}$; see Theorem 2 in [@ror1] for a proof of this result.
---
author:
- 'Chris Blake[^1]'
- Alexandra Amon
- Marika Asgari
- Maciej Bilicki
- Andrej Dvornik
- Thomas Erben
- Benjamin Giblin
- Karl Glazebrook
- Catherine Heymans
- Hendrik Hildebrandt
- Benjamin Joachimi
- Shahab Joudaki
- Arun Kannawadi
- Konrad Kuijken
- Chris Lidman
- David Parkinson
- HuanYuan Shan
- Tilman Tröster
- Jan Luca van den Busch
- Christian Wolf
- 'Angus H. Wright'
bibliography:
- 'kids1000\_ggl.bib'
date: 'Received date / Accepted date'
title: 'Testing gravity using galaxy-galaxy lensing and clustering amplitudes in KiDS-1000, BOSS and 2dFLenS'
---
Introduction {#secintro}
============
A central goal of modern cosmology is to discover whether the dark energy that appears to fill the Universe is associated with its matter-energy content, laws of gravity, or some alternative physics. A compelling means of distinguishing between these scenarios is to analyse the different observational signatures that are present in the clumpy, inhomogeneous Universe, which powerfully complements measurements of the expansion history of the smooth, homogeneous Universe [e.g., @Linder05; @Wang08; @Guzzo08; @Weinberg13; @Huterer15].
Two important observational probes of the inhomogeneous Universe are the peculiar velocities induced in galaxies by the gravitational collapse of large-scale structure, which are statistically imprinted in galaxy redshift surveys as redshift-space distortions [e.g., @Hamilton98; @Scoccimarro04; @Song09], and the gravitational lensing of light by the cosmic web, which may be measured using cosmic shear surveys [e.g., @Bartelmann01; @Kilbinger15; @Mandelbaum18]. These probes are complementary because they allow differentiation between the two space-time metric potentials which govern the motion of non-relativistic particles such as galaxy tracers, and the gravitational deflection of light. The difference or “gravitational slip” between these potentials is predicted to be zero in General Relativity, but may be significant in modified gravity scenarios [e.g., @Uzan01; @Zhang07; @Jain10; @Bertschinger11; @Clifton12].
Recent advances in weak gravitational lensing datasets, including the Kilo-Degree Survey [KiDS, @Hildebrandt20], the Dark Energy Survey [DES, @Abbott18] and the Subaru Hyper Suprime-Cam Survey [HSC, @Hikage19], have driven dramatic improvements in the quality of these observational tests. Gravitational lensing now permits accurate determination of (combinations of) important cosmological parameters such as the matter density of the Universe and the normalisation of the matter power spectrum, and thereby enables detailed comparisons with other cosmological probes such as galaxy clustering [@Alam17b] and the Cosmic Microwave Background radiation [@Planck18]. Some of these comparisons have yielded intriguing evidence of tensions on both small and large scales [e.g., @Joudaki17; @Leauthaud17; @Lange19; @Hildebrandt20; @Asgari20a], which are currently unresolved.
In this paper we perform a fresh study of this question using the latest weak gravitational lensing dataset from the Kilo-Degree Survey, KiDS-1000 [@Kuijken19], in conjunction with overlapping galaxy spectroscopic redshift survey data from the Baryon Oscillation Spectroscopic Survey [BOSS, @Reid16] and the 2-degree Field Lensing Survey [2dFLenS, @Blake16b]. We focus in particular on a simple implementation of the lensing-clustering test which compares the amplitude of gravitational lensing around foreground galaxies (commonly known as galaxy-galaxy lensing), tracing low-redshift overdensities, with the amplitude of galaxy velocities induced by these overdensities and measured by redshift-space distortions: an amplitude-ratio test. This diagnostic was first proposed by @Zhang07 as the $E_{\mathrm{G}}$ statistic, and implemented in its current form by @Reyes10 using data from the Sloan Digital Sky Survey. These measurements have subsequently been refined by a series of studies [@Blake16a; @Pullen16; @Alam17a; @delaTorre17; @Amon18; @Singh19; @Jullo19] which have used new datasets to increase the accuracy of the amplitude-ratio determination, albeit showing some evidence of internal disagreement.
The availability of the KiDS-1000 dataset and associated calibration samples allows us to perform the most accurate existing amplitude-ratio test, on projected scales up to $100 \, h^{-1}$ Mpc, including rigorous systematic-error control. As part of this analysis we use these datasets and representative simulations to study the efficacy of different corrections for the effects of source photometric redshift errors, comparing different galaxy-galaxy lensing estimators and the relative performance of angular and projected statistics. Our analysis sets the stage for future per-cent level implementations of these tests using new datasets from the Dark Energy Spectroscopic Instrument [DESI, @DESI16], the 4-metre Multi-Object Spectrograph Telescope [4MOST, @deJong19], the Rubin Observatory Legacy Survey of Space and Time [LSST, @Ivezic19] and the [*Euclid*]{} satellite [@Laureijs11].
This paper is structured as follows: in Sect. \[sectheory\] we review the theoretical correlations between weak lensing and overdensity observables, on which galaxy-galaxy lensing studies are based. In Sect. \[secest\] we summarise the angular and projected galaxy-galaxy lensing estimators derived from these correlations, with particular attention to the effect of source photometric redshift errors. In Sect. \[secamp\] we introduce the amplitude-ratio test between galaxy-galaxy lensing and clustering observables, constructed from annular differential surface density statistics, and in Sect. \[seccov\] we derive the analytical covariances of these estimators in the Gaussian approximation, including the effects of the survey window function. We introduce the KiDS-1000 weak lensing and overlapping Luminous Red Galaxy (LRG) spectroscopic datasets in Sect. \[secdata\]. We create representative survey mock catalogues in Sect. \[secmocks\], which we use to verify our cosmological analysis in Sect. \[secmocktests\]. Finally, we describe the results of our cosmological tests applied to the KiDS-LRG datasets in Sect. \[secdatatests\]. We summarise our investigation in Sect. \[secsummary\].
Theory {#sectheory}
======
In this section we briefly review the theoretical expressions for the auto- and cross-correlations between weak gravitational lensing and galaxy overdensity observables, which form the basis of galaxy-galaxy lensing studies.
Lensing convergence and tangential shear
----------------------------------------
The observable effects of weak gravitational lensing, on a source located at co-moving co-ordinate $\chi_{\mathrm{s}}$ in sky direction ${\hat{\Omega}}$, can be expressed in terms of the lensing convergence $\kappa$ [for reviews, see @Bartelmann01; @Kilbinger15; @Mandelbaum18]. The convergence is a weighted integral over co-moving distance $\chi$ of the matter overdensity $\delta_{\mathrm{m}}$ along the line-of-sight, which we can write as, $$\kappa(\chi_{\mathrm{s}},{\hat{\Omega}}) = \frac{3 \Omega_{\mathrm{m}} H_0^2}{2
c^2} \int_0^{\chi_{\mathrm{s}}} d\chi \, \frac{\chi \,
(\chi_{\mathrm{s}} - \chi)}{\chi_{\mathrm{s}}} \,
\frac{\delta_{\mathrm{m}}(\chi,{\hat{\Omega}})}{a(\chi)} ,
\label{eqkappa1}$$ assuming (throughout this paper) a spatially-flat Universe, where $\Omega_{\mathrm{m}}$ is the matter density as a fraction of the critical density, $H_0$ is the Hubble parameter, $c$ is the speed of light, and $a = 1/(1+z)$ is the cosmic scale factor at redshift $z$. We can conveniently write Eq. \[eqkappa1\] in terms of the critical surface mass density at a lens plane at co-moving distance $\chi_{\mathrm{l}}$, $$\Sigma_{\mathrm{c}}(\chi_{\mathrm{l}},\chi_{\mathrm{s}}) =
\frac{c^2}{4 \mathrm{\pi} G}
\frac{\chi_{\mathrm{s}}}{(1+z_{\mathrm{l}}) \, \chi_{\mathrm{l}} \,
(\chi_{\mathrm{s}} - \chi_{\mathrm{l}})} ,$$ where $G$ is the gravitational constant, and $\chi_{\mathrm{s}} >
\chi_{\mathrm{l}}$. Hence, $$\kappa(\chi_{\mathrm{s}},{\hat{\Omega}}) = \overline{\rho_{\mathrm{m}}}
\int_0^{\chi_{\mathrm{s}}} d\chi \,
\Sigma_{\mathrm{c}}^{-1}(\chi,\chi_{\mathrm{s}}) \,
\delta_{\mathrm{m}}(\chi,{\hat{\Omega}}) ,
\label{eqkappa2}$$ where $\overline{\rho_{\mathrm{m}}}$ is the mean matter density.[^2]
Suppose that the overdensity is associated with an isolated lens galaxy at distance $\chi_{\mathrm{l}}$ in an otherwise homogeneous Universe. In this case, Eq. \[eqkappa2\] may be written in the form, $$\kappa(\chi_{\mathrm{l}},\chi_{\mathrm{s}},{\hat{\Omega}}) \approx
\overline{\rho_{\mathrm{m}}} \,
\Sigma_{\mathrm{c}}^{-1}(\chi_{\mathrm{l}},\chi_{\mathrm{s}}) \,
\int_0^{\chi_{\mathrm{s}}} d\chi \, \delta_{\mathrm{m}}(\chi,{\hat{\Omega}}) .
\label{eqkappa3}$$ Eq. \[eqkappa3\] shows that the weak lensing observable can be related to the projected mass density around the lens, $\Sigma = \int
\rho_{\mathrm{m}} \, d\chi$, where $\delta_{\mathrm{m}} =
\rho_{\mathrm{m}}/\overline{\rho_{\mathrm{m}}} - 1$. The convergence may be written in terms of this quantity as, $$\kappa(\chi_{\mathrm{l}},\chi_{\mathrm{s}},{\hat{\Omega}}) \approx
\Sigma_{\mathrm{c}}^{-1}(\chi_{\mathrm{l}},\chi_{\mathrm{s}}) \left(
\Sigma - \overline{\Sigma} \right) ,$$ where $\overline{\Sigma} = \int \overline{\rho_{\mathrm{m}}} \, d\chi$ represents the average background, emphasising that gravitational lensing traces the difference between the mass density and the background.
The average tangential shear $\gamma_{\mathrm{t}}$ at angular separation $\theta$ from an axisymmetric lens is related to the convergence as, $$\langle \gamma_{\mathrm{t}}(\theta) \rangle = \langle
\overline{\kappa}(<\theta) \rangle - \langle \kappa(\theta) \rangle
,
\label{eqgt}$$ where $\overline{\kappa}(<\theta)$ is the mean convergence within separation $\theta$. At the location of the lens, angular separations are related to projected separations as $R = \chi(z_{\mathrm{l}}) \,
\theta$. Defining the differential projected surface mass density around the lens as a function of projected separation, $$\Delta \Sigma(R) = \overline{\Sigma}(<R) - \Sigma(R) ,
\label{eqdsigdef}$$ where, $$\overline{\Sigma}(<R) = \frac{2}{R^2} \int_0^R R' \, \Sigma(R') \, dR'
,$$ we find that for a single source-lens pair at distances $\chi_{\mathrm{l}}$ and $\chi_{\mathrm{s}}$ (omitting the angled brackets), $$\gamma_{\mathrm{t}}(\theta) =
\Sigma_{\mathrm{c}}^{-1}(\chi_{\mathrm{l}},\chi_{\mathrm{s}}) \,
\Delta\Sigma(R) .
\label{eqgtdsig}$$
Galaxy-convergence cross-correlation
------------------------------------
For an ensemble of sources with distance probability distribution $p_{\mathrm{s}}(\chi)$ (normalised such that $\int
p_{\mathrm{s}}(\chi) \, d\chi = 1$), the total convergence in a given sky direction is, $$\begin{split}
\kappa({\hat{\Omega}}) &= \int d\chi_{\mathrm{s}} \,
p_{\mathrm{s}}(\chi_{\mathrm{s}}) \, \kappa(\chi_{\mathrm{s}},{\hat{\Omega}})
\\ &= \overline{\rho_{\mathrm{m}}} \int_0^\infty d\chi \,
\overline{\Sigma_{\mathrm{c}}^{-1}}(\chi) \,
\delta_{\mathrm{m}}(\chi,{\hat{\Omega}}) ,
\end{split}$$ where, $$\overline{\Sigma_{\mathrm{c}}^{-1}}(\chi) = \int_\chi^\infty
d\chi_{\mathrm{s}} \, p_{\mathrm{s}}(\chi_{\mathrm{s}}) \,
\Sigma_{\mathrm{c}}^{-1}(\chi,\chi_{\mathrm{s}}) ,
\label{eqavesigc}$$ with the lower limit of the integral applying because $\Sigma_{\mathrm{c}}^{-1}(\chi_{\mathrm{l}},\chi_{\mathrm{s}}) = 0$ for $\chi_{\mathrm{s}} < \chi_{\mathrm{l}}$. We consider forming the angular cross-correlation function of this convergence field with the projected number overdensity of an ensemble of lenses with distance probability distribution $p_{\mathrm{l}}(\chi)$, $$\delta_{\mathrm{g,2D}}({\hat{\Omega}}) = \int d\chi \, p_{\mathrm{l}}(\chi) \,
\delta_{\mathrm{g}}(\chi,{\hat{\Omega}}) .$$ The galaxy-convergence cross-correlation function at angular separation ${\vec{\theta}}$ is, $$\omega_{\mathrm{g\kappa}}({\vec{\theta}}) = \langle \kappa({\hat{\Omega}}) \,
\delta_{\mathrm{g,2D}}({\hat{\Omega}}+ {\vec{\theta}}) \rangle .$$ Expressing the overdensity fields in terms of their Fourier components we find, after some algebra, $$\omega_{\mathrm{g\kappa}}({\vec{\theta}}) = \int \frac{d^2{\vec{\ell}}}{(2\mathrm{\pi})^2} \,
C_{\mathrm{g\kappa}}({\vec{\ell}}) \, \mathrm{e}^{-\mathrm{i}{\vec{\ell}}\cdot{\vec{\theta}}} ,
\label{eqwgk}$$ where ${\vec{\ell}}$ is a 2D Fourier wavevector, and the corresponding angular cross-power spectrum $C_{\mathrm{g\kappa}}(\ell)$ is given by [@Guzik01; @Hu04; @Joachimi10], $$C_{\mathrm{g\kappa}}(\ell) = \overline{\rho_{\mathrm{m}}} \int d\chi
\, p_{\mathrm{l}}(\chi) \,
\frac{\overline{\Sigma_{\mathrm{c}}^{-1}}(\chi)}{\chi^2} \,
P_{\mathrm{gm}} \left( \frac{\ell}{\chi},\chi \right) ,
\label{eqclgk}$$ where $P_{\mathrm{gm}}(k,\chi)$ is the 3D galaxy-matter cross-power spectrum at wavenumber $k$ and distance $\chi$. Taking the azimuthal average of Eq. \[eqwgk\] over all directions ${\vec{\theta}}$, the complex exponential integrates to a Bessel function of the first kind, $J_0(x)$, such that, $$\omega_{\mathrm{g\kappa}}(\theta) = \int
\frac{d^2{\vec{\ell}}}{(2\mathrm{\pi})^2} \, C_{\mathrm{g\kappa}}({\vec{\ell}}) \,
J_0(\ell \theta) = \int \frac{d\ell \, \ell}{2\mathrm{\pi}} \,
C_{\mathrm{g\kappa}}(\ell) \, J_0(\ell \theta) .$$ Using Eq. \[eqgt\] and Bessel function identities, we can then obtain an expression for the statistical average tangential shear around an ensemble of lenses, $$\gamma_{\mathrm{t}}(\theta) =
\overline{\omega_{\mathrm{g\kappa}}}(<\theta) -
\omega_{\mathrm{g\kappa}}(\theta) = \int \frac{d\ell \,
\ell}{2\mathrm{\pi}} \, C_{\mathrm{g\kappa}}(\ell) \, J_2(\ell
\theta) .
\label{eqgtmod}$$ Likewise, we can generalise Eq. \[eqgtdsig\] to apply to broad source and lens distributions: $$\gamma_{\mathrm{t}}(\theta) = \int d\chi \, p_{\mathrm{l}}(\chi) \,
\overline{\Sigma_{\mathrm{c}}^{-1}}(\chi) \, \Delta\Sigma(R,\chi) .
\label{eqgtdsigbroad}$$ Comparing the formulations of Eqs. \[eqgtmod\] and \[eqgtdsigbroad\] allows us to demonstrate that, $$\Sigma(R) = \overline{\rho_{\mathrm{m}}} \int_{-\infty}^\infty d\Pi
\, \left[ 1 + \xi_{\mathrm{gm}}(R,\Pi) \right] ,
\label{eqsig}$$ in terms of the 3D galaxy-matter cross-correlation function $\xi_{\mathrm{gm}}(R,\Pi)$ at projected separation $R$ and line-of-sight separation $\Pi$, where the constant term “$1+$” cancels out in the evaluation of the observable $\Delta \Sigma$. After some algebra we find, $$\Delta \Sigma(R) = \overline{\rho_{\mathrm{m}}} \int_0^\infty dr \,
W(r,R) \, \xi_{\mathrm{gm}}(r) ,$$ where, $$W(r,R) = \frac{4r^2}{R^2} - \left[ \frac{4r \sqrt{r^2-R^2}}{R^2} +
\frac{2r}{\sqrt{r^2 - R^2}} \right] \, H(r-R) ,$$ where $H(x) = 0$ if $x<0$ and $H(x) = 1$ if $x>0$ is the Heaviside step function. The relations in this section make the approximations of using the Limber equation [@Limber53] and neglecting additional effects such as cosmic magnification [@Unruh20] and intrinsic alignments [@Joachimi15].
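The projection kernel $W(r,R)$ above is straightforward to evaluate numerically. The following sketch (with assumed function names, a user-supplied $\xi_{\mathrm{gm}}$ grid, and a hand-written trapezoidal rule truncated at the edge of the grid) illustrates how $\Delta\Sigma(R)$ would be built from a tabulated correlation function; the integrable singularity at $r = R$ is assumed to be adequately resolved by the grid.

```python
import numpy as np

def W_kernel(r, R):
    """Projection kernel W(r, R): equal to 4 r^2 / R^2 for r < R,
    with the additional (Heaviside) term switched on for r > R."""
    r = np.asarray(r, dtype=float)
    w = 4.0 * r**2 / R**2
    above = r > R
    root = np.sqrt(r[above]**2 - R**2)
    w[above] -= 4.0 * r[above] * root / R**2 + 2.0 * r[above] / root
    return w

def delta_sigma_from_xi(R, r_grid, xi_gm, rho_m=1.0):
    """Delta Sigma(R) = rho_m * int dr W(r, R) xi_gm(r), by a
    trapezoidal rule written out explicitly (illustrative sketch)."""
    integrand = W_kernel(r_grid, R) * xi_gm
    return rho_m * np.sum(0.5 * (integrand[1:] + integrand[:-1])
                          * np.diff(r_grid))
```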
Auto-correlation functions
--------------------------
In order to determine the analytical covariance in Sect. \[seccov\], we will also need expressions for the auto-correlation functions of the convergence, $\omega_{\mathrm{\kappa\kappa}}({\vec{\theta}}) = \int
\frac{d^2{\vec{\ell}}}{(2\mathrm{\pi})^2} \, C_{\mathrm{\kappa\kappa}}({\vec{\ell}}) \,
\mathrm{e}^{-\mathrm{i}{\vec{\ell}}\cdot{\vec{\theta}}}$, and the galaxy overdensity, $\omega_{\mathrm{gg}}({\vec{\theta}}) = \int \frac{d^2{\vec{\ell}}}{(2\mathrm{\pi})^2} \,
C_{\mathrm{gg}}({\vec{\ell}}) \, \mathrm{e}^{-\mathrm{i}{\vec{\ell}}\cdot{\vec{\theta}}}$. Given two source populations with distance probability distributions $p_{\mathrm{s},1}(\chi)$ and $p_{\mathrm{s},2}(\chi)$, and associated integrated critical density functions $\overline{\Sigma^{-1}_{\mathrm{c},1}}$ and $\overline{\Sigma^{-1}_{\mathrm{c},2}}$, the angular power spectrum of the convergence is given by, $$C_{\mathrm{\kappa\kappa}}(\ell) = \overline{\rho_{\mathrm{m}}}^2
\int d\chi \, \frac{\overline{\Sigma_{\mathrm{c},1}^{-1}}(\chi) \,
\overline{\Sigma_{\mathrm{c},2}^{-1}}(\chi)}{\chi^2} \,
P_{\mathrm{mm}}\left( \frac{\ell}{\chi},\chi \right) ,$$ where $P_{\mathrm{mm}}(k,\chi)$ is the 3D (non-linear) matter power spectrum at wavenumber $k$ and distance $\chi$. Likewise, for two projected galaxy overdensity fields with distance probability distributions $p_{\mathrm{l},1}(\chi)$ and $p_{\mathrm{l},2}(\chi)$, the angular power spectrum is, $$C_{\mathrm{gg}}(\ell) = \int d\chi \, \frac{p_{\mathrm{l},1}(\chi) \,
p_{\mathrm{l},2}(\chi)}{\chi^2} \, P_{\mathrm{gg}}\left(
\frac{\ell}{\chi},\chi \right) ,
\label{eqclgg}$$ where $P_{\mathrm{gg}}(k,\chi)$ is the 3D galaxy power spectrum.
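As an illustration of the Limber integrals above, a minimal numerical sketch of Eq. \[eqclgg\] for a single lens sample (so $p_{\mathrm{l},1} = p_{\mathrm{l},2}$) might look as follows; the distance grid, the normalised $p_{\mathrm{l}}(\chi)$ and the callable $P_{\mathrm{gg}}(k,\chi)$ are all assumed inputs, whereas real pipelines would use tabulated power spectra.

```python
import numpy as np

def limber_cl_gg(ell, chi_grid, p_l, P_gg):
    """C_gg(ell) = int dchi p_l(chi)^2 / chi^2 * P_gg(ell/chi, chi),
    evaluated by an explicit trapezoidal rule on chi_grid.  p_l is a
    callable returning the normalised lens distance distribution, and
    P_gg(k, chi) a callable 3D galaxy power spectrum (sketch only)."""
    integrand = p_l(chi_grid)**2 / chi_grid**2 \
        * np.array([P_gg(ell / chi, chi) for chi in chi_grid])
    return np.sum(0.5 * (integrand[1:] + integrand[:-1])
                  * np.diff(chi_grid))
```

A convenient sanity check: for a constant power spectrum and a top-hat $p_{\mathrm{l}}$ equal to unity on $1 < \chi < 2$, the integral reduces to $\int_1^2 \chi^{-2}\, d\chi = 1/2$.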
Bias model {#secbias}
----------
We computed the linear matter power spectrum $P_{\mathrm{L}}(k)$ in our models using the CAMB software package [@Lewis00], and evaluated the non-linear matter power spectrum $P_{\mathrm{mm}}(k)$ including the “halofit” corrections [@Smith03; @Takahashi12; we define the fiducial cosmological parameters used for the simulation and data analysis in subsequent sections]. We adopted a model for the non-linear galaxy-galaxy and galaxy-matter 2-point functions, appearing in Eqs. \[eqclgk\] and \[eqclgg\], following @Baldauf10 and @Mandelbaum13. This model assumes a local, non-linear galaxy bias relation via a Taylor expansion of the galaxy density field in terms of the matter overdensity, $\delta_{\mathrm{g}} = b_{\mathrm{L}} \, \delta_{\mathrm{m}} +
\frac{1}{2} b_{\mathrm{NL}} \, \delta_{\mathrm{m}}^2 + ...$, defining a linear bias parameter $b_{\mathrm{L}}$ and non-linear bias parameter $b_{\mathrm{NL}}$. The auto- and cross-correlation statistics in this model can be written in the form [@McDonald06; @Smith09], $$\begin{split}
\xi_{\mathrm{gg}} &= b_{\mathrm{L}}^2 \, \xi_{\mathrm{mm}} + 2 \,
b_{\mathrm{L}} \, b_{\mathrm{NL}} \, \xi_{\mathrm{A}} +
\frac{1}{2} \, b_{\mathrm{NL}}^2 \, \xi_{\mathrm{B}} ,
\\ \xi_{\mathrm{gm}} &= b_{\mathrm{L}} \, \xi_{\mathrm{mm}} +
b_{\mathrm{NL}} \, \xi_{\mathrm{A}} ,
\end{split}$$ where $\xi_{\mathrm{mm}}$ is the correlation function corresponding to $P_{\mathrm{mm}}(k)$, and $\xi_{\mathrm{A}}$ and $\xi_{\mathrm{B}}$ are obtained by computing the Fourier transforms of, $$\begin{split}
A(k) &= \int \frac{d^3{\vec{q}}}{(2\mathrm{\pi})^3} \, F_2({\vec{q}},{\vec{k}}-{\vec{q}})
\, P_{\mathrm{L}}(q) \, P_{\mathrm{L}}(|{\vec{k}}-{\vec{q}}|) , \\ B(k) &=
\int \frac{d^3{\vec{q}}}{(2\mathrm{\pi})^3} \, P_{\mathrm{L}}(q) \,
P_{\mathrm{L}}(|{\vec{k}}-{\vec{q}}|) ,
\end{split}$$ which depend on the mode-coupling kernel in standard perturbation theory, $$F_2({\vec{q}}_1,{\vec{q}}_2) = \frac{5}{7} + \frac{1}{2} \frac{{\vec{q}}_1
. {\vec{q}}_2}{q_1 q_2} \left( \frac{q_1}{q_2} + \frac{q_2}{q_1} \right)
+ \frac{2}{7} \left( \frac{{\vec{q}}_1 . {\vec{q}}_2}{q_1 q_2} \right)^2 .$$ We evaluated these integrals using the [FAST]{} software package [@McEwen16] and note that $\xi_{\mathrm{B}} =
\xi_{\mathrm{L}}^2$, where $\xi_{\mathrm{L}}$ is the correlation function corresponding to $P_{\mathrm{L}}(k)$. This model is only expected to be valid on scales exceeding the virial radius of dark matter haloes, since it does not address halo exclusion, the distribution of galaxies within haloes, or other forms of stochastic or non-local effects [@Asgari20b]. However, this 2-parameter bias model is adequate for our large-scale analysis, which we verify using representative mock catalogues in Sect. \[secmocktests\].
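For reference, the $F_2$ mode-coupling kernel entering $A(k)$ above is simple to code directly. The sketch below (with an assumed function name) evaluates it for a pair of 3-vectors:

```python
import numpy as np

def F2(q1, q2):
    """Standard perturbation-theory mode-coupling kernel
    F2(q1, q2) = 5/7 + (mu/2)(q1/q2 + q2/q1) + (2/7) mu^2,
    where mu is the cosine of the angle between the modes."""
    a1 = np.linalg.norm(q1)
    a2 = np.linalg.norm(q2)
    mu = np.dot(q1, q2) / (a1 * a2)
    return 5.0/7.0 + 0.5 * mu * (a1/a2 + a2/a1) + (2.0/7.0) * mu**2
```

Useful limiting values: $F_2 = 2$ for parallel equal-magnitude modes, $5/7$ for orthogonal modes, and $0$ for equal-magnitude anti-parallel modes.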
Estimators {#secest}
==========
In this section we specify estimators that may be used to measure $\gamma_{\mathrm{t}}(\theta)$ and $\Delta \Sigma(R)$ from ensembles of sources and lenses, and discuss how estimates of $\Delta \Sigma(R)$ are affected by uncertainties in source distances.
Average tangential shear $\gamma_{\mathrm{t}}(\theta)$
------------------------------------------------------
We can estimate the average tangential shear of a set of sources (s) around lenses (l) by evaluating the following expression [@Mandelbaum06], which also utilises an unclustered random lens catalogue (r) with the same selection function as the lenses: $${\hat{\gamma_{\mathrm{t}}}}(\theta) = \frac{\sum\limits_{\mathrm{ls}} w_{\mathrm{l}} \,
w_{\mathrm{s}} \, e_{\mathrm{t,ls}} - \sum\limits_{\mathrm{rs}}
w_{\mathrm{r}} \, w_{\mathrm{s}} \,
e_{\mathrm{t,rs}}}{\sum\limits_{\mathrm{rs}} w_{\mathrm{r}} \,
w_{\mathrm{s}}} .
\label{eqgtest}$$ The sums in Eq. \[eqgtest\] are taken over pairs of sources and lenses with angular separations within a bin around $\theta$, $w_i$ are weights applied to the different samples (normalised such that $\sum_{\mathrm{l}} w_{\mathrm{l}} = \sum_{\mathrm{r}}
w_{\mathrm{r}}$), and $e_{\mathrm{t}}$ indicates the tangential ellipticity of the source, projected onto an axis normal to the line joining the source and lens (or random lens).
Eq. \[eqgtest\] involves the random lens catalogue in two places. First, the tangential shear of sources around random lenses is subtracted from the data signal. The subtracted term has an expectation value of zero, but significantly decreases the variance of the estimator at large separations [@Singh17]. Second, the estimator is normalised by a sum over pairs of sources and random lenses, rather than data lenses. This ensures that the estimator is unbiased: the alternative estimator ${\hat{\gamma_{\mathrm{t}}}}= \sum_{\mathrm{ls}}
w_{\mathrm{l}} \, w_{\mathrm{s}} \, e_{\mathrm{t,ls}} /
\sum_{\mathrm{ls}} w_{\mathrm{l}} \, w_{\mathrm{s}}$ is biased by any source-lens clustering (if the angular cross-correlation function $\omega_{\mathrm{ls}}(\theta) \ne 0$), which would modify the denominator of the expression but not the numerator. The magnitude of this effect is sometimes known as the “boost” factor [@Sheldon04], $$B(\theta) = \frac{\sum\limits_{\mathrm{ls}} w_{\mathrm{l}} \,
w_{\mathrm{s}}}{\sum\limits_{\mathrm{rs}} w_{\mathrm{r}} \,
w_{\mathrm{s}}} ,$$ where the sums are again taken over source-lens pairs with angular separations within a given bin. We note that $\langle B(\theta)
\rangle = 1 + \omega_{\mathrm{ls}}(\theta)$ for unity weights.
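Schematically, once source-lens and source-random pairs in a given $\theta$ bin have been collected, Eq. \[eqgtest\] and the boost factor reduce to weighted sums. The sketch below assumes per-pair weight products and tangential ellipticities as inputs; pair finding itself (e.g. with a tree code) is outside its scope, and the function names are illustrative.

```python
import numpy as np

def gamma_t_estimate(w_ls, e_t_ls, w_rs, e_t_rs):
    """Eq. (gtest): data-pair sum minus random-pair sum, normalised by
    the random-pair weights.  w_ls, w_rs hold the per-pair products
    w_l*w_s and w_r*w_s; e_t_* the tangential ellipticities."""
    return (np.sum(w_ls * e_t_ls) - np.sum(w_rs * e_t_rs)) / np.sum(w_rs)

def boost_factor(w_ls, w_rs):
    """B(theta) = sum_ls w_l w_s / sum_rs w_r w_s, whose mean equals
    1 + omega_ls(theta) for unity weights."""
    return np.sum(w_ls) / np.sum(w_rs)
```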
Projected mass density $\Delta\Sigma(R)$
----------------------------------------
Assuming the source and lens distances are known, each source-lens pair may be used to estimate the projected mass density around the lenses by inverting Eq. \[eqgtdsig\]: $${\hat{\Delta \Sigma}}(R) = e_{\mathrm{t}}(R/\chi_{\mathrm{l}}) \,
\Sigma_{\mathrm{c}}(\chi_{\mathrm{l}},\chi_{\mathrm{s}}) .$$ For an ensemble of sources and lenses, the mean projected mass density may then be estimated by an expression analogous to Eq.\[eqgtest\] [@Singh17], $$\begin{split}
{\hat{\Delta \Sigma}}(R) = \frac{\sum\limits_{\mathrm{ls}} w_{\mathrm{l}} \, w_{\mathrm{s}}
\, w_{\mathrm{ls}} \, e_{\mathrm{t,ls}}(R/\chi_{\mathrm{l}}) \,
\Sigma_{\mathrm{c}}(\chi_{\mathrm{l}},\chi_{\mathrm{s}})}{\sum\limits_{\mathrm{rs}}
w_{\mathrm{r}} \, w_{\mathrm{s}} \, w_{\mathrm{rs}}} \\ -
\frac{\sum\limits_{\mathrm{rs}} w_{\mathrm{r}} \, w_{\mathrm{s}} \,
w_{\mathrm{rs}} \, e_{\mathrm{t,rs}}(R/\chi_{\mathrm{r}}) \,
\Sigma_{\mathrm{c}}(\chi_{\mathrm{r}},\chi_{\mathrm{s}})}{\sum\limits_{\mathrm{rs}}
w_{\mathrm{r}} \, w_{\mathrm{s}} \, w_{\mathrm{rs}}} ,
\end{split}
\label{eqdsigest1}$$ where we have allowed for an additional pair weight between sources and lenses, $w_{\mathrm{ls}}$, and random lenses, $w_{\mathrm{rs}}$. Assuming a constant shape noise in $e_{\mathrm{t}}$, the noise in the estimate of $\Delta \Sigma(R) = e_{\mathrm{t}} \, \Sigma_{\mathrm{c}}$ from each source-lens pair is proportional to $\Sigma_{\mathrm{c}}$, hence the optimal inverse-variance weight is $w_{\mathrm{ls}} \propto
\Sigma_{\mathrm{c}}^{-2}$, and the weighted estimator may be written, $$\begin{split}
{\hat{\Delta \Sigma}}(R) = \frac{\sum\limits_{\mathrm{ls}} w_{\mathrm{l}} \, w_{\mathrm{s}}
\, e_{\mathrm{t,ls}}(R/\chi_{\mathrm{l}}) \,
\Sigma^{-1}_{\mathrm{c,ls}}}{\sum\limits_{\mathrm{rs}} w_{\mathrm{r}} \,
w_{\mathrm{s}} \, \Sigma^{-2}_{\mathrm{c,rs}}} \\ -
\frac{\sum\limits_{\mathrm{rs}} w_{\mathrm{r}} \, w_{\mathrm{s}} \,
e_{\mathrm{t,rs}}(R/\chi_{\mathrm{r}}) \,
\Sigma^{-1}_{\mathrm{c,rs}}}{\sum\limits_{\mathrm{rs}} w_{\mathrm{r}} \,
w_{\mathrm{s}} \, \Sigma^{-2}_{\mathrm{c,rs}}} .
\end{split}
\label{eqdsigest2}$$
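In code, the inverse-variance weighted estimator of Eq. \[eqdsigest2\] is again a ratio of weighted pair sums. The following sketch takes precomputed per-pair inputs (weight products, tangential ellipticities and inverse critical densities) for one bin of $R$; as above, the input arrays and function name are illustrative assumptions.

```python
import numpy as np

def delta_sigma_estimate(w_ls, e_t_ls, sc_inv_ls, w_rs, e_t_rs, sc_inv_rs):
    """Eq. (dsigest2): lens-source term minus random-source term,
    normalised by sum_rs w_r w_s Sigma_c^{-2}.  sc_inv_* are the
    per-pair inverse critical surface densities Sigma_c^{-1}."""
    norm = np.sum(w_rs * sc_inv_rs**2)
    return (np.sum(w_ls * e_t_ls * sc_inv_ls)
            - np.sum(w_rs * e_t_rs * sc_inv_rs)) / norm
```

For identical pair counts, unit weights and a common $\Sigma_{\mathrm{c}}$, this reduces to $e_{\mathrm{t}} \, \Sigma_{\mathrm{c}}$ per pair, as required.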
Photo-$z$ dilution correction for $\Delta\Sigma(R)$ {#secphotoz}
---------------------------------------------------
The difficulty faced when estimating $\Delta \Sigma$, in the typical case, is that source distances are only accessible through photometric redshifts and may contain significant errors, leading to a bias in the estimate through incorrect scaling factors $\Sigma_{\mathrm{c}}$ (we assume in this discussion that spectroscopic lens distances are available). For example, sources may apparently lie behind lenses according to their photometric redshift, whilst in fact being positioned in front of the lenses and contributing no galaxy-galaxy lensing signal, creating a downward bias in the measurement.
For a single source-lens pair, the estimated value of $\Sigma_{\mathrm{c}}$ for the pair based on the source photometric redshift, $\Sigma_{\mathrm{c,lp}}$, may differ from its true value based on the source spectroscopic redshift, $\Sigma_{\mathrm{c,ls}}$, $${\hat{\Delta \Sigma}}= e_{\mathrm{t}} \, \Sigma_{\mathrm{c,lp}} = \left( \frac{\Delta
\Sigma^{\mathrm{true}}}{\Sigma_{\mathrm{c,ls}}} \right)
\Sigma_{\mathrm{c,lp}} = \left(
\frac{\Sigma_{\mathrm{c,lp}}}{\Sigma_{\mathrm{c,ls}}} \right) \Delta
\Sigma^{\mathrm{true}} .$$ Combining many source-lens pairs allowing for a pair weight $w_{\mathrm{ls}}$ we find, $${\hat{\Delta \Sigma}}= \frac{\sum\limits_{\mathrm{ls}} w_{\mathrm{ls}} \left(
\frac{\Sigma_{\mathrm{c,lp}}}{\Sigma_{\mathrm{c,ls}}} \right)
\Delta \Sigma^{\mathrm{true}}}{\sum\limits_{\mathrm{ls}} w_{\mathrm{ls}}}
.$$ Using the optimal weight $w_{\mathrm{ls}} \propto
\Sigma^{-2}_{\mathrm{c,lp}}$ this expression may be written, $${\hat{\Delta \Sigma}}= \frac{\sum\limits_{\mathrm{ls}} \Sigma^{-1}_{\mathrm{c,lp}} \,
\Sigma^{-1}_{\mathrm{c,ls}} \, \Delta
\Sigma^{\mathrm{true}}}{\sum\limits_{\mathrm{ls}}
\Sigma^{-2}_{\mathrm{c,lp}}} .$$ The estimated value of $\Delta \Sigma$ hence contains a multiplicative bias, $\Delta \Sigma^{\mathrm{true}} = f_{\mathrm{bias}} \, \langle
{\hat{\Delta \Sigma}}\rangle$ where, $$f_{\mathrm{bias}} = \frac{\sum\limits_{\mathrm{ls}}
\Sigma^{-2}_{\mathrm{c,lp}}}{\sum\limits_{\mathrm{ls}}
\Sigma^{-1}_{\mathrm{c,lp}} \, \Sigma^{-1}_{\mathrm{c,ls}}} =
\frac{\sum\limits_{\mathrm{ls}} w_{\mathrm{ls}}}{\sum\limits_{\mathrm{ls}}
w_{\mathrm{ls}} \, \Sigma_{\mathrm{c,lp}} \,
\Sigma^{-1}_{\mathrm{c,ls}}} .
\label{eqfbias}$$ This multiplicative correction factor may be estimated at each lens redshift from a representative subset of sources with complete spectroscopic and photometric redshift information, by evaluating the sums in the numerator and denominator of Eq. \[eqfbias\] [@Nakajima12].
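Given a calibration sample of lens-source pairs with both photometric and spectroscopic information, Eq. \[eqfbias\] is a direct ratio of sums. In the sketch below the inputs are per-pair values of $\Sigma_{\mathrm{c,lp}}$ and $\Sigma^{-1}_{\mathrm{c,ls}}$; passing the spectroscopic value as an inverse lets truly foreground sources, with $\Sigma^{-1}_{\mathrm{c,ls}} = 0$, be handled without infinities (an implementation choice, not a prescription from the text).

```python
import numpy as np

def f_bias(sigma_c_lp, sigma_c_inv_ls):
    """Eq. (fbias) with the optimal weight w = Sigma_c_lp^{-2}:
    f_bias = sum(w) / sum(w * Sigma_c_lp * Sigma_c_ls^{-1}).
    sigma_c_inv_ls = 0 encodes pairs that are unlensed in truth."""
    w = sigma_c_lp**-2.0
    return np.sum(w) / np.sum(w * sigma_c_lp * sigma_c_inv_ls)
```

When the photometric redshifts are perfect the factor is unity; if, say, every true $\Sigma^{-1}_{\mathrm{c,ls}}$ is half the photometric estimate, the raw signal is diluted by a factor of two and $f_{\mathrm{bias}} = 2$ corrects it.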
An alternative formulation of the photo-$z$ dilution correction may be derived from the statistical distance distribution of the sources. Provided that the lens distribution is sufficiently narrow, Eq. \[eqgtdsigbroad\] indicates that an unbiased estimate of $\Delta\Sigma$ from each lens-source pair is, $${\hat{\Delta \Sigma}}(R) = e_{\mathrm{t}}(R/\chi_{\mathrm{l}}) \, \left[
\overline{\Sigma_{\mathrm{c}}^{-1}}(\chi_{\mathrm{l}})
\right]^{-1} ,$$ where $\overline{\Sigma_{\mathrm{c}}^{-1}}$ is evaluated from Eq. \[eqavesigc\] using the source distribution $p_{\mathrm{s}}(\chi)$. This motivates an alternative estimator mirroring Eq. \[eqdsigest2\] [@Sheldon04; @Miyatake15; @Blake16a], $$\begin{split}
{\hat{\Delta \Sigma}}(R) = \frac{\sum\limits_{\mathrm{ls}} w_{\mathrm{l}} \, w_{\mathrm{s}}
\, e_{\mathrm{t,ls}}(R/\chi_{\mathrm{l}}) \,
\overline{\Sigma^{-1}_{\mathrm{c,ls}}}}{\sum\limits_{\mathrm{rs}}
w_{\mathrm{r}} \, w_{\mathrm{s}} \, \left(
\overline{\Sigma^{-1}_{\mathrm{c,rs}}} \right)^2} \\ -
\frac{\sum\limits_{\mathrm{rs}} w_{\mathrm{r}} \, w_{\mathrm{s}} \,
e_{\mathrm{t,rs}}(R/\chi_{\mathrm{r}}) \,
\overline{\Sigma^{-1}_{\mathrm{c,rs}}}}{\sum\limits_{\mathrm{rs}}
w_{\mathrm{r}} \, w_{\mathrm{s}} \, \left(
\overline{\Sigma^{-1}_{\mathrm{c,rs}}} \right)^2} .
\end{split}
\label{eqdsigest3}$$ The accuracy of these potential photo-$z$ dilution corrections must be assessed via simulations, which we consider in Sect. \[secmocktests\]. We trialled both correction methods in our analysis.
Amplitude-ratio test {#secamp}
====================
In this section we construct test statistics which utilise the relative amplitudes of galaxy clustering and galaxy-galaxy lensing to test cosmological models. We first define the input statistics for these tests.
Projected clustering $w_{\mathrm{p}}(R)$
----------------------------------------
The amplitude of galaxy-galaxy lensing is sensitive to the distribution of matter around lens galaxies, projected along the line-of-sight. We can obtain an analogous projected quantity for lens galaxy clustering by integrating the 3D galaxy auto-correlation function, $\xi_{\mathrm{gg}}$ along the line-of-sight, $$w_{\mathrm{p}}(R) = \int_{-\infty}^\infty d\Pi \,
\xi_{\mathrm{gg}}(R,\Pi) ,$$ where $\Pi$ is the line-of-sight separation. This formulation has the additional feature of reducing sensitivity of the clustering statistics to redshift-space distortions, which modulate the apparent radial separations $\Pi$ between galaxy pairs.
We can estimate $w_{\mathrm{p}}(R)$ for a galaxy sample by measuring the galaxy correlation function in $(R,\Pi)$ separation bins, and summing over the $\Pi$ direction in the range $0 < \Pi <
\Pi_{\mathrm{max}}$: $${\hat{w_{\mathrm{p}}}}(R) = 2 \sum_{{\mathrm{bins}} \; i} \Delta \Pi_i \,
{\hat{\xi}}_{\mathrm{gg}}(R,\Pi) .
\label{eqwpest}$$
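Given $\hat{\xi}_{\mathrm{gg}}$ measured on an $(R,\Pi)$ grid, Eq. \[eqwpest\] is a single weighted sum along the line-of-sight axis. A minimal sketch, assuming the binned correlation function and bin edges as inputs, is:

```python
import numpy as np

def wp_estimate(xi_grid, pi_edges):
    """Eq. (wpest): w_p(R) = 2 sum_i Delta Pi_i xi_gg(R, Pi_i), where
    xi_grid has shape (n_R, n_Pi) over bins 0 < Pi < Pi_max and
    pi_edges gives the n_Pi + 1 line-of-sight bin edges."""
    dpi = np.diff(pi_edges)              # Delta Pi_i per bin
    return 2.0 * np.sum(xi_grid * dpi, axis=1)
```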
The Upsilon statistics, $\Upsilon_{\mathrm{gm}}(R)$ and $\Upsilon_{\mathrm{gg}}(R)$
-----------------------------------------------------------------------------------
Eq. \[eqdsigdef\] demonstrates that the amplitude of $\Delta\Sigma(R)$ around lens galaxies depends on the surface density of matter across a range of smaller scales from zero to $R$, and hence on the galaxy-matter cross-correlation coefficient at these scales. Given that this cross-correlation is a complex function which is difficult to model from first principles, it is beneficial to reduce this sensitivity to small-scale information using the annular differential surface density statistic [@Reyes10; @Baldauf10; @Mandelbaum13], $$\begin{split}
\Upsilon_{\mathrm{gm}}(R,R_0) &= \Delta\Sigma(R) - \frac{R_0^2}{R^2}
\Delta\Sigma(R_0) \\ &= \frac{2}{R^2} \int_{R_0}^R dR' \, R' \,
\Sigma(R') - \Sigma(R) + \frac{R_0^2}{R^2} \Sigma(R_0) ,
\end{split}
\label{equpsgm}$$ which is defined such that $\Upsilon_{\mathrm{gm}} = 0$ at some small-scale limit $R = R_0$, chosen to be large enough to reduce the main systematic effects (typically, $R_0$ is somewhat larger than the size scale of dark matter haloes). In this sense, the cumulative effect from the cross-correlation function at scales $R < R_0$ is cancelled, although it is not the case that this small-scale suppression translates to Fourier space [@Baldauf10; @Asgari20b; @Park20]. In any case, the efficacy of these statistics and choice of the $R_0$ value must be validated using simulations, as we consider below.
The corresponding quantity suppressing the small-scale contribution to the galaxy auto-correlations is [@Reyes10], $$\begin{split}
& \Upsilon_{\mathrm{gg}}(R,R_0) = \rho_{\mathrm{c}} \\ & \left[
\frac{2}{R^2} \int_{R_0}^R dR' \, R' \, w_{\mathrm{p}}(R') -
w_{\mathrm{p}}(R) + \frac{R_0^2}{R^2} \, w_{\mathrm{p}}(R_0)
\right] ,
\end{split}
\label{equpsgg}$$ where $\rho_{\mathrm{c}}$ is the critical matter density. We note that if $w_{\mathrm{p}}$ is defined as a step-wise function in bins $R_i$ (with bin limits $R_{i,\mathrm{min}}$ and $R_{i,\mathrm{max}}$) then Eq. \[equpsgg\] may be written in the useful form, $$\Upsilon_{\mathrm{gg}}(R,R_0) = \frac{\rho_{\mathrm{c}}}{R^2}
\sum_{i=j}^k C_i \, w_{\mathrm{p}}(R_i) ,
\label{equpsgg2}$$ where $(k,j)$ are the bins containing $(R,R_0)$, and $$C_i = \begin{cases} R_{i,\mathrm{max}}^2 & i = j
\\ R_{i,\mathrm{max}}^2 - R_{i,\mathrm{min}}^2 & j < i < k
\\ -R_{i,\mathrm{min}}^2 & i = k \end{cases}$$ For convenience we chose $R_0$ to coincide with the centre of a separation bin, such that we could use the direct measurements of $\Delta\Sigma(R_0)$ and $w_{\mathrm{p}}(R_0)$ in Eqs. \[equpsgm\] and \[equpsgg\] without interpolation between bins (we will show below that our results are not sensitive to the choice of $R_0$).
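The coefficients $C_i$ can be checked numerically: for a constant step-wise $w_{\mathrm{p}}$ the coefficients telescope to zero, so $\Upsilon_{\mathrm{gg}}$ vanishes, as required by Eq. \[equpsgg\]. A sketch assuming contiguous separation bins with $R_0$ and $R$ interior to their bins (illustrative only):

```python
import numpy as np

def upsilon_gg(wp, r_lims, R, R0, rho_c=1.0):
    """Evaluate Upsilon_gg(R, R0) from step-wise w_p values via the
    coefficients C_i of Eq. (equpsgg2).  Bin j contains R0 and bin k
    contains R; r_lims holds the bin limits [R_min, R_max] per bin."""
    j = np.searchsorted(r_lims[:, 1], R0)   # bin containing R0
    k = np.searchsorted(r_lims[:, 1], R)    # bin containing R
    C = np.zeros(len(wp))
    C[j] = r_lims[j, 1]**2                                 # i = j
    C[j+1:k] = r_lims[j+1:k, 1]**2 - r_lims[j+1:k, 0]**2   # j < i < k
    C[k] = -r_lims[k, 0]**2                                # i = k
    return rho_c / R**2 * np.dot(C, wp)
```

For contiguous bins the interior terms telescope, so a scale-independent $w_{\mathrm{p}}$ yields exactly zero for any choice of $R$ and $R_0$.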
The $E_{\mathrm{G}}$ test statistic {#seceg}
-----------------------------------
The relative amplitudes of weak gravitational lensing and the rate of assembly of large-scale structure depend on the “gravitational slip” or difference between the two space-time metric potentials. This signature is absent in General Relativity but may be significant in modified gravity scenarios [@Uzan01; @Zhang07; @Jain10; @Bertschinger11; @Clifton12].
@Zhang07 proposed that these amplitudes might be compared by connecting the velocity field and lensing signal generated by a given set of matter overdensities, probed via redshift-space distortions and galaxy-galaxy lensing, respectively. @Reyes10 implemented this consistency test by constructing the statistic, $$E_{\mathrm{G}}(R) = \frac{1}{\beta} \,
\frac{\Upsilon_{\mathrm{gm}}(R,R_0)}{\Upsilon_{\mathrm{gg}}(R,R_0)}
,
\label{eqeg}$$ where $\beta = f/b_{\mathrm{L}}$ is the redshift-space distortion parameter which governs the observed dependence of the strength of galaxy clustering on the angle to the line-of-sight, in terms of the linear growth rate of a perturbation, $f = d\ln{\delta}/d\ln{a}$. Eq. \[eqeg\] is independent of the linear galaxy bias $b_{\mathrm{L}}$ and the amplitude of matter clustering $\sigma_8$, given that $\beta \propto 1/b_{\mathrm{L}}$, $\Upsilon_{\mathrm{gm}}
\propto b_{\mathrm{L}} \, \sigma_8^2$ and $\Upsilon_{\mathrm{gg}}
\propto b_{\mathrm{L}}^2 \, \sigma_8^2$. The prediction of linear perturbation theory for General Relativity in a $\Lambda$CDM Universe is a scale-independent value $E_{\mathrm{G}}(z) =
\Omega_{\mathrm{m}}(z=0)/f(z)$, although see @Leonard15 for a detailed discussion of this approximation.
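For reference, this prediction can be evaluated with the widely used growth-index approximation $f(z) \approx \Omega_{\mathrm{m}}(z)^{0.55}$ (an approximation we adopt here purely for illustration; it is not how $f$ is obtained in the analysis):

```python
def eg_gr_prediction(z, omega_m0, gamma=0.55):
    """GR/LCDM prediction E_G(z) = Omega_m(z=0) / f(z) for a flat
    Universe, with f(z) approximated by Omega_m(z)**gamma (the
    growth-index approximation, assumed here for illustration)."""
    a = 1.0 / (1.0 + z)
    # flat LCDM: Omega_m(z) = Omega_m0 / (Omega_m0 + Omega_Lambda * a^3)
    omega_m_z = omega_m0 / (omega_m0 + (1.0 - omega_m0) * a**3)
    return omega_m0 / omega_m_z**gamma
```

Since $f \rightarrow 1$ at high redshift, $E_{\mathrm{G}}$ decreases towards $\Omega_{\mathrm{m}}(z=0)$ as $z$ increases.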
Covariance of estimators {#seccov}
========================
In this section we present analytical formulations in the Gaussian approximation for the covariance of estimates of $\gamma_{\mathrm{t}}(\theta)$ and $\Delta \Sigma(R)$, and model how this covariance is modulated by the presence of a survey mask (i.e., by edge effects). Our covariance determination hence neglects non-Gaussian and super-sample variance components. This is a reasonable approximation in the context of the current analysis as these terms are sub-dominant (we refer the reader to Joachimi et al. (in prep.) for more details on the relative amplitude of the different covariance terms in the context of KiDS-1000).
Covariance of average tangential shear {#seccovgt}
--------------------------------------
In Appendix \[seccovgtap\] we derive the covariance of $\gamma_{\mathrm{t}}$ averaged within angular bins $\theta_m$ and $\theta_n$: $$\mathrm{Cov}[\gamma_{\mathrm{t}}^{ij}(\theta_m),\gamma_{\mathrm{t}}^{kl}(\theta_n)]
= \frac{1}{\Omega} \int \frac{d\ell \, \ell}{2\mathrm{\pi}} \,
\sigma^2(\ell) \, \overline{J_{2,m}}(\ell) \,
\overline{J_{2,n}}(\ell) ,
\label{eqgtcov}$$ where $\gamma_{\mathrm{t}}^{ij}$ denotes the average tangential shear of source sample $j$ around lens sample $i$, $\Omega$ is the total survey angular area in steradians, and $\overline{J_{2,n}}(\ell) =
\int_{\theta_{1,n}}^{\theta_{2,n}} \frac{2\mathrm{\pi}\theta \,
d\theta}{\Omega_n} \, J_2(\ell \theta)$, where the integral is between the bin limits $\theta_{1,n}$ and $\theta_{2,n}$ and $\Omega_n$ is the angular area of bin $n$ (i.e., the area of the annulus between the bin limits). The variance $\sigma^2(\ell)$ is given by the expression for Gaussian random fields [e.g., @Hu04; @Bernstein09; @Joachimi10; @Krause17; @Blanchard19], $$\sigma^2(\ell) = C_{\mathrm{g\kappa}}^{il}(\ell) \,
C_{\mathrm{g\kappa}}^{kj}(\ell) + \left[
C_{\mathrm{\kappa\kappa}}^{jl}(\ell) + N_{\mathrm{\kappa\kappa}}^j
\delta^{\mathrm{K}}_{jl} \right] \, \left[
C_{\mathrm{gg}}^{ik}(\ell) + N_{\mathrm{gg}}^i
\delta^{\mathrm{K}}_{ik} \right] ,
\label{eqgtvar}$$ where $\delta^{\mathrm{K}}_{ij}$ is the Kronecker delta. The angular auto- and cross-power spectra appearing in Eq. \[eqgtvar\] may be evaluated using the expressions in Sect. \[sectheory\], and the noise terms are $N_{\mathrm{\kappa\kappa}}^i =
\sigma_{\mathrm{e}}^2/\overline{n}_{\mathrm{s}}^i$ and $N_{\mathrm{gg}}^i = 1/\overline{n}_{\mathrm{l}}^i$, where $\sigma_{\mathrm{e}}$ is the shape noise and $\overline{n}_{\mathrm{l}}^i$ and $\overline{n}_{\mathrm{s}}^i$ are the angular lens and source densities of sample $i$ in units of per steradian.
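The bin-averaged kernel $\overline{J_{2,n}}(\ell)$ appearing in Eq. \[eqgtcov\] is straightforward to evaluate by quadrature; a sketch (the quadrature settings are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def j2_bar(ell, theta1, theta2):
    """Bin-averaged Bessel kernel J2bar_n(ell): the integral of
    2*pi*theta*J_2(ell*theta)/Omega_n over the annulus [theta1, theta2]
    (in radians), with Omega_n = pi*(theta2^2 - theta1^2)."""
    omega_n = np.pi * (theta2**2 - theta1**2)
    integrand = lambda theta: 2.0 * np.pi * theta * jv(2, ell * theta) / omega_n
    val, _ = quad(integrand, theta1, theta2, limit=200)
    return val
```

A closed form also exists, since $\int x \, J_2(x) \, dx = -2 J_0(x) - x J_1(x)$, which provides a useful cross-check on the quadrature.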
Covariance of projected mass density {#seccovdsig}
------------------------------------
The covariance of $\Delta \Sigma$ may be deduced from the covariance of $\gamma_{\mathrm{t}}$ using $\Delta \Sigma(R) =
\gamma_{\mathrm{t}}(\theta) / \overline{\Sigma_{\mathrm{c}}^{-1}}$, and by scaling angular separations to projected separations at an effective lens distance $\chi_{\mathrm{l}}$ using $\theta =
R/\chi_{\mathrm{l}}$ [@Singh17; @Dvornik18; @Shirasaki18]. We can map multipoles $\ell$ to the projected wavevector $k =
\ell/\chi_{\mathrm{l}}$ such that, $$\mathrm{Cov}[\Delta \Sigma^{ij}(R), \Delta \Sigma^{kl}(R')] =
\frac{1}{\Omega} \int \frac{dk \, k}{2\mathrm{\pi}} \, \sigma^2(k)
\, \overline{J_2}(k R) \, \overline{J_2}(k R') ,
\label{eqdsigcov}$$ where we now express the variance in terms of projected power spectra, $$\sigma^2(k) = P^{il}_{\mathrm{g\kappa}}(k) \,
P^{kj}_{\mathrm{g\kappa}}(k) + \left[
P^{jl}_{\mathrm{\kappa\kappa}}(k) + N^j_{\mathrm{\kappa\kappa}}
\delta^{\mathrm{K}}_{jl} \right] \left[ P^{ik}_{\mathrm{gg}}(k) +
N^i_{\mathrm{gg}} \delta^{\mathrm{K}}_{ik} \right] .$$ The power spectra are given by the following relations: $$P_{\mathrm{g\kappa}}(k) = \frac{\chi_{\mathrm{l}}^2 \,
C_{\mathrm{g\kappa}}(k\chi_{\mathrm{l}})}{\left[\overline{\Sigma_{\mathrm{c}}^{-1}}(\chi_{\mathrm{l}})\right]^2}
\approx \overline{\rho_{\mathrm{m}}} \int d\chi \,
p_{\mathrm{l}}(\chi) \, P_{\mathrm{gm}}(k,\chi) ,$$ $$\begin{split}
& P_{\mathrm{\kappa\kappa}}(k) = \chi_{\mathrm{l}}^2 \,
C_{\mathrm{\kappa\kappa}}(k\chi_{\mathrm{l}}) \\ & =
\chi_{\mathrm{l}}^2 \, \overline{\rho_{\mathrm{m}}}^2 \int d\chi \,
\left[ \frac{\overline{\Sigma_{\mathrm{c},1}^{-1}}(\chi) \,
\overline{\Sigma_{\mathrm{c},2}^{-1}}(\chi)}{\overline{\Sigma_{\mathrm{c},1}^{-1}}(\chi_{\mathrm{l}})
\, \overline{\Sigma_{\mathrm{c},2}^{-1}}(\chi_{\mathrm{l}})}
\right] \left( \frac{\chi_{\mathrm{l}}^2}{\chi^2} \right) \,
P_{\mathrm{mm}} \left( \frac{k \, \chi_{\mathrm{l}}}{\chi},\chi
\right) ,
\end{split}$$ $$P_{\mathrm{gg}}(k) = \chi_{\mathrm{l}}^2
C_{\mathrm{gg}}(k\chi_{\mathrm{l}}) \approx \int d\chi \, p_1(\chi) \,
p_2(\chi) \, P_{\mathrm{gg}}(k,\chi) ,$$ and the noise terms are, $$N_{\mathrm{\kappa\kappa}} = \frac{\sigma_{\mathrm{e}}^2 \,
\chi_{\mathrm{l}}^2}{\overline{n}_{\mathrm{s}} \, \left[
\overline{\Sigma_{\mathrm{c}}^{-1}}(\chi_{\mathrm{l}}) \right]^2}
, \; \; \; N_{\mathrm{gg}} =
\frac{\chi_{\mathrm{l}}^2}{\overline{n}_{\mathrm{l}}} .$$
Covariance of remaining statistics
----------------------------------
The expression for the analytical covariance of $w_{\mathrm{p}}(R)$ may be derived as [see also, @Singh17], $$\begin{split}
&\mathrm{Cov}[w_{\mathrm{p}}(R), w_{\mathrm{p}}(R')] = \\ &\frac{2
L_\parallel \Pi_{\mathrm{max}}}{\Omega} \int \frac{dk \,
k}{2\mathrm{\pi}} \, \sigma^2(k) \, J_0(k R) \, J_0(k R') ,
\end{split}$$ where $L_\parallel$ is the total co-moving depth of the lens redshift slice and the expression for the variance is, $$\sigma^2(k) = \left[ P_{\mathrm{gg}}(k) + N_{\mathrm{gg}} \right]^2 ,$$ where $P_{\mathrm{gg}}(k)$ and $N_{\mathrm{gg}}$ are the 2D projected power spectra and noise as defined in Sect. \[seccovdsig\].
We determined the analytical covariance of $\Upsilon_{\mathrm{gm}}(R,R_0)$ from the covariance of $\Delta\Sigma(R)$: $$\begin{split}
& \mathrm{Cov}[\Upsilon_{\mathrm{gm}}(R,R_0) ,
\Upsilon_{\mathrm{gm}}(R',R_0)] = \mathrm{Cov}[\Delta\Sigma(R) ,
\Delta\Sigma(R')] \\ &- \frac{R_0^2}{R'^2}
\mathrm{Cov}[\Delta\Sigma(R) , \Delta\Sigma(R_0) ] -
\frac{R_0^2}{R^2} \mathrm{Cov}[\Delta\Sigma(R') , \Delta\Sigma(R_0)
] \\ &+ \frac{R_0^4}{R^2 R'^2} \mathrm{Var}[\Delta\Sigma(R_0)] .
\end{split}
\label{equpsgmcov}$$ For the case of $\Upsilon_{\mathrm{gg}}(R,R_0)$, we propagated the covariance using Equation \[equpsgg2\]: $$\begin{split}
&\mathrm{Cov}[\Upsilon_{\mathrm{gg}}(R,R_0) ,
\Upsilon_{\mathrm{gg}}(R',R_0)] =
\\ &\frac{\rho_{\mathrm{c}}^2}{R^2 \, R'^2} \sum_i \sum_j C_i \, C_j
\, \mathrm{Cov}[w_{\mathrm{p}}(R_i) , w_{\mathrm{p}}(R_j)] .
\end{split}
\label{equpsggcov}$$
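Eq. \[equpsggcov\] is a linear propagation and can be written as a matrix sandwich; a sketch, where the rows of the coefficient matrix (hypothetically precomputed for each output scale) hold the step-wise $C_i$ values:

```python
import numpy as np

def upsilon_gg_cov(cov_wp, C, R, rho_c=1.0):
    """Propagate Cov[w_p] into Cov[Upsilon_gg] following Eq. (equpsggcov):
    Cov_ups[m, n] = (rho_c^2 / (R_m^2 R_n^2)) * sum_ij C[m,i] C[n,j] Cov_wp[i,j].
    C[m, :] holds the step-wise coefficients for output scale R[m]."""
    A = (rho_c / R[:, None]**2) * C   # each row scaled by rho_c / R_m^2
    return A @ cov_wp @ A.T
```

The same pattern (a coefficient matrix acting on both sides of the input covariance) applies to any linear transformation of the data vector.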
We evaluated the covariance of the $E_{\mathrm{G}}$ statistic, where required, by assuming small fluctuations in the variables in Eq. \[eqeg\] with respect to their mean, neglecting any correlations between the measurements: $$\begin{split}
&\frac{\mathrm{Cov}[E_{\mathrm{G}}(R) \,
E_{\mathrm{G}}(R')]}{E_{\mathrm{G}}(R) \, E_{\mathrm{G}}(R')} =
\frac{\mathrm{Cov}[\Upsilon_{\mathrm{gm}}(R,R_0),\Upsilon_{\mathrm{gm}}(R',R_0)]}{\Upsilon_{\mathrm{gm}}(R,R_0)
\, \Upsilon_{\mathrm{gm}}(R',R_0)} \\ &+
\frac{\mathrm{Cov}[\Upsilon_{\mathrm{gg}}(R,R_0),
\Upsilon_{\mathrm{gg}}(R',R_0)]}{\Upsilon_{\mathrm{gg}}(R,R_0)
\, \Upsilon_{\mathrm{gg}}(R',R_0)} +
\frac{\sigma_\beta^2}{\beta^2} ,
\end{split}
\label{eqegcov}$$ where $\sigma_\beta$ is the error in the measurement of $\beta$. This neglect of correlations is an approximation, justified in the case of our dataset by the fact that the sky area used for the galaxy clustering measurement is substantially different to the subsample used for galaxy-galaxy lensing (see Joachimi et al. (in prep.) for a detailed justification of this approximation), and that the projected lens clustering measurement ($\Upsilon_{\mathrm{gg}}$) is largely insensitive to redshift-space distortions ($\beta$) owing to the projection over the line-of-sight separations. We note that in our fiducial fitting approach, we determined the scale-independent statistic $\langle E_G \rangle$ through direct fits to $\Upsilon_{\mathrm{gm}}$ and $\Upsilon_{\mathrm{gg}}$ as discussed in Sect. \[secdatatests\], without requiring the covariance of $E_G(R)$.
Modification of noise term
--------------------------
We can replace the noise terms in Sects. \[seccovgt\] and \[seccovdsig\] with a more accurate computation using the survey source and lens distributions. Neglecting the random lens term (which is not important on the small scales for which the noise term is significant), we find that the variance associated with the $\gamma_{\mathrm{t}}$ estimator in Eq. \[eqgtest\] is [e.g., @Miyatake19], $$\mathrm{Var}[\gamma_{\mathrm{t}}(\theta)] = \frac{\sum\limits_{\mathrm{ls}}
w_{\mathrm{l}}^2 \, w_{\mathrm{s}}^2 \,
\sigma_{\mathrm{e}}^2}{\left( \sum\limits_{\mathrm{rs}} w_{\mathrm{r}} \,
w_{\mathrm{s}} \right)^2} .$$ Likewise, the variance associated with the $\Delta \Sigma$ estimator in Eq. \[eqdsigest1\] is, $$\mathrm{Var}[\Delta\Sigma(R)] = \frac{\sum\limits_{\mathrm{ls}}
w_{\mathrm{l}}^2 \, w_{\mathrm{s}}^2 \, w_{\mathrm{ls}}^2 \,
\sigma_{\mathrm{e}}^2 \left( \Sigma_{\mathrm{c,ls}}
\right)^2}{\left( \sum\limits_{\mathrm{rs}} w_{\mathrm{r}} \,
w_{\mathrm{s}} \, w_{\mathrm{rs}} \right)^2} .$$ We adopted these noise terms in our covariance model.
Modification for survey window
------------------------------
Eqs. \[eqgtcov\] and \[eqdsigcov\] for the analytical covariance are modified by the survey window function. We can intuitively understand the need for this modification by considering that, whilst Fourier transforms assume periodic boundary conditions, the boundaries of the survey restrict the number of source-lens pairs on scales that are a significant fraction of the survey dimensions.
In Appendix \[seccovwinap\] we derive how the covariance of a cross-correlation function $\xi(r)$ between two Gaussian fields is modified by the window function of the fields, $W_1({\vec{x}})$ and $W_2({\vec{x}})$ [see also, @Beutler17]. We find, $$\begin{split}
& \mathrm{Cov}[\xi(r),\xi(s)] \approx \frac{A_3(r,s)}{A_2(r) \,
A_2(s)} \\ & \frac{1}{2\mathrm{\pi}} \int dk \, k \, \left[
P_{11}(k) \, P_{22}(k) + P^2_{12}(k) \right] \, J_0(kr) \, J_0(ks)
,
\end{split}$$ where $P_{11}$, $P_{22}$ and $P_{12}$ are the auto- and cross-power spectra of the fields and the pre-factors are given by, $$\begin{split}
A_2(r) &= \int_{\mathrm{bin} \, r} d^2{\vec{r}}\int d^2{\vec{x}}\, W_1({\vec{x}}) \,
W_2({\vec{x}}+{\vec{r}}) \\ A_3(r,s) &= \int_{\mathrm{bin} \, r} d^2{\vec{r}}\int_{\mathrm{bin} \, s} d^2{\vec{s}}\int d^2{\vec{x}}\, A_{12}({\vec{x}},{\vec{r}}) \,
A_{12}({\vec{x}},{\vec{s}}) ,
\end{split}$$ where the integrals over ${\vec{r}}$ and ${\vec{s}}$ are performed within the separation bin, and we have written $A_{12}({\vec{x}},{\vec{r}}) = W_1({\vec{x}}) \,
W_2({\vec{x}}+{\vec{r}})$. We hence approximated the dependence of the covariance on the survey window by replacing the survey area in Eqs. \[eqgtcov\] and \[eqdsigcov\] by the expression $A_2(r) \,
A_2(s) / A_3(r,s)$.
We calculated the terms $A_2$ and $A_3$ using the mean and covariance of the pair count $R_{\mathrm{s}}R_{\mathrm{l}}(r)$ between random source and lens realisations [@Landy93], which have respective densities $\overline{n}_{\mathrm{s}}$ and $\overline{n}_{\mathrm{l}}$. The mean pair count in a separation bin at scale $r$ (between $r_1$ and $r_2$), containing bin area $A_{\mathrm{bin}}(r) = \mathrm{\pi}
(r_2^2 - r_1^2)$, is $$\langle R_{\mathrm{s}}R_{\mathrm{l}}(r) \rangle =
\overline{n}_{\mathrm{s}} \, \overline{n}_{\mathrm{l}} \,
A_{\mathrm{bin}}(r) \, A_2(r),$$ which allows us to find $A_2(r)$, given that the other variables are known. The covariance of the pair count between separation bins $r$ and $s$ is, $$\begin{split}
& \mathrm{Cov}[ R_{\mathrm{s}}R_{\mathrm{l}}(r) ,
R_{\mathrm{s}}R_{\mathrm{l}}(s) ] = \\ & \overline{n}_{\mathrm{s}}
\, \overline{n}_{\mathrm{l}} \, A_{\mathrm{bin}}(r) \left[ A_2(r) \,
\delta^K_{rs} + \left( \overline{n}_{\mathrm{s}} +
\overline{n}_{\mathrm{l}} \right) A_{\mathrm{bin}}(s) \, A_3(r,s)
\right] ,
\end{split}$$ which allows us to determine $A_3(r,s)$. For all the source-lens configurations and separation bins considered in this study, we find that the area correction factor differs from 1.0 by less than $10\%$.
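Given measured random-random pair counts, the mean relation above inverts trivially for $A_2(r)$; a sketch (the pair counts and densities are hypothetical inputs, with densities per unit area):

```python
import numpy as np

def area_factor_A2(rr_counts, r_edges, n_s, n_l):
    """Invert <R_s R_l(r)> = n_s * n_l * A_bin(r) * A_2(r) for A_2(r),
    given random-random pair counts per separation bin, the bin edges,
    and the random source/lens densities."""
    a_bin = np.pi * (r_edges[1:]**2 - r_edges[:-1]**2)  # annulus areas
    return rr_counts / (n_s * n_l * a_bin)
```

On scales much smaller than the survey dimensions, $A_2(r)$ approaches the survey area, so the window correction factor tends to unity.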
Propagation of errors in multiplicative corrections {#seccovprop}
---------------------------------------------------
Galaxy-galaxy lensing measurements are subject to multiplicative correction factors arising from shape measurement calibration (see Sect. \[secdatakids\]) and, in the case of $\Delta\Sigma$, owing to photometric redshift dilution (see Sect. \[secphotoz\]). We propagated the uncertainties in these correction factors, which are correlated between different source and lens samples, into the analytical covariance of the measurements. Taking $\Delta\Sigma$ as an example and writing a general amplitude correction factor as $\alpha$, the relation between the corrected and analytical statistics (denoted by the superscripts “corr” and “ana”, respectively) is, $$\Delta\Sigma^{\mathrm{corr}}_{ijk} = \frac{\alpha_{ij} \,
\Delta\Sigma^{\mathrm{ana}}_{ijk}}{\langle \alpha_{ij} \rangle} ,$$ which is normalised such that $\langle
\Delta\Sigma^{\mathrm{corr}}_{ijk} \rangle =
\Delta\Sigma^{\mathrm{ana}}_{ijk}$, where $i$ denotes the lens sample, $j$ the source sample and $k$ the separation bin. We hence find, $$\begin{split}
\mathrm{Cov}[ \Delta\Sigma^{\mathrm{corr}}_{ijk} ,
\Delta\Sigma^{\mathrm{corr}}_{lmn} ] &= \mathrm{Cov}[
\Delta\Sigma^{\mathrm{ana}}_{ijk} ,
\Delta\Sigma^{\mathrm{ana}}_{lmn} ] \left( 1 + C_{ij,lm} \right)
\\ &+ \langle \Delta\Sigma^{\mathrm{ana}}_{ijk} \rangle \langle
\Delta\Sigma^{\mathrm{ana}}_{lmn} \rangle \, C_{ij,lm} ,
\end{split}
\label{eqcovprop}$$ where, $$C_{ij,lm} = \frac{\langle \alpha_{ij} \alpha_{lm} \rangle - \langle
\alpha_{ij} \rangle \langle \alpha_{lm} \rangle}{\langle
\alpha_{ij} \rangle \langle \alpha_{lm} \rangle} =
\frac{\mathrm{Cov}[ \alpha_{ij}, \alpha_{lm}]}{\langle \alpha_{ij}
\rangle \langle \alpha_{lm} \rangle} .
\label{eqcovprop2}$$ We describe our specific implementation of these equations in the case of the KiDS dataset in Sect. \[secdatatests\].
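Eqs. \[eqcovprop\] and \[eqcovprop2\] can be applied with a few array operations; a sketch, where `idx` (a hypothetical bookkeeping array) maps each data point to its lens-source correction factor:

```python
import numpy as np

def corrected_covariance(cov_ana, mean_ana, cov_alpha, mean_alpha, idx):
    """Propagate correlated multiplicative corrections alpha into the
    analytical covariance, following Eqs. (eqcovprop) and (eqcovprop2).
    idx[p] gives the index of the correction factor applying to data
    point p."""
    # C_{ij,lm} = Cov[alpha_ij, alpha_lm] / (<alpha_ij><alpha_lm>)
    C = cov_alpha / np.outer(mean_alpha, mean_alpha)
    C_full = C[np.ix_(idx, idx)]      # expand to the data-vector ordering
    return cov_ana * (1.0 + C_full) + np.outer(mean_ana, mean_ana) * C_full
```

When the correction factors are perfectly known ($\mathrm{Cov}[\alpha] = 0$), the corrected covariance reduces to the analytical one, as it should.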
Data {#secdata}
====
KiDS-1000 {#secdatakids}
---------
The Kilo-Degree Survey is a large optical wide-field imaging survey optimised for weak gravitational lensing analysis, performed with the OmegaCAM camera on the VLT Survey Telescope at the European Southern Observatory’s Paranal Observatory. The survey covers two regions of sky each containing several hundred square degrees, KiDS-North and KiDS-South, in four filters $(u,g,r,i)$. The companion VISTA-VIKING survey has provided complementary imaging in near-infrared bands $(Z,Y,J,H,K_{\mathrm{s}})$, resulting in a deep, wide, 9-band imaging dataset.
Our study is based on the fourth public data release of the project, KiDS-1000 [@Kuijken19], which comprises $1 \, 006$ deg$^2$ of multi-band data, more than doubling the previously-available coverage. We used an early-science release of the KiDS-1000 shear catalogues, which was created using the exact pipeline version and PSF modelling strategy implemented in @Hildebrandt17 for the KiDS-450 release. We note that these catalogues have not undergone any rigorous assessment for the presence of cosmic shear systematics, but they are sufficient for the galaxy-galaxy lensing science presented in this paper, as this is less susceptible to systematic errors in the lensing catalogues. The raw pixel data were processed by the THELI and ASTRO-WISE pipelines [@Erben13; @deJong15], and source ellipticities were measured using lensfit [@Miller13], assigning an optimal weight for each source, and calibrated by a large suite of image simulations [@Kannawadi19]. Photometric redshifts $z_{\mathrm{B}}$ were determined from the 9-band imaging for each source using the Bayesian code BPZ [@Benitez00], calibrated using spectroscopic sub-samples [@Hildebrandt20], and used to divide the sources into tomographic bins according to the value of $z_{\mathrm{B}}$.
BOSS
----
The Baryon Oscillation Spectroscopic Survey [BOSS, @Dawson13] is the largest existing galaxy redshift survey, which was performed using the Sloan Telescope between 2009 and 2014. BOSS mapped the distribution of $1.5$ million Luminous Red Galaxies (LRGs) and quasars across $\sim 10 \, 000$ deg$^2$, inspiring a series of cosmological analyses including the most accurate existing measurements of baryon acoustic oscillations and redshift-space distortions in the galaxy clustering pattern [@Alam17b]. The final (Data Release 12) large-scale structure catalogues are described by @Reid16; we used the combined LOWZ and CMASS LRG samples in our study.[^3]
2dFLenS
-------
The 2-degree Field Lensing Survey [2dFLenS, @Blake16b] is a galaxy redshift survey performed at the Australian Astronomical Observatory in 2014-2015 using the 2-degree Field spectroscopic instrument, with the goal of extending spectroscopic-redshift coverage of gravitational lensing surveys in the southern sky, particularly the KiDS-South region. The 2dFLenS sample covers an area of 731 deg$^2$ and includes redshifts for $40 \, 531$ LRGs in the redshift range $z <
0.9$, selected by applying BOSS-inspired colour-magnitude cuts to the VST-ATLAS imaging data.[^4] The 2dFLenS dataset has already been utilised in conjunction with the KiDS-450 lensing catalogues to perform a previous implementation of the amplitude-ratio test [@Amon18], to carry out a combined cosmological analysis of cosmic shear tomography, galaxy-galaxy lensing and galaxy multipole power spectra [@Joudaki18], and to determine photometric redshift calibration by cross-correlation [@Johnson17; @Hildebrandt20]. In our study we utilised the 2dFLenS LRG sample which overlapped with the KiDS-1000 pointings in the Southern region.
Fig. \[figoverlap\] illustrates the overlaps of the KiDS-1000 source catalogues in the North and South survey regions with the BOSS and 2dFLenS LRG catalogues.
Mocks {#secmocks}
=====
We used the MICECATv2.0 simulation [@Fosalba15a; @Crocce15; @Fosalba15b] to produce representative KiDS lensing source catalogues and LRG lens catalogues for testing the estimators, models and covariances described above. The Marenostrum Institut de Ciencias de l’Espai (MICE) catalogues cover an octant of the sky ($0 < \mathrm{RA} < 90^\circ$, $0 < \mathrm{Dec} < 90^\circ$) for redshift range $z < 1.4$. We used boundaries at constant RA and Dec to divide this area into 10 sub-samples, each of area 516 deg$^2$. The fiducial set of cosmological parameters for the mock is $\Omega_{\mathrm{m}} = 0.25$, $h = 0.7$, $\Omega_{\mathrm{b}} =
0.044$, $\sigma_8 = 0.8$ and $n_{\mathrm{s}} = 0.95$.
Mock source catalogue
---------------------
We constructed the representative mock source catalogue by applying the following steps (see van den Busch et al. (in prep.) for a full description of the MICE KiDS source mocks):
- The MICE catalogue is non-uniform across the octant: the region $\mathrm{Dec} < 30^\circ$ AND \[($\mathrm{RA} < 30^\circ$) OR ($\mathrm{RA} > 60^\circ$)\] has a shallower redshift distribution than the remainder. We homogenised the catalogue with the cut ${\tt des\_asahi\_full\_i\_true} < 24$, such that we could construct mocks using the complete octant.
- The MICE catalogue shears $(\gamma_1, \gamma_2)$ are defined by the position angle relative to the declination axis. Given the MICE system for mapping 3D positions to (RA, Dec) co-ordinates, the KiDS conventions can be recovered by the following transformations: $\mathrm{RA} \rightarrow 90^\circ - \mathrm{RA}$, $\gamma_1
\rightarrow -\gamma_1$ ($\gamma_2$ is effectively negated twice and therefore unchanged).
- We constructed a KiDS-like photometric realisation based on the galaxy sizes and shapes, median KiDS seeing and limiting magnitudes, including photometric noise (see van den Busch et al. in prep.). We ran BPZ photometric redshift estimation [@Benitez00] on the mock source magnitudes and sizes, assigning $z_{\mathrm{B}}$ values for each object.
- We used a KDTree algorithm to assign weights to the mock sources on the basis of a nearest-neighbour match to the data catalogue in magnitude space.
- We randomly sub-sampled the catalogue to match the KV450 effective source density.
- We produced noisy shear components $(e_1, e_2)$ as $e = (\gamma
+ n)/(1 + n \gamma^*)$ [@Seitz97] where $\gamma = \gamma_1 +
\gamma_2 \, \mathrm{i}$, $e = e_1 + e_2 \, \mathrm{i}$ and $n = n_1
+ n_2 \, \mathrm{i}$, where $n_1$ and $n_2$ are drawn from Gaussian distributions with standard deviation $\sigma_{\mathrm{e}} = 0.288$ [@Hildebrandt20].
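The final step above, combining shear with intrinsic ellipticity noise via the @Seitz97 addition formula, can be sketched with complex arithmetic (the random seed is an arbitrary choice):

```python
import numpy as np

def add_shape_noise(gamma1, gamma2, sigma_e=0.288, seed=12345):
    """Combine mock shear with intrinsic ellipticity noise via
    e = (gamma + n) / (1 + n * conj(gamma))  [the Seitz & Schneider
    formula quoted above], with n1, n2 drawn from N(0, sigma_e)."""
    rng = np.random.default_rng(seed)
    gamma = np.asarray(gamma1) + 1j * np.asarray(gamma2)
    n = rng.normal(0.0, sigma_e, gamma.shape) + \
        1j * rng.normal(0.0, sigma_e, gamma.shape)
    e = (gamma + n) / (1.0 + n * np.conj(gamma))
    return e.real, e.imag
```

In the noise-free limit ($\sigma_{\mathrm{e}} = 0$) the formula reduces to $e = \gamma$, as expected.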
The redshift distribution estimates of the KiDS data and MICE mock source tomographic samples are displayed in the left panel of Fig. \[fignzmock\], illustrating the reasonable match between the two catalogues.
Mock lens catalogue
-------------------
We constructed the representative mock LRG lens catalogue from the MICE simulation as follows. We used the galaxy magnitudes sdss\_g\_true, sdss\_r\_true and sdss\_i\_true, and first applied the MICE evolution correction to these magnitudes as a function of redshift, $m \rightarrow m - 0.8 \, [\mathrm{arctan}(1.5 \, z) -
0.1489]$ [@Crocce15]. We then constructed the LRG lens catalogues using the BOSS LOWZ and CMASS colour cuts in terms of the variables, $$\begin{split}
& c_\parallel = 0.7 \, (g-r) + 1.2 \, (r-i-0.18) , \\
& c_\perp = (r-i) - (g-r)/4 - 0.18 , \\
& d_\perp = (r-i) - (g-r)/8 .
\end{split}$$ Applying the original BOSS colour-magnitude selection cuts [@Eisenstein11] to the MICE mock did not reproduce the BOSS redshift distribution (which is unsurprising, since this mock has not been tuned to do so; the BOSS data is also selected from noisy observed magnitudes). Our approach to resolve this issue, following @Crocce15, was to vary the colour and magnitude selection cuts to minimise the deviation between the mock and data redshift distributions. We applied the following LOWZ selection cuts (where we indicate our changed values in bold font, and the previous values immediately following in square brackets): $$\begin{split}
& 16.0 < r < \mathbf{20.0} [19.6] , \\
& r < \mathbf{13.35} [13.5] + c_\parallel/0.3 , \\
& |c_\perp| < 0.2 .
\end{split}$$ We applied the following CMASS selection cuts: $$\begin{split}
& 17.5 < i < \mathbf{20.06} [19.9] , \\
& r - i < 2 , \\
& d_\perp > 0.55 , \\
& i < \mathbf{19.98} [19.86] + 1.6 \, (d_\perp-0.8) .
\end{split}$$ The resulting redshift distributions of the MICE lens mock (original, and after adjustment of the colour selection cuts) and the BOSS data are shown in the right panel of Fig. \[fignzmock\], illustrating that our modified selection produced a much-improved representation of the BOSS dataset. The clustering amplitude of the MICE LRG mock catalogues was consistent with a galaxy bias factor $b \approx 2$, although it did not precisely match the clustering of the BOSS dataset, since it was not tuned to do so. These representative catalogues nonetheless allowed us to test our analysis procedures.
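The modified colour-magnitude selection can be expressed compactly; a sketch using the adjusted thresholds quoted above (magnitudes are assumed to be already evolution-corrected):

```python
import numpy as np

def mice_lowz_cmass_cuts(g, r, i):
    """Apply the modified LOWZ/CMASS colour-magnitude cuts used to
    select the MICE mock LRG sample (thresholds as quoted above).
    Returns boolean masks over the input magnitude arrays."""
    c_par = 0.7 * (g - r) + 1.2 * (r - i - 0.18)
    c_perp = (r - i) - (g - r) / 4.0 - 0.18
    d_perp = (r - i) - (g - r) / 8.0
    lowz = (16.0 < r) & (r < 20.0) & \
           (r < 13.35 + c_par / 0.3) & (np.abs(c_perp) < 0.2)
    cmass = (17.5 < i) & (i < 20.06) & (r - i < 2.0) & (d_perp > 0.55) & \
            (i < 19.98 + 1.6 * (d_perp - 0.8))
    return lowz, cmass
```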
Simulation tests {#secmocktests}
================
In this section we analyse the representative source and lens catalogues constructed from the MICE mocks described in Sect. \[secmocks\]. Our specific goals are to:
- Test that the non-linear galaxy bias model specified in Sect. \[secbias\] is adequate for modelling the galaxy-galaxy lensing and clustering statistics across the relevant scales.
- Test that the approaches to the photo-$z$ dilution correction of $\Delta\Sigma$ described in Sect. \[secphotoz\] recovered results consistent with those obtained using source spectroscopic redshifts.
- Use the multiple mock realisations and jack-knife techniques to test that the covariance of the estimated statistics is consistent with the analytical Gaussian covariance specified in Sect. \[seccov\].
- Test that the $E_{\mathrm{G}}$ test statistics constructed from the mock as described in Sect. \[seceg\] are consistent with the theoretical expectation, and determine the degree to which this result depends on the choice of the small-scale cut-off parameter $R_0$ (see Eq. \[equpsgm\]).
- Test that, given our galaxy bias model, the galaxy-galaxy lensing and clustering statistics may be jointly described by a normalisation parameter $\sigma_8$ that is consistent with the mock fiducial cosmology, and use this test to assess the relative precision of angular and projected estimators.
Consistent with our subsequent data analysis, we divided the source catalogues into 5 different tomographic samples by the value of the BPZ photometric redshift, with divisions $z_{\mathrm{B}} = [0.1, 0.3,
0.5, 0.7, 0.9, 1.2]$ [following @Hildebrandt20]. We divided the lens catalogue into 5 slices of spectroscopic redshift $z_{\mathrm{l}}$ of width $\Delta z_{\mathrm{l}} = 0.1$ in the range $0.2 < z_{\mathrm{l}} < 0.7$. This narrow spectroscopic slicing minimises systematic effects due to redshift evolution across the lens slice [@Leauthaud17; @Singh19].
Measurements {#secmicemeas}
------------
We measured the following statistics:
- The average tangential shear $\gamma_{\mathrm{t}}(\theta)$ between all tomographic pairs of source and lens samples, in 15 logarithmically-spaced angular bins in the range $0.005^\circ <
\theta < 5^\circ$, using the estimator of Eq. \[eqgtest\]. This measurement is displayed in Fig. \[figgttom\] as the mean of the 10 individual mock realisations (which each have area 516 deg$^2$).
- The projected mass density $\Delta\Sigma(R)$ between all tomographic pairs of source and lens samples, in 15 logarithmically-spaced projected separation bins in the range $0.1 <
R < 100 \, h^{-1}$ Mpc. The mock mean measurement is displayed in Fig. \[figdsigtom\], in units of $h \, M_\odot$ pc$^{-2}$. When performing a $\Delta\Sigma$ measurement between source and lens samples we only included individual source-lens galaxy pairs with $z_{\mathrm{B}} > z_{\mathrm{l}}$, for which the source photometric redshift lies behind the lens spectroscopic redshift (adopting an alternative cut $z_{\mathrm{B}} > z_{\mathrm{l}} + 0.1$ did not change the results significantly). We applied the photo-$z$ dilution correction $f_{\mathrm{bias}}$ computed using Eq. \[eqfbias\] based on the point photo-$z$ values, and we study the efficacy of this correction in Sect. \[secmicephotoz\] below.
- The projected clustering $w_{\mathrm{p}}(R)$ of the lens samples in the same projected separation bins as above, using the estimator of Eq. \[eqwpest\] with $\Pi_{\mathrm{max}} = 100 \, h^{-1}$ Mpc. The mock mean measurement of $w_{\mathrm{p}}(R)$ is displayed as the third row of Fig. \[figstatmice\].
We computed the covariance matrix for each statistic using the analytical Gaussian covariance specified in Sect. \[seccov\], where we initially used a fiducial lens linear bias factor $b_{\mathrm{L}} =
1.8$, and iterated this value following a preliminary fit to the projected lens clustering. Fig. \[figdelsigerrmice\] compares three different determinations of the error in $\Delta\Sigma(R)$ for each individual 516 deg$^2$ realisation of the MICE mocks: from the analytical covariance, from a jack-knife analysis, and from the standard deviation of the 10 realisations. For the jack-knife analysis, we divided the sample into $7 \times 7$ angular regions using constant boundaries in $\mathrm{RA}$ and $\mathrm{Dec}$, such that each region contained the same angular area of $10.5$ deg$^2$. In Fig. \[figdelsigerrmice\] we display the comparison as the ratio of the jack-knife or realisation error to the analytical error.
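For reference, the delete-one jack-knife covariance used in this comparison takes the standard form $\mathrm{Cov} = \frac{N-1}{N} \sum_i (\vec{x}_i - \bar{\vec{x}})(\vec{x}_i - \bar{\vec{x}})^T$; a minimal sketch:

```python
import numpy as np

def jackknife_covariance(stats):
    """Delete-one jack-knife covariance, Cov = (N-1)/N * sum_i
    (x_i - xbar)(x_i - xbar)^T, where row i of `stats` holds the
    statistic re-measured with region i removed (N regions in total)."""
    n = stats.shape[0]
    diff = stats - stats.mean(axis=0)
    return (n - 1.0) / n * diff.T @ diff
```

The $(N-1)/N$ prefactor compensates for the strong correlation between the delete-one samples.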
We find that in the range $R > 1 \, h^{-1}$ Mpc, where the model provides a reasonable description of the measurements, the average (fractional) absolute difference between the analytical and jack-knife errors is $15\%$, and between the analytical and realisation scatter is $21\%$ (which is the expected level of difference given the error in the variance for 10 realisations). Small differences between these error estimates may arise due to the Gaussian approximation in the analytical covariance, the exact details of the survey modelling, or the scale of the jack-knife regions.
Fig. \[figdelsigcovfullmice\] displays the full analytical covariance matrix of $\Delta\Sigma(R)$ – spanning 5 lens redshift slices, 5 source tomographic samples and 15 bins of scale – as a correlation matrix with $375 \times 375$ entries. We note that there are significant off-diagonal correlations between measurements utilising the same lens or source sample, and between different scales.
We combined the correlated $\Delta\Sigma$ measurements for each individual lens redshift slice, averaging over the five different source tomographic samples, using the procedure described in Appendix \[seccombap\]. The resulting combined $\Delta\Sigma$ measurement for each lens redshift sample (again corresponding to a mock mean) is shown as the first row in Fig. \[figstatmice\].
We then used the $\Delta\Sigma(R)$ and $w_{\mathrm{p}}(R)$ measurements to infer the Upsilon statistics, $\Upsilon_{\mathrm{gm}}(R,R_0)$ and $\Upsilon_{\mathrm{gg}}(R,R_0)$, using Eqs. \[equpsgm\] and \[equpsgg\] respectively, adopting a fiducial value $R_0 = 2 \, h^{-1}$ Mpc (we consider the effect of varying this choice below). These measurements are shown in the second and fourth rows of Fig. \[figstatmice\]. We determined the covariance of $\Upsilon_{\mathrm{gm}}(R,R_0)$ and $\Upsilon_{\mathrm{gg}}(R,R_0)$ using error propagation following Eqs. \[equpsgmcov\] and \[equpsggcov\], respectively.
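Given binned $\Delta\Sigma$ measurements, the annular statistic takes the standard form $\Upsilon_{\mathrm{gm}}(R,R_0) = \Delta\Sigma(R) - (R_0/R)^2 \, \Delta\Sigma(R_0)$ (Baldauf et al. 2010), which we assume corresponds to Eq. \[equpsgm\]; the construction of $\Upsilon_{\mathrm{gg}}$ from $w_{\mathrm{p}}$ involves an additional integral and is omitted here. A minimal sketch:

```python
import numpy as np

def upsilon_gm(R, delta_sigma_R, R0, delta_sigma_R0):
    """Annular differential surface density,
    Upsilon_gm(R, R0) = DeltaSigma(R) - (R0/R)^2 DeltaSigma(R0),
    which suppresses the contribution of scales below R0."""
    R = np.asarray(R, dtype=float)
    return delta_sigma_R - (R0 / R)**2 * delta_sigma_R0
```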
Finally, we determined the $E_{\mathrm{G}}(R)$ statistic for each lens redshift slice using Eq. \[eqeg\] where, for the purposes of these tests focussed on galaxy-galaxy lensing, we assumed a fixed input value for the redshift-space distortion parameter $\beta =
f(z)/b_{\mathrm{L}}(z)$, where we evaluated $f(z) =
\Omega_{\mathrm{m}}(z)^{0.55}$ using the fiducial cosmology of the MICE simulation – we note that the exponent $0.55$ is an excellent approximation to the solution of the differential growth equation in $\Lambda$CDM cosmologies [@Linder05] – and $b_{\mathrm{L}}(z)$ is the best-fitting linear bias parameter to the $\Upsilon_{\mathrm{gg}}$ measurements for each lens redshift slice $z$. Hence, systematic errors associated with redshift-space distortions lie beyond the scope of this study, and in our subsequent data analysis we will infer the required $\beta$ values from existing literature. We propagated errors in $E_{\mathrm{G}}$ using Eq. \[eqegcov\] (and assuming no error in $\beta$ in the case of the mocks). Our $E_{\mathrm{G}}$ measurements are shown as the fifth row in Fig. \[figstatmice\].
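The growth-rate approximation and the resulting amplitude-ratio estimate can be written compactly. The sketch below assumes a flat $\Lambda$CDM background; the function names are our own.

```python
import numpy as np

def growth_rate(omega_m0, z):
    """f(z) ~ Omega_m(z)^0.55 in flat LCDM (Linder 2005 approximation)."""
    ez2 = omega_m0 * (1.0 + z)**3 + (1.0 - omega_m0)   # E(z)^2
    return (omega_m0 * (1.0 + z)**3 / ez2)**0.55

def e_g(upsilon_gm, upsilon_gg, beta):
    """Amplitude-ratio statistic E_G(R) = Upsilon_gm / (beta * Upsilon_gg)."""
    return upsilon_gm / (beta * upsilon_gg)
```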
We generated fiducial cosmological models for these statistics using a non-linear matter power spectrum $P(k,z)$ corresponding to the fiducial cosmological parameters of the MICE simulation listed in Sect. \[secmocks\]. We determined the best-fitting linear and non-linear galaxy bias parameters $(b_{\mathrm{L}}, b_{\mathrm{NL}})$ by fitting to the $\Upsilon_{\mathrm{gg}}$ measurements for each lens redshift slice for scales $R > 5 \, h^{-1}$ Mpc, and applied these same bias parameters to the galaxy-galaxy lensing models. The models plotted in Figs. \[figgttom\], \[figdsigtom\] and \[figstatmice\] do not otherwise contain any free parameters. In Fig. \[figstatmice\] we display corresponding $\chi^2$ statistics between the models and mock mean data, demonstrating a satisfactory goodness-of-fit in general. We evaluated the $\chi^2$ statistics for $R > 5 \, h^{-1}$ Mpc for $\Delta\Sigma$, $w_{\mathrm{p}}$ and $\Upsilon_{\mathrm{gg}}$, and using all scales for $\Upsilon_{\mathrm{gm}}$ and $E_{\mathrm{G}}$.
Photo-$z$ dilution correction {#secmicephotoz}
-----------------------------
Within our mock analysis we considered three different implementations of the photo-$z$ dilution correction necessary for the $\Delta\Sigma(R)$ measurements, as described in Sect. \[secphotoz\].
- We used the source spectroscopic redshift values (which are available given that this is a simulation) to produce a baseline $\Delta\Sigma$ measurement free of photo-$z$ dilution.
- Our fiducial analysis choice: we used the source photometric redshift point values in the estimator of Eq. \[eqdsigest2\], adopting a source-lens pair cut $z_{\mathrm{B}} > z_{\mathrm{l}}$ and correcting for the photo-$z$ dilution using the $f_{\mathrm{bias}}$ factor of Eq. \[eqfbias\]. We also considered the same case, excluding the $f_{\mathrm{bias}}$ correction factor.
- We used the redshift probability distributions for each source tomographic sample to determine $\overline{\Sigma_{\mathrm{c}}^{-1}}$ relative to each lens redshift using Eq. \[eqavesigc\], and then estimated $\Delta\Sigma$ using Eq. \[eqdsigest3\]. We refer to this as the $P(z)$ distribution-based method.
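The $P(z)$ distribution-based correction amounts to averaging $\Sigma_{\mathrm{c}}^{-1}$ over the source redshift distribution, $\overline{\Sigma_{\mathrm{c}}^{-1}} = \int \mathrm{d}z_{\mathrm{s}} \, n(z_{\mathrm{s}}) \, \Sigma_{\mathrm{c}}^{-1}(z_{\mathrm{l}}, z_{\mathrm{s}})$. Schematically (illustrative function names, a simple Riemann sum on a uniform grid):

```python
import numpy as np

def mean_inv_sigma_crit(z_s_grid, n_z, inv_sigma_crit_fn, z_l):
    """Average inverse critical surface density over the source n(z),
    setting Sigma_c^{-1} = 0 for sources in front of the lens.
    Assumes a uniform z_s grid; n(z) need not be pre-normalised."""
    dz = z_s_grid[1] - z_s_grid[0]
    n_z = n_z / (np.sum(n_z) * dz)    # normalise n(z) to unit integral
    vals = np.array([inv_sigma_crit_fn(z_l, zs) if zs > z_l else 0.0
                     for zs in z_s_grid])
    return np.sum(n_z * vals) * dz
```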
The results of these $\Delta\Sigma$ analyses are compared in Fig. \[figmicephotoz\] for each lens redshift slice, where measurements corresponding to the different source tomographic samples have been optimally combined. We find that, other than in the case where the $f_{\mathrm{bias}}$ correction is excluded, both the point-based and distribution-based photo-$z$ dilution corrections produce $\Delta\Sigma$ measurements which are statistically consistent with the baseline measurements using the source spectroscopic redshifts. We further verify in Sect. \[secmicecosmo\] that these analysis choices do not create significant differences in cosmological parameter fits.
Recovery of cosmological parameters {#secmicecosmo}
-----------------------------------
Finally, we verified that our analysis methodology recovered the fiducial cosmological parameters of the MICE simulation within an acceptable statistical accuracy. In this study we focus only on the amplitudes of the clustering and lensing statistics, keeping all other cosmological parameters fixed. In particular we test the recovery of the $E_{\mathrm{G}}$ statistics, and the recovery of the $\sigma_8$ normalisation, marginalising over galaxy bias parameters.
First, we determined a scale-independent $E_{\mathrm{G}}$ value (which we denote $\langle E_{\mathrm{G}} \rangle$) for each lens redshift slice from the MICE mock mean statistics displayed in Fig. \[figstatmice\]. We considered two approaches to this determination. In one approach, we fit a constant value to the $E_{\mathrm{G}}(R)$ measurements shown in the fifth row of Fig. \[figstatmice\], using the corresponding analytical covariance matrix. This approach has the disadvantage that it is based on the ratio of two noisy quantities $\Upsilon_{\mathrm{gm}}/\Upsilon_{\mathrm{gg}}$, which may result in a biased or non-Gaussian result. Our second approach avoided this issue by including $\langle E_{\mathrm{G}} \rangle$ as an additional parameter in a joint fit to the $\Upsilon_{\mathrm{gm}}$ and $\Upsilon_{\mathrm{gg}}$ statistics for each lens redshift slice, where $\langle E_{\mathrm{G}} \rangle$ changed the amplitude of $\Upsilon_{\mathrm{gm}}$ relative to $\Upsilon_{\mathrm{gg}}$. Specifically, we fit the model, $$\begin{split}
\Upsilon_{\mathrm{gm}}(R) &= A_{\mathrm{E}} \, b_{\mathrm{L}} \,
\Upsilon_{\mathrm{gm}}(R,\sigma_8=0.8,b_{\mathrm{L}},b_{\mathrm{NL}})
\\ \Upsilon_{\mathrm{gg}}(R) &=
\Upsilon_{\mathrm{gg}}(R,\sigma_8=0.8,b_{\mathrm{L}},b_{\mathrm{NL}})
,
\end{split}
\label{eqaefit}$$ in terms of an amplitude parameter $A_{\mathrm{E}}$ and galaxy bias parameters $b_{\mathrm{L}}$ and $b_{\mathrm{NL}}$, and then determined $\langle E_{\mathrm{G}} \rangle = A_{\mathrm{E}} \, b_{\mathrm{L}} \,
E_{G,\mathrm{fid}}$, where $E_{G,\mathrm{fid}}(z) =
\Omega_{\mathrm{m}}/f(z)$, in which $\Omega_{\mathrm{m}}$ is the fiducial matter density parameter of the MICE mocks and $f(z)$ is the theoretical growth rate of structure based on this matter density. We note that the $b_{\mathrm{L}}$ factor in the above equation for $\Upsilon_{\mathrm{gm}}$ arises as a consequence of our treatment of $\beta$ as a fixed input parameter, as described in Sect. \[secmicemeas\], and ensures that $A_{\mathrm{E}}$ is constrained only by the relative ratio $\Upsilon_{\mathrm{gm}}/\Upsilon_{\mathrm{gg}}$, not by the absolute amplitude of these functions.
Fig. \[figmiceegvsz\] displays the different determinations of $\langle E_{\mathrm{G}} \rangle$ in each lens redshift slice. The left panel compares measurements using the four different treatments of photo-$z$ dilution shown in Fig. \[figmicephotoz\], confirming that these methods produce consistent $E_{\mathrm{G}}$ determinations (other than the case in which $f_{\mathrm{bias}}$ is excluded; our fiducial choice is the direct photo-$z$ pair counts with $z_{\mathrm{B}} > z_{\mathrm{l}}$). The middle panel compares $\langle E_{\mathrm{G}} \rangle$ fits varying the small-scale parameter $R_0$ (where our fiducial choice is $R_0 = 2.0 \, h^{-1}$ Mpc, and we also considered choices corresponding to the adjacent separation bins $1.2$ and $3.1 \, h^{-1}$ Mpc). The right panel alters the method used to determine $\langle E_{\mathrm{G}} \rangle$, comparing the default choice using the non-linear bias model, a linear model where we fix $b_{\mathrm{NL}} = 0$, and a direct fit to the scale-dependent $E_{\mathrm{G}}(R)$ values. Reassuringly, all these methods yielded very similar results.
We compared these determinations to the model prediction $E_{\mathrm{G}}(z) = \Omega_{\mathrm{m}}/f(z)$ shown in Fig. \[figmiceegvsz\]. Other than for the case where the $f_{\mathrm{bias}}$ correction is excluded, both the point-based and distribution-based photo-$z$ dilution corrections produce determinations of $\langle E_{\mathrm{G}} \rangle$ which recover the fiducial value. This conclusion holds independently of the chosen value of $R_0$, although higher $R_0$ values produce slightly increased error ranges. The different modelling approaches also produce consistent results.
Next, we utilised our mock dataset to perform a fit of the cosmological parameter $\sigma_8$ to the joint lensing and clustering statistics, marginalising over different bias parameters $(b_{\mathrm{L}}, b_{\mathrm{NL}})$ for each redshift slice (such that we perform an 11-parameter fit, comprising $\sigma_8$ and two bias parameters for each of the five lens redshift slices). We fixed the remaining cosmological parameters, and performed our parameter fit using a Markov Chain Monte Carlo method implemented using the [emcee]{} package [@ForemanMackey13]. We used wide, uniform priors for each fitted parameter.
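The structure of such a fit can be sketched through the log-posterior for a single lens slice. This is a toy three-parameter version with illustrative (not the actual) prior ranges; in the full analysis an equivalent 11-parameter function would be passed to `emcee.EnsembleSampler`, and the data vector would also include the lensing statistics.

```python
import numpy as np

def log_posterior(theta, R, ups_gg_data, inv_cov, model_fn):
    """Log-posterior for (sigma_8, b_L, b_NL) with wide uniform priors.
    model_fn(R, sigma_8, b_L, b_NL) returns the model Upsilon_gg vector."""
    sigma_8, b_L, b_NL = theta
    if not (0.2 < sigma_8 < 1.5 and 0.5 < b_L < 4.0 and -5.0 < b_NL < 5.0):
        return -np.inf                  # outside the uniform prior
    resid = ups_gg_data - model_fn(R, sigma_8, b_L, b_NL)
    return -0.5 * resid @ inv_cov @ resid   # Gaussian log-likelihood
```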
As above, we adopted for our fiducial analysis the point photo-$z$ dilution correction using $f_{\mathrm{bias}}$, and we performed fits to the $\Upsilon_{\mathrm{gm}}(R)$ and $\Upsilon_{\mathrm{gg}}(R)$ statistics with $R_0 = 2 \, h^{-1}$ Mpc, considering the same analysis variations as above. For this fiducial case, we obtained a measurement $\sigma_8 = 0.779 \pm 0.019$, consistent with the MICE simulation cosmology $\sigma_8 = 0.8$. The $\chi^2$ statistic of the best-fitting model is $69.9$ for 64 degrees of freedom (d.o.f.). Fig. \[figmicesig8\] displays the dependence of the $\sigma_8$ measurements on the analysis choices. All methodologies using the non-linear bias model recovered the fiducial $\sigma_8$ value, with the exception of excluding the $f_{\mathrm{bias}}$ correction. Adopting a linear bias model instead produced a significantly poorer recovery.
We also considered fitting to different pairs of lensing-clustering statistics: $\Delta\Sigma(R)$ and $w_{\mathrm{p}}(R)$ for $R >
R_{\mathrm{min}} = 5 \, h^{-1}$ Mpc, compared to $\gamma_{\mathrm{t}}(\theta)$ and $w_{\mathrm{p}}(R)$, where we applied a minimum-scale cut in $\theta$ which matches $R_{\mathrm{min}}$ in each lens redshift slice. These alternative statistics also successfully recovered the fiducial value of $\sigma_8$, with errors of $0.018$ (for $\Delta\Sigma$) and $0.022$ (for $\gamma_{\mathrm{t}}$). According to this analysis, the projected statistics produced a $\sim 20\%$ more accurate $\sigma_8$ value than the angular statistics, in agreement with the results of @Shirasaki18.
We conclude this section by noting that the application of our analysis pipeline to the MICE lens and source mocks successfully recovered the fiducial $E_{\mathrm{G}}$ and $\sigma_8$ parameters of the simulation, and is robust against differences in photo-$z$ dilution correction, choice of the small-scale parameter $R_0$, and choice of statistic included in the analysis \[$\gamma_{\mathrm{t}}(\theta)$, $\Delta \Sigma(R)$ or $\Upsilon_{\mathrm{gm}}(R)$\].
Results {#secdatatests}
=======
Measurements {#measurements}
------------
We now summarise the galaxy-galaxy lensing and clustering measurements we generated from the KiDS-1000 and overlapping LRG datasets. We cut these catalogues to produce overlapping subsets for our galaxy-galaxy lensing analysis, by only retaining sources and lenses within the set of KiDS pointings which contain BOSS or 2dFLenS galaxies. The resulting KiDS-N sample comprised $15 \, 150 \, 250$ KiDS shapes and $47 \, 332$ BOSS lenses within 474 KiDS pointings with total unmasked area $366.0$ deg$^2$, and the KiDS-S sample consisted of $16 \, 994 \,
252$ KiDS shapes and $18 \, 903$ 2dFLenS lenses within 478 KiDS pointings with total unmasked area $382.1$ deg$^2$. We also utilised BOSS and 2dFLenS random catalogues in our analysis, with the same selection cuts and 50 times the size of the data samples, sub-sampled from the master random catalogues provided by @Reid16 and @Blake16b, respectively.
We split the KiDS-1000 source catalogue into 5 different tomographic samples by the value of the BPZ photometric redshift, using the same bin divisions $z_{\mathrm{B}} = [0.1, 0.3, 0.5, 0.7, 0.9, 1.2]$ adopted in Sect. \[secmocktests\]. The effective source density of each tomographic sample is $n_{\mathrm{eff}} = [0.88, 1.33, 2.04,
1.49, 1.26]$ arcmin$^{-2}$ [@Hildebrandt20], estimated using the method of @Heymans12. We divided the BOSS and 2dFLenS LRG catalogues into 5 spectroscopic redshift slices of width $\Delta
z_{\mathrm{l}} = 0.1$ in the range $0.2 < z_{\mathrm{l}} < 0.7$.
We measured the average tangential shear $\gamma_{\mathrm{t}}(\theta)$ and projected mass density $\Delta\Sigma(R)$ between all pairs of KiDS-1000 tomographic source samples and LRG redshift slices in the North and South regions, using the same estimators and binning as utilised for the MICE mocks in Sect. \[secmicemeas\] and applying a multiplicative shear bias correction for each tomographic sample [@Kannawadi19]. For the $\Delta\Sigma$ measurement, we again restricted the source-lens pairs such that $z_{\mathrm{B}} >
z_{\mathrm{l}}$, and (in our fiducial analysis) applied a point-based photo-$z$ dilution correction.
We generated an analytical covariance matrix for each measurement, initially using a fiducial lens linear bias factor $b_{\mathrm{L}} =
2$, and iterating this value following a preliminary fit to the projected lens clustering. We tested that the analytical error determination agreed sufficiently well with a jack-knife error analysis where the regions were defined as the KiDS pointings; the results of this test and the overall analytical covariance are visually similar to their equivalents for the MICE mocks shown in Figs. \[figdelsigerrmice\] and \[figdelsigcovfullmice\], and we do not repeat these figures. We used the KV450 spectroscopic calibration sample with DIR weights [@Hildebrandt20] to estimate the redshift distribution of each tomographic source sample for use in the analytical covariance matrix, in the modelling of $\gamma_{\mathrm{t}}(\theta)$ and in the distribution-based correction to $\Delta\Sigma(R)$ for photo-$z$ dilution, and to determine the $f_{\mathrm{bias}}$ values for the point-based photo-$z$ dilution correction.
We propagated the uncertainties in the multiplicative correction factors due to the shear calibration bias and photometric redshift dilution using the method described in Sect. \[seccovprop\]. Regarding the multiplicative shear calibration, we followed @Hildebrandt20 in adopting an error $\sigma_{\mathrm{m}} =
0.02$ that is fully correlated across all samples, such that $\mathrm{Cov}[ \alpha_{ij}, \alpha_{lm} ] = \sigma_{\mathrm{m}}^2$ in Eq. \[eqcovprop2\].
The sample variance in the spectroscopic training set can be characterised by an uncertainty in mean spectroscopic redshift which varies for each tomographic sample in the range $\sigma_{\mathrm{z}} =
0.011 \rightarrow 0.039$ [see @Hildebrandt20 Table 2]. We propagated these errors into the determination of $f_{\mathrm{bias}}$ by re-evaluating Eq. \[eqfbias\] shifting all the spectroscopic redshifts by a small amount to determine the derivatives $\partial
f_{\mathrm{bias},ij}/\partial z_j$, where $i$ denotes the lens sample and $j$ the source sample. Using error propagation, we then scaled the derivatives by the errors $\sigma_{\mathrm{z},j}$ to find the covariance matrix of the uncertainties, $$\mathrm{Cov}[ f_{\mathrm{bias},ij} \, f_{\mathrm{bias},lm} ] =
\frac{\partial f_{\mathrm{bias},ij}}{\partial z_j} \, \frac{\partial
f_{\mathrm{bias},lm}}{\partial z_m} \, \sigma^2_{\mathrm{z},j} \,
\delta^{\mathrm{K}}_{jm} ,$$ where the final Kronecker delta $\delta^{\mathrm{K}}_{jm}$ indicates that these uncertainties are correlated for different lens samples corresponding to the same source sample, but uncorrelated between source samples (we refer the reader to Joachimi et al. (in prep.) for further investigation of this point). This uncertainty can be propagated into the analytical covariance matrix using Eqs. \[eqcovprop\] and \[eqcovprop2\] with $\mathrm{Cov}[
\alpha_{ij}, \alpha_{lm} ] = \mathrm{Cov}[ f_{\mathrm{bias},ij} \,
f_{\mathrm{bias},lm} ]$.
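This covariance construction can be sketched directly from the array of numerical derivatives (illustrative Python; the four-index array layout is our own choice):

```python
import numpy as np

def fbias_covariance(dfbias_dz, sigma_z):
    """Covariance of the f_bias correction factors.

    dfbias_dz : array (n_lens, n_source) of derivatives d f_bias,ij / d z_j
    sigma_z   : array (n_source,) of mean-redshift calibration errors

    Entries are correlated across lens slices sharing a source sample,
    and zero between different source samples (Kronecker delta over j, m).
    """
    n_l, n_s = dfbias_dz.shape
    cov = np.zeros((n_l, n_s, n_l, n_s))
    for j in range(n_s):
        d = dfbias_dz[:, j] * sigma_z[j]
        cov[:, j, :, j] = np.outer(d, d)
    return cov
```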
We used the analytical covariance matrices to combine the separate KiDS-N and KiDS-S measurements into a single joint estimate of the galaxy-galaxy lensing statistics and associated covariance, which we utilised in the remainder of this study (we test the consistency of the individual BOSS and 2dFLenS results in Sect. \[seckidssys\]). We display the KiDS $\gamma_{\mathrm{t}}(\theta)$ and $\Delta\Sigma(R)$ galaxy-galaxy lensing measurements in the different tomographic combinations in Figs. \[figgttom\] and \[figdsigtom\]. We note again that there are some differences between the galaxy-galaxy lensing signals measured in the mocks and data, given that the mocks have not been tuned to reproduce the BOSS and 2dFLenS clustering properties. These differences are particularly evident on the smallest scales, owing to an inconsistent halo occupation. Our study does not require the mocks to precisely replicate the data in order to test our analysis framework.
We obtained the most accurate measurement of the projected correlation function $w_{\mathrm{p}}(R)$ of each lens redshift slice using the full BOSS DR12 dataset, combining the LOWZ and CMASS selections and spanning $9 \, 376$ deg$^2$ [@Reid16]. We adopted the same spatial separation bins as for the MICE mocks, again assuming $\Pi_{\mathrm{max}} = 100 \, h^{-1}$ Mpc. When analysing the BOSS sample we included completeness weights but excluded “FKP” weights [@Feldman94], which are designed to optimise the clustering signal-to-noise ratio but may not be appropriate in the case of galaxy-galaxy lensing. The full 2dFLenS dataset is too small to offer a competitive measurement of $w_{\mathrm{p}}(R)$, although given that it was selected using BOSS-inspired colour-magnitude cuts, we assumed that the BOSS clustering is representative of the combined LRG sample (and we test this approximation in Sect. \[seckidssys\]). Since the overlap of the KiDS-N source catalogue and full BOSS sample is small ($4\%$ of BOSS), we also assumed that the galaxy-galaxy lensing and clustering measurements are uncorrelated.
We combined the correlated $\Delta\Sigma$ measurements for each lens redshift slice, averaging over the different source samples. Fig. \[figstatkids\] displays these measurements, together with the projected clustering $w_{\mathrm{p}}(R)$ of the full BOSS sample, the corresponding $\Upsilon_{\mathrm{gm}}(R,R_0)$ and $\Upsilon_{\mathrm{gg}}(R,R_0)$ statistics assuming a fiducial choice $R_0 = 2 \, h^{-1}$ Mpc (we consider the impact of varying this choice in Sect. \[seckidssys\]), and the direct $E_{\mathrm{G}}(R)$ estimate using Eq. \[eqeg\].
We generated fiducial cosmological models for these statistics using a non-linear matter power spectrum corresponding to the best-fitting “TTTEEE+lowE+lensing” [*Planck*]{} cosmological parameters [@Planck18]. When producing the models overplotted in Figs. \[figgttom\], \[figdsigtom\] and \[figstatkids\], we determined best-fitting linear and non-linear galaxy bias parameters by fitting to the $\Upsilon_{\mathrm{gg}}$ measurements for each lens redshift slice for the separation range $R > 5 \, h^{-1}$ Mpc, and applied these same bias parameters to the galaxy-galaxy lensing models. Values of the $\chi^2$ statistic for each statistic and lens redshift slice, produced using these models and the analytical covariance, are displayed in each panel of Fig. \[figstatkids\], and indicate that the measurements are consistent with the model.
Redshift-space distortion inputs
--------------------------------
We adopted values of the redshift-space distortion parameters $\beta$ for the BOSS sample as a function of redshift by interpolating the literature analysis of @Zheng19, who provide RSD measurements in narrow redshift slices [which also agree with the compilation of results in @Alam17b]. In order to interpolate these measurements to our redshift locations, which slightly differ from the bin centres of @Zheng19, we created a Gaussian process model for $\beta(z)$ and its errors, which lie in the range $12-20\%$ as a function of redshift, using the sum of a Matern kernel and white noise kernel. We note that the error in $\beta$ makes up roughly half the variance budget for $E_{\mathrm{G}}$ in the lowest lens redshift slice (i.e., increases the total error by a factor of $\sim \sqrt{2}$), but is subdominant for the other redshift slices.
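A minimal numpy version of such a Gaussian process interpolation is shown below, using a Matern-3/2 kernel plus a white-noise term set by the published measurement errors. The kernel amplitude and length-scale shown are illustrative defaults, not fitted hyperparameters, and only the mean prediction is computed.

```python
import numpy as np

def matern32(x1, x2, amp=0.05, ell=0.3):
    """Matern-3/2 covariance kernel between two 1-d coordinate arrays."""
    r = np.abs(x1[:, None] - x2[None, :]) / ell
    return amp * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def gp_interpolate(z_train, beta_train, beta_err, z_new, amp=0.05, ell=0.3):
    """GP mean prediction for beta(z): Matern-3/2 kernel plus a
    white-noise term from the published measurement errors."""
    K = matern32(z_train, z_train, amp, ell) + np.diag(beta_err**2)
    Ks = matern32(z_new, z_train, amp, ell)
    return Ks @ np.linalg.solve(K, beta_train)
```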
Amplitude-ratio test $E_{\mathrm{G}}$ {#secegkids}
-------------------------------------
We used the KiDS-1000 and LRG clustering and galaxy-galaxy lensing measurements, with the previously-published values of $\beta$, to determine a scale-independent value of the amplitude ratio, $\langle
E_{\mathrm{G}} \rangle$. We adopted the same fiducial analysis method as for the MICE mocks: we performed a joint fit to the $\Upsilon_{\mathrm{gm}}$ and $\Upsilon_{\mathrm{gg}}$ measurements for each redshift slice, varying $A_{\mathrm{E}}$ and the bias parameters $b_{\mathrm{L}}$ and $b_{\mathrm{NL}}$ as in Eq. \[eqaefit\], treating $A_{\mathrm{E}}$ as an additional amplitude parameter for $\Upsilon_{\mathrm{gm}}$. We then deduced $\langle E_{\mathrm{G}}
\rangle = A_{\mathrm{E}}/\beta$, propagating the errors in $\beta$ assuming all these statistics are independent (see Sect. \[seceg\] for a note on this approximation). We used the analytical covariance matrices for these statistics, assumed $R_0 = 2
\, h^{-1}$ Mpc, and fit the model to $R > 5 \, h^{-1}$ Mpc for $\Upsilon_{\mathrm{gg}}$ and to all scales for $\Upsilon_{\mathrm{gm}}$. We consider the effect of varying these analysis choices in Sect. \[seckidssys\]. In particular, we note that fitting the directly-determined $E_{\mathrm{G}}(R)$ values (shown in the fifth row of Fig. \[figstatkids\]) produced results which were entirely consistent with our fiducial analysis.
Our resulting fits for $\langle E_{\mathrm{G}} \rangle$ were $[0.43
\pm 0.09, 0.45 \pm 0.07, 0.33 \pm 0.06, 0.38 \pm 0.07, 0.34 \pm
0.08]$ for redshifts $z = [0.25, 0.35, 0.45, 0.55, 0.65]$. The measurements have a small degree of correlation, owing to sharing a common source sample, and the analytical covariance matrix is listed in Table \[tabegcov\]. We plot these measurements in Fig. \[figegall\], together with a literature compilation [@Reyes10; @Blake16a; @Pullen16; @Alam17a; @delaTorre17; @Amon18; @Singh19; @Jullo19]. The purple shaded stripe in Fig. \[figegall\] illustrates the $68\%$ confidence range of the prediction of the [*Planck*]{} “TTTEEE+lowE+lensing” parameter chain assuming a flat $\Lambda$CDM Universe.
Redshift $0.25$ $0.35$ $0.45$ $0.55$ $0.65$
---------- ---------- ---------- ---------- ---------- ----------
$0.25$ $76.821$ $3.477$ $3.251$ $3.330$ $3.178$
$0.35$ $3.477$ $52.101$ $1.559$ $1.633$ $1.509$
$0.45$ $3.251$ $1.559$ $34.674$ $1.088$ $1.017$
$0.55$ $3.330$ $1.633$ $1.088$ $43.179$ $0.893$
$0.65$ $3.178$ $1.509$ $1.017$ $0.893$ $67.585$
: The covariance matrix corresponding to our measurements $E_{\mathrm{G}} = [0.43, 0.45, 0.33, 0.38, 0.34]$ at $z = [0.25,
0.35, 0.45, 0.55, 0.65]$. Each entry has been multiplied by $10^4$ for clarity of display. The cross-correlation between different redshift slices is small.[]{data-label="tabegcov"}
Our measurements provide the best existing determination of the lensing-clustering amplitude ratio (noting that previous measurements displayed in Figure \[figegall\] typically correspond to significantly wider lens redshift ranges than our study), which is consistent with matter density values $\Omega_{\mathrm{m}} \sim 0.3$. Varying the $\Omega_{\mathrm{m}}$ parameter within a flat $\Lambda$CDM cosmological model, assuming $E_{\mathrm{G}} = \Omega_{\mathrm{m}}/f =
\Omega_{\mathrm{m}}(0)/\Omega_{\mathrm{m}}(z)^{0.55}$, we find $\Omega_{\mathrm{m}} = 0.27 \pm 0.04$ (with a minimum $\chi^2 = 1.2$ for 4 d.o.f.). In linear theory, this measurement is in principle insensitive to the other cosmological parameters in a flat $\Lambda$CDM scenario. The resulting error in $\Omega_{\mathrm{m}}$ is naturally somewhat larger than that provided by analyses utilising the full shape of the cosmic shear and clustering functions [e.g., @Troester20], albeit requiring fewer model assumptions.
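Using the $\langle E_{\mathrm{G}} \rangle$ values and the covariance of Table \[tabegcov\], this one-parameter fit can be reproduced with a simple $\chi^2$ grid search; the sketch below assumes the flat $\Lambda$CDM model $E_{\mathrm{G}}(z) = \Omega_{\mathrm{m}} / \Omega_{\mathrm{m}}(z)^{0.55}$.

```python
import numpy as np

def fit_omega_m(z, eg, cov, grid=None):
    """Grid chi^2 fit of Omega_m to E_G(z) = Omega_m / Omega_m(z)^0.55
    in flat LCDM, using the full covariance of the measurements."""
    if grid is None:
        grid = np.linspace(0.1, 0.6, 501)
    inv_cov = np.linalg.inv(cov)
    chi2 = np.empty(len(grid))
    for i, om in enumerate(grid):
        ez2 = om * (1.0 + z)**3 + (1.0 - om)
        model = om / (om * (1.0 + z)**3 / ez2)**0.55
        r = eg - model
        chi2[i] = r @ inv_cov @ r
    return grid[np.argmin(chi2)], chi2.min()
```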
We tested for the scale dependence of the $E_{\mathrm{G}}(R)$ measurements (in the fifth row of Fig. \[figstatkids\]) by jointly fitting an empirical 6-parameter model $E_{\mathrm{G}}(R,z_i) = A_i
\left[ 1 + \alpha \, \log_{10}(R) \right]$ to all the redshift slices, where $\alpha$ quantifies the fractional variation in $E_{\mathrm{G}}$ per decade in projected scale $R$ (in $h^{-1}$ Mpc) and $A_i$ is a free amplitude for each of the five lens redshift slices. We obtained a $68\%$ confidence region $\alpha = 0.17 \pm 0.26$ (with a minimum $\chi^2 = 35.9$ for 34 d.o.f.), which is consistent with no scale dependence, as predicted in the standard gravity scenario. These fits are displayed in Fig. \[figegscale\].
Systematics tests {#seckidssys}
-----------------
We now consider the effect on our cosmological fits of varying our fiducial analysis choices. Fig. \[figkidsegvsz\] is a compilation of different determinations of $\langle E_{\mathrm{G}} \rangle$ in each lens redshift slice. The upper-left panel compares the fits varying the photo-$z$ dilution correction for $\Delta\Sigma$ between the point-based and distribution-based approaches, and the upper-right panel shows measurements varying the small-scale parameter $R_0$. The lower-left panel considers separate determinations based on the BOSS and 2dFLenS samples (using the BOSS clustering measurements in both cases). The lower-right panel alters the fitting method to only use a linear-bias model (set $b_{\mathrm{NL}} = 0$), and to use a direct fit to the $E_{\mathrm{G}}(R)$ measurements presented in the fifth row of Fig. \[figstatkids\], as opposed to our fiducial fits to $\Upsilon_{\mathrm{gm}}$ and $\Upsilon_{\mathrm{gg}}$. In all cases, the systematic variation of the recovered $E_{\mathrm{G}}$ values is negligible compared to the statistical errors (noting that the BOSS and 2dFLenS comparison is also subject to sample variance error). We find that varying the photo-$z$ dilution correction, choice of $R_0$ and fitting method produce a systematic variation of $0.08$-$\sigma$, $0.24$-$\sigma$ and $0.38$-$\sigma$ respectively, when expressed as a fraction of the statistical error $\sigma$.
Summary {#secsummary}
=======
We have used the latest weak gravitational lensing data release from the Kilo-Degree Survey, KiDS-1000, together with overlapping galaxy redshift survey data from BOSS and 2dFLenS, to perform cosmological tests associated with the relative amplitude of galaxy-galaxy lensing and galaxy clustering statistics. We quantified our results using the $E_{\mathrm{G}}(R)$ statistic, which we were able to measure up to projected separations of $100 \, h^{-1}$ Mpc, recovering a scale-independent value of $\langle E_{\mathrm{G}} \rangle$ with accuracies in the range $15-20\%$ in five lens redshift slices of width $\Delta z = 0.1$. The scale-dependence and redshift-dependence of these measurements are consistent with the theoretical expectation of General Relativity in a Universe with matter density $\Omega_{\mathrm{m}} \sim 0.3$. The measurements are consistent with a scale-independent model for $E_{\mathrm{G}}$, and constrain allowed variation within $25\%$ (1-$\sigma$) error per decade in projected scale. Fitting our $E_{\mathrm{G}}$ dataset with a flat $\Lambda$CDM model, we find $\Omega_{\mathrm{m}} = 0.27 \pm 0.04$.
We demonstrated that our results are robust against different analysis methodologies. In particular, we showed that:
- Source photometric redshift errors cause a significant dilution of the inferred projected mass density $\Delta\Sigma$, by causing unlensed foreground sources to appear in the background of lenses. We demonstrated that this dilution may be corrected either by modifying the estimator for $\Delta\Sigma$ to include the redshift probability distribution of the sources, or by using a source spectroscopic calibration sample to compute the multiplicative bias, producing consistent results. When applied to the mock catalogue, these estimation methods recovered the $\Delta\Sigma$ measurement obtained using source spectroscopic redshifts.
- A Gaussian analytical covariance for the galaxy-galaxy lensing statistics, with suitable modifications for the survey selection function and small-scale noise terms, predicted errors which agreed within $20\%$ with estimates from a jack-knife procedure and from the variation across mock realisations.
- Our amplitude-ratio test, based on the annular differential density statistics $\Upsilon_{\mathrm{gm}}$ and $\Upsilon_{\mathrm{gg}}$, is insensitive to the small-scale parameter $R_0$ adopted in these statistics, producing consistent results for choices in the range $1 < R_0 < 3 \, h^{-1}$ Mpc. We also obtained consistent results analysing BOSS and 2dFLenS separately.
We performed an additional series of tests jointly fitting an overall amplitude $\sigma_8$ to the galaxy-galaxy lensing and clustering statistics in our mocks, marginalising over linear and non-linear bias parameters. For our scenario, the projected galaxy-galaxy lensing measurements produced a slightly more accurate determination of $\sigma_8$ than the angular statistics after matching of scales, although the results are fully consistent.
Our analysis sets the stage for upcoming, increasingly accurate, cosmological tests using amplitude-ratio statistics, as gravitational lensing and galaxy redshift samples continue to grow. These datasets will continue to offer rich possibilities for placing tighter constraints on allowed gravitational physics.
CB is grateful to Rossana Ruggeri, Alexie Leauthaud, Johannes Lange and Sukhdeep Singh for useful discussions on galaxy-galaxy lensing measurements.\
We acknowledge support from: the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO) through project number CE110001020 (CB, KG); the European Research Council under grant numbers 647112 (MA, BG, CH, TT), 770935 (AD, HH, JLvdB, AHW) and 693024 (SJ); the Polish Ministry of Science and Higher Education through grant DIR/WK/2018/12 and the Polish National Science Center through grant 2018/30/E/ST9/00698 (MB); the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award endowed by the Federal Ministry of Education and Research (CH); Heisenberg grant Hi 1495/5-1 of the Deutsche Forschungsgemeinschaft (HH); the Beecroft Trust (SJ); Vici grant 639.043.512 financed by the Netherlands Organisation for Scientific Research (AK); the Alexander von Humboldt Foundation (KK); the NSFC of China grant 11973070, the Shanghai Committee of Science and Technology grant 19ZR1466600 and Key Research Program of Frontier Sciences grant ZDBS-LY-7013 (HYS); and the European Union’s Horizon 2020 research and innovation programme under the Marie Sk[l]{}odowska-Curie grant 797794 (TT).\
The 2dFLenS survey is based on data acquired through the Australian Astronomical Observatory, under program A/2014B/008. It would not have been possible without the dedicated work of the staff of the AAO in the development and support of the 2dF-AAOmega system, and the running of the AAT.\
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.\
We have used [matplotlib]{} [@Hunter07] for the generation of scientific plots, and this research also made use of [astropy]{}, a community-developed core Python package for Astronomy [@Astropy13].\
\
[*Author contributions:*]{} All authors contributed to the development and writing of this paper. The authorship list is given in two groups: the lead author (CB), followed by an alphabetical group who have made a significant contribution to either the data products or to the scientific analysis.
Covariance of average tangential shear {#seccovgtap}
======================================
We may evaluate the covariance of $\gamma_{\mathrm{t}}$ between scales $\theta$ and $\theta'$ using Eq. \[eqgtmod\], $$\begin{split}
&
\mathrm{Cov}[\gamma_{\mathrm{t}}^{ij}(\theta),\gamma_{\mathrm{t}}^{kl}(\theta')]
= \\ & \int \frac{d^2{\vec{\ell}}}{(2\mathrm{\pi})^2} \int
\frac{d^2{\vec{\ell}}'}{(2\mathrm{\pi})^2} \,
\mathrm{Cov}[C_{\mathrm{g\kappa}}^{ij}({\vec{\ell}}) ,
C_{\mathrm{g\kappa}}^{kl}({\vec{\ell}}')] \, J_2(\ell \theta) \, J_2(\ell'
\theta') ,
\end{split}
\label{eqgtcovap}$$ where $\gamma_{\mathrm{t}}^{ij}$ denotes the average tangential shear of source sample $j$ around lens sample $i$. We adopt an approximation that different multipoles ${\vec{\ell}}$ are uncorrelated such that, $$\mathrm{Cov}[C_{\mathrm{g\kappa}}^{ij}({\vec{\ell}}) ,
C_{\mathrm{g\kappa}}^{kl}({\vec{\ell}}')] = \delta_{\mathrm{D}}({\vec{\ell}}- {\vec{\ell}}')
\, \sigma^2(\ell) ,$$ where $\delta_{\mathrm{D}}$ is the Dirac delta function, and the variance $\sigma^2(\ell)$ is given by Equation \[eqgtvar\].
Eq. \[eqgtcovap\] becomes, after integrating the delta function $\int
d^2{\vec{\ell}}' \, f({\vec{\ell}}') \, \delta_{\mathrm{D}}({\vec{\ell}}- {\vec{\ell}}') =
\frac{(2\mathrm{\pi})^2}{\Omega} \, f({\vec{\ell}})$ where $\Omega$ is the total survey angular area in steradians, $$\mathrm{Cov}[\gamma_{\mathrm{t}}^{ij}(\theta),\gamma_{\mathrm{t}}^{kl}(\theta')]
= \frac{1}{\Omega} \int \frac{d\ell \, \ell}{2\mathrm{\pi}} \,
\sigma^2(\ell) \, J_2(\ell \theta) \, J_2(\ell \theta') .$$ If the measurements of $\gamma_{\mathrm{t}}$ are averaged within angular bins $m$ and $n$, where the angular area of the $i$th bin is $\Omega_i$ (i.e. the area of the annulus between the bin limits), then the covariance between the bins is, $$\begin{split}
C_{mn} &= \int_m \frac{d^2\theta}{\Omega_{\mathrm{m}}} \int_n
\frac{d^2\theta'}{\Omega_n} \,
\mathrm{Cov}[\gamma^{ij}_{\mathrm{t}}(\theta),\gamma^{kl}_{\mathrm{t}}(\theta')]
\\ &= \frac{1}{\Omega} \int \frac{d\ell \, \ell}{2\mathrm{\pi}} \,
\sigma^2(\ell) \, \overline{J_{2,m}}(\ell) \,
\overline{J_{2,n}}(\ell) ,
\end{split}$$ where $\overline{J_{2,n}}(\ell) = \int_{\theta_{1,n}}^{\theta_{2,n}}
\frac{2\mathrm{\pi}\theta \, d\theta}{\Omega_n} \, J_2(\ell \theta)$. We also note that the contribution of any constant term in the covariance $\sigma^2(\ell) = C$ (such as the noise terms) is, $$\begin{split}
C_{mn} &= C \, \int \frac{2\mathrm{\pi}\theta \,
d\theta}{\Omega_{\mathrm{m}}} \int \frac{2\mathrm{\pi}\theta' \,
d\theta'}{\Omega_n} \frac{1}{\Omega} \int \frac{d\ell \,
\ell}{2\mathrm{\pi}} \, J_2(\ell \theta) \, J_2(\ell \theta')
\\ &= \frac{C}{\Omega \, \Omega_n} \delta^{\mathrm{K}}_{mn} ,
\end{split}$$ using the Bessel function relation $\int_0^\infty J_n(ax) \, J_n(bx)
\, x \, dx = \delta_{\mathrm{D}}(a-b)/b$.
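For numerical work it is convenient to note that the bin-averaged Bessel function $\overline{J_{2,n}}(\ell)$ has a closed form, since $\frac{d}{dx}\left[ -2 J_0(x) - x J_1(x) \right] = x J_2(x)$. A minimal Python sketch (the bin limits below are invented toy values, not those of the survey analysis) checks the closed form against direct quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jv

# Annular bin [t1, t2] in radians (toy values for illustration only).
t1, t2 = 0.01, 0.02
Omega_n = np.pi * (t2**2 - t1**2)        # area of the annulus

def j2bar_closed(ell):
    # Uses the antiderivative F(x) = -2 J0(x) - x J1(x) of x J2(x).
    F = lambda x: -2.0 * j0(x) - x * j1(x)
    return 2.0 * np.pi * (F(ell * t2) - F(ell * t1)) / (Omega_n * ell**2)

def j2bar_quad(ell):
    # Direct quadrature of the definition of J2bar_n(ell).
    val, _ = quad(lambda t: 2.0 * np.pi * t * jv(2, ell * t), t1, t2, limit=200)
    return val / Omega_n

for ell in (50.0, 300.0, 2000.0):
    assert abs(j2bar_closed(ell) - j2bar_quad(ell)) < 1e-6
```

This avoids a quadrature at every multipole when evaluating the binned covariance integral.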
Modification of covariance for survey window {#seccovwinap}
============================================
We derive how the covariance of a cross-correlation function between two Gaussian fields, $\delta_1({\vec{x}})$ and $\delta_2({\vec{x}})$, is modified by the window function of the fields, $W_1({\vec{x}})$ and $W_2({\vec{x}})$. We adopt the case of a 2D flat sky, where the vector separation ${\vec{r}}$ between two points has magnitude $r$ and orientation angle $\theta$. An estimator of the cross-correlation function of the fields at separation $r$ is, $${\hat{\xi}}(r) = \frac{1}{A_2(r)} \int \frac{d\theta}{2\mathrm{\pi}} \int
d^2{\vec{x}}\, \delta_1({\vec{x}}) \, \delta_2({\vec{x}}+{\vec{r}}) \, W_1({\vec{x}}) \,
W_2({\vec{x}}+{\vec{r}}) ,$$ where $A_2(r) = \int \frac{d\theta}{2\mathrm{\pi}} \int d^2{\vec{x}}\,
W_1({\vec{x}}) \, W_2({\vec{x}}+{\vec{r}})$. The expectation value of this expression is, $$\begin{split}
\langle {\hat{\xi}}\rangle &= \frac{1}{A_2(r)} \int
\frac{d\theta}{2\mathrm{\pi}} \int d^2{\vec{x}}\langle \delta_1({\vec{x}})
\delta_2({\vec{x}}+{\vec{r}}) \rangle W_1({\vec{x}}) W_2({\vec{x}}+{\vec{r}}) \\ &=
\frac{1}{A_2(r)} \int \frac{d^2{\vec{k}}}{(2\mathrm{\pi})^2} \,
P_{12}({\vec{k}}) \int \frac{d\theta}{2\mathrm{\pi}} \int d^2{\vec{x}}W_1({\vec{x}})
W_2({\vec{x}}+{\vec{r}}) \mathrm{e}^{-\mathrm{i}{\vec{k}}\cdot{\vec{r}}} \\ &\approx
\frac{1}{A_2(r)} \frac{1}{2\mathrm{\pi}} \int dk \, k \, P_{12}(k)
\, \int \frac{d\theta}{2\mathrm{\pi}} \, A_2(r) \,
\mathrm{e}^{-\mathrm{i}kr\cos{\theta}} \\ &= \frac{1}{2\mathrm{\pi}}
\int dk \, k \, P_{12}(k) \, J_0(kr) ,
\end{split}
\label{eqxiwin}$$ where we have introduced the cross-power spectrum $P_{12}(k)$, and the approximation in the third line of Eq. \[eqxiwin\] ignores the $\theta$ dependence of $\int d^2{\vec{x}}\, W_1({\vec{x}}) \, W_2({\vec{x}}+{\vec{r}})$.
The covariance of the estimator may be deduced from, $$\begin{split}
& \langle {\hat{\xi}}({\vec{r}}) \, {\hat{\xi}}({\vec{s}}) \rangle = \frac{1}{A_2(r) \,
A_2(s)} \int d^2{\vec{x}}\int d^2{\vec{y}}\, \\ & A_{12}({\vec{x}},{\vec{r}}) \,
A_{12}({\vec{y}},{\vec{s}}) \, \langle \delta_1({\vec{x}}) \, \delta_2({\vec{x}}+{\vec{r}}) \,
\delta_1({\vec{y}}) \, \delta_2({\vec{y}}+{\vec{s}}) \rangle ,
\end{split}$$ where we have written $A_{12}({\vec{x}},{\vec{r}}) = W_1({\vec{x}}) \, W_2({\vec{x}}+{\vec{r}})$. Expanding this expression using Wick’s theorem for a Gaussian random field, $\langle \delta_1 \, \delta_2 \, \delta_3 \, \delta_4 \rangle =
\langle \delta_1 \, \delta_2 \rangle \langle \delta_3 \, \delta_4
\rangle + \langle \delta_1 \, \delta_3 \rangle \langle \delta_2 \,
\delta_4 \rangle + \langle \delta_1 \, \delta_4 \rangle \langle
\delta_2 \, \delta_3 \rangle$, we find $$\begin{split}
& \mathrm{Cov}[{\hat{\xi}}({\vec{r}}),{\hat{\xi}}({\vec{s}})] = \frac{1}{A_2(r) \, A_2(s)} \int
d^2{\vec{x}}\int d^2{\vec{y}}\, \\ & A_{12}({\vec{x}},{\vec{r}}) \, A_{12}({\vec{y}},{\vec{s}}) \, [
\langle \delta_1({\vec{x}}) \, \delta_1({\vec{y}}) \rangle \langle
\delta_2({\vec{x}}+{\vec{r}}) \, \delta_2({\vec{y}}+{\vec{s}}) \rangle \\ & + \langle
\delta_1({\vec{x}}) \, \delta_2({\vec{y}}+{\vec{s}}) \rangle \langle
\delta_2({\vec{x}}+{\vec{r}}) \, \delta_1({\vec{y}}) \rangle ] .
\end{split}$$ Using $\langle \delta_i({\vec{x}}) \, \delta_j({\vec{y}}) \rangle = \langle
\delta_i({\vec{x}}) \, \delta^*_j({\vec{y}}) \rangle = \int
\frac{d^2{\vec{k}}}{(2\mathrm{\pi})^2} \, P_{ij}({\vec{k}}) \,
\mathrm{e}^{-\mathrm{i}{\vec{k}}\cdot({\vec{x}}-{\vec{y}})}$, and omitting some algebra, the first term evaluates to, $$\int \frac{d^2{\vec{k}}}{(2\mathrm{\pi})^2} \, P_{11}({\vec{k}}) \, P_{22}({\vec{k}})
\, \mathrm{e}^{\mathrm{i}{\vec{k}}\cdot({\vec{r}}-{\vec{s}})} \int d^2{\vec{x}}\,
A_{12}({\vec{x}},{\vec{r}}) \, A_{12}({\vec{x}},{\vec{s}}) ,$$ and the second term evaluates to, $$\int \frac{d^2{\vec{k}}}{(2\mathrm{\pi})^2} \, P^2_{12}({\vec{k}}) \,
\mathrm{e}^{\mathrm{i}{\vec{k}}\cdot({\vec{r}}-{\vec{s}})} \int d^2{\vec{x}}\,
A_{12}({\vec{x}},{\vec{r}}) \, A_{12}({\vec{x}},{\vec{s}}) .$$ The expression for the covariance is then, $$\begin{split}
&\mathrm{Cov}[{\hat{\xi}}({\vec{r}}),{\hat{\xi}}({\vec{s}})] = \frac{\int d^2{\vec{x}}\,
A_{12}({\vec{x}},{\vec{r}}) \, A_{12}({\vec{x}},{\vec{s}})}{A_2(r) \, A_2(s)} \\ &\int
\frac{d^2{\vec{k}}}{(2\mathrm{\pi})^2} \, \left[ P_{11}({\vec{k}}) \,
P_{22}({\vec{k}}) + P^2_{12}({\vec{k}}) \right] \,
\mathrm{e}^{\mathrm{i}{\vec{k}}\cdot({\vec{r}}-{\vec{s}})} .
\end{split}$$ Averaging the estimator over angles we obtain, $$\begin{split}
& \mathrm{Cov}[{\hat{\xi}}(r),{\hat{\xi}}(s)] \approx \frac{A_3(r,s)}{A_2(r) \,
A_2(s)} \\ & \frac{1}{2\mathrm{\pi}} \int dk \, k \, \left[
P_{11}(k) \, P_{22}(k) + P^2_{12}(k) \right] \, J_0(kr) \, J_0(ks)
,
\end{split}$$ where $A_3(r,s) = \int \frac{d\theta}{2\mathrm{\pi}} \int \frac{d\theta'}{2\mathrm{\pi}} \int d^2{\vec{x}}\, A_{12}({\vec{x}},{\vec{r}}) \, A_{12}({\vec{x}},{\vec{s}})$, with $\theta$ and $\theta'$ the orientation angles of ${\vec{r}}$ and ${\vec{s}}$.
Combining correlated tomographic slices {#seccombap}
=======================================
In order to reduce the size of a data vector, we can optimally combine separate correlated estimates of a statistic, such as a galaxy-galaxy lensing measurement for a given lens sample against different tomographic source slices. This procedure is an example of data compression [@Tegmark97].
Suppose we have measured a given statistic at $N_{\mathrm{r}}$ different scales, for $N_{\mathrm{s}}$ source tomographic slices, and we wish to average the statistic over source samples, where the measurements in the different slices are correlated. We’ll arrange these quantities in a data vector $\mathbf{x}$ of length $N_{\mathrm{s}} N_{\mathrm{r}}$ with corresponding covariance matrix $\mathbf{C}$ of dimension $N_{\mathrm{s}} N_{\mathrm{r}} \times
N_{\mathrm{s}} N_{\mathrm{r}}$. The operation to combine the different tomographic slices to a compressed data vector $\mathbf{y}$ of length $N_{\mathrm{r}}$ can be written as, $$\mathbf{y} = \mathbf{w}^{\mathrm{T}} \mathbf{x} ,$$ where $\mathbf{w}$ is a weight matrix of dimension $N_{\mathrm{s}}
N_{\mathrm{r}} \times N_{\mathrm{r}}$, and we normalise the weights such that the column corresponding to each scale bin sums to unity. The optimal choice of weight matrix [@Tegmark97] is, $$\mathbf{w} = \mathbf{C}^{-1} \, \mathbf{D} ,$$ where $\mathbf{D}$ is a matrix of dimension $N_{\mathrm{s}}
N_{\mathrm{r}} \times N_{\mathrm{r}}$, whose columns consist of $N_{\mathrm{s}} N_{\mathrm{r}}$ entries for each final scale bin, with value 1 when the entry in $\mathbf{x}$ corresponds to the same scale bin, and value 0 otherwise. The resulting covariance matrix of $\mathbf{y}$ is, $$\mathbf{C}_{\mathrm{y}} = \mathbf{w}^{\mathrm{T}} \, \mathbf{C} \,
\mathbf{w} ,
\label{eqcovcompress}$$ which has dimension $N_{\mathrm{r}} \times N_{\mathrm{r}}$.
We found that this data compression scheme is more robust against numerical issues with the matrix inverse (and suffers negligible loss in precision) if we replace the weight matrix with $\mathbf{w} =
\mathbf{V}^{-1} \, \mathbf{D}$, where $\mathbf{V}$ is a diagonal matrix just containing the variance of the measurements. In this implementation the weight matrix is slightly sub-optimal, but we retained the full covariance matrix in Eq. \[eqcovcompress\] to ensure correct error propagation.
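As a concrete sketch of this compression, the following Python fragment (with invented sample sizes and a toy covariance, purely for illustration) applies the sub-optimal but robust weights $\mathbf{w} = \mathbf{V}^{-1} \mathbf{D}$ and propagates the covariance as in Eq. \[eqcovcompress\]:

```python
import numpy as np

rng = np.random.default_rng(1)
N_s, N_r = 3, 4                       # toy numbers of slices and scale bins
n = N_s * N_r

# Toy positive-definite covariance for the stacked data vector x, which is
# ordered slice-by-slice so that entry i refers to scale bin i % N_r.
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)
x = rng.normal(size=n)

# Design matrix D: D[i, a] = 1 when entry i of x belongs to scale bin a.
D = np.zeros((n, N_r))
D[np.arange(n), np.arange(n) % N_r] = 1.0

# Numerically robust (slightly sub-optimal) weights w = V^{-1} D, with V the
# diagonal of C, and each column normalised to sum to unity.
w = D / np.diag(C)[:, None]
w /= w.sum(axis=0)

y = w.T @ x                           # compressed data vector, length N_r
C_y = w.T @ C @ w                     # propagated covariance matrix

assert np.allclose(w.sum(axis=0), 1.0)
assert C_y.shape == (N_r, N_r)
```

The fully optimal choice would replace `np.diag(C)` by the inverse of the full covariance; the diagonal version shown here trades a small loss of precision for numerical stability, as discussed above.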
[^1]: E-mail: [email protected]
[^2]: We refer the reader to @Dvornik18 Appendix C for a full discussion of the different definitions of $\Sigma_{\mathrm{c}}$ that have been adopted in the literature.
[^3]: The BOSS large-scale structure samples are available for download at the link <https://data.sdss.org/sas/dr12/boss/lss/>.
[^4]: The 2dFLenS dataset is publicly available at the link <http://2dflens.swin.edu.au>.
---
abstract: 'Charge transfer statistics of quantum particles is obtained by analysing the time evolution of the many-body wave function. Exploiting properly chosen gauge transformations, we construct the probabilities for transfers of a discrete number of particles. Generally, the derived formula for counting statistics differs from the one previously obtained by Levitov [*et al.*]{} (J. of Math. Phys. [**37**]{}, 4845 (1996)). The two formulae agree only if the initial state is not a superposition of different charge states. Their difference is illustrated for the cases of a single particle and a tunnel junction, and the role of charge coherence is demonstrated.'
author:
- 'A. Shelankov'
- 'J. Rammer'
title: Charge transfer counting statistics revisited
---
Recently the question of counting statistics of charge transfer has attracted considerable interest due to its relevance to electronic transport in nanostructures where the discreteness of the electronic charge is reflected in quantum transport properties [@LevLeeLes96; @Les89; @Khl87; @But90; @BeeBut92; @BeeSch01]. Inspired by the concept of photon counting in optics, counting statistics of particles addresses a fundamental question of quantum transport, viz. the probability distribution for the number of charges transferred between different spatial regions of a system in a given time span. The objective is to get complete information about the fluctuations in particle currents, [*i.e.*]{}, correlations of any order. Generally, cross-correlations in charge transfers in different conducting channels of a mesoscopic system are of interest [@BorBelBru02]. In a seminal paper by Levitov, Lee and Lesovik [@LevLeeLes96], a formula for counting statistics was proposed by considering a [*gedanken*]{} experiment in which a spin is coupled to the electrons in a quantum wire whose transfer statistics are to be counted. The precession of the spin then counts the number of electrons passing either to the left or to the right of a chosen point in the wire. Applying quantum measurement arguments to a system interacting with an idealized measuring device, counting statistics was also considered by Nazarov and Kindermann [@NazKin].
Here, we shall develop counting statistics from a different point of view; instead of analysing a measurement process, we extract information about particle transfer directly from the wave function of a many-body system. We derive a formula for counting statistics which turns out to differ from the one of Ref. [@LevLeeLes96]. Our approach allows us to establish the circumstances under which the counting formula obtained in Ref. [@LevLeeLes96] is not applicable.
In classical mechanics, the notion of counting statistics is unproblematic: when complete information about a system is known – the trajectories of individual particles are known – there is a unique answer to the question of how many particles are transferred from, say, the left to the right in a given time interval. In quantum mechanics the situation is not so innocent. Even when full information about a many-body system is available, i.e., its time-dependent wave function is known, there is no straightforward algorithm to extract the probabilities in question, since there is no quantum operator representing the number of transferred particles. To circumvent this difficulty, a [*gedanken*]{} experiment was used in Ref. [@LevLeeLes96] as the basis for establishing a counting formula: In the experiment, the rotation $\chi (\lambda )$ of a spin coupled to the charge current via a gauge field was “measured” as a function of the coupling constant $\lambda $ (and the measuring time interval $\tau $). [*Assuming*]{} the generating function $\chi
(\lambda )$ to be $2\pi$-periodic, the Fourier coefficients in front of $\exp(i m \lambda)$ were interpreted as the probabilities for the passage of $m$ particles. We point out that the [*interpretation*]{} of the experiment is based on ingenious intuition rather than following unequivocally from the principles of quantum mechanics. Besides, it contains an ambiguity: as we show later, $\chi (\lambda )$ may have $\exp(\pm i \lambda/2 )$ components, which could then be interpreted as half-integer charge transfers. Also, positive definiteness of the prescribed probabilities cannot be established. A [*gedanken*]{} experiment, not being a realistic one, is in fact a vehicle for analysing the wave function of a system. Therefore we shall develop an approach to counting statistics of quantum particles which is based solely on an analysis of the wave function.
We consider a system partitioned into two parts by the plane at $x=0$, referred to as left and right. We want to tag the particles with a “non-demolishing marker,” [*i.e.*]{}, a marker that does not disturb the quantum dynamics. The marker should provide information on whether a particle has crossed the interface and in what direction. As explained below, this can be achieved by introducing the gauge transformation $$\hat{U}_{\lambda }= \exp\left[i \lambda \sum\limits_{k}\theta (-x_{k})\right] ,
\label{vrc}$$ where $x_{k}$ is the coordinate of the $k$-th particle of the many-body system. In order to demonstrate how the gauge transformation serves as a marker, we consider the case of a single particle subject to a potential. Let $\psi_{L}(x)$ and $\psi_{R}(x)$ denote normalized initial wave packet states located only on the left or right, respectively. A state is evolved in time according to $\psi
( \tau )= {\cal E}\psi (0)$, where ${\cal E}= \exp[-i{\cal H} \tau ]$ and ${\cal H}$ is the Hamiltonian of the system. For each initial state, specified at time $t=0$, the time evolution operator ${\cal E}$ produces a state which is a coherent superposition of left and right components: $${\cal E} \psi_{L} = \psi_{L \rightarrow L} + \psi_{L
\rightarrow R} \quad,\quad {\cal E} \psi_{R} = \psi_{R \rightarrow L}
+ \psi_{R \rightarrow {R}} \quad,$$ where the last symbol in the subscript on the r.h.s. indicates the location of the wave packet to be on the left (L) or right (R). Of importance will be the gauge transformed evolution operator $${\cal E}_{\lambda }= U_{\lambda}^{\dagger}{\cal
E}U_{\lambda}.
\label{czc}$$ Letting it operate on the considered initial states, the following marked final states emerges $${\cal E}_{\lambda } \psi_{L} = \psi_{L
\rightarrow L} + e^{i\lambda } \, \psi_{L \rightarrow R} \quad,\quad
{\cal E}_{\lambda } \psi_{R} = e^{- i\lambda } \, \psi_{R \rightarrow
L} + \psi_{R \rightarrow {R}} \quad ,$$ indeed states exhibiting the intended transfer marking. One immediately realises that the weights $||\psi_{L \rightarrow L} ||^2 = \langle \psi_{L\rightarrow L}|
\psi_{L\rightarrow L} \rangle$ and $||\psi_{L \rightarrow R} ||^2=
\langle \psi_{L\rightarrow R}| \psi_{L\rightarrow R} \rangle $ are the probabilities for the charge transfers $m=0$ and $m=1$, respectively. Analogously, $||\psi_{R \rightarrow L} ||^2$ and $||\psi_{R
\rightarrow R} ||^2$ are the probabilities for transfers $m=-1$ and $m=0$ for the initial state on the right. One is able to extract the components, $\psi_{m}$, of the final states ${\cal E} \psi_{L} $ and $
{\cal E} \psi_{R} $ which correspond to the transfer of $m$ particles from left to right by the following operation $$\psi_{m} = \int\limits_{0}^{2\pi } \frac{d \lambda }{2\pi }
\; e^{-i m \lambda }\;{\cal E}_{\lambda } \psi^{(0)} ,
\label{2rc}$$ where $\psi^{(0)}$ is the initial state, i.e., either $\psi_{L}$ or $\psi_{R}$. The probabilities for the possible particle transfers can therefore also be expressed in the form $P_m =
\langle\psi_{m}|\psi_{m}\rangle$.
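The extraction in Eq. (\[2rc\]) is easy to check numerically. The sketch below (Python; the tight-binding chain, packet positions and evolution time are invented toy choices) approximates the $\lambda$-integral by a discrete Fourier sum for a single particle, and verifies that the resulting weights $P_m$ behave as probabilities:

```python
import numpy as np
from scipy.linalg import expm

# Toy 1D tight-binding chain split into a left half (x < 0) and a right half.
L = 20
H = -(np.eye(L, k=1) + np.eye(L, k=-1))     # nearest-neighbour hopping
E = expm(-1j * H * 3.0)                     # evolution operator for tau = 3
left = np.arange(L) < L // 2                # sites belonging to the left half

def E_lam(lam):
    # Gauge-transformed evolution, Eq. (czc): U multiplies left amplitudes
    # by exp(i*lambda), playing the role of the non-demolishing marker.
    U = np.diag(np.where(left, np.exp(1j * lam), 1.0))
    return U.conj().T @ E @ U

# Initial state: superposition of a left and a right wave packet.
psi0 = np.zeros(L, complex)
psi0[L // 2 - 1] = 1 / np.sqrt(2)           # left side
psi0[L // 2] = 1 / np.sqrt(2)               # right side

# Discrete version of Eq. (2rc): Fourier transform over lambda.
Nlam = 16
lams = 2 * np.pi * np.arange(Nlam) / Nlam
P = {}
for m in range(-2, 3):
    psi_m = sum(np.exp(-1j * m * lam) * (E_lam(lam) @ psi0)
                for lam in lams) / Nlam
    P[m] = np.linalg.norm(psi_m) ** 2

assert abs(sum(P.values()) - 1.0) < 1e-9    # the weights sum to unity
assert P[2] < 1e-9 and P[-2] < 1e-9         # one particle: |m| <= 1 only
```

Since the evolution operator is unitary and only the harmonics $m = 0, \pm 1$ occur for a single particle, the discrete sum over a modest number of $\lambda$ values reproduces the integral exactly.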
Next we demonstrate that the procedure works for an arbitrary initial state and consider a superposition of the wave functions located on the left and right, $\psi^{(0)} = A \psi_{L} + B \psi_{R} $. The operation in Eq. (\[2rc\]) then produces the three charge transfer states $$\psi_{m} = \left\{
\begin{array}{rcc}
A\psi_{L \rightarrow R} &\;\;,\;\;& m=1 \;;\\
A \psi_{L \rightarrow L} + B\psi_{R \rightarrow R}
&\;\;,\;\;& m=0 \;;\\
B \psi_{R \rightarrow L} &\;\;,\;\;& m=-1 \;.
\end{array}
\right.$$ We note that the weights of these states, $|| \psi_{m}||^2 = \langle
\psi_{m}| \psi_{m}\rangle $, are the probabilities for transfers of $m$ particles for a general initial state. Indeed, $\langle\psi_{1}|\psi_{1}\rangle = |A|^2 \langle\psi_{L \rightarrow
R}|\psi_{L \rightarrow R}\rangle$ is the product of the probability initially to be on the left side and the conditional probability to transfer from the left to the right, and analogously for the terms $\langle\psi_{0}|\psi_{0}\rangle$ and $\langle\psi_{-1}|\psi_{-1}\rangle$.
Using Eq. (\[2rc\]), the transfer probability, $P_m =
\langle\psi_{m}|\psi_{m}\rangle$, can be expressed as $$P_{m} =
\int\limits_{0}^{2\pi }\frac{d\Lambda}{2\pi}
\frac{d\lambda}{2\pi}e^{-i m \lambda}
\langle\psi^{(0)}| {\cal E}_{\Lambda - \frac{\lambda}{2}}^{\dagger}
{\cal E}_{\Lambda+\frac{\lambda}{2}}|\psi^{(0)}\rangle ,
\label{5rc}$$ and the generating function, $\chi_{\tau }(\lambda )=
\sum\limits_{m}P_{m} e^{i m \lambda }$, reads $$\chi_{\tau}(\lambda) = \int\limits_{0}^{2\pi} \frac{d \Lambda }{2\pi }
\; \chi_{\tau} (\lambda , \Lambda )\;\; , \;\;
\chi_{\tau} (\lambda, \Lambda ) =
\langle{\cal E}_{\Lambda - \frac{\lambda}{2}}^{\dagger}
{\cal E}_{\Lambda+\frac{\lambda}{2}} \rangle\, ,
\label{9rc}$$ where the averaging is with respect to the initial state $\psi^{(0)}$, or the density matrix of the system. This expression is valid for a many-body system provided the corresponding gauge-transformed evolution operator Eq. (\[czc\]) is used. Inverting the argument, Fourier transformation with respect to $\lambda $ of the $2\pi
-$periodic function $\chi_{\tau}(\lambda)$, Eq. (\[9rc\]), generates the coefficients $P_{m}$ (for integer $m$’s) which are guaranteed to be positive, and $\sum_{m}P_{m}=1$. Their meaning is that of probabilities for integer charge transfers.
Expressing the evolution operator via the Hamiltonian, the integrand of the generating function in Eq. (\[9rc\]) can be written as $$\chi_{\tau}(\lambda, \Lambda ) =
\left\langle
T_{K}\exp \left[- i \int\limits_{{\cal C}_{\tau}} dt'\;
{\cal H}_{\gamma (t')}(t')\right]
\right\rangle_{0} ,
\label{tsc}$$ where ${\cal H}_{\gamma } = U_{\gamma } {\cal H}
U_{\gamma}^{\dagger}$, and ${\cal H}$ is the Hamiltonian for the system, and the Keldysh contour ${\cal C}_{\tau}$ proceeds from $t_{-}=- \infty $ to $\tau $ and back again as $t_{+}$ from $\tau $ to $- \infty $; $T_{K}$ denotes the time ordering on the contour. The projecting gauge field $\gamma $ is zero outside the measuring interval, and $\gamma (t'_{\mp}) = \Lambda \pm \frac{\lambda}{2} $ for $0< t' < \tau $, and the average in Eq. (\[tsc\]) is taken with respect to the density matrix in the far past. The expression in Eq. (\[tsc\]) can be evaluated using standard field theoretical methods.
The formula, Eq. (\[9rc\]), differs from the generating function proposed by Levitov [*et al.*]{} [@LevLeeLes96; @LevRez01]. The latter, denoting it $\chi_{\tau}^{\rm{L}} $, is obtained by setting $\Lambda =0$ in Eq. (\[9rc\]) $$\chi_{\tau}^{\rm{L}}(\lambda )=
\langle
{\cal E}_{-\frac{\lambda}{2}}^{\dagger} {\cal
E}_{\frac{\lambda}{2}}
\rangle .
\label{maK}$$ To verify that the two counting formulas are not equivalent, we calculate $\chi_{\tau}^{\rm{L}}(\lambda )$ for the single particle case where the initial state is the previously considered superposition of right and left located wave packets, and obtain $$\chi_{\tau}^{\rm{L}} (\lambda ) = \chi_{\tau }(\lambda ) +
4i \sin \frac{\lambda}{2} \Re
\left(A^{*} B \, \langle
\psi_{L\rightarrow R} | \psi_{R\rightarrow R} \rangle\right).
\label{esc5}$$ Indeed, $\chi_{\tau }^{\rm{L}}$ differs from our generating function. We find that the difference is the additional term in $\chi_{\tau
}^{\rm{L}}$ which is $4\pi -$periodic in $\lambda $. In Ref. [@LevLeeLes96], where the procedure for charge transfer counting was based on the Fourier expansion of $\chi_{\tau}^{\rm{L}}(\lambda )$, this implies that half-integer charge transfers would occur.
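The discrepancy in Eq. (\[esc5\]) can be verified numerically in a toy single-particle model (all parameters below are invented for illustration); the $\Lambda$-average in Eq. (\[9rc\]) is approximated by a discrete sum, which annihilates the $e^{\pm i\Lambda}$ cross terms exactly:

```python
import numpy as np
from scipy.linalg import expm

# Toy single-particle model: tight-binding chain, left/right halves.
L = 20
H = -(np.eye(L, k=1) + np.eye(L, k=-1))
E = expm(-1j * H * 3.0)
left = np.arange(L) < L // 2

def E_lam(lam):
    U = np.diag(np.where(left, np.exp(1j * lam), 1.0))
    return U.conj().T @ E @ U

psi_L = np.zeros(L, complex); psi_L[L // 2 - 1] = 1.0
psi_R = np.zeros(L, complex); psi_R[L // 2] = 1.0
A, B = 0.6, 0.8j                            # superposition amplitudes
psi0 = A * psi_L + B * psi_R

def chi_L(lam):
    # Generating function of Levitov et al., Eq. (maK).
    return np.vdot(E_lam(-lam / 2) @ psi0, E_lam(lam / 2) @ psi0)

def chi(lam, NLam=32):
    # Eq. (9rc): additionally average over Lambda by a discrete sum.
    Lams = 2 * np.pi * np.arange(NLam) / NLam
    return np.mean([np.vdot(E_lam(G - lam / 2) @ psi0,
                            E_lam(G + lam / 2) @ psi0) for G in Lams])

# Right-side components of the evolved packets, entering Eq. (esc5).
psi_LR = np.where(left, 0, E @ psi_L)
psi_RR = np.where(left, 0, E @ psi_R)
for lam in (0.7, 2.1):
    diff = chi_L(lam) - chi(lam)
    pred = 4j * np.sin(lam / 2) * np.real(np.conj(A) * B
                                          * np.vdot(psi_LR, psi_RR))
    assert abs(diff - pred) < 1e-10
```

The difference between the two generating functions is non-zero only when both $A$ and $B$ are non-zero, i.e., when the initial state carries charge coherence.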
To investigate further the difference between the two approaches, we consider counting statistics from a different perspective. Introducing the Hermitian operators $${\cal P}_{n} = \int\limits_{0}^{2\pi } \frac{d\gamma}{2\pi} e^{-i n
\gamma }U_{\gamma }
\quad,\quad n = 0, \pm 1, \ldots ,
\label{fsc}$$ through the marker gauge transformation, Eq. (\[vrc\]), we realize their meaning by noting that the operator ${\cal P}_{n}$ projects a state $|\psi \rangle $ onto the component $|\psi_{n}\rangle = {\cal
P}_{n}|\psi \rangle $ which corresponds to exactly $n$ particles on the left. These projection operators, similar to the ones introduced by P. W. Anderson in superconductivity, turn out to be suitable tools for the kind of vivisection of a quantum state needed to obtain the probability distribution for discrete charge transfers. Being states with definite particle number on the left, the projections $|\psi_{n}\rangle $ are eigenfunctions of the operator $U_{\gamma }$, $ U_{\gamma } |\psi_{n}\rangle = e^{i \gamma n} |\psi_{n}\rangle
$. This property can be expressed in operator form as ${\cal P}_{n} =
e^{- i \gamma n} U_{\gamma }{\cal P}_{n} $, whereby $ \sum_{n} e^{i n
\gamma }{\cal P}_{n} = U_{\gamma } \,. $ Consequently, Eq. (\[5rc\]) can be transformed into $$P_{m} =
\sum_{n}
\langle\psi^{(0)}|
{\cal P}_{n}{\cal E}^{\dagger} {\cal P}_{n-m}
{\cal E} {\cal P}_{n}
|\psi^{(0)}\rangle ,
\label{psc}$$ producing a different way of expressing the transfer probability. According to quantum mechanics, the matrix element $\langle\psi^{(0)}|
{\cal P}_{n}{\cal E}^{\dagger} {\cal P}_{n-m} {\cal E} {\cal P}_{n}
|\psi^{(0)}\rangle = ||{\cal P}_{n-m} {\cal E} {\cal P}_{n} \psi^{(0)}
||^2$ is the probability for the transition from a state with $n$ particles on the left to a state with $n-m$ particles on the left. The quantity $P_{m}$ is thus the probability for a transfer of $m$ particles to the right in a time span $\tau$ [*given*]{} that a measurement of the charge state is performed initially. For an arbitrary mixture of states, $$P_{m} = \sum\limits_{n}
\;\langle
{\cal P}_{n}{\cal E}^{\dagger} {\cal P}_{n-m}
{\cal E} {\cal P}_{n}
\rangle
\; ,
\label{psc02}$$ where, as in Eq. (\[9rc\]), the average means taking trace with respect to the density matrix $\rho_{0}$ at time $t=0$ when the counting is initiated. (We recall that the evolution operator ${\cal
E}$ evolves the system from time $t=0$ to $t= \tau$.) For a classical statistical ensemble, this is how the statistics of particle transfers is evaluated, and we conclude that Eq. (\[psc02\]) and therefore the generating function, Eq. (\[9rc\]), indeed has the correct classical limit.
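For a single particle, ${\cal P}_{1}$ and ${\cal P}_{0}$ reduce to projectors onto the left and right halves of the system, and the equivalence of Eq. (\[psc\]) with the Fourier extraction of Eq. (\[2rc\]) can be checked directly (Python sketch with invented toy parameters):

```python
import numpy as np
from scipy.linalg import expm

# Toy single-particle model on a chain; the left half plays the role of
# "n = 1 particles on the left", the right half the role of "n = 0".
L = 20
H = -(np.eye(L, k=1) + np.eye(L, k=-1))
E = expm(-1j * H * 3.0)
left = np.arange(L) < L // 2
proj = {1: np.diag(left.astype(float)),       # P_1: particle on the left
        0: np.diag((~left).astype(float))}    # P_0: particle on the right

psi0 = np.zeros(L, complex)
psi0[L // 2 - 1] = 0.6                        # left component
psi0[L // 2] = 0.8j                           # right component

def P_fourier(m, Nlam=16):
    # Route 1: Fourier extraction of psi_m, Eq. (2rc).
    lams = 2 * np.pi * np.arange(Nlam) / Nlam
    def E_lam(lam):
        U = np.diag(np.where(left, np.exp(1j * lam), 1.0))
        return U.conj().T @ E @ U
    psi_m = sum(np.exp(-1j * m * lam) * (E_lam(lam) @ psi0)
                for lam in lams) / Nlam
    return np.linalg.norm(psi_m) ** 2

def P_proj(m):
    # Route 2: charge projectors, Eq. (psc).
    return sum(np.linalg.norm(proj[n - m] @ E @ proj[n] @ psi0) ** 2
               for n in (0, 1) if (n - m) in proj)

for m in (-1, 0, 1):
    assert abs(P_fourier(m) - P_proj(m)) < 1e-10
```

Both routes give the same probabilities, since the projections ${\cal P}_{n}|\psi^{(0)}\rangle$ here coincide with the left and right wave-packet components.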
In terms of the charge projection operators, the generating function Eq. (\[9rc\]) becomes $$\chi_{\tau}(\lambda ) = \sum\limits_{n}
\;
\langle
{\cal P}_{n}{\cal E}^{\dagger}_{-\frac{\lambda}{2}}
{\cal E}_{\frac{\lambda}{2}} {\cal P}_{n}
\rangle \, .
\label{xvc02}$$ We infer from this expression that if $\rho_{0}$ is diagonal in the representation of the charge states, one of the projection operators can be removed from Eq. (\[xvc02\]), and the sum over the remaining projectors is unity; equivalently, $\chi_{\tau }(\lambda, \Lambda )$ does not depend on $\Lambda $ and the integration with respect to $\Lambda $ can be omitted. In [*this*]{} case, the generating function, Eq. (\[xvc02\]), reduces to the form on the r.h.s. of Eq. (\[maK\]), which is identical to the result of Ref. [@LevLeeLes96]. The physical origin of the difference between the two formulae is thus the charge off-diagonal components of the density matrix. As one can show, it is the latter which produces the unphysical $4\pi $-periodic part of the generating function of Ref. [@LevLeeLes96].
We observe that the simple physical picture in which the current is built up of transfers of an integer number of particles meets with difficulties for a general initial state, when the system may be in a superposition of different charge states, and $\rho_{0}$ is non-diagonal in charge space. Indeed, one expects that $Q(\tau
)=e\sum_{m} m P_{m}$ should equal the average charge transfer, $\int_{0}^{\tau } \langle \hat{I}(t) \rangle dt$, $\hat{I}$ being the Heisenberg current operator. Calculating $\dot{Q}$ from Eq. (\[9rc\]) or Eq. (\[xvc02\]) gives $$\dot{Q}(t) = \sum_{n} \langle {\cal P}_{n} \hat{I}(t) {\cal P}_{n} \rangle \;. \label{nwc}$$ Clearly, $\dot{Q}$ is identical to $\langle \hat{I} \rangle $ only when $\rho_{0}$ is charge diagonal. A similar difficulty emerges for the generating function of Ref. [@LevLeeLes96]: the expression $Q=
\sum_{m} m P_{m}^{L}$ gives the correct expectation value for the transferred charge [*only*]{} if one allows $m$ to assume [*half-integer*]{} as well as integer values, the former being due to the $4 \pi $-periodic part of $\chi_{\tau }^{L}(\lambda )$ generated by the charge off-diagonal elements of $\rho_{0}$. The analysis points to an ambiguity in counting statistics, a trade-off between having a probability distribution for discrete charge transfers and the generation of proper current correlation functions. The resolution is shown to be tied to the charge structure of the initial state of the system.
As an illustration, we evaluate the counting statistics for a tunnel junction using Eq. (\[tsc\]); the problem was considered previously in Ref. [@LevRez01] using the generating function in Eq. (\[maK\]). The system consists of two weakly connected metallic regions, left and right. The Hamiltonian reads $ {\cal H} = {\cal
H}_{0} + V^{T}$, where ${\cal H}_{0}$ refers to the isolated regions and $V^{T}= V_{r \leftarrow l} + V_{l \leftarrow r}$ is the tunneling part, where $V_{r \leftarrow l}$ ($V_{l \leftarrow r}$) describes transitions from left to right (right to left), $V_{r \leftarrow l}=
V_{l \leftarrow r }^{\dagger} $. The gauge transformation $U_{\gamma
}$ affects only the tunneling part, and in Eq. (\[tsc\]), $${\cal H}_{\gamma } = {\cal H}_{0} + V_{\gamma }^{T}\;,\;
V_{\gamma }^{T}=
e^{i \gamma }V_{r \leftarrow l}+ e^{- i \gamma }V_{l \leftarrow r}\,.
\label{zsc}$$ As in Ref. [@LevRez01], we consider only the leading contributions with respect to tunneling in Eq. (\[tsc\]), and $W(\lambda,\Lambda)
\equiv \ln \chi (\lambda , \Lambda )$ can be evaluated as $$W(\lambda,\Lambda) =
- \frac{1}{2}
\left\langle T_{K}
\int\limits_{- \infty }^{\tau}
dt_{1} dt_{2}
\hat{V}_{\gamma (t_{1})}^{T}(t_{1})
\hat{V}_{\gamma (t_{2})}^{T}(t_{2})
\right\rangle ,
\label{yvc}$$ where $\hat{V}^{T}(t)$ is the tunneling operator in the interaction picture; for given $\lambda $ and $\Lambda $, the projecting field $\gamma $ is a function of the Keldysh time $t_{\pm}$ introduced below Eq. (\[tsc\]). One obtains two contributions, $W = W_{MM}+
W_{MP}$. The first term, $W_{MM}$, originates from the time domain where both time arguments $t_{1}$ and $t_{2}$ in Eq. (\[yvc\]) belong to the measuring interval from 0 to $\tau $, and the second one, $W_{MP}$, is the contribution from the region where one of the times $t_{1}$ or $t_{2}$ is prior to the start of the measurement, $t=0$.
The term $W_{MM}$ is $2\pi $-periodic in $\lambda $ and does not depend on $\Lambda $. It coincides with the result of Ref. [@LevRez01]: $ W_{MM} = (e^{i \lambda} -1)\; w_{+}(\tau ) \;
+ \; (e^{- i \lambda} -1)\; w_{-}(\tau ) $ where $$w_{+}(\tau )
=
2\pi \tau
\int dE\, d \epsilon \,
T_{E_{-}, E_{+}}\,
n_{E_{-}}^{l}
\left(1 - n_{E_{+}}^{r}\right)
\Delta_{\tau }(\epsilon)
\quad,\quad \Delta_{\tau }(\epsilon)
=
\frac{1}{2 \pi \tau}
\left|
\frac{e^{i \epsilon \tau} - 1}{\epsilon}
\right|^2 \; .$$ Here $E_{\pm}= E \pm\frac{\epsilon }{2}$, $n_{E}^{l}$ and $n_{E}^{r}$ are the electron distribution functions for the energy $E$ in the left and right regions respectively, and $$T_{E,E'}=
\sum_{p,p'}|V_{p,p'}^{T}|^2 \delta (E - \varepsilon _{p}) \delta (E' -
\varepsilon _{p'}),$$ $\varepsilon_{p}$ being the single-particle energy; one obtains $w_{-}$ from $w_{+}$ by interchanging $n^{l,r}$ and $(1-n^{l,r})$.
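As a minimal numerical sketch (my check, not part of the derivation), the window function $\Delta_{\tau }(\epsilon )$ integrates to unity, so for a $T_{E,E'}$ that is smooth in $\epsilon$ the rates $w_{\pm}(\tau )$ indeed grow linearly with the measuring time $\tau$:

```python
import numpy as np

def delta_tau(eps, tau):
    # Delta_tau(eps) = (1/(2*pi*tau)) * |(e^{i*eps*tau} - 1)/eps|^2
    #               = (2/(pi*tau)) * sin(eps*tau/2)**2 / eps**2
    eps = np.asarray(eps, dtype=float)
    out = np.full(eps.shape, tau / (2.0 * np.pi))  # eps -> 0 limit
    nz = np.abs(eps) > 1e-12
    out[nz] = (2.0 / (np.pi * tau)) * np.sin(eps[nz] * tau / 2) ** 2 / eps[nz] ** 2
    return out

tau = 1.0
eps = np.linspace(-2000.0, 2000.0, 2_000_001)
norm = delta_tau(eps, tau).sum() * (eps[1] - eps[0])  # Riemann sum of the integral
print(norm)  # close to 1, up to the truncated 1/eps^2 tails
```

As $\tau$ grows, $\Delta_\tau$ narrows toward a delta function while keeping unit weight, which is what converts the energy integral into a rate.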
The second term, $W_{MP}$, has the form $$W_{MP}
= 2i \sin \frac{\lambda}{2}\,\Re\left(e^{i \Lambda } w_{MP}(\tau
)\right) \; ,$$ where $$w_{MP}(\tau ) = 2\int dE\, d \epsilon \,
T_{E_{-}, E_{+}}
\left( n_{E_{-}}^{l}
- n_{E_{+}}^{r}\right)\;
\frac{e^{i \epsilon^{+}\tau}-1}{(\epsilon^{+})^2}
\quad,\quad \epsilon^{+}= \epsilon +i 0
\; ,$$ and $W_{MP}$ is thus a $4\pi$-periodic function of $\lambda $.
The generating function factorizes, $\chi = \chi_{MM} \, \chi_{MP}$, where $\chi_{MM}= e^{W_{MM}}$ is identical to the result in Ref.[@LevRez01], and $$\chi_{MP}(\tau , \lambda )
=
\sum\limits_{m= - \infty }^{\infty }
e^{im \lambda }
J_{m}^2\left( w_{MP}(\tau ) \right)\; ,
\label{zvc}$$ $J_{m}$ being the Bessel function. If $T_{E,E'}$ is featureless on the scale of the Fermi energy, $E_{F}$, $w_{MP}$ is a constant once $\tau \gg \hbar/ E_{F} $, $$w_{MP}(\infty )= \pi \,\frac{R_{0}}{R_{T}} \; ,
\label{2vc}$$ where $R_{T}$ is the tunneling resistance and $R_{0}= 2\pi \hbar/e^2$.
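Since Eq. (\[zvc\]) should define a proper probability distribution, a quick numerical check (a sketch; the ratio $R_{T} = 10\,R_{0}$ is an assumed illustrative value) is that $P_{m}=J_{m}^2(w_{MP})$ is normalized, $\sum_m J_m^2(x)=1$, with charge-transfer variance $\sum_m m^2 J_m^2(x)=x^2/2$:

```python
import numpy as np

def bessel_j(m, x, n=40000):
    # Integral representation J_m(x) = (1/pi) int_0^pi cos(m*t - x*sin(t)) dt,
    # evaluated with the midpoint rule (avoids any special-function library).
    t = (np.arange(n) + 0.5) * np.pi / n
    return np.cos(m * t - x * np.sin(t)).mean()

# Eq. (2vc): w_MP(inf) = pi*R0/RT, with R_T = 10*R_0 assumed here.
w = np.pi * 0.1
m = np.arange(-10, 11)                            # J_m(0.31) is negligible beyond |m| ~ 10
P = np.array([bessel_j(k, w) ** 2 for k in m])    # P_m of Eq. (zvc)

print(P.sum())           # -> 1.0: the distribution is normalized
print((m**2 * P).sum())  # -> w**2/2, the variance of the transferred charge
```

The normalization holds for any $w_{MP}$, so the factorized $\chi_{MP}$ never spoils positivity of the full counting distribution.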
The calculation for the tunnel junction qualitatively agrees with the result we obtained in the single-particle case. We observe again that the formula of Refs. [@LevLeeLes96; @LevRez01], which is identical to $\chi_{\tau }(\lambda ,\Lambda =0 )$, contains $4 \pi $-periodic terms. According to the derivation, these originate from tunneling events which occurred before the measurement started and created charge off-diagonal elements in the density matrix by the time $t=0$. For large enough measuring times, $W_{MP}$ saturates, unlike $W_{MM}$, which grows linearly with time; $W_{MP}$ thus represents memory of the initial state of the system, amounting to intrinsic, voltage-independent charge fluctuations. The latter, however, need not be small, as seen from Eq. (\[2vc\]). According to our analysis, the results in [@LevRez01], based on Ref. [@LevLeeLes96], are valid only if the two electrodes are not connected before the measurement, whereby charge superposition is prevented.
In this paper we have reconsidered counting statistics applying the rules of quantum mechanics. We confirm the counting formula of Levitov [*et al.*]{}, but only for cases where the initial state of the system is charge diagonal, [*i.e.*]{}, when superposition of different charge states is absent. Our approach leads to a novel formula for the probability distribution of integer charge transfers which is valid for a general charge coherent state.
The role of charge coherence, which is a ubiquitous feature of any many-body quantum state, should be examined in each particular case. For a system of [*non-interacting*]{} particles, one may argue that details of the initial state are of minor importance since the net contribution of the non-diagonal elements tends to average out. Our tunnel-junction results show how this comes about in this particular case: even though charge coherence is present in the initial state at $t=0$, created by prior tunneling events, its contribution, expressed by $w_{MP}$, does not grow with time, diminishing in importance at large measuring times. Nevertheless, the charge coherence present at the start of counting does noticeably change the statistics in accordance with our formula, Eq. (\[zvc\]), especially for short measuring times. For [*interacting*]{} systems the situation is similar, provided the charge structure of the quantum state can be expressed in terms of quasi-particles. An important counterexample is a superconductor, where the superposition of different charge states is rigidly maintained, as required by the number-phase uncertainty relation. Although the theoretical objects entering the formula of Levitov [*et al.*]{} can be calculated for superconductors [@BelNaz01] and even measured [@LevLeeLes96; @NazKin], they cannot be interpreted as charge transfer probabilities. Since charge off-diagonal elements are important, the formula of Ref. [@LevLeeLes96] cannot be used to calculate the statistics of charge transfer related to the current of Cooper pairs. The fact that the previous counting formula leads to negative “probabilities” in the case of a Josephson junction [@BelNaz01] is understandable in view of this observation.
In conclusion, we have shown that by using gauge and charge projection operators to analyze the structure of a quantum state of an arbitrary system, one is able to construct a probability distribution for charge transfers of particles obeying quantum dynamics. The constructed function is a proper probability distribution, i.e., positive definite and normalized, and the probability distribution for charge transfer counting of classical mechanics emerges in the correspondence limit. The charge transfer in a tunnel junction is considered, and the modification of counting statistics due to charge coherence has been demonstrated.
We are grateful to J. Wabnig, Yu. Makhlin, and A. Shnirman for discussions. This work was supported by The Swedish Research Council. The paper was completed during a visit of one of us (A. S.) to Institut für Theoretische Festkörperphysik, Karlsruhe University, and the hospitality extended during the visit is greatly appreciated; financial support from SFB 195 of the DFG is acknowledged.
[99]{}
Also at A. F. Ioffe Physico-Technical
L. S. Levitov, H. W. Lee, G. B. Lesovik, J. of Math. Phys. [**37**]{}, 4845 (1996).
G. B. Lesovik, JETP Lett. [**49**]{}, 592 (1989).
V. K. Khlus, Sov. Phys. JETP [**66**]{}, 1243 (1987).
M. Büttiker, Phys. Rev. Lett. [**65**]{}, 2901 (1990).
C. W. J. Beenakker, M. Büttiker, Phys. Rev. B [**46**]{}, 1889 (1992).
C. W. J. Beenakker, H. Schomerus, Phys. Rev. Lett. [**86**]{}, 700 (2001).
J. Börlin, W. Belzig, C. Bruder, Phys. Rev. Lett. [**88**]{}, 197001 (2002); P. Samuelsson, M. Büttiker, Phys. Rev. Lett. [**89**]{}, 046601 (2002).
Yu. V. Nazarov, M. Kindermann, cond-mat/0107133.
L. S. Levitov, M. Reznikov, cond-mat/0111057.
W. Belzig, Yu. V. Nazarov, Phys. Rev. Lett. [**87**]{}, 067006 (2001).
---
abstract: 'Using a multi-channel analysis of $W_{L}W_{L}$ scattering signals, I study the LHC’s ability to distinguish among various models of strongly interacting electroweak symmetry breaking sectors.'
author:
- |
William B. Kilgore\
[*Fermi National Accelerator Laboratory*]{}\
[*P.O. Box 500*]{}\
[*Batavia, IL 60510, USA*]{}
title: |
Distinguishing Among Models of Strong ${\bf W}_{\bf L}{\bf
W}_{\bf L}$ Scattering at the LHC
---
Introduction
============
The most important question in particle physics today concerns the nature of the electroweak symmetry breaking mechanism. One of the most interesting and experimentally challenging possibilities is that the electroweak symmetry is broken by some new strong interaction. If this is the case, there may be no light quanta (with masses of a few hundred GeV or less), such as the Higgs boson, supersymmetric partners, [*etc.*]{}, associated with the symmetry breaking sector. There will, however, be an identifiable signal of the symmetry breaking sector: strong $W_{L}W_{L}$ scattering.
The Goldstone boson equivalence theorem [@ET] states that at high energy, longitudinally polarized massive gauge bosons “remember” that they are the Goldstone bosons of the symmetry breaking sector. Accordingly, longitudinal gauge bosons in high energy scattering amplitudes can be replaced by the corresponding Goldstone bosons. For weakly interacting symmetry breaking sectors, this is merely a computational convenience. For strongly interacting symmetry breaking sectors, however, the equivalence theorem, coupled with the effective-[*W*]{} approximation [@ewa] becomes a powerful tool for modeling high energy gauge boson scattering amplitudes.
Observing strong $W_{L}W_{L}$ scattering presents a very difficult experimental challenge. The scattering amplitudes grow with center of mass energy, but do not become large until the mass of the $W_{L}W_{L}$ system exceeds $\sim$1 TeV. At the LHC, the luminosity at such energies will be small and falling steeply, so that even though the scattering amplitudes are large, the cross section will be small, amounting to no more than tens of events per year. Nevertheless, it has been shown [@LHCgold; @mcwk1; @mcwk2] that for all but a few pathological cases [@Stealth] the LHC will be able to establish the presence of strong $W_{L}W_{L}$ scattering, if it exists, in at least one scattering channel. It has also been shown that if strong $W_{L}W_{L}$ scattering is dominated by a single low-lying ($\sim$1 TeV) resonance, that resonance can be identified. The purpose of this study is to take a first look at the difficult task of distinguishing among different models of the symmetry breaking sector, even when there is not a single identifiable resonance. I will perform a multi-channel analysis on several different models of the symmetry breaking sector, comparing the predicted signals in each $W_{L}W_{L}$ scattering channel to those predicted by other models.
As the basis for this study, I will use the background calculations and signal identification cuts of Bagger [*et. al.*]{} [@LHCgold], in which a standard set of cuts is identified for each scattering channel, and imposed consistently on the background processes and on a variety of models of strongly interacting symmetry breaking sectors.
$W_{L}W_{L}$ Scattering Channels
================================
This analysis looks at $W_{L}W_{L}$ scattering into 5 different final states:
- $ZZ \rightarrow \ell^+\ell^-\ell^+\ell^-$
- $ZZ \rightarrow \ell^+\ell^-\nu\overline\nu$
- $W^{\pm}Z \rightarrow \ell^{\pm}\nu\ell^+\ell^-$
- $W^+W^- \rightarrow \ell^+\nu\ell^-\overline\nu$
- $W^{\pm}W^{\pm} \rightarrow \ell^{\pm}\nu\ell^{\pm}\nu$
The $ZZ \rightarrow \ell^+\ell^-\nu\overline\nu$ channel is included because the small branching fraction of [*Z*]{} bosons into charged leptons severely limits the statistical significance of the $ZZ \rightarrow \ell^+\ell^-\ell^+\ell^-$ process.
In general, longitudinal $W_L$ pair production is dominated by the $W_{L}W_{L}$ fusion process, in which two incoming quarks radiate longitudinal $W_L$ bosons, which then rescatter off of one another as in Figure \[fig:wwscatt\].
Vector resonances also receive sizable contributions from $q\overline{q}'$ annihilation into a gauge boson followed by mixing of the gauge boson into the vector resonance state, $q\overline{q}'
\rightarrow W^* \rightarrow \rho \rightarrow W_{L}W_{L}$.
The background in this study is taken to be the standard model with a light (100 GeV) Higgs boson. The signal for strong $W_{L}W_{L}$ scattering is an observable excess of gauge boson pairs over the expected rate from the standard model. The dominant background processes are $W_{L}W_{L}$ fusion into transverse W pairs ($qq \rightarrow
qq'W_TW_T (W_TW_L)$), $q\overline{q}'$ annihilation into W pairs plus jets, and top quark induced backgrounds.
The strongly interacting vector boson fusion process gives the signal events several distinctive characteristics which allow them to be distinguished from the background. The incoming quarks tend to emit longitudinal gauge bosons in the forward direction which then rescatter strongly off of one another. The forward emission tends to give the spectator quarks little recoil transverse momentum while the strong scattering process, which grows stronger with increasing center of mass energy, tends to be isotropic, throwing a large number of events into central rapidity regions. Thus, the signal is characterized by high invariant mass back-to-back gauge boson pairs accompanied by two forward jets from the spectator quarks and little central jet activity.
This is to be contrasted with the various background processes. $q\overline{q}'$ annihilation tends to produce transversely polarized gauge bosons and no forward spectator jets. When jets are produced in association with $q\overline{q}'$ annihilation, they often appear at central rapidities. Top-induced backgrounds tend to produce very active events, characterized by jet activity in the vicinity of the gauge bosons. Perhaps the most dangerous background is the gauge boson fusion process producing at least one transversely polarized gauge boson, since this process produces events with the same topology as the signal. Still, there are important differences. Interactions involving transversely polarized gauge bosons are weak (characterized by the weak gauge coupling) at all energies. In order for energetic gauge bosons to be thrown into the central region, they must typically recoil off of the emitting quarks, rather than off of one another. This hard recoil off of the quarks tends to throw the accompanying jets into the central region, rather than the forward region.
These signatures can be used to help formulate a set of cuts which will enhance the signal at the expense of the background. One expects to find very energetic leptons in the central region of the detector. In addition, the leptons from one gauge boson tend to be back-to-back with those from the other gauge boson. In $ZZ$ modes, the invariant mass of the $ZZ$ pair tends to be large. In other modes, which cannot be fully reconstructed, the transverse mass of the gauge boson pair tends to be large. In addition, one can veto events with significant central jet activity, and tag for the forward spectator jets. The standard cuts used in Reference [@LHCgold] are summarized in Table \[cutstable\],
| Channel | Leptonic Cuts | Jet Cuts |
|---------|---------------|----------|
| $ZZ(4\ell)$ | $\vert y(\ell) \vert < 2.5$; $p_T(\ell) > 40$ GeV; $p_T(Z) > p_{cm}(Z)/2$; $M(ZZ) > 500$ GeV | $E_{tag} > 0.8$ TeV; $3.0 < \vert y_{tag} \vert < 5.0$; $p_{T\ tag} > 40$ GeV; no veto |
| $ZZ(\ell\ell\nu\nu)$ | $\vert y(\ell) \vert < 2.5$; $p_T(\ell) > 40$ GeV; $p_T^{\rm miss} > 250$ GeV; $M_T(ZZ) > 500$ GeV; $p_T(\ell\ell) > M_T(ZZ)/4$ | $E_{tag} > 0.8$ TeV; $3.0 < \vert y_{tag} \vert < 5.0$; $p_{T\ tag} > 40$ GeV; veto if $p_{T\ veto} > 60$ GeV, $\vert y_{veto} \vert < 3.0$ |
| $W^+W^-$ | $\vert y(\ell) \vert < 2.0$; $p_T(\ell) > 100$ GeV; $\Delta p_T(\ell\ell) > 440$ GeV; $\cos\phi_{\ell\ell} < -0.8$; $M(\ell\ell) > 250$ GeV | $E_{tag} > 0.8$ TeV; $3.0 < \vert y_{tag} \vert < 5.0$; $p_{T\ tag} > 40$ GeV; veto if $p_{T\ veto} > 30$ GeV, $\vert y_{veto} \vert < 3.0$ |
| $W^\pm Z$ | $\vert y(\ell) \vert < 2.5$; $p_T(\ell) > 40$ GeV; $p_T^{\rm miss} > 50$ GeV; $p_T(Z) > M_T(WZ)/4$; $M_T(WZ) > 500$ GeV | $E_{tag} > 0.8$ TeV; $3.0 < \vert y_{tag} \vert < 5.0$; $p_{T\ tag} > 40$ GeV; veto if $p_{T\ veto} > 60$ GeV, $\vert y_{veto} \vert < 3.0$ |
| $W^\pm W^\pm$ | $\vert y(\ell) \vert < 2.0$; $p_T(\ell) > 70$ GeV; $\Delta p_T(\ell\ell) > 200$ GeV; $\cos\phi_{\ell\ell} < -0.8$; $M(\ell\ell) > 250$ GeV | $3.0 < \vert y_{tag} \vert < 5.0$; $p_{T\ tag} > 40$ GeV; veto if $p_{T\ veto} > 60$ GeV, $\vert y_{veto} \vert < 3.0$ (no $E_{tag}$ cut) |
where $p_{cm}(Z)$ is the magnitude of the [*Z*]{} boson momentum in the diboson center of mass, $$p_{cm}(Z) = {1\over2}\sqrt{M^2(ZZ) - 4M_Z^2},
\label{eq:pcm}$$ and the transverse masses are $$\begin{aligned}
\label{eq:transmass}
M_T^2(ZZ)&=&\left[\sqrt{M_Z^2+p_T^2(\ell\ell)} +
\sqrt{M_Z^2+|p_{T}^{\rm miss}|^2}\right]^2\nonumber\\
&& - \left[{\vec{p}}_T(\ell\ell) + {\vec{p}}_T^{\rm miss}\right]^2\nonumber\\
\\
M_T^2(WZ)&=&\left[\sqrt{M^2(\ell\ell\ell) + p_T^2(\ell\ell\ell)} +
|p_{T}^{\rm miss}|\right]^2\nonumber\\
&& - \left[{\vec{p}}_T(\ell\ell\ell) + {\vec{p}}_T^{\rm miss}\right]^2.\nonumber\end{aligned}$$
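The transverse masses above can be coded directly; the following sketch (function and variable names are my own) evaluates Eq. (\[eq:transmass\]) from measured transverse momenta:

```python
import numpy as np

MZ = 91.19  # GeV (Z mass, value assumed for illustration)

def mt_zz(pt_ll, pt_miss):
    """M_T(ZZ) of Eq. (transmass) from the dilepton transverse-momentum
    2-vector pt_ll and the missing-transverse-momentum 2-vector pt_miss."""
    pt_ll, pt_miss = np.asarray(pt_ll, float), np.asarray(pt_miss, float)
    et_ll = np.hypot(MZ, np.linalg.norm(pt_ll))    # sqrt(MZ^2 + pT(ll)^2)
    et_miss = np.hypot(MZ, np.linalg.norm(pt_miss))
    vec = pt_ll + pt_miss
    return np.sqrt((et_ll + et_miss) ** 2 - vec @ vec)

def mt_wz(m_lll, pt_lll, pt_miss):
    """M_T(WZ) of Eq. (transmass) from the trilepton invariant mass m_lll,
    its transverse-momentum 2-vector, and the missing transverse momentum."""
    pt_lll, pt_miss = np.asarray(pt_lll, float), np.asarray(pt_miss, float)
    et_lll = np.hypot(m_lll, np.linalg.norm(pt_lll))
    vec = pt_lll + pt_miss
    return np.sqrt((et_lll + np.linalg.norm(pt_miss)) ** 2 - vec @ vec)

# Back-to-back 500 GeV Z's: M_T(ZZ) = 2*sqrt(MZ^2 + 500^2) ~ 1017 GeV,
# comfortably above the 500 GeV cut of Table [cutstable].
print(mt_zz([500.0, 0.0], [-500.0, 0.0]))
```

For a fully reconstructed back-to-back pair the transverse mass saturates the invariant mass, which is why the cuts on $M_T$ and $M$ play the same role in the different channels.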
The cuts in Table \[cutstable\] are chosen to maximize the significance of each channel in the 1 TeV Higgs model. These cuts are not well suited for observing vector resonances in the $W^{\pm}Z$ channel. In the Higgs model, this channel, like all others, is dominated by the vector boson fusion process. The cuts therefore call for a forward jet tag. In vector resonance models, however, more than half of the signal in the $W^{\pm}Z$ channel comes from direct $q\overline{q}'$ annihilation via mixing of the gauge boson and vector resonance states. Since these events are not accompanied by forward spectator jets, the jet tag cuts them out of the event sample. Reference [@LHCgold] uses a special cut to enhance the $W^{\pm}Z$ signal in vector resonance models, but does not apply this cut to the other models.
Models
======
Formalism and the Lagrangian
----------------------------
Models of strongly interacting symmetry breaking sectors typically fall into one of three categories:
- Nonresonant models.
- Models with scalar resonances.
- Models with vector resonances.
Reference [@LHCgold] describes eight different models of the symmetry breaking sector: three nonresonant models, three scalar resonance models, and two vector resonance models. The three nonresonant models differ in the unitarization procedures imposed upon them. The three scalar resonance models are the standard model with a 1 TeV Higgs boson, a nonlinearly realized chiral model with a 1 TeV scalar-isoscalar resonance (which differs from a Higgs boson by the strength of its coupling to the Goldstone bosons), and an [*O(2N)*]{} symmetric scalar interaction. The vector resonance models incorporate vector-isovector resonances of masses 1 TeV and 2.5 TeV in a nonlinearly realized chiral symmetric interaction.

In this study I will use five of the models from Reference [@LHCgold]: the K-matrix unitarized nonresonant model, the standard model, the chiral symmetric scalar resonance model, and the vector resonance models. A single Lagrangian, transforming under a nonlinearly realized [*SU(2)*]{}${}_L\otimes$ [*SU(2)*]{}${}_R$ chiral symmetry, can be written down for all of these models, with particular couplings taking special values or set to zero as necessary. The Goldstone boson fields, $\pi^a$, are parameterized by the field $$\xi = \exp{i{\sigma^a\pi^a\over2v}},
In this study I will use five of the models from Reference [@LHCgold]: the K-matrix unitarized nonresonant model, the standard model, the chiral symmetric scalar resonance model and the vector resonance models. A single Lagrangian, transforming under a nonlinearly realized [*SU(2)*]{}${}_L\otimes$ [*SU(2)*]{}${}_R$ chiral symmetry, can be written down for all of these models, with particular couplings taking special values or set to zero as necessary. The Goldstone boson fields, $\pi^a$, are parameterized by the field $$\xi = \exp{i{\sigma^a\pi^a\over2v}},
\label{eq:xidef}$$ where $\sigma^a$ are the Pauli matrices and $v$ is the electroweak vacuum expectation value. Under chiral rotations, $\xi$ transforms as $$\xi \rightarrow \xi' \equiv L\xi U^\dagger = U\xi R,
\label{eq:xitfn}$$ where [*L*]{}, [*R*]{} and [*U*]{} are elements of [*SU(2)*]{} and [*U*]{} is a nonlinear function of [*L*]{}, [*R*]{} and $\pi^a$.
With $\xi$ and its Hermitian conjugate $\xi^\dagger$, one can construct left- and right-handed currents, $$\begin{aligned}
\label{eq:LRcur}
J^{\mu}_L &=& \xi^\dagger\partial^{\mu}\xi \rightarrow
UJ^{\mu}_LU^\dagger + U\partial^{\mu}U^\dagger,\nonumber\\[-8pt]
\\
J^{\mu}_R &=& \xi\partial^{\mu}\xi^\dagger \rightarrow
UJ^{\mu}_RU^\dagger + U\partial^{\mu}U^\dagger.\nonumber\end{aligned}$$ Note the inhomogeneous term $U\partial^{\mu}U^\dagger$, meaning that these currents transform as gauge fields under the diagonal $SU(2)$. From these chiral currents, one can form axial and vector currents, $$\begin{aligned}
\label{eq:AVcur}
{\cal A}^{\mu} &=& J^{\mu}_L - J^{\mu}_R\rightarrow
U{\cal A}^{\mu}U^\dagger,\nonumber\\[-8pt]
\\
V^{\mu} &=& J^{\mu}_L + J^{\mu}_R\rightarrow
UV^{\mu}U^\dagger + 2U\partial^{\mu}U^\dagger.\nonumber\end{aligned}$$ The axial vector current transforms homogeneously under chiral transformation [*U*]{} but the vector current transforms inhomogeneously. This suggests that when we add the vector resonance $\rho_{\mu} = \rho_{\mu}^a\sigma^a/2$, it must transform as a gauge field under chiral transformations $$\rho_{\mu} \rightarrow U\rho_{\mu}U^\dagger +
i\tilde{g}^{-1}U\partial^{\mu}U^\dagger.
\label{eq:rhotfn}$$ Now a new vector current can be formed which transforms homogeneously under chiral transformations, $${\cal V}^{\mu} = V^{\mu} + 2i\tilde{g}\rho^{\mu} \rightarrow U{\cal
V}^{\mu}U^\dagger.
\label{eq:Vtfn}$$
With these pieces and a scalar – isoscalar field $S$, we can construct the Lagrangian, $$\begin{aligned}
\label{eq:chilag}
{\cal L} &=& -{1\over4}v^2{\rm Tr}{\cal A}^{\mu}{\cal A}_{\mu}
- {a\over4}v^2{\rm Tr}{\cal V}^{\mu}{\cal V}_{\mu}
- {\lambda\over2}vS{\rm Tr}{\cal A}^{\mu}{\cal A}_{\mu}\nonumber\\
\\[-10pt]
&&- {1\over2}{\rm Tr}\rho_{\mu\nu}\rho^{\mu\nu}
+ {1\over2}\partial^{\mu}S\partial_{\mu}S - {1\over2}M^2_{S}S^2 + \dots,
\nonumber\end{aligned}$$ where $\rho^a_{\mu\nu}$ is the field strength tensor of the vector field $\rho^a_{\mu}$, and the ellipsis indicates higher derivative terms and other terms such as couplings between the scalar resonance and vector current which do not contribute to elastic $W_{L}W_{L}$ scattering.
In this notation, the resonances have masses and widths $$\begin{aligned}
\label{eq:resparam}
M_S = M_S && \Gamma_S = {3\lambda^2 M_S^3\over{32\pi v^2}}\nonumber\\
\\[-10pt]
M_\rho^2 = a\tilde{g}^2v^2 && \Gamma_\rho = {aM_\rho^3\over{192\pi v^2}}
\nonumber\end{aligned}$$ Note that if $\lambda=1, a=0$, the scalar resonance $S$ is identical to an ordinary Higgs boson of the standard model. The Lagrangian in Equation \[eq:chilag\] can thus parametrize a linear realization of $SU(2)_L\otimes SU(2)_R$ even though it is written in the language of non-linear realizations.
Details of Particular Models
----------------------------
In this analysis, I will use the results from the following models described in Reference [@LHCgold].
- The standard model with a 1.0 TeV Higgs boson ($\Gamma_S =$ 0.49 TeV). In the Lagrangian of Equation \[eq:chilag\], this corresponds to setting $M_S =$ 1.0 TeV, $\lambda =$ 1, $a =$ 0.
- A scalar resonance with $M_S =$ 1.0 TeV, $\Gamma_S =$ 0.35 TeV, corresponding to $\lambda =$ 0.84, $a =$ 0.
- A vector resonance with $M_\rho =$ 1.0 TeV, $\Gamma_\rho =$ 0.0057 TeV, corresponding to $\lambda =$ 0, $a =$ 0.208, $\tilde{g} =$ 8.9.
- A vector resonance with $M_\rho =$ 2.5 TeV, $\Gamma_\rho =$ 0.52 TeV, corresponding to $\lambda =$ 0, $a =$ 1.21, $\tilde{g} =$ 9.2.
- A non-resonant model corresponding to $\lambda =$ 0, $a =$ 0.
Note that the vector resonances considered are quite narrow. If one were to scale up QCD, vector resonances with masses of 1.0 and 2.5 TeV would have widths of 0.059 and 0.92 TeV respectively. The resonances in this study are taken to be so narrow in order to avoid constraints on the mixing of the [*Z*]{} boson with the resonance. These constraints come from the effect of the vector resonance on the spectral function of the [*Z*]{} boson. They could be relaxed if one were to assume, for instance, the presence of an axial vector resonance which would have a balancing effect on the spectral function, yet would not affect elastic $W_{L}W_{L}$ scattering [@PeskTak].
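As a numerical cross-check (mine, not from Reference [@LHCgold]), the quoted widths and couplings follow from Equation \[eq:resparam\] with $v =$ 0.246 TeV, reading the vector mass relation as $M_\rho^2 = a\tilde{g}^2v^2$, the combination that reproduces the quoted numbers:

```python
import math

v = 0.246  # TeV, electroweak vacuum expectation value

def gamma_scalar(lam, m_s):
    # Gamma_S = 3*lambda^2*M_S^3 / (32*pi*v^2), Eq. (resparam)
    return 3 * lam**2 * m_s**3 / (32 * math.pi * v**2)

def gamma_vector(a, m_rho):
    # Gamma_rho = a*M_rho^3 / (192*pi*v^2), Eq. (resparam)
    return a * m_rho**3 / (192 * math.pi * v**2)

def g_tilde(a, m_rho):
    # from M_rho^2 = a*g~^2*v^2
    return m_rho / (v * math.sqrt(a))

print(gamma_scalar(1.0, 1.0))    # SM Higgs: ~0.49 TeV
print(gamma_scalar(0.84, 1.0))   # scalar resonance: ~0.35 TeV
print(gamma_vector(0.208, 1.0))  # ~0.0057 TeV
print(gamma_vector(1.21, 2.5))   # ~0.52 TeV
print(g_tilde(0.208, 1.0), g_tilde(1.21, 2.5))  # ~8.9, ~9.2
```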
Analysis
========
It has been well established [@LHCgold; @mcwk1; @mcwk2] that the LHC will be able to demonstrate the existence or nonexistence of a strongly interacting electroweak symmetry breaking sector through direct observation of an excess of $W_{L}W_{L}$ events in at least one scattering channel. If such an excess is observed, one will want to understand what sort of interaction is responsible for the excess. Given the limited reach of the LHC into multi-TeV energies, a realistic goal is to try to fit the observed event rates in the various scattering channels to the predictions of various resonance models.
To that end, I take the predicted event rates (signal plus background) for each of the five models in turn, smear these rates by Poisson statistics and then compare the smeared results to the expectations of each model. By computing the mean chi-square with which the smeared “data” fits each model, I can determine the confidence level at which each model can be separated from the others.
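The smearing-and-fitting procedure can be sketched as follows (an illustration only: the statistic and rates here are simplified versions of the analysis, and the resulting numbers are not meant to reproduce Table \[chisqtable\]):

```python
import numpy as np

rng = np.random.default_rng(1)

# Signal + background totals per channel (events / 100 fb^-1), as implied by
# Table [sigtable]: ZZ(4l), ZZ(2l), W+W-, W+-Z, W+-W+-.
bkg   = np.array([0.7, 1.8, 12.0, 4.9, 3.7])
higgs = bkg + np.array([9.0, 29.0, 27.0, 1.2, 5.6])  # SM, 1 TeV Higgs
rho1  = bkg + np.array([1.4, 4.7, 6.2, 4.5, 12.0])   # 1 TeV vector resonance

def mean_chisq(truth, model, trials=20000):
    """Poisson-smear the 'truth' rates and fit to the 'model' expectations;
    return the mean chi-square per channel over many pseudo-experiments."""
    n = rng.poisson(truth, size=(trials, truth.size))
    return (((n - model) ** 2 / model).sum(axis=1) / truth.size).mean()

print(mean_chisq(higgs, higgs))  # self-fit: ~1 by construction
print(mean_chisq(higgs, rho1))   # cross-fit: much larger, so the models separate
```

The separation power comes from the pattern across channels, not from any single rate.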
In this study, I use the event rates for a single canonical LHC year of 100 ${\rm fb}^{-1}$. One could argue that the LHC will run for several years and that the event rates should be multiplied by some factor such as 3 or 5. At present, however, I am concerned with what can be determined in a single year of running at design luminosity and will not speculate on the ultimate performance or lifetime of the LHC. The predicted event rates for the models are shown in Table \[sigtable\].
|             | $ZZ(4\ell)$ | $ZZ(2\ell)$ | $W^+W^-$ | $W^{\pm}Z$ | $W^{\pm}W^{\pm}$ |
|-------------|-------------|-------------|----------|------------|------------------|
| Bkg.        | 0.7         | 1.8         | 12       | 4.9        | 3.7              |
| SM          | 9           | 29          | 27       | 1.2        | 5.6              |
| [*S*]{} 1.0 | 4.6         | 17          | 18       | 1.5        | 7.0              |
| $\rho$ 1.0  | 1.4         | 4.7         | 6.2      | 4.5        | 12               |
| $\rho$ 2.5  | 1.3         | 4.4         | 5.5      | 3.3        | 11               |
| LET         | 1.4         | 4.5         | 4.6      | 3.0        | 13               |
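A rough reading of these rates (my own back-of-envelope estimate, not from Reference [@LHCgold]) is obtained by forming naive Gaussian significances $S/\sqrt{B}$ per channel, which shows how differently the models populate the channels:

```python
import math

# Signal S and background B per channel (events / 100 fb^-1), from Table [sigtable]
channels = ["ZZ(4l)", "ZZ(2l)", "W+W-", "W+-Z", "W+-W+-"]
bkg = [0.7, 1.8, 12.0, 4.9, 3.7]
higgs = [9.0, 29.0, 27.0, 1.2, 5.6]  # SM with a 1 TeV Higgs
rho1 = [1.4, 4.7, 6.2, 4.5, 12.0]    # 1 TeV vector resonance

for name, b, s_h, s_r in zip(channels, bkg, higgs, rho1):
    print(f"{name:8s}  Higgs: {s_h / math.sqrt(b):5.1f}   rho(1.0): {s_r / math.sqrt(b):5.1f}")
```

The Higgs model stands out in the $ZZ$ channels, while the vector model favors $W^{\pm}Z$ and like-sign pairs; with so few events, however, the Poisson treatment of the previous section is the appropriate one.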
Note again that the standard cuts for the $W^{\pm}Z$ channel given in Table \[cutstable\] are not optimized for the detection of vector resonances since they cut out the half of the signal that comes from direct $q\overline{q}'$ annihilation. Since the optimized cut is not applied to all models, I cannot use it for a quantitative analysis. I will however indicate its qualitative effect on the results below.
Results
=======
The results of the analysis are presented in Table \[chisqtable\].
Entries are $\langle\chi^2\rangle$ obtained when pseudo-data generated from the row model are fitted to the column model; masses and widths in parentheses are in TeV.

| True model | Higgs (1.0, 0.49) | Scalar (1.0, 0.35) | Vector (1.0, 0.0057) | Vector (2.5, 0.52) | LET-K |
|------------|-------------------|--------------------|----------------------|--------------------|-------|
| Higgs ($M_H=$ 1.0, $\Gamma_H=$ 0.49) | 0.82 | 3.44 | 26.3 | 28.1 | 28.1 |
| Scalar ($M_S=$ 1.0, $\Gamma_S=$ 0.35) | 2.17 | 0.82 | 7.74 | 8.33 | 8.56 |
| Vector ($M_\rho=$ 1.0, $\Gamma_\rho=$ 0.0057) | 7.72 | 3.75 | 0.82 | 0.93 | 0.95 |
| Vector ($M_\rho=$ 2.5, $\Gamma_\rho=$ 0.52) | 7.51 | 3.59 | 0.81 | 0.82 | 0.86 |
| LET-K | 8.08 | 3.99 | 0.86 | 0.90 | 0.82 |
One can see that scalar resonance models are easily distinguished from vector resonance and non-resonant models. More surprising is that the 1.0 TeV Higgs boson is reasonably well separated from the narrower 1.0 TeV scalar resonance. The reason for this is that a Higgs theory is a renormalizable, [*unitary*]{} theory. The couplings of the gauge bosons to the Higgs cut off the growth of the scattering amplitudes in all channels and unitarize them. (Actually, tree-level unitarity [*is*]{} violated when the Higgs is more massive than $\sim$800 GeV, but the theory is still renormalizable, and still unitary when higher-order corrections are considered. The scalar resonance model is merely a low-energy effective theory and is neither renormalizable nor unitary.) The smaller coupling of the narrower resonance to the gauge bosons is insufficient to unitarize the amplitudes.
The effect of this coupling strength is easily seen from Table \[sigtable\]. The amplitudes for $W^+W^-$ and $ZZ$ production are dominated by $s$-channel scalar exchange in the resonance region. The smaller coupling of the narrower resonance reduces the size of the signal in these channels. In $W^{\pm}Z$ and $W^{\pm}W^{\pm}$ production, [*t*]{}-channel scalar exchange reduces the magnitude of the scattering amplitudes. In these cases, the smaller coupling of the resonance causes the amplitudes to be reduced less than they would be by the Higgs, leading to larger signals.
Table \[chisqtable\] is somewhat misleading and overly pessimistic in that it indicates that vector resonance models cannot be distinguished from one another, nor from non-resonant models. This result is an artifact of the forward jet tag in the $W^{\pm}Z$ channel, which removes signal events due to $q\overline{q}'$ annihilation. By eliminating the jet tag and looking in a window of transverse [*WZ*]{} mass surrounding the resonance, the 1.0 TeV vector state can be easily identified [@LHCgold], and the model separated from the others with a high degree of confidence. The 2.5 TeV resonance, however, is too massive to be produced copiously, and cannot be distinguished from non-resonant strong scattering. This conclusion is supported by References [@mcwk1; @mcwk2], which, using considerably broader vector resonances, found that vector resonances can be clearly identified in the $W^{\pm}Z$ channel up to masses of 2.0 TeV, but that resonances above 2.5 TeV are difficult to distinguish from non-resonant strong scattering.
Discussion
==========
There are many ways in which this analysis can be improved. One of the most obvious improvements would be in the choice of cuts. This analysis applies the same basic set of cuts, optimized for the 1 TeV Higgs signal, to all models. This strategy serves the purpose for which it was intended by setting a standard by which one can tell if strong $W_{L}W_{L}$ scattering is occurring, but it is not well suited to the present analysis which attempts to distinguish among models of strong scattering. In particular, since the $W^{\pm}Z$ signal in a Higgs model is optimized by using forward jet tags, the cuts remove much of the $W^{\pm}Z$ signal that occurs in a vector resonance model. A better analysis would optimize the cuts in each scattering channel for each model. One would then need to compute the performance of each model under the other models’ optimized cuts. Given a set of cuts, one can easily compute the performance of the various models. The difficulty lies in performing the optimization. The detailed background investigations that would be required are beyond the scope of this study.
This study would also be improved by adding more models. It would be interesting to determine the reach for identifying vector resonances more precisely. It would also be interesting to look at models with both scalar and vector resonances and study how their signal patterns interfere with one another.
Yet another improvement on this study would be to move beyond its reliance on gold plated purely leptonic modes. The ATLAS and CMS collaborations have both studied searches for 1 TeV Higgs bosons decaying via “silver plated” modes, in which one gauge boson decays leptonically while the other decays into jets, with positive results [@ATLAS; @CMS]. The benefit of using the silver plated modes is that the hadronic branching fraction is much larger than the leptonic branching fraction, providing a sizable increase in rate. On the other hand, the hadronic decay modes are much messier and depend much more sensitively on the details of calorimetric performance. In addition, one cannot determine the charge of the hadronically decaying gauge boson, obscuring the clean separation of scattering channels. A full investigation of the detection of silver plated modes must await a better understanding of the actual detectors, and will be best performed by the experimental collaborations themselves.
Conclusions
===========
The LHC will be able to establish the presence or absence of strong $W_{L}W_{L}$ scattering for most models of the strongly interacting symmetry breaking sector. Making use of all $W_{L}W_{L}$ scattering channels, this analysis shows that the LHC will not only be able to identify low lying resonances, but will also be able to distinguish among different resonance models. In the few models studied here, it is apparent that resonances near 1 TeV can be readily identified but that models with resonances above 2.5 TeV are indistinguishable from non-resonant models. A more definite limit on resonance identification, and ultimately on the LHC's ability to distinguish among strong scattering models, requires a more complete analysis along the lines detailed above.
Acknowledgments
===============
I would like to thank Persis Drell and Sekhar Chivukula for helpful comments during this analysis. Fermilab is operated by Universities Research Association, Inc., under contract DE-AC02-76CH03000 with the U.S. Department of Energy.
[2]{}
J.M. Cornwall, D.N. Levin, and G. Tiktopoulos, [*Phys. Rev. D*]{} 10 (1974) 1145;\
C.E. Vayonakis, [*Lett. Nuovo Cim.*]{} 17 (1976) 383;\
B.W. Lee, C. Quigg, and H. Thacker, [*Phys. Rev. D*]{} 16 (1977) 1519;\
M.S. Chanowitz and M.K. Gaillard, [*Nucl. Phys. B*]{} 261 (1985) 379.

M.S. Chanowitz and M.K. Gaillard, [*Phys. Lett. B*]{} 142 (1984) 85;\
G. Kane, W. Repko, B. Rolnick, [*Phys. Lett. B*]{} 148 (1984) 367;\
S. Dawson, [*Nucl. Phys. B*]{} 249 (1985) 42.

J. Bagger, [*et al*]{}, [*Phys. Rev. D*]{} 52 (1995) 3878. (hep-ph/9504426)

M.S. Chanowitz and W.B. Kilgore, [*Phys. Lett. B*]{} 322 (1994) 147. (hep-ph/9311336)

M.S. Chanowitz and W.B. Kilgore, [*Phys. Lett. B*]{} 347 (1995) 387. (hep-ph/9412275)

R.S. Chivukula and M. Golden, [*Phys. Lett. B*]{} 267 (1991) 233;\
T. Binoth and J.J. van der Bij, FREIBURG-THEP-96-04 (1996). (hep-ph/9603427);\
T. Binoth and J.J. van der Bij, FREIBURG-THEP-96-15 (1996). (hep-ph/9608245)

M.E. Peskin and T. Takeuchi, [*Phys. Rev. Lett.*]{} 65 (1990) 964;\
[*Phys. Rev. D*]{} 46 (1992) 381.

ATLAS Collaboration, Technical Proposal, CERN/LHCC 94-43.

CMS Collaboration, Technical Proposal, CERN/LHCC 94-38.
[^1]: Contributed to the proceedings of DPF/DPB Summer Study on New Directions for High Energy Physics, Snowmass, Colorado, June 25-July 12, 1996
[^2]: [email protected]
---
abstract: |
Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing – data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information in an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD $\mathbf{19}$ dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
*Keywords—deep learning, autoencoder, class conditional sampling, replay, hippocampus, deep neural networks*
author:
-
bibliography:
- 'bibliography.bib'
title: |
Neurogenesis Deep Learning\
[Extending deep networks to accommodate new classes]{}
---
Introduction
============
Machine learning methods are powerful techniques for statistically extracting useful information from “big data” throughout modern society. In particular, deep learning (DL) and other deep neural network (DNN) methods have proven successful in part due to their ability to utilize large volumes of unlabeled data to progressively form sophisticated hierarchical abstractions of information [@lecun2015deep; @schmidhuber2015deep]. While DL’s training and processing mechanisms are quite distinct from biological neural learning and behavior, the algorithmic structure is somewhat analogous to the visual processing stream in mammals in which progressively deeper layers of the cortex appear to form more abstracted representations of raw sensory information acquired by the retina [@van1992information].
DNNs are typically trained once, either with a large amount of labeled data or with a large amount of unlabeled data followed by a smaller amount of labeled data used to “fine-tune” the network for some particular function, such as handwritten digit classification. This training paradigm is often very expensive, requiring several days on large computing clusters [@le2013building], so ideally a fully trained network will continue to prove useful for a long duration even if the application domain changes. DNNs have had some success in transfer learning, due to their general-purpose feature detectors at shallow layers of a network [@cirecsan2012transfer; @yosinski2014transferable], but our focus is on situations where that is not the case. DNNs' features are known to become more specialized at deeper layers of a network and are therefore presumably less robust to new classes of data. In this work, we focus on inputs that a trained network finds difficult to represent. In this regard, we are addressing the problem of continuous learning (CL). In practice, DNNs may not be robust to concept drift, where the data being processed changes gradually over time (e.g., a movie viewer's preferred genres as they age), nor to transfer learning, where a trained model is repurposed to operate in a different domain. Unlike the developing visual cortex, which is exposed to varying inputs over many years, the data used to train DNNs is typically limited in scope, thereby diminishing the applicability of networks to encode information statistically distinct from the training set. The impact of such training data limitations is a relatively minor concern in cases where the application domain does not change (or changes very slowly). However, in domains where the sampled data is unpredictable or changes quickly, such as what is seen by a cell phone camera, the value of a static deep network may be quite limited.
One mechanism the brain has maintained in selective regions such as the hippocampus is the permissive birth of new neurons throughout one’s lifetime, a process known as adult neurogenesis [@aimone2014regulation]. While the specific function of neurogenesis in memory is still debated, it clearly provides the hippocampus with a unique form of plasticity that is not present in other regions less exposed to concept drift. The process of biological neurogenesis is complex, but two key observations are that new neurons are preferentially recruited in response to behavioral novelty and that new neurons gradually learn to encode information (e.g., they are not born with pre-programmed representations, rather they learn to integrate over inputs during their development) [@aimone2011resolving].
We consider the benefits of neurogenesis on DL by exploring whether “new” artificial neurons can facilitate the learning of novel information in deep networks while preserving previously trained information. To accomplish this, we consider a specific illustrative example with the MNIST handwritten digit dataset [@lecun1998gradient] and the larger NIST SD $19$ dataset [@grother1995nist] that includes handwritten digits as well as upper and lower case letters. An autoencoder (AE) is initially trained with a subset of a dataset’s classes and continuous adaptation occurs by learning each remaining class. Our results demonstrate that neurogenesis with hippocampus-inspired “intrinsic replay” (IR) enables the learning of new classes with minimal impairment of original representations, which is a challenge for conventional approaches that continue to train an existing network on novel data without structural changes.
Related Work
------------
In the field of machine learning, transfer learning addresses the problem of applying an existing trained system to a new dataset containing objects of a different kind. Over the past few years, researchers have examined different ways of transferring classification capability from established networks to new tasks. Recent work has taken a horizontal approach, transferring entire layers, rather than a more finely grained, vertically oriented approach of dynamically creating or eliminating individual nodes within a layer. Neurogenesis has been proposed to enable the acquisition of novel information while minimizing the potential disruption of previously stored information [@aimone2011resolving]. Indeed, neurogenesis and similar processes have been shown to have this benefit in a number of studies using shallow neural networks [@appleby2011role; @appleby2009additive; @carpenter1988art; @chambers2007network; @chambers2004simulated; @crick2006apoptosis; @wiskott2006functional; @aimone2011modeling], although these studies have typically focused on more conventional transfer learning, as opposed to the continuous adaptation considered here.
An adaptive DNN architecture by Calandra, et al, shows how DL can be applied to data unseen by a trained network [@calandra2012learning]. Their approach hinges on incrementally re-training deep belief networks (DBNs) whenever concept drift emerges in a monitored stream of data and operates within constant memory bounds. They utilize the generative capability of DBNs to provide training samples of previously learned classes. Class conditional sampling from trained networks has biological inspiration [@carr2011hippocampal; @louie2001temporally; @stickgold2005sleep; @felleman1991distributed] as well as historical and artificial neural network implementations [@hinton1995wake; @salakhutdinov2009learning; @gregor2015draw; @rudy2014generative].
Yosinski et al. evaluated transfer capability via high-level layer reuse in specific DNNs [@yosinski2014transferable]. Transferring learning in this way increased recipient network performance, and the closer the target task was to the base task, the better the transfer. Transferring more specialized layers, however, could actually degrade performance. Likewise, Kandaswamy, et al., used layer transfer as a means to transfer capability in Convolutional Neural Networks and Stacked Denoising AEs [@kandaswamy2014improving]. Transferring capability in this way resulted in a reduction in overall computation time and lower classification errors. These papers use fixed-size DNNs, except for additional output nodes for new classes, and demonstrate that features in early layers are more general than features in later layers and thus more transferable to new classes.
The Neurogenesis Deep Learning Algorithm
========================================
Neurogenesis in the brain provides a motivation for creating DNNs that adapt to changing environments. Here, we introduce the concept of neurogenesis deep learning (NDL), a process of incorporating new nodes in any level of an existing DNN (Figure \[newNodesFigure\]) to enable the network to adapt as the environment changes. We consider the specific case of adding new nodes to a pre-trained stacked deep AE, although the approach should extend to other types of DNNs as well. An AE is a type of neural network designed to encode data such that they can be decoded to produce reconstructions with minimal error. The goal of many DNN algorithms is to learn filters or feature detectors (i.e., weights) where the complexity or specialization of the features increases at deeper network layers. Although successive layers of these feature detectors could require an exponential expansion of nodes to guarantee that all information is preserved as it progresses into more sophisticated representations (“lossless encoding”), in practice, deep AEs typically use a much more manageable number of features by using the training process to select those features that best describe the training data. However, there is no guarantee that the representations of deeper layers will be sufficient to losslessly encode novel information that is not representative of the original training set. It is in this latter case that we believe NDL to be most useful, as we have previously suggested that biological neurogenesis addresses a similar coding challenge in the brain [@aimone2011resolving].
The first step of the NDL algorithm occurs when a set of new data points fails to be appropriately reconstructed by the trained network. A reconstruction error (RE) is computed at each level of a stacked AE (pair of encode/decode layers) to determine when a level's representational capacity is considered insufficient for a given application. An AE parameterized with weights, $W$, biases, $b$, and activation function, $s$, is described from input, $x$, to output as $N$ encode layers followed by $N$ decode layers. $$\begin{aligned}
\text{Encoder: } f_{\theta_N} \circ f_{\theta_{N-1}} \cdots f_{\theta_2} \circ f_{\theta_1}(x) \text{ where }& y = f_\theta(x) = s(Wx + b)\\
\text{Decoder: } g_{\theta'_N} \circ g_{\theta'_{N-1}} \cdots g_{\theta'_2} \circ g_{\theta'_1}(y) \text{ where }& g_{\theta'}(y) = s(W'y + b') \label{eq2}\end{aligned}$$ Global RE is computed at level $L$ of an AE by encoding an input through $L$ encode layers, then propagating through the corresponding $L$ decode layers to the output. $$\begin{aligned}
RE_{Global, L} (x) = (x - g_{\theta'_N} \circ \cdots g_{\theta'_{N-L+1}} \circ f_{\theta_L} \circ \cdots f_{\theta_1}(x))^2\end{aligned}$$ When a data sample's RE is too high, the assumption is that the AE level under examination does not contain a rich enough set of nodes (or features, as determined by each node's weights) to accurately reconstruct the sample. Therefore, a sufficiently high RE warrants the addition of a new feature detector (node).
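To make the level-wise criterion concrete, the global RE at level $L$ can be computed by running an input through the first $L$ encoder layers and then through the matching last $L$ decoder layers. The following NumPy sketch is illustrative only, not the authors' implementation; the sigmoid activation and the ordering of the weight lists are assumptions consistent with the equations above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def re_global(x, enc, dec, L):
    """Squared reconstruction error per sample at level L.

    enc = [(W_1, b_1), ..., (W_N, b_N)] are the encoder layers;
    dec = [(W'_1, b'_1), ..., (W'_N, b'_N)] are the decoder layers,
    applied in list order, so dec[-L:] are the L layers that undo enc[:L].
    """
    h = x
    for W, b in enc[:L]:                 # f_{theta_1} ... f_{theta_L}
        h = sigmoid(h @ W.T + b)
    for W, b in dec[-L:]:                # g_{theta'_{N-L+1}} ... g_{theta'_N}
        h = sigmoid(h @ W.T + b)
    return np.sum((x - h) ** 2, axis=1)  # one squared error per sample
```

Samples whose error exceeds the level's threshold $Th_L$ are then flagged as outliers.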
The second step of the NDL algorithm is adding and training new nodes, which occurs when a critical number of input samples (outliers) fail to achieve adequate representation at some level of the network. A new node is also added if the previous level added one or more nodes. This process does not require labels, relying entirely on the quality of a sample’s representation computed from its reconstruction. If the RE is too high (greater than a user-specified threshold determined from the statistics of reconstructing previously seen data classes), then nodes are added at that level up to a user-specified maximum number of new nodes. The new nodes are trained using all nodes in the level for reconstruction on all outliers. In other words, during training of the new nodes, the reconstructions, errors, gradients, and weight updates are calculated as a function of an AE that uses the entire set of nodes in the current level within a single hidden layer AE (SHL-AE). In order to not disturb the existing feature detectors, only the encoder weights connected to the new nodes are updated in the level under consideration. Decoder weights connected to existing feature detectors (nodes) are allowed to change slightly at the learning rate divided by $100$.
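The asymmetric update rule just described (full learning rate only on encoder weights of the new nodes, $LR/100$ on decoder weights tied to old nodes) can be written as a single SGD step on the SHL-AE. The NumPy sketch below assumes sigmoid activations, squared error, and a fully trainable decoder bias; these choices, and all names, are our illustrative assumptions rather than the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def shl_ae_step(x, W_enc, b_enc, W_dec, b_dec, n_new, lr):
    """One gradient step on a single-hidden-layer AE whose hidden layer
    holds old nodes plus n_new newly added nodes (the last rows of W_enc).
    Encoder updates touch only the new nodes; decoder weights from old
    nodes move at lr/100, as described in the text. Returns the loss."""
    h = sigmoid(x @ W_enc.T + b_enc)          # hidden activations (B, H)
    y = sigmoid(h @ W_dec.T + b_dec)          # reconstruction (B, D)
    err = y - x
    d_out = err * y * (1 - y)                 # backprop through output sigmoid
    d_hid = (d_out @ W_dec) * h * (1 - h)     # backprop into hidden layer

    gW_dec = d_out.T @ h                      # (D, H)
    gW_enc = d_hid.T @ x                      # (H, D)

    # per-node learning rates: new nodes at lr, old nodes frozen/slow
    lr_enc = np.zeros((W_enc.shape[0], 1)); lr_enc[-n_new:] = lr
    lr_dec = np.full((1, W_dec.shape[1]), lr / 100.0); lr_dec[0, -n_new:] = lr

    W_enc -= lr_enc * gW_enc                  # encoder: new-node rows only
    b_enc[-n_new:] -= lr * d_hid.sum(axis=0)[-n_new:]
    W_dec -= lr_dec * gW_dec                  # decoder: old columns at lr/100
    b_dec -= lr * d_out.sum(axis=0)           # bias treated as trainable (assumption)
    return np.sum(err ** 2)
```

Repeating this step over the outlier set trains the new feature detectors while leaving the previously learned encoder features untouched.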
![\[newNodesFigure\] Illustration of NDL processing MNIST digits (orange/red circles indicate accurate/inaccurate feature representations of the input; green indicate new nodes added via neurogenesis). (A) AE can faithfully reconstruct originally trained digit (‘$7$’), but (B) fails at reconstructing novel digit (‘$4$’). (C) New nodes added to all levels enables AE to reconstruct ‘$4$’. Level $1$–$4$ arrows show how inputs can be reconstructed at various depths.](Figure1){width="3.5in"}
The final step of the NDL algorithm is intended to stabilize the network’s previous representations in the presence of newly added nodes. It involves training all nodes in a level with new data and replayed samples from previously seen classes on which the network has been trained. Samples from old classes, where original data no longer exists, are created using the encoding and reconstruction capability of the current network in a process we call “intrinsic replay” (IR) (Figure \[irFigure\]).
![\[irFigure\] Illustration of the intrinsic replay process used in NDL. Original data presented to a trained network results in high-level representations in the “top-most” layer of the encoder. The average entries and the Cholesky decomposition of the covariance matrix of this hidden layer are stored for each class (e.g., ‘$1$’s, ‘$7$’s, and ‘$0$’s). When “replayed” values are desired for a given class, samples are drawn randomly from a normal distribution defined by the class’s stored statistics. Then, using the AE’s reconstruction pathway, new digits of the stored class are approximated.](Figure2){width="2.5in"}
This IR process is analogous to observed dynamics within the brain’s hippocampus during memory consolidation [@carr2011hippocampal]. It appears that neural regions such as the hippocampus “replay” neuronal sequences originally experienced during learned behaviors or explorations in an effort to strengthen and stabilize newly acquired information alongside previously encoded information. Our method involves storing class-conditional statistics (mean and Cholesky factorization of the covariance) of the top layer of the encoding network, $E$. $$\begin{aligned}
\mu_E = \text{Mean}(E), Ch_E = \text{Chol}(\text{Cov}(E))\end{aligned}$$ The Cholesky decomposition requires $n^3/6$ operations [@krishnamoorthy2013matrix], where $n$ is the dimension of $E$, and is performed once for each class on a trained network. High-level representations are retrieved through sampling from a Normal distribution described by these statistics and, leveraging the decoding network, new data points from previously trained classes are reconstructed. $$\begin{aligned}
\text{IR Images } = \text{Decode}(\mu_E + N(0,1)*Ch_E) \label{eq5}\end{aligned}$$ Training samples from previously seen data classes, where original data no longer exists, are generated using (\[eq5\]), which involves a single feed-forward pass through the Decoder (\[eq2\]).
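The replay statistics and sampling step amount to fitting one Gaussian per class in the top-level code space and decoding draws from it. A minimal NumPy sketch follows; the small diagonal jitter added before the Cholesky factorization is our own numerical-stability assumption, and `decode` stands in for the AE's reconstruction pathway.

```python
import numpy as np

def fit_replay_stats(codes):
    """Per-class statistics of top-level encoder activations E:
    the mean and the Cholesky factor of the covariance."""
    mu = codes.mean(axis=0)
    cov = np.cov(codes, rowvar=False)
    ch = np.linalg.cholesky(cov + 1e-6 * np.eye(cov.shape[0]))  # jitter: assumption
    return mu, ch

def intrinsic_replay(mu, ch, decode, n_samples, rng):
    """Draw codes ~ N(mu, ch ch^T) and decode them into replay images."""
    z = mu + rng.standard_normal((n_samples, mu.size)) @ ch.T
    return decode(z)
```

Because $Ch_E Ch_E^T = \text{Cov}(E)$, the sampled codes reproduce the stored class statistics, so only a mean vector and a triangular factor need to be kept per class rather than any original training images.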
**Input:** $2N$-layer $AE$ trained on data classes $D_1$–$D_{U-1}$; new class of data $D_U$; vector of per-level RE thresholds $Th$; vector of per-level maximum numbers of new nodes allowed, $MaxNodes$; maximum number of samples allowed to have $RE_{Global,L} > Th_L$, $MaxOutliers$; learning rate $LR$

**Output:** Autoencoder $AE$ capable of representing classes $D_1$–$D_U$

// Create stabilization training data

$AE_{StableTrain} \leftarrow \{ D_U \cup \text{IntrinsicReplay}(D_1\text{--}D_{U-1}) \}$

**for** each level $L$ of the AE:

  // Perform neurogenesis at level $L$

  $NewNodes \leftarrow 0$; $Outliers \leftarrow \{d \in D_U \mid RE_{Global, L}(d) > Th_L\}$; $N_{Out} \leftarrow |Outliers|$

  **while** $N_{Out} > MaxOutliers$ **and** $NewNodes < MaxNodes_L$:

    // Add new nodes to $AE_L$ and train

    $AE_L \leftarrow W_L, b_L ; W'_{N+1-L}, b'_{N+1-L}$ from $AE$

    **Plasticity:** $Nodes_{New} \leftarrow$ number of new nodes to add; add $Nodes_{New}$ nodes to $AE_L$ and train on $Outliers$, using $LR$ to update encoder weights connected to the new nodes only and $LR/100$ to update decoder weights

    $W_L, b_L; W'_{N+1-L}, b'_{N+1-L} \leftarrow AE_L$

    $Outliers \leftarrow \{d \in D_U \mid RE_{Global, L}(d) > Th_L \}$; $N_{Out} \leftarrow |Outliers|$; $NewNodes \leftarrow NewNodes + Nodes_{New}$

  // Train the next level on the new class, then stabilize with replay

  **Plasticity:** train $AE_{L+1}$ on $D_U$

  **Stability:** train $AE_{L+1}$ on $AE_{StableTrain}$

  $W_{L+1}, b_{L+1}; W'_{N-L}, b'_{N-L} \leftarrow AE_{L+1}$
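Read as code, the per-level loop of the algorithm might look like the following Python skeleton. The `ae` object and its methods (`re_global`, `add_nodes`, `train_new_nodes`, `stabilize`, `replay_old_classes`) are hypothetical names standing in for the operations described in the text, not an actual API.

```python
def neurogenesis_level(level, new_data, threshold, max_nodes, max_outliers, ae):
    """Grow one AE level until enough of the new class reconstructs well.

    `ae` is a hypothetical autoencoder object exposing the operations
    named in the algorithm; this is an illustrative sketch only."""
    added = 0
    outliers = [d for d in new_data if ae.re_global(d, level) > threshold]
    while len(outliers) > max_outliers and added < max_nodes:
        added += ae.add_nodes(level, count=1)        # Plasticity: grow the level
        ae.train_new_nodes(level, outliers)          # update only new-node encoder weights
        ae.stabilize(level, new_data, ae.replay_old_classes())  # Stability: replay old classes
        outliers = [d for d in new_data if ae.re_global(d, level) > threshold]
    return added
```

Running this routine level by level, followed by the plasticity/stability training of the next level, reproduces the control flow of the algorithm above.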
Experiments
===========
We evaluated NDL on two datasets, the MNIST [@lecun1998gradient] and NIST SD $19$ [@grother1995nist] datasets. For the NIST dataset, we downsampled the original $128$x$128$ pixel images to be $28$x$28$ (the MNIST image size). However, we did not otherwise normalize the characters within classes, so the variation in scale and location within the $28$x$28$ frame is much greater than the MNIST data.
For the MNIST dataset, a deep AE was pre-trained in a stacked layered manner on a subset of the dataset classes, then training with and without NDL and with and without IR was conducted on new unseen data classes. The AE was initially trained with two digits ($1$, $7$) that are not statistically representative of the other digits (as shown in the results). Then, learning was incrementally performed with the remaining digits. We used an $8$-layer AE inspired by Hinton’s network on MNIST [@hinton2006reducing], but reduced to $784$-$200$-$100$-$75$-$20$-$75$-$100$-$200$-$784$ since only a subset of digits ($1$, $7$) were used for initial training. For each experiment, all training samples in a class were presented at once.
For the NIST SD $19$ dataset, the AE was trained on the digit classes alone ($0$–$9$), and then learning was performed incrementally on all letters (upper and lower case; A-Z, a-z). In order to evaluate the impact of NDL on the NIST dataset without the potentially complicating factor of IR, training data was used for replaying old classes. The initial AE used for the NIST SD $19$ dataset is also inspired by Hinton’s MNIST network, where the only difference is the number of highest-level features. We used $50$ instead of $30$ high-level features since there is much more variation in scale and location in the NIST digits. The trained NIST SD $19$ ‘Digits’ network is $784$-$1000$-$500$-$250$-$50$-$250$-$500$-$1000$-$784$.
Results on MNIST
================
Trained networks have limited ability to represent novel information
--------------------------------------------------------------------
To illustrate the process of NDL on MNIST data, we first trained a deep AE ($784$-$1000$-$500$-$250$-$30$-$250$-$500$-$1000$-$784$) to encode a subset of MNIST classes. Then, nodes were added via neurogenesis to the trained AE network as needed to encode each remaining digit. The initial DNN size for our illustrative example was determined as follows. In Calandra’s work, a $784$-$600$-$500$-$400$-$10$ DBN classifier was trained initially on digits $4$, $6$, and $8$ and then presented with new digits for training together with samples of $4$, $6$, and $8$ generated from the DBN [@calandra2012learning]. We examined two subsets of digits for initial training of our AE ($4$, $6$, and $8$, as in Calandra, et al. [@calandra2012learning], or $1$ and $7$). Figure \[initialFigure\]A illustrates that digits $4$, $6$, and $8$ appear to contain a more complete set of digit features as seen by the quality of the reconstructions compared to training only on $1$ and $7$ (Figure \[initialFigure\]B), although both limited training sets yield impaired reconstructions of novel (untrained) digits. We chose to focus initial training on digits $1$ and $7$, as these digits represent what may be the smallest set of features in any pair of digits. Then, continuous learning was simulated by progressively expanding the number of encountered classes through adding samples from the remaining digits in sequence one class at a time. The Calandra network was shown to have overcapacity for just $3$ digits by virtue of its subsequent ability to learn all $10$ digits. We suspect the same overcapacity for Hinton’s network and therefore start with a network roughly $1/5$ the size, under the assumption that neurogenesis will grow a network sufficient to learn the remaining digits as they are individually presented for training. Thus the size of our initial DNN prior to neurogenesis was: $784$-$200$-$100$-$75$-$20$-$75$-$100$-$200$-$784$.
![Networks initially trained on (A) ‘$4$,’ ‘$6$,’ and ‘$8$’s and (B) ‘$1$,’ and ‘$7$’s and not yet trained on any of the other MNIST digits reconstruct those novel digits using features biased by their original training data. \[initialFigure\]](Figure3){width="2.5in"}
Accordingly, we trained a $1$,$7$-network using all training samples of $1$’s and $7$’s with a stacked denoising AE. After training the $1$,$7$-AE, it is ready to address drifting inputs through NDL. New classes of digits are presented in the following order: $0$, $2$, $3$, $4$, $5$, $6$, $8$, and $9$. Notably, this procedure is not strictly concept drift (where classes are changing over time) or transfer learning (where a trained network is retrained to apply to a different domain), but rather was designed to examine the capability of the network to learn novel inputs while maintaining the stability of previous information (i.e., address the stability-plasticity dilemma).
NDL begins by presenting all samples of a new class to Level $1$ of the AE and identifying ‘outlier’ samples having REs above a user-specified threshold. Then, one or more new nodes are added to Level $1$ and the entire level is pre-trained in a SHL-AE. Initially, only the weights connected to the newly added nodes are allowed to be updated at the full learning rate. Encoder weights connected to old nodes are not allowed to change at all (to preserve the feature detectors trained on previous classes), and decoder weights from old nodes are allowed to change at the learning rate divided by $100$. This step relates to the notion of plasticity in biological neurogenesis. After briefly training the new nodes, a stabilization step takes place, where the entire level is trained in a SHL-AE using training samples from all classes seen by the network (samples from old classes are generated via intrinsic replay). After again calculating the RE on samples from the new class, additional nodes are added until either 1) the RE for enough samples falls below the threshold or 2) a user-specified maximum number of new nodes is reached for the current level. Once neurogenesis is complete for a level, weights connecting to the next level are trained using samples from all classes. This process repeats for each succeeding level of the AE using outputs from the previous encoding layer. After NDL, the new AE should be capable of reconstructing images from the new class (e.g., $0$) in addition to the previously trained classes (e.g., $1$ and $7$).
Neurogenesis allows encoding of novel information
------------------------------------------------
Results of NDL experiments on MNIST data showed that an established network trained on just digits $1$ and $7$ can be enlarged through neurogenesis to represent new digits as guided by RE at each level of a stacked AE. We compared a network created with NDL and IR (‘NDL+IR’) to three control networks: Control $1$ (‘CL’) – an AE the same size as the final NDL network, trained first on the subset digits $1$ and $7$ and then retrained, without intrinsic replay, on all samples from one new digit at a time (Figure \[comboFigure\]A); Control $2$ (‘NDL’) – continuous learning on the original $1$,$7$ network using NDL, but without intrinsic replay (Figure \[comboFigure\]B); and Control $3$ (‘CL+IR’) – an AE the same size as the final NDL+IR network, trained first on the subset digits $1$ and $7$ and then retrained on all samples from one new digit at a time, while using intrinsic replay to generate samples of previously trained classes throughout the experiment (Figure \[comboFigure\]C). Figure \[comboFigure\]D shows that the network built with NDL+IR slightly outperforms learning on a fixed network (Figure \[comboFigure\]C). Notably, NDL+IR outperforms straight learning not only on reconstruction across all digits, but in both the ability to represent the new data and the ability to preserve representations of previously trained digits (Figure \[comboFigure\]E). This latter point is important, because while getting a trained network to learn new information is not particularly challenging, getting it to preserve old information can be quite difficult.
Note that the final DNN size is unknown prior to neurogenesis. The network size is increased based on the RE when the network is exposed to new information, so there is possible value in using this method to determine an effective DNN size. The original size of the $1$,$7$-AE is $784$-$200$-$100$-$75$-$20$-$75$-$100$-$200$-$784$. Figure \[comboFigure\]F shows how the DNN grows as new classes are presented during neurogenesis, gaining more representational capacity as new classes are learned.
![Global reconstructions of trained MNIST digits after exposure to all $10$ digits. The legend in Plot D applies to Plots A, B, and C; the dotted line shows REs of the original AE trained just on $1$ and $7$. (A) CL without IR provides only marginal improvement in reconstruction ability after learning all 10 digits; (B) NDL without IR likewise fails to improve reconstruction, though NDL training makes reconstruction through partial networks more useful; (C) CL with IR improves overall reconstruction of previously trained digits; (D) NDL with IR further improves on CL with IR in (C) along with improved partial network reconstructions; (E) Full network reconstructions of all networks after progressive training through all digits; (F) Neurogenesis contribution to network size in NDL+IR networks. \[comboFigure\]](Fig4combo_sq){width="3.3in"}
![Reconstructions of all digits by pre-trained ‘$1$/$7$’ networks after learning on progressive new classes. (A) Networks using conventional learning with IR are able to acquire new digits and show some ability to maintain representations of recently trained digits (e.g., ’$6$’s after ‘$8$’ is learned). (B) Networks using NDL with IR are able to acquire new digits and show superior reconstructions of previously encountered digits, even for those digits trained far earlier (e.g., ’$0$’s throughout the experiment). \[reconstructionFigure\]](Figure5){width="3.3in"}
{width="6.5in"}
{width="6.5in"}
The ‘CL+IR’ control network was identical in size to the final, enlarged neurogenesis network ‘NDL+IR’, was initially trained on digits $1$ and $7$, and then learned to represent the remaining MNIST digits, one at a time in the same order as presented during neurogenesis, but its size remained fixed. Figure \[reconstructionFigure\]A shows reconstructed images after each new class was learned on the ‘CL+IR’ AE, and Figure \[reconstructionFigure\]B shows the comparable images for the ‘NDL+IR’ network as it was trained to accommodate all MNIST digits. One can see that before being trained on new digits (to the right of the blocked trained class shown in each row), both networks mis-reconstructed digits from the unseen classes into digits that appear to belong to a previously trained class, as expected. Notably, in the ‘CL+IR’ reconstructions (Figure \[reconstructionFigure\]A), digits from previously seen classes were often mis-reconstructed into more recently seen classes. In contrast, the ‘NDL+IR’ networks (Figure \[reconstructionFigure\]B) were more stable in their representations of previously encoded data, with only minimal disruption to past classes as new information was acquired. This suggests that adding neurons as a network is exposed to novel information is advantageous in maintaining the stability of a DNN’s previous representations.
Results on NIST SD $19$
=======================
Applying NDL to the NIST SD $19$ dataset presents challenges for evaluating neurogenesis performance because of the number of classes. Figure \[nistFigure\] shows the effect of learning on each class, comparing the initial RE of each class on the network trained on digits before any learning of letters and the final RE after all classes have been learned. A line segment with a downward (negative) slope indicates that the final RE is less than the initial RE.
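The per-class RE comparison described above could be computed along the following lines. This is an illustrative sketch, not the paper's code: the exact form of the reconstruction error is not specified in this section, so mean squared error is assumed, and the function names are ours.

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """Per-sample reconstruction error (mean squared error is assumed here;
    the paper does not restate the exact error form in this section)."""
    return np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2)

def per_class_re(samples, labels, encode, decode):
    """Average RE for each class, given the encode/decode passes of the AE."""
    res = {}
    for c in set(labels):
        xs = [x for x, lab in zip(samples, labels) if lab == c]
        res[c] = float(np.mean([reconstruction_error(x, decode(encode(x)))
                                for x in xs]))
    return res
```

Comparing `per_class_re` before and after training on the letter classes yields the initial and final REs whose difference gives the slope of each line segment in the figure.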
The clear observation is that learning new classes with NDL with intrinsic replay (NDL+IR) results in smaller RE than learning without neurogenesis (CL+IR) for all classes. In addition, the final REs for NDL+IR are all lower than the initial REs, even for classes (digits) used to train the original AE. This implies that the ultimate AE built via neurogenesis has a richer set of feature detectors, resulting in better representation of all classes. Another observation is that, in general, the initial REs of the CL+IR network are lower than the initial REs of the NDL+IR network. The reason is that the original NDL+IR network was smaller than the fixed CL+IR network.
{width="6.5in"}
While Figure \[nistFigure\] shows the improvement in class representation at the beginning and end of NDL+IR, Figures \[ngcontributionFigure\] and \[ngcontrib2\] show the progression in time of growing the final network. More new neurons are added earlier in the neurogenesis process than later. As novel classes are presented, new features are learned and representation capability improves for all classes. Eventually, the need for additional neurons diminishes. Figure \[ngcontributionFigure\] reveals that the AE is particularly lacking the feature detectors necessary for good representation of class ‘M’ at all levels. In Figure \[nistFigure\], it is clear that class ‘W’ is also lacking feature detectors, but by the time it is presented for learning, its need has been met by neurogenesis on previous classes.
Characterizing the value of adapting DNNs
=========================================
The value of a model that continuously adapts to changing data is challenging to quantify. Here, we notionally quantify the value of a machine learning algorithm at a given time as $U = B - \frac{C_{M}}{\tau} - C_P$, where the utility, $U$, of an algorithm is considered as a tradeoff between the benefit, $B$, that the computation provides the user, the costs of the algorithm generation or the model itself, $C_M$, and the associated run-time costs, $C_P$, of that computation. $C_P$ typically consists of the time and physical energy and space required for the computation to be performed. For machine learning applications, we must consider the lifetime, $\tau$, of an algorithm for which it is appropriate to amortize a model’s build costs. In algorithm design, it is desirable to minimize both of the cost terms; however, the dominant cost will differ depending on the extent to which the real-world data changes. Consider a DNN with $N$ neurons and on the order of $N^2$ synapses. In this example, the cost of building the model, $C_M$, will scale as $O(N^4)$ due to performing $O(N^2)$ operations over $N^2$ training samples during training of a well-regularized, appropriately fit model. As a result, $C_M$ will dominate the algorithm’s cost unless the lifetime of the model, $\tau$, can offset the polynomial difference between $C_M$ and $C_P$. This description illustrates the need to extend the model’s lifetime (e.g., via neurogenesis), and to do so in an inexpensive manner that minimizes the data required to adapt the model for future use.
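The amortization tradeoff above can be made concrete with a small sketch. All numeric values below (including the assumption that each use is worth ten times its running cost) are illustrative placeholders, not measurements from this work.

```python
# Sketch of the utility tradeoff U = B - C_M / tau - C_P described above.

def utility(benefit, build_cost, lifetime, runtime_cost):
    """Utility of a model whose build cost is amortized over its lifetime."""
    return benefit - build_cost / lifetime - runtime_cost

def break_even_lifetime(benefit, build_cost, runtime_cost):
    """Smallest lifetime tau with U >= 0 (requires benefit > runtime_cost)."""
    return build_cost / (benefit - runtime_cost)

# Example: N = 1000 neurons, C_M ~ N^4 operations, C_P ~ N^2 per inference.
N = 1000
C_M = float(N) ** 4   # one-off training cost
C_P = float(N) ** 2   # per-use running cost
B = 10.0 * C_P        # assumed benefit per use

tau = break_even_lifetime(B, C_M, C_P)
print(tau)  # uses needed before the model pays for itself
```

Extending $\tau$ via neurogenesis directly lowers the amortized $C_M/\tau$ term, which is why lifetime extension dominates the cost picture for large models.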
Conclusions and Future Work
===========================
We presented a new adaptive algorithm using the concept of neurogenesis to extend the learning of an established DNN when presented with new data classes. The algorithm adds new neurons at each level of an AE when new data samples are not accurately represented by the network as determined by a high RE. The focus of the paper is on a proof of concept of continuous learning for DNNs to adapt to changing application domains. Several elements of our NDL algorithm that we have not sought to optimize deserve further consideration. For instance, the optimal number of IR samples is unknown and will affect the computational cost associated with their use. Other elements that need to be considered are 1) better ways of establishing and using RE thresholds and 2) developing a method to determine the number of outliers to allow during neurogenesis. While we considered a network of growing size via neurogenesis, similar adaptation may be obtainable by using a larger network of fixed size and restricting the learning rate on a subset of neurons until they are needed at a later time. We evaluated the NDL algorithm on two datasets having gray-scale objects on blank backgrounds and look forward to application on additional datasets, including natural, color imagery.
Ultimately, we anticipate that there are several significant advantages of a neurogenesis-like method for adapting existing networks to incorporate new data, particularly given suitable IR capabilities. The first relates to the costs of DL in application domains. The ability to adapt to new information can extend a model’s useful lifetime in real-world situations, possibly by substantial amounts. Extending a model’s lifetime increases the duration over which one can amortize the costs of developing the model, and in the case of DL, the build cost often vastly outpaces the runtime operational costs of the trained feed-forward network. As a result, continuous adaptation can potentially make DL cost effective for domains with significant concept drift. Admittedly, the method we describe here has an added processing cost due to the neurogenesis process and the required intrinsic replay; however, this cost will most likely amount to a constant factor increase on the processing costs and still be significantly lower than those costs associated with repeatedly retraining with the original training data.
The second advantage concerns the continuous learning nature of the NDL algorithm. The ability to train a large network without maintaining a growing repository of data can be valuable, particularly in cases where the bulk storage of data is not permitted due to costs or other restrictions. While much of the DL community has focused on cases where there is extensive unlabeled training data, our technique can provide solutions where training data at any time is limited and new data is expected to arrive continuously. Furthermore, we have considered a very stark change in the data landscape, with the network exposed exclusively to novel classes. In real-world applications, novel information may be encountered more gradually. This slower drift would likely require neurogenesis less often, but it would be equally useful when needed.
Finally, it has not escaped us that the algorithm we present is emulating adult neurogenesis within a cortical-like circuit, whereas in adult mammals, substantial neurogenesis does not appear in sensory cortices [@aimone2014regulation]. In this way, our NDL networks are more similar to juvenile or developmental visual systems, where the network has only been exposed to a limited extent of the information it will eventually encounter. Presumably, if one takes a DNN with many more nodes per layer and trains it with a much larger and broader set of data, the requirement for neurogenesis will diminish. In this situation, we predict that the levels of neurogenesis will eventually diminish to zero early in the network because the DNN will have the ability to represent a broad set of low level features that prove sufficient for even the most novel data encountered, whereas neurogenesis may always remain useful at the deepest network layers that are more comparable to the medial temporal lobe and hippocampus areas of cortex. Indeed, this work illustrates that the incorporation of neural developmental and adult plasticity mechanisms, such as staggering network development by layer (e.g., “layergenesis”), into conventional DNNs will likely continue to offer considerable benefits.
### Acknowledgments {#acknowledgments .unnumbered}
This work was supported by Sandia National Laboratories’ Laboratory Directed Research and Development (LDRD) Program under the Hardware Acceleration of Adaptive Neural Algorithms (HAANA) Grand Challenge. Sandia is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under Contract DE-AC04-94AL85000.
---
abstract: |
Multi-class classification algorithms are very widely used, but we argue that they are not always ideal from a theoretical perspective, because they assume all classes are characterized by the data, whereas in many applications, training data for some classes may be entirely absent, rare, or statistically unrepresentative. We evaluate one-sided classifiers as an alternative, since they assume that only one class (the target) is well characterized. We consider a task of identifying whether a substance contains a chlorinated solvent, based on its chemical spectrum. For this application, it is not really feasible to collect a statistically representative set of outliers, since that group may contain *anything* apart from the target chlorinated solvents. Using a new one-sided classification toolkit, we compare a One-Sided k-NN algorithm with two well-known binary classification algorithms, and conclude that the one-sided classifier is more robust to unexpected outliers.
keywords: One-Sided, One-Class, Classification, Support Vector Machine, k-Nearest Neighbour, Spectroscopy Analysis
author:
- 'Frank G. Glavin'
- 'Michael G. Madden'
title: Analysis of the Effect of Unexpected Outliers in the Classification of Spectroscopy Data
---
Introduction
============
One-Sided Classification
------------------------
One-sided classification (OSC) algorithms are an alternative to conventional multi-class classification algorithms. They are also referred to as single-class or one-class classification algorithms, and differ in one vital aspect from multi-class algorithms, in that they are only concerned with a single, well-characterized class, known as the target or positive class. Objects of this class are distinguished from all others, referred to as outliers, which comprise *all* objects that are not targets. In one-sided classification, training data for the outliers may be rare, entirely unavailable, or statistically unrepresentative.
Over the past decade, several well-known algorithms have been adapted to work with the one-sided paradigm. Tax \[1\] describes many of these one-sided algorithms and notes that the problem of one-sided classification is generally more difficult than that of conventional classification. The decision boundary in the multi-class case has the benefit of being well described from both sides with appropriate examples from each class being available, whereas the single-class case can only support one side of the decision boundary fully, in the absence of a comprehensive set of counter-examples. While multi-class (including binary or two-class) algorithms are very widely used in many different application domains, we argue that they are not always the best choice from a theoretical perspective, because they assume that all classes are appropriately characterized by the training data. We propose that one-sided classifiers are more appropriate in these cases, since they assume that only the target class is well characterized, and seek to distinguish it from any others. Such problem domains include industrial process control, document author identification and the analysis of chemical spectra.
Spectroscopic Analysis
----------------------
Raman spectroscopy, which is a form of molecular spectroscopy, is used in physical and analytical chemistry. It involves the study of experimentally-obtained spectra by using an instrument such as a spectrometer \[2\]. According to Gardiner \[3\], Raman spectroscopy is a well-established spectroscopic technique which involves the study of vibrational and rotational frequencies in a system. Spectra are gathered by shining a laser beam onto a substance under analysis and are based on the vibrational motion of the molecules, which creates the equivalent of a chemical fingerprint. This unique pattern can then be used in the identification of a variety of different materials \[4\].
Machine Learning Task
---------------------
In this work, we consider the task of identifying materials from their Raman spectra, through the application of both one-sided and multi-class classification algorithms. Our primary focus is to analyse the performance of the classifiers when “unexpected” outliers are added to the test sets. The spectra are gathered from materials in pure form and in mixtures. The goal is to identify the presence or absence of a particular material of interest from its spectrum. This task can be seen as an “open-ended” problem, as having a statistically representative set of counter-examples for training is not feasible, as has been discussed already.
In particular, we consider the application of separating materials to enable the safe disposal of harmful solvents. Chemical waste that is potentially hazardous to the environment should be identified and disposed of in the correct manner. Laboratories generally have strict guidelines in place, as well as following legal requirements, for such procedures. Organic solvents can create a major disposal problem in organic laboratories as they are usually water-immiscible and can be highly flammable \[5\]. Such solvents are generally created in abundance each day in busy laboratories. Differentiating between chlorinated and non-chlorinated organic solvents is of particular importance. Whether or not a solvent is chlorinated dictates how it is transported from the laboratory and, more importantly, what method is used for its disposal \[6\]. Identifying and labeling such solvents is a routine laboratory procedure which usually makes disposal a straightforward process. However, solvents can be accidentally contaminated or inadvertently mislabeled. In such circumstances it would be beneficial to have an analysis method that could correctly identify whether or not a particular solvent was chlorinated.
We have carried out several experiments for this identification using both one-sided and multi-class classification algorithms in order to analyse the effect of adding “unexpected” outliers to the test sets.
Related Research
================
Madden and Ryder \[7\] explore the use of standard multi-class classification techniques, in comparison to statistical regression methods, for identifying and quantifying illicit materials using Raman spectroscopy. Their research involves using dimension reduction techniques to select some features of the spectral data and discard all others. This feature selection process is performed by using a Genetic Algorithm. The predictions can then be made based only on a small number of data points. The improvements that can be achieved by using several different predictor models together were also noted. This would come at the cost of increased computation but was shown to provide better results than using just one predictor by itself.
O’Connell [*et al.*]{} \[8\] propose the use of Principal Component Analysis (PCA), support vector machines (SVM) and Raman spectroscopy to identify an analyte[^1] in solid mixtures. In this case, the analyte is acetaminophen, which is a pain reliever used for aches and fevers. They used near-infrared Raman spectroscopy to analyse a total of 217 samples, some of which had the target analyte present, of mixtures with excipients[^2] of varying weight. The excipients that were included were sugars, inorganic materials and food products. The spectral data was subjected to first derivative and normalization transformations in order to make it more suitable for analysis. After this pre-treatment, the target analyte was then discriminated using PCA, Principal Component Regression (PCR) and SVM. According to the authors, the superior performance of SVM was particularly evident when raw data was used for the input. The importance and benefits of the pre-processing techniques was also emphasized.
Howley \[9\] uses machine learning techniques for the identification and quantification of materials from their corresponding spectral data. He shows how using Principal Component Analysis (PCA) with machine learning methods, such as SVM, could produce better results than the chemometric technique of Principal Component Regression (PCR). He also presents customized kernels for use with spectral analysis based on prior knowledge of the domain. A genetic programming technique for evolving kernels is also proposed for when no domain knowledge is available.
A Toolkit for One-sided Classification
======================================
In the course of our research, we have developed a one-sided classification toolkit written in Java. It is a command line interface (CLI) driven software package that contains one-sided algorithms that may be chosen by the user at runtime and used to create a new classifier based on a loaded data set and a variety of different adjustable options. Both experiment-specific and classifier parameter options can be set. The toolkit was designed to carry out comprehensive and iterative experiments with minimal input from the user. The resulting classifiers that are generated can be saved and used at a later stage to classify new examples. The user can set up many different runs of an experiment, each differing by an incremented random number seed that shuffles the data for every run before it is broken up into training and testing sets. Results are printed to the screen as they are calculated; these include the classification error, sensitivity, specificity and confusion matrix for each run or individual folds.
Data Sets and Algorithms Used
=============================
Primary Data Set
----------------
The primary data set that we used for these experiments was compiled in earlier research, as described by Conroy [*et al.*]{} \[6\]. It comprises 230 spectral samples that contain both chlorinated and non-chlorinated mixtures. According to the authors, the compilation of the data involved keeping the concentrations of the mixtures as close as possible to real-life scenarios from industrial laboratories. Twenty-five solvents, some chlorinated and some not, were included; these are listed in Table 1.
---------------- --------------------- ----------------------- ---------------------
Acetone HPLC Cyclopentane Analytical
Toluene Spectroscopic Acetophenol Analytical
Cyclohexane Analytical & Spect. n-Pentane Analytical
Acetonitrile Spectroscopic Xylene Analytical
2-Propanol Spectroscopic Dimethylformanide Analytical
1,4-Dioxane Analytical & Spect. Nitrobenzene Analytical
Hexane Spectroscopic Tetrahydrofuran Analytical
1-Butanol Analytical & Spect. Diethyl Ether Analytical
Methyl Alcohol Analytical Petroleum Acetate Analytical
Benzene Analytical Chloroform Analytical & Spect.
Ethyl Acetate Analytical Dichloromethane Analytical & Spect.
Ethanol Analytical 1,1,1-trichloroethane Analytical & Spect.
---------------- --------------------- ----------------------- ---------------------
: A list of the various chlorinated and non-chlorinated solvents used in the primary data set and their grades. (Source: Conroy [*et al.*]{} \[6\])[]{data-label="Chlorinated and non-chlorinated solvents"}
--------------------- -------------- ------------------ ---------
                       Chlorinated    Non-chlorinated    Total
--------------------- -------------- ------------------ ---------
Pure Solvents          6              24                 30
Binary Mixtures        96             23                 119
Ternary Mixtures       40             12                 52
Quaternary Mixtures    12             10                 22
Quintary Mixtures      0              7                  7
[**Total**]{}          154            76                 230
--------------------- -------------- ------------------ ---------
: Summary of chlorinated and non-chlorinated mixtures in the primary data set. (Source: Howley \[9\])
Several variants of the data set were created, which differed only by the labeling of the solvent that was currently assigned as the target class. In each of these variants, all instances not labeled as targets were labeled as outliers. These relabeled data sets were used in the detection of the specific chlorinated solvents: Chloroform, Dichloromethane and Trichloroethane. As an example of the data, the Raman spectrum of pure Chloroform, a chlorinated solvent, is shown in Fig. 1. Other samples from the data set consist of several different solvents in a mixture, which makes the classification task more challenging. A final separate data set was created in which all of the chlorinated solvents were labeled as targets; this was used for experiments that simply detect whether or not a given mixture is chlorinated.
Secondary Data Set
------------------
For our Scenario 2 experiments (see Section 5.1), we introduce 48 additional spectra that represent outliers that are taken from a different distribution to those that are present in the primary dataset. These samples are the Raman spectra of various laboratory chemicals, and none of them are chlorinated solvents nor are they the other materials that are listed in Table 1. They include materials such as sugars, salts and acids in solid or liquid state, including Sucrose, Sodium, Sorbitol, Sodium Chloride, Pimelic Acid, Acetic Acid, Phthalic Acid and Quinine.
![*The Raman Spectrum of a sample of 100% pure Chloroform*[]{data-label="fig:3 spectra"}](Chloroform.png){width="80.00000%"}
Algorithms Used
---------------
We carried out the one-sided classification experiments using our toolkit. The conventional classification experiments were carried out using the Weka \[10\] machine learning software.
We have chosen a One-Sided k-Nearest Neighbour (OkNN) algorithm and two conventional classification algorithms: a k-Nearest Neighbour that we refer to as Two-Class kNN, and a Support Vector Machine (SVM) that we refer to as Two-Class SVM.
The OkNN algorithm we use is based on one described by Munroe and Madden \[11\]. The method involves choosing an appropriate threshold and number of neighbours to use. The average distance from a test example ‘A’ to its m nearest neighbours is found and this is called ‘D1’. Then, the average distance of these neighbours to their own respective k nearest neighbours is found and called ‘D2’. If ‘D1’ divided by ‘D2’ is greater than the threshold value, the test example ‘A’ is rejected as being an outlier. If it is less than the threshold, then it is accepted as being part of the target class.
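The acceptance rule just described can be sketched as follows. This is an illustrative reimplementation, not the toolkit's Java code; the parameter names `m`, `k`, and `threshold` follow the description in the text, and cosine distance (one minus cosine similarity) is used as the distance metric, matching Section 5.2.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity, used as a distance."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def oknn_is_target(test_x, targets, m=1, k=1, threshold=1.0):
    """One-sided kNN acceptance test, after Munroe and Madden [11].

    targets: 2-D array of training examples from the target class only.
    Returns True if test_x is accepted as a target, False if rejected.
    """
    # D1: average distance from the test example to its m nearest targets
    d_test = np.array([cosine_distance(test_x, t) for t in targets])
    nn_idx = np.argsort(d_test)[:m]
    d1 = d_test[nn_idx].mean()

    # D2: average distance of those neighbours to their own k nearest neighbours
    d2_vals = []
    for i in nn_idx:
        d_i = np.array([cosine_distance(targets[i], t)
                        for j, t in enumerate(targets) if j != i])
        d2_vals.append(np.sort(d_i)[:k].mean())
    d2 = np.mean(d2_vals)

    # Reject as an outlier when the test point is much farther from the
    # target class than the targets are from each other.
    return (d1 / d2) <= threshold
```

Intuitively, `d1 / d2` compares the test point's isolation to the local density of the target class, so a single global threshold works across regions of different density.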
Description of Experiments
==========================
Scenarios Considered
--------------------
Two scenarios are considered in our experiments, as described next.
### Scenario 1: “Expected” Test Data Only.
In this scenario, the test data is sampled from the same distribution as the training data. The primary dataset is divided repeatedly into training sets and test sets, with the proportions of targets and outliers held constant at all times, and these ‘internal’ test sets are used to test the classifiers that are built with the training datasets.
### Scenario 2: “Unexpected” and “Expected” Test Data.
In this scenario, we augment each test dataset with the 48 examples from the secondary data set that are *not* drawn from the same distribution as the training dataset. Therefore, a classifier trained to recognise any chlorinated solvent should reject them as outliers. However, these samples represent a significant challenge to the classifiers, since they violate the standard assumption that the test data will be drawn from the same distribution as the training data; it is for this reason that we term them “unexpected”.
This second scenario is designed to assess the robustness of the classifiers in a situation that has been discussed earlier, whereby in practical deployments of classifiers in many situations, the classifiers are likely to be exposed to outliers that are not drawn from the same distribution as training outliers. In fact, we contend that over the long term, this is inevitable: if we know *a priori* that the outlier class distribution is not well characterized in the training data, then we must accept that sooner or later, the classifier will be exposed to data that falls outside the distribution of the outlier training data.
It should be noted that this is different from *concept drift*, where a target concept may change over time; here, we have a static concept, but over time the weaknesses of the training data are exposed. Of course, re-training might be possible, if problem cases can be identified and labeled correctly, but we concern ourselves with classifiers that have to maintain robust performance without re-training.
Experimental Procedure
----------------------
The data sets, as described earlier, were used to test the ability of each algorithm in detecting the individual chlorinated compounds. This involved three separate experiments for each algorithm, to detect Chloroform, Dichloromethane and Trichloroethane. A fourth experiment involved detecting the presence of any chlorinated compound in the mixture.
All spectra were first normalized. A common method for normalizing a dataset is to recalculate the values of the attributes to fall within the range of zero to one. This is usually carried out on an attribute-by-attribute basis and ensures that certain attribute values, which differ radically in size from the rest, don’t dominate in the prediction calculations. The normalization carried out on the spectral data is different to this in that it is carried out on an instance-by-instance basis. Since each attribute in an instance is a point on the spectrum, this process is essentially rescaling the height of the spectrum into the range of zero to one.
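The instance-by-instance rescaling described above can be sketched as follows; the function name is ours, and the handling of a perfectly flat spectrum is an added safeguard rather than a detail from the paper.

```python
import numpy as np

def normalize_spectrum(spectrum):
    """Rescale one spectrum (one instance) into the range [0, 1].

    Unlike attribute-wise normalization, this operates across the points
    of a single spectrum, so only the spectrum's height is rescaled.
    """
    spectrum = np.asarray(spectrum, dtype=float)
    lo, hi = spectrum.min(), spectrum.max()
    if hi == lo:                      # flat spectrum: avoid division by zero
        return np.zeros_like(spectrum)
    return (spectrum - lo) / (hi - lo)
```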
For each experiment, 10 runs were carried out with the data being randomly split each time into 67% for training and 33% for testing. The splitting procedure from our toolkit ensured that there was the same proportion of targets and outliers in the training sets as there was in the test sets. The same data set splits were used for the one-sided classifier algorithms and the Weka-based algorithms, to facilitate direct comparisons.
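The stratified splitting procedure from the toolkit (which is written in Java) might look like the following Python sketch: the class proportions are preserved by splitting each label group separately, and the run index seeds the shuffle so each of the 10 runs differs but is reproducible.

```python
import random

def stratified_split(X, y, run, test_frac=0.33):
    """One run's train/test split preserving target/outlier proportions.

    `run` seeds the shuffle so that every run rearranges the data
    differently but deterministically.
    """
    rng = random.Random(run)
    train_idx, test_idx = [], []
    for label in set(y):
        idx = [i for i, lab in enumerate(y) if lab == label]
        rng.shuffle(idx)
        n_test = round(len(idx) * test_frac)
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    take = lambda idxs: ([X[i] for i in idxs], [y[i] for i in idxs])
    return take(train_idx) + take(test_idx)  # X_tr, y_tr, X_te, y_te
```

Using the same `run` seed for both the one-sided and Weka experiments gives identical splits, enabling the direct comparisons described above.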
A 3-fold internal cross-validation step was used with all the training sets to carry out parameter tuning. A list of parameter values was passed to each classification algorithm and each, in turn, was used on the training sets, in order to find the combination that produced the smallest error estimate. It must be emphasized that we supplied only a small number of candidate values for each algorithm, and that these values were the same for all four variants of the data set. The reason for this is that our goal was not to tune and identify the classifier with the best results overall, but to observe the change in performance when “unexpected” outliers were added to the test set.
For the One-Sided kNN algorithm, the number of nearest neighbours (m) and the number of their nearest neighbours (k) were varied between 1 and 2. The threshold values tried were 1, 1.5 and 2. The distance metric used was Cosine Similarity. For the Weka experiments, the Two-Class kNN approach tried the values 1, 2 and 3 for the number of nearest neighbours. The Two-Class SVM varied the complexity parameter C with the values 1, 3 and 5. The default values were used for all of the other Weka parameters.
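The internal cross-validation tuning loop could be sketched as below. The names `tune`, `train_fn`, and `error_fn` are hypothetical, and the grid shown is the OkNN grid listed in the text; the loop simply scores every grid combination by 3-fold cross validation and keeps the one with the lowest estimated error.

```python
import itertools

def tune(train_fn, error_fn, folds, grid):
    """Exhaustive grid search with internal cross validation.

    grid: dict mapping parameter name -> list of candidate values.
    folds: list of data folds from the training set (e.g., 3 folds).
    """
    best_params, best_err = None, float("inf")
    names = list(grid)
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        errs = []
        for held_out in range(len(folds)):
            # Train on all folds except the held-out one, score on it.
            train = [x for i, f in enumerate(folds) if i != held_out for x in f]
            model = train_fn(train, **params)
            errs.append(error_fn(model, folds[held_out]))
        err = sum(errs) / len(errs)
        if err < best_err:
            best_params, best_err = params, err
    return best_params

# The OkNN grid from the text: m, k in {1, 2}; threshold in {1, 1.5, 2}.
oknn_grid = {"m": [1, 2], "k": [1, 2], "threshold": [1, 1.5, 2]}
```

Because the grids are deliberately small (at most 12 combinations), the tuning cost stays negligible relative to the experiments themselves.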
Performance Metric
------------------
The error rate of a classification algorithm is the percentage of examples from the test set that are incorrectly classified. We measure the average error rate of each algorithm over the 10 runs to give an overall error estimate of its performance. With such a performance measure being used, it is important to know what percentage of target examples were present in each variant of the data set. This information is listed in Table 3 below.
---------------------------- ----------- ------------ -----------
                              Targets     Outliers     Target %
---------------------------- ----------- ------------ -----------
[**Chlorinated or not**]{}    154         76           66.95%
[**Chloroform**]{}            79          151          34.34%
[**Dichloromethane**]{}       60          170          26.08%
[**Trichloroethane**]{}       79          151          34.34%
---------------------------- ----------- ------------ -----------
: Percentage of target examples in each variant of the *primary* data set
Results and Analysis
====================
The results of the experiments carried out are listed in Table 4, Table 5, and Table 6 below. Each table shows the overall classification error rate and standard deviation (computed over 10 runs) for each algorithm, for both of the scenarios that were tested.
It can be seen that while the conventional multi-class classifiers perform quite well in the first scenario, their performance quickly begins to deteriorate once the “unexpected” outliers are introduced in Scenario 2. The One-Sided kNN’s performance is generally worse than that of the multi-class approaches in Scenario 1. As described in Section 1.1, the decision boundary for the multi-class classifiers has the benefit of being well supported from both sides with representative training examples from each class. In such a scenario, the multi-class algorithms essentially have more information to aid the classification mechanism and would therefore be expected to out-perform the one-sided approach.
In detecting whether or not a sample is chlorinated, the average error rate of the Two-Class kNN increased by 28.87 percentage points and that of the Two-Class SVM by 33.57 percentage points in Scenario 2. In contrast with the two-class classifiers, the One-Sided kNN retains a consistent performance, with its error increasing by only 0.14 percentage points. When the algorithms are detecting the individual chlorinated solvents, the same pattern in performance can be seen. The multi-class algorithms’ error rates increase, in some cases quite radically, in the second scenario. The One-Sided kNN remains at a more consistent error rate and, in the case of Chloroform and Dichloromethane, the overall error rate is somewhat reduced.
It should be noted that our experiments are not concerned with comparing the relative performances of a one-sided classifier versus the multi-class classifiers. Rather, we analyse the variance between the two scenarios for each individual classifier and demonstrate the short-comings of the multi-class approach when it is presented with “unexpected” outliers. Our results demonstrate the one-sided classifier’s ability to robustly reject these outliers in the same circumstances.
---------------------------- --------------------- ---------------------
                              Scenario 1            Scenario 2
                              Error % (std. dev.)   Error % (std. dev.)
---------------------------- --------------------- ---------------------
[**Chlorinated or not**]{} 6.49 (2.03) 35.36 (3.65)
[**Chloroform**]{} 22.59 (6.93) 39.44 (7.37)
[**Dichloromethane**]{} 11.94 (4.89) 16.24 (3.49)
[**Trichloroethane**]{} 23.24 (5.10) 25.68 (4.27)
---------------------------- --------------------- ---------------------
: Overall average error rate for two-class kNN in both scenarios
---------------------------- --------------------- ---------------------
                              Scenario 1            Scenario 2
                              Error % (std. dev.)   Error % (std. dev.)
---------------------------- --------------------- ---------------------
[**Chlorinated or not**]{} 4.67 (1.95) 38.24 (2.19)
[**Chloroform**]{} 11.68 (4.01) 37.2 (2.39)
[**Dichloromethane**]{} 8.70 (4.37) 11.68 (3.52)
[**Trichloroethane**]{} 11.03 (3.47) 30.08 (2.50)
---------------------------- --------------------- ---------------------
: Overall average error rate for two-class SVM in both scenarios
---------------------------- --------------------- ---------------------
                              Scenario 1            Scenario 2
                              Error % (std. dev.)   Error % (std. dev.)
---------------------------- --------------------- ---------------------
[**Chlorinated or not**]{} 10.90 (4.5) 11.04 (4.7)
[**Chloroform**]{} 26.10 (3.43) 18.32 (3.16)
[**Dichloromethane**]{} 12.98 (3.23) 9.36 (2.84)
[**Trichloroethane**]{} 20.77 (3.46) 21.04 (5.07)
---------------------------- --------------------- ---------------------
: Overall average error rate for one-sided kNN in both scenarios
Conclusions and Future Work
===========================
Our research demonstrates the potential drawbacks of using conventional multi-class classification algorithms when the test data is taken from a different distribution to that of the training samples. We believe that for a large number of real-world practical problems, one-sided classifiers should be more robust than multi-class classifiers, as it is not feasible to sufficiently characterize the outlier concept in the training set. We have introduced the term “unexpected outliers” to signify outliers that violate the standard underlying assumption made by multi-class classifiers, which is that the test set instances are sampled from the same distribution as the training set instances. We have shown that, in such circumstances, a one-sided classifier can prove to be a more capable and robust alternative. Our future work will introduce new datasets from different domains and also analyse other one-sided and multi-class algorithms.
### Acknowledgments. {#acknowledgments. .unnumbered}
The authors are grateful for the support of Enterprise Ireland under Project CFTD/05/222a. The authors would also like to thank Dr. Abdenour Bounsiar for his help and valuable discussions, and Analyze IQ Limited for supplying some of the Raman spectral data.
[4]{}
Tax, D. M. J.: One-Class Classification, PhD Thesis. Delft University of Technology. (2001)
Hollas, J. M.: Basic Atomic and Molecular Spectroscopy. Royal Society of Chemistry. (2002)
Gardiner, D. J., P.R. Graves, H.J. Bowley: Practical Raman Spectroscopy. Springer-Verlag, New York. (1989)
Bulkin, B.: The Raman Effect: An Introduction. John Wiley, New York.(1991)
Harwood, L. M., C. J. Moody, J. M. Percy: Experimental Organic Chemistry: Standard and Microscale. Blackwell Publishing. (1999)
Conroy, J., A. G. Ryder, M. N. Leger, K. Hennessey, M. G. Madden: Qualitative and quantitative analysis of chlorinated solvents using Raman spectroscopy and machine learning. In: Opto-Ireland 2005. 5826: 131–142. (2005)
Madden, M. G., A. G. Ryder: Machine Learning Methods for Quantitative Analysis of Raman Spectroscopy Data. In: International Society for Optical Engineering (SPIE 2002) 4876: 1130–1139 (2002)
O’Connell, M. L., T. Howley, A. G. Ryder, M. N. Leger, M. G. Madden: Classification of a target analyte in solid mixtures using principal component analysis, support vector machines and Raman spectroscopy. In: Opto-Ireland 2005. 5826: 340–350. (2005)
Howley, T.: Kernel Methods for Machine Learning with Application to the Analysis of Raman Spectra. PhD Thesis. National University of Ireland, Galway. (2007)
Witten, I. H., E. Frank: Data Mining: Practical Machine Learning Tools and Techniques. 2nd Edition. Morgan Kaufmann, San Francisco. (2005)
Munroe, D. T., M. G. Madden: Multi-Class and Single-Class Classification Approaches to Vehicle Model Recognition from Images. In: AICS-05. Portstewart. (2005)
[^1]: An analyte is a substance or chemical constituent that is determined in an analytical procedure.
[^2]: An excipient is an inactive substance used as a carrier for the active ingredients of a medication.
---
abstract: |
In an open Friedmann-Robertson-Walker (FRW) space background, we study the classical and quantum cosmological models in the framework of the recently proposed nonlinear massive gravity theory. Although the constraints which are present in this theory prevent it from admitting the flat and closed FRW models as its cosmological solutions, for the open FRW universe this is not the case. We have shown that, either in the absence of matter or in the presence of a perfect fluid, the classical field equations of such a theory adopt physical solutions for the open FRW model, in which the mass term shows itself as a cosmological constant. These classical solutions consist of two distinguishable branches: One is a contracting universe which tends to a future singularity with zero size, while the other is an expanding universe having a past singularity from which it begins its evolution. A classically forbidden region separates these two branches from each other. We then employ the familiar canonical quantization procedure in the given cosmological setting to find the cosmological wave functions. We use the resulting wave function to investigate the possibility of the avoidance of classical singularities due to quantum effects. It is shown that the quantum expectation values of the scale factor, although they have either contracting or expanding phases like their classical counterparts, are not disconnected from each other. Indeed, the classically forbidden region may be replaced by a bouncing period in which the scale factor bounces from its contraction era to its expansion era. Using the Bohmian approach of quantum mechanics, we also compute the Bohmian trajectory and the quantum potential related to the system, whose analysis reveals the direct effects of the mass term on the dynamics of the universe.\
PACS numbers: 98.80.-k, 98.80.Qc, 04.50.+h
Keywords: Massive cosmology, Quantum cosmology
author:
- |
Babak Vakili$^{1}$[^1] and Nima Khosravi$^{2}$[^2]\
\
$^1$\
$^2$ $^{}$
title: '[**Classical and quantum massive cosmology for the open FRW universe** ]{}'
---
Introduction
============
General Relativity (GR) introduced by Einstein began a renaissance in scientific thought which changed our viewpoint on the concept of space-time geometry and gravity. The interpretation of the gravitational force as a modification of the geometrical structure of space-time makes this force distinguishable from the other fundamental interactions, although there are arguments which support the idea that the other interactions may also have a geometrical origin. Because of the unknown behavior of the gravitational interaction at short distances, this distinction may have some roots in the heart of the problems with quantum gravity. Therefore, any hope of dealing with such concepts would be in vain unless a reliable quantum theory of gravity can be constructed. In the absence of a full theory of quantum gravity, it would then be useful to describe its quantum aspects within the context of modified theories of gravity. From a field theory point of view, the gravitational force in GR can be represented as a field theory in which the space-time metric plays the role of the fields and the particle that is responsible for propagating gravity is named the graviton. Then, naturally in comparison to the other field theories, one may ask about the different properties of such a particle. The answer to this question is deduced from the linearized form of GR and the expansion of the space-time metric $g_{\mu\nu}$ around a fixed background geometry $\eta_{\mu\nu}$, as $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$, where $h_{\mu\nu}$ is the field representation of the graviton. Eventually, it is possible to show that the graviton is a *massless* spin-$2$ particle.
Then, since our knowledge about the behavior of gravity at very long distances is also incomplete, a question arises: Is it possible to consider a small nonvanishing mass for the graviton, i.e., a *massive* spin-$2$ particle? In the first attempts to deal with this question, it seemed that adding a mass term to the action might be sufficient. This was done by Fierz and Pauli [@pauli]. However, after studying the non-linear terms, it was shown by counting the degrees of freedom that this model suffers from the existence of a ghost field, the so-called Boulware-Deser ghost [@deser]. This fact made massive gravity an abandoned theory for a while. Recently, de Rham and Gabadadze proposed a new scenario in which they have shown that it is possible to have a ghost-free massive gravity even at the non-linear level [@derahm]. That was a positive signal in this area, and the early results in this subject have been followed by a number of works that address different aspects of massive gravity [@works]. As in the case of other modified theories of gravity, it is important to seek cosmological solutions in the newly proposed massive theory of gravity. This is done by the authors of Ref. [@Amico], who show that the existence of some constraints prevents the theory from having nontrivial homogeneous and isotropic cosmological solutions. Indeed, what is shown in Ref. [@Amico] is that, beginning with the flat FRW ansatz in the context of massive gravity, the corresponding field equations result in nothing but the Minkowski metric. However, by re-examination of the conditions, the authors of Refs. [@Emir; @Emir1] have shown that, for the open FRW model, this is not the case, and the nonlinear massive gravity admits the open FRW universe as a compatible solution of its field equations. Further progress toward massive cosmologies lies in the field of the bi-metric theories of gravity; see, for instance, Ref. [@nima], based on the works of Hassan and Rosen [@hassan], in which they show that a bi-metric representation for massive gravity exists.
Our purpose in the present paper is to continue the works of the authors of Refs. [@Emir; @Emir1] in greater detail, based on the Hamiltonian formalism of the open FRW cosmology in the framework of massive gravity. We obtain the solutions to the vacuum and perfect fluid classical field equations and investigate their different aspects, such as the role of the graviton’s mass as a cosmological constant, the appearance of singularities, and the late time expansion. We then consider the problem at hand in the context of canonical quantum cosmology to see how the classical picture will be modified. Our final results show that the singular behavior of the classical cosmology will be replaced by a bouncing one when quantum mechanical considerations are taken into account. This means that the quantization of the model suggests the existence of a minimal size for the corresponding universe. We shall also study the quantum model by the Bohmian approach of quantum mechanics to show how the mass term exhibits its direct effects on the evolution of the system.
The structure of the paper is as follows. In section 2, we briefly present the basic elements of the issue of massive gravity and its canonical Hamiltonian for a given open FRW universe. In section 3, classical cosmological dynamics is introduced for the vacuum and perfect fluid. Quantization of the model is the subject of section 4, and in section 5, the Bohmian approach of quantum mechanics is applied to the model. Finally, the conclusions are summarized in section 6.
Preliminary set-up
==================
In this section, we start by briefly studying the nonlinear massive gravity action presented in Refs. [@Emir; @Emir1] for the open FRW model, where the metric is given by $$\label{A}
ds^2=g_{\mu\nu}dx^{\mu}dx^{\nu}=-N^2(t)dt^2+a^2(t)\left[dx^2+dy^2+dz^2-\frac{|K|(xdx+ydy+zdz)^2}{1+|K|(x^2+y^2+z^2)}\right],$$ with $N(t)$ and $a(t)$ being the lapse function and the scale factor, respectively, and $K=-1$ denoting the curvature index. Here we work in units where $c=\hbar=16\pi G=1$. In the massive gravity scenario one considers a metric perturbation as [@Amico; @Clau] $$\label{A1}
g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}=\eta_{ab}\partial_{\mu}\phi^a(x)\partial_{\nu}\phi^b(x)+H_{\mu\nu},$$where $\eta_{ab}=\mbox{diag}(-1,1,1,1)$ and $\phi^a(x)$ are four scalar fields known as Stückelberg scalars and are introduced to keep the principle of general covariance also in massive general relativity [@Arkani]. It is clear that the first term in (\[A1\]) is a representation of the Minkowski space-time in terms of the coordinate system $(\phi^0,\phi^i)$ and thus the tensor $H_{\mu\nu}$ is responsible for describing the propagation of gravity in this space. The action of the model consists of the gravitational part ${\cal S}_g$ and the matter action ${\cal S}_m$ as $$\label{B}
{\cal S}={\cal S}_g+{\cal S}_m.$$The matter part of the action is independent of the massive corrections to the gravity part. Also, the gravity part can be expressed in terms of the usual Einstein-Hilbert, with an additional correction term coming from the massive graviton; that is [@Amico] $$\label{B1}
{\cal S}_g=\int \sqrt{-g}\left[R-\frac{m^2}{4}{\cal
U}(g,H)\right]d^4x,$$in which all of the modifications due to the mass and also the interactions between the tensor fields $H_{\mu\nu}$ and $g_{\mu\nu}$ are summarized in the potential ${\cal U}(g,H)$. By using ghost-free conditions for the theory in Ref. [@Arkani], we propose the following form for the potential term: [@Clau] $$\label{B2}
{\cal U}(g,H)=-4\left({\cal L}_2+\alpha_3 {\cal L}_3+\alpha_4
{\cal L}_4\right),$$where
$$\begin{aligned}
\label{B3}
\left\{
\begin{array}{ll}
{\cal L}_2=\frac{1}{2}\left(<{\cal K}>^2-<{\cal K}^2>\right),\\\\
{\cal L}_3=\frac{1}{6}\left(<{\cal K}>^3-3<{\cal K}><{\cal K}^2>+2<{\cal K}^3>\right),\\\\
{\cal L}_4=\frac{1}{24}\left(<{\cal K}>^4-6<{\cal K}>^2<{\cal
K}^2>+3<{\cal K}^2>^2+8<{\cal K}><{\cal K}^3>-6<{\cal
K}^4>\right),
\end{array}
\right.\end{aligned}$$
in which the tensor ${\cal K}_{\mu\nu}$ is defined as $$\label{B4}
{\cal
K}^{\mu}_{\nu}(g,H)=\delta^{\mu}_{\nu}-\sqrt{\eta_{ab}\partial^{\mu}\phi^a\partial_{\nu}\phi^b},$$and the notations $<{\cal K}>=g^{\mu\nu}{\cal K}_{\mu\nu}$, $<{\cal
K}^2>=g^{\alpha \beta}g^{\mu\nu}{\cal K}_{\alpha \mu}{\cal
K}_{\beta \nu}$,... are used for the corresponding traces. Now, equations (\[B1\])-(\[B4\]) describe the gravitational part of the action for a massive gravity theory. Since its explicit form directly depends on the choice of scalar fields $\phi^a(x)$, it is appropriate to concentrate on this point first. Interesting forms for such fields should involve terms which would describe a suitable coordinate transformation on the Minkowski space-time. In a flat FRW background, for instance, one may select $\phi^0=f(t)$ and $\phi^i=x^i$, as is used in [@Amico]. Here, for the open FRW metric (\[A\]), we use the following ansatz proposed in [@Emir] $$\label{B5}
\phi^0=f(t)\sqrt{1+|K|x_ix^i},\hspace{0.5cm}\phi^i=\sqrt{|K|}f(t)x^i.$$Upon substitution of these scalar fields and also the definition of the Ricci scalar into the relations (\[B1\])-(\[B4\]), we are led to a point-like form for the gravitational Lagrangian in the minisuperspace $\{N,a,f\}$ as $$\label{C}
{\cal L}_g=-\frac{3 a \dot{a}^2}{N}-3|K|Na+m^2\left(L_2+\alpha_3
L_3+\alpha_4 L_4\right),$$where $$\begin{aligned}
\label{D}
\left\{
\begin{array}{ll}
L_2=3a\left(a-\sqrt{|K|}f\right)\left(2Na-a \dot{f}-N\sqrt{|K|}f\right),\\\\
L_3=\left(a-\sqrt{|K|}f\right)^2\left(4Na-3a\dot{f}-N\sqrt{|K|}f\right),\\\\
L_4=\left(a-\sqrt{|K|}f\right)^3\left(N-\dot{f}\right),
\end{array}
\right.\end{aligned}$$in which an overdot represents differentiation with respect to the time parameter $t$. It is seen that this Lagrangian does not involve $\dot{N}$, which means that the momentum conjugate to this variable vanishes. In the usual canonical formalism of general relativity, we know this issue as the primary constraint in the sense that the variable $N$ is not a dynamical variable but a Lagrange multiplier in the Hamiltonian formalism. On the other hand, Lagrangian (\[C\]) seems to show an additional constraint related to the Stückelberg scalars whose dynamics are encoded in the function $f(t)$. We see that, unlike common Lagrangians, in which the first derivatives of the configuration variables appear quadratically, $\dot{f}$ appears linearly in the Lagrangian (\[C\]). Therefore, by computing the momentum conjugate to $f$; that is, $P_f=\frac{\partial {\cal
L}_g}{\partial \dot{f}}$, we obtain $$\label{D1}
P_f=-m^2(a-\sqrt{|K|}f)\left[3a^2+3\alpha_3
a(a-\sqrt{|K|}f)+\alpha_4
(a-\sqrt{|K|}f)^2\right].$$Now, it is clear that this relation is not invertible to obtain $\dot{f}(f,P_f)$. In such a case, the Lagrangian is said to be singular and the relations like (\[D1\]), which hinder the inversion, are known as primary constraints. One may use the method of Lagrange multipliers to analyze the dynamics of the system by adding to the Lagrangian all of the primary constraints multiplied by arbitrary functions of time. However, to deal with our constrained system, we act differently and proceed as follows. We vary the Lagrangian (\[C\]) with respect to $f$ to obtain $$\label{D2}
\left(\dot{a}-\sqrt{|K|}N\right)\left[|K|(\alpha_3+\alpha_4)f^2(t)-2\sqrt{|K|}(1+2\alpha_3+\alpha_4)a(t)f(t)+(3+3\alpha_3+\alpha_4)a^2(t)\right]=0.$$ The solution $\dot{a}=\sqrt{|K|}N$ of this equation is nothing but what we obtain from the variation of the usual Einstein-Hilbert Lagrangian with respect to $N$. Since its counterpart in massive gravity is $$\label{D3}
\dot{a}=\frac{\sqrt{3|K|(\alpha_3+\alpha_4)^2+m^2a^2(t)\left[2(1+\alpha_3+\alpha_3^2-\alpha_4)^{3/2}-(1+\alpha_3)(2+
\alpha_3+2\alpha_3^2-3\alpha_4)\right]}}{\sqrt{3}(\alpha_3+\alpha_4)}N,$$we cannot accept the relation $\dot{a}=\sqrt{|K|}N$ as a physical solution. Therefore, the constraint corresponding to the dynamic of $f(t)$ shows itself in the equation $$\label{D4}
\left[|K|(\alpha_3+\alpha_4)f^2(t)-2\sqrt{|K|}(1+2\alpha_3+\alpha_4)a(t)f(t)+(3+3\alpha_3+\alpha_4)a^2(t)\right]=0,$$where using the same notation as in [@Emir], its solutions can be written as $$\label{E}
f(t)=\frac{X_{\pm}}{\sqrt{|K|}}a(t)\Rightarrow
\dot{f}=\frac{X_{\pm}}{\sqrt{|K|}}\dot{a},\hspace{0.5cm}X_{\pm}\equiv
\frac{1+2\alpha_3+\alpha_4 \pm
\sqrt{1+\alpha_3+\alpha_3^2-\alpha_4}}{\alpha_3+\alpha_4}.$$As is argued in [@Emir], in the limit where $\alpha_3$ and $\alpha_4$ are of the order of a small quantity $\epsilon$, the expression of $X_{+}$ goes to infinity while $X_{-}\rightarrow
3/2$. Because of this limiting behavior, we use the subscript $-$ in the following for numerical values of constants with subscript $\pm$. Now we may insert the constraints (\[E\]) into the relations (\[D\]) to reduce the degrees of freedom of the system and obtain a minimal number of dynamical variables. If we do so, we obtain $$\begin{aligned}
\label{F}
\left\{
\begin{array}{ll}
L_2=3\left(1-X_{\pm}\right)\left[\left(2-X_{\pm}\right)N-\frac{X_{\pm}}{\sqrt{|K|}}\dot{a}\right]a^3,\\\\
L_3=\left(1-X_{\pm}\right)^2\left[\left(4-X_{\pm}\right)N-3\frac{X_{\pm}}{\sqrt{|K|}}\dot{a}\right]a^3,\\\\
L_4=\left(1-X_{\pm}\right)^3\left[N-\frac{X_{\pm}}{\sqrt{|K|}}\dot{a}\right]a^3,
\end{array}
\right.\end{aligned}$$in terms of which the Lagrangian (\[C\]) takes its reduced form with only one physical degree of freedom $a$. The momentum conjugate to $a$ is $$\label{F}
P_a=\frac{\partial {\cal L}_g}{\partial
\dot{a}}=-\frac{6a\dot{a}}{N}+m^2\left(\frac{\partial
L_2}{\partial \dot{a}}+\alpha_3 \frac{\partial L_3}{\partial
\dot{a}}+\alpha_4 \frac{\partial L_4}{\partial
\dot{a}}\right).$$Noting that $$\label{G}
\frac{\partial L_2}{\partial
\dot{a}}=3\frac{X_{\pm}}{\sqrt{|K|}}(X_{\pm}-1)a^3,\hspace{0.5cm}\frac{\partial
L_3}{\partial
\dot{a}}=3\frac{X_{\pm}}{\sqrt{|K|}}(X_{\pm}-1)^2a^3,\hspace{0.5cm}\frac{\partial
L_4}{\partial
\dot{a}}=\frac{X_{\pm}}{\sqrt{|K|}}(X_{\pm}-1)^3a^3,$$ one gets $$\label{H}
P_a=-6\frac{a\dot{a}}{N}-\frac{C_{\pm}m^2}{\sqrt{|K|}}a^3,\hspace{0.5cm}C_{\pm}\equiv
X_{\pm}(1-X_{\pm})\left[3+3\alpha_3(1-X_{\pm})+\alpha_4
(1-X_{\pm})^2\right].$$ Now, the Hamiltonian of the model can be obtained from its standard definition $H=\dot{a}P_a-{\cal L}$, with the result $$\label{I}
H_g=N{\cal
H}_g=N\left[-\frac{1}{12a}\left(P_a+\frac{C_{\pm}m^2}{\sqrt{|K|}}a^3\right)^2+3|K|a+c_{\pm}m^2a^3\right],$$ in which we have defined $$\label{J}
c_{\pm}=\left(X_{\pm}-1\right)\left[3(2-X_{\pm})+\alpha_3(1-X_{\pm})(4-X_{\pm})+\alpha_4(1-X_{\pm})^2\right].$$We see that the lapse function enters the Hamiltonian as a Lagrange multiplier, as expected. Thus, when we vary the Hamiltonian with respect to $N$, we get ${\cal H}_g=0$, which is called the Hamiltonian constraint. On a classical level this constraint is equivalent to the Friedmann equation, which in our problem at hand can easily be checked by comparing it with the equation of motion (4.5) in [@Emir]. On a quantum level, on the other hand, the operator version of this constraint annihilates the wave function of the corresponding universe, leading to the so-called Wheeler-DeWitt equation.
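The numerical values used later ($C_-=-9/4$ and $c_-=3/4$, following from $X_-\rightarrow 3/2$ as $\alpha_3,\alpha_4\rightarrow 0$) can be checked symbolically; a small sympy sketch, taking the limit along the diagonal $\alpha_3=\alpha_4=\epsilon$:

```python
import sympy as sp

a3, a4 = sp.symbols('alpha3 alpha4', positive=True)
# Minus branch of X (eq. E), then C (eq. H) and c (eq. J).
X = (1 + 2*a3 + a4 - sp.sqrt(1 + a3 + a3**2 - a4)) / (a3 + a4)
C = X*(1 - X)*(3 + 3*a3*(1 - X) + a4*(1 - X)**2)
c = (X - 1)*(3*(2 - X) + a3*(1 - X)*(4 - X) + a4*(1 - X)**2)

eps = sp.symbols('eps', positive=True)
lim = lambda e: sp.limit(e.subs({a3: eps, a4: eps}), eps, 0, '+')
print(lim(X), lim(C), lim(c))   # 3/2 -9/4 3/4
```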
Now, let us deal with the matter field with which the action of the model is augmented. As we have mentioned, the matter part of the action is independent of modifications due to the mass terms. Therefore, the matter may come into play in a common way and the total Hamiltonian can be made by adding the matter Hamiltonian to the gravitational part of (\[I\]). To do this, we consider a perfect fluid whose pressure $p$ is linked to its energy density $\rho$ by the equation of state $$\label{K}
p=\omega \rho,$$where $-1\leq \omega \leq 1$ is the equation-of-state parameter. According to Schutz’s representation for the perfect fluid [@Schutz], its Hamiltonian can be viewed as (see [@Vak] for details) $$\label{L}
H_m=N\frac{P_T}{a^{3\omega}},$$where $T$ is a dynamical variable related to the thermodynamical parameters of the perfect fluid and $P_T$ is its conjugate momentum. Finally, we are in a position in which we can write the total Hamiltonian $H=H_g+H_m$ as $$\label{M}
H=N{\cal
H}=N\left[-\frac{1}{12a}\left(P_a+\frac{C_{\pm}m^2}{\sqrt{|K|}}a^3\right)^2+3|K|a+c_{\pm}m^2a^3+\frac{P_T}{a^{3\omega}}\right].$$ The setup for constructing the phase space and writing the Lagrangian and Hamiltonian of the model is now complete. In the following section, we shall deal with classical and quantum cosmologies which can be extracted from a theory with the previously mentioned Hamiltonian.
Cosmological dynamics: classical point of view
==============================================
The classical dynamics is governed by the Hamiltonian equations. To study it, we divide this section into two parts: we first consider the case in which matter is absent, i.e., the vacuum, and then include the matter.
The vacuum classical cosmology
------------------------------
In this case, we can construct the equations of motion by the Hamiltonian equations with use of the Hamiltonian (\[I\]). Equivalently, one may directly write the Friedmann equation from the Hamiltonian constraint $H=0$ which, as we mentioned previously, reflects the fact that the corresponding gravitational theory is a parameterized theory in the sense that its action is invariant under time reparameterization. Noting from (\[H\]) that $$\label{N}
\dot{a}=-\frac{N}{6a}\left(P_a+\frac{C_{\pm}m^2}{\sqrt{|K|}}a^3\right),$$equation (\[I\]) gives $$\label{O}
3a\dot{a}^2-3|K|a=c_{\pm}m^2a^3,$$in which we have chosen the gauge $N=1$, so that the time parameter $t$ becomes the cosmic time $\tau$. As is indicated in [@Emir], this equation looks like the Friedmann equation for the open FRW universe with an effective cosmological constant $\Lambda_{\pm}=c_{\pm}m^2$ and admits the following solutions $$\label{P}
a_{\pm}(\tau)=\sqrt{\frac{3}{\Lambda}}\sinh \left(\pm
\sqrt{\frac{\Lambda}{3}}(\tau-\tau_{*})\right),$$ where $\tau_{*}$ is an integration constant and we have taken $\Lambda=c_{-}m^2$. For a positive $\tau_{*}$, the condition $a(\tau)\geq 0$ implies that the expressions of $a_{+}(\tau)$ and $a_{-}(\tau)$ are valid for $\tau\geq \tau_{*}$ and $\tau\leq
-\tau_{*}$ respectively, such that $a_{\pm}(\tau_{*})=0$. It is seen that the evolution of the corresponding universe with the scale factor $a_{+}(\tau)$ begins with a big-bang-like singularity at $\tau=\tau_{*}$ and then follows an exponential law expansion at late time of cosmic evolution in which the mass term shows itself as a cosmological constant. For a universe with the scale factor $a_{-}(\tau)$, on the other hand, the behavior is opposite. The universe decreases its size from large values of scale factor at $\tau=-\infty$ and ends its evolution at $\tau=-\tau_{*}$ with a zero size. In figure \[fig1\] we have plotted these scale factors for typical values of the parameters. As this figure shows, although the behavior of $a_{+}(\tau)$ ($a_{-}(\tau)$) is like a de Sitter ($a(\tau)\sim e^{\sqrt{\Lambda/3}\tau}$) universe at $\tau\rightarrow \infty$ ($\tau\rightarrow -\infty$), unlike the de Sitter case, it begins (ends) its evolution with a singularity. In summary, we have shown that in the framework of an open FRW background geometry, the vacuum solutions of the massive theory are equivalent to the solutions of the usual GR with a cosmological constant. Accordingly, the zero-size singularity of both theories has the same nature. In this sense we would like to emphasize that the metric (\[A\]) with the scale factor (\[P\]) is indeed a section of the de Sitter hyperboloid $$\label{P1}
-T^2+X^2+Y^2+Z^2+W^2=1,$$ embedded in a $5$-dimensional Minkowski space $$\label{P2}
ds^2=-dT^2+dX^2+dY^2+dZ^2+dW^2.$$To see this, one may parameterize the hyperboloid in terms of the spherical coordinates $(r,\theta, \phi)$ as [@Her]
$$\begin{aligned}
\label{P3}
\left\{
\begin{array}{ll}
T=\sqrt{1+r^2}\sinh \tau,\\\\
X=\cosh \tau,\\\\Y=r\sinh \tau \cos \phi \cos \theta,\\\\Z=r\sinh
\tau \cos \phi \sin \theta,\\\\W=r\sinh \tau \sin \phi,
\end{array}
\right.\end{aligned}$$
which, upon substitution into the metric (\[P2\]), yields the open FRW metric with the scale factor $a(\tau)=\sinh
\tau$. This means that the point $a=0$ can be viewed as a coordinate singularity. However, we have to note that in the presence of any kind of matter field the point $a(\tau_{*})=0$ represents a true singularity. Thus, our following analysis to quantize the model is based on the minisuperspace coordinate system in terms of which the dynamical representation of the metric, i.e. (\[A\]), is written.
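Before moving on, note that (\[P\]) can be checked directly against (\[O\]); a short sympy verification (setting $|K|=1$ and writing $\Lambda=c_-m^2$):

```python
import sympy as sp

tau, tau_s = sp.symbols('tau tau_s', real=True)
Lam = sp.symbols('Lambda', positive=True)

# Scale factor (P), upper branch; the lower branch works the same way.
a = sp.sqrt(3/Lam) * sp.sinh(sp.sqrt(Lam/3) * (tau - tau_s))

# Friedmann equation (O) with |K| = 1: 3*a*adot**2 - 3*a = Lambda*a**3
residual = 3*a*sp.diff(a, tau)**2 - 3*a - Lam*a**3
print(sp.simplify(residual))  # 0
```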
In the next section, we shall see how the previous picture may be modified when one takes into account quantum mechanical considerations.
Perfect fluid classical cosmology
---------------------------------
Now, we assume that a perfect fluid in its Schutz’s representation is coupled with gravity. In this case the Hamiltonian (\[M\]) describes the dynamics of the system. The equations of motion for $T$ and $P_T$ read as $$\label{Q}
\dot{T}=\{T,H\}=\frac{N}{a^{3\omega}},\hspace{0.5cm}\dot{P_{T}}=\{P_T,H\}=0.$$A glance at the above equations shows that by choosing the gauge $N=a^{3\omega}$, we shall have $$\label{R}
N=a^{3\omega}\Rightarrow T=t,$$which means that the variable $T$ may play the role of time in the model. Therefore, the Friedmann equation $H=0$ can be written in the gauge $N=a^{3\omega}$ as follows: $$\label{S}
3\dot{a}^2=3|K|a^{6\omega}+\Lambda
a^{6\omega+2}+P_0a^{3\omega-1},$$where we take $P_T=P_0=\mbox{const.}$ from the second equation of (\[Q\]). Since it is not possible to find the analytical solutions of the above differential equation for any arbitrary $\omega$, we present its solutions only in some special cases.
$\bullet$ $\omega=-\frac{1}{3}$: cosmic string. In this case we obtain $$\label{T}
a(t)=\left[\frac{\Lambda}{3}(t-t_0)^2-\frac{P_0+3|K|}{\Lambda}\right]^{1/2},$$where $t_0$ is an integration constant. We see that the evolution of the universe based on (\[T\]) has big-bang-like singularities at $t=t_0\pm t_{*}$ where $t_{*}=\frac{\sqrt{3(P_0+3|K|)}}{\Lambda}$. Indeed, the condition $a^2(t)\geq 0$ separates two sets of solutions $a^{(I)}(t)$ and $a^{(II)}(t)$, each of which is valid for $t\leq t_0-t_{*}$ and $t\geq t_0+t_{*}$, respectively. For the former, we have a contracting universe which decreases its size according to a power law relation and ends its evolution in a singularity at $t=t_0-t_{*}$, while for the latter, the evolution of the universe begins with a big-bang singularity at $t=t_0+t_{*}$ and then follows the power law expansion at late time of cosmic evolution.
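Solution (\[T\]) can likewise be verified against the Friedmann equation (\[S\]) for $\omega=-1/3$; a brief sympy check (again with $|K|=1$):

```python
import sympy as sp

t, t0 = sp.symbols('t t_0', real=True)
Lam, P0 = sp.symbols('Lambda P_0', positive=True)

# Scale factor (T) with |K| = 1.
a = sp.sqrt(Lam/3*(t - t0)**2 - (P0 + 3)/Lam)

# Eq. (S) at omega = -1/3: 3*adot**2 = 3/a**2 + Lambda + P0/a**2
residual = 3*sp.diff(a, t)**2 - (3/a**2 + Lam + P0/a**2)
print(sp.simplify(residual))  # 0
```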
One may translate these results in terms of the cosmic time $\tau$. Using its relationship with the time parameter $t$ in this case, that is, $d\tau=a^{-1}(t)dt$, we are led to $$\begin{aligned}
\label{T1}
a(\tau)=\left\{
\begin{array}{ll}
a^{(I)}(\tau)=\frac{1}{\sqrt{12\Lambda}}\left[e^{-\sqrt{\frac{\Lambda}{3}}(\tau-\tau_0)}-3(P_0+3|K|)e^{\sqrt{\frac{\Lambda}{3}}(\tau-\tau_0)}\right]
,\hspace{.5cm}\tau \leq \tau_0-\tau_{*},\\\\\\
a^{(II)}(\tau)=\frac{1}{\sqrt{12\Lambda}}\left[e^{\sqrt{\frac{\Lambda}{3}}(\tau-\tau_0)}-3(P_0+3|K|)e^{-\sqrt{\frac{\Lambda}{3}}(\tau-\tau_0)}\right],\hspace{.5cm}
\tau \geq \tau_0+\tau_{*},\\
\end{array}
\right.\end{aligned}$$where $\tau_{*}=\frac{1}{2}\sqrt{\frac{3}{\Lambda}}\ln 3(P_0+3|K|)$. Again, it is seen that there is a classically forbidden region $\tau_0-\tau_{*}<\tau<\tau_0+\tau_{*}$, for which we have no valid classical solutions. For $\tau \leq \tau_0-\tau_{*}$, the universe has an exponentially decreasing behavior which ends its evolution in a singular point with zero size at $\tau=\tau_0-\tau_{*}$, while in the region $\tau \geq \tau_0+\tau_{*}$ it begins with the big-bang singularity at $\tau=\tau_0+\tau_{*}$ and then grows exponentially forever.
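One can also confirm that the $\tau_{*}$ quoted in (\[T1\]) is exactly where $a^{(II)}$ vanishes; a quick sympy sketch ($|K|=1$):

```python
import sympy as sp

tau, tau0 = sp.symbols('tau tau_0', real=True)
Lam, P0 = sp.symbols('Lambda P_0', positive=True)

w = sp.sqrt(Lam/3)
# a^(II) from (T1) and tau_* with |K| = 1.
a_II = (sp.exp(w*(tau - tau0)) - 3*(P0 + 3)*sp.exp(-w*(tau - tau0))) / sp.sqrt(12*Lam)
tau_star = sp.Rational(1, 2) * sp.sqrt(3/Lam) * sp.log(3*(P0 + 3))

print(sp.simplify(a_II.subs(tau, tau0 + tau_star)))  # 0
```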
$\bullet$ $\omega=-1$: cosmological constant. Performing the integration, we get the following implicit relation between $t$ and $a(t)$: $$\label{U}
\frac{1}{\sqrt{3}(P_0+\Lambda)^2}\left[-6|K|+(P_0+\Lambda)a^2\right]\sqrt{3|K|+(P_0+\Lambda)a^2}=t-t_0.$$In terms of the cosmic time $\tau$, it is easy to see that this solution reduces to (\[P\]), in which the cosmological term is replaced by $\Lambda \rightarrow \Lambda+\mbox{const}.$ This is expected because the solutions (\[P\]) were equivalent to an open FRW universe with a cosmological constant. Therefore, adding a new cosmological term (a perfect fluid with $\omega=-1$) only makes a shift in the corresponding cosmological constant.
Cosmological dynamics: quantum point of view
============================================
In this section we look for the quantization of the model presented above via the method of canonical quantization. As is well known, this procedure is based on the Wheeler-DeWitt equation $\hat{{\cal H}}\Psi=0$, where $\hat{{\cal H}}$ is the operator version of the Hamiltonian constraint and $\Psi$ is the wave function of the universe, a function of the $3$-geometries and the matter fields. As in the case of the classical cosmology, we consider the matter-free (vacuum) and perfect fluid quantum cosmologies separately.
Before proceeding, a remark is in order concerning the Hamiltonians (\[I\]) and (\[M\]). The term in the round bracket in these Hamiltonians is like the Hamiltonian of a charged particle moving in an electromagnetic field. From this analogy, one may define the transformation $$\label{U1}
P_a\rightarrow
\Pi_a=P_a+\frac{C_{\pm}m^2}{\sqrt{|K|}}a^3,\hspace{0.5cm}a\rightarrow
a,$$to simplify the form of the classical Hamiltonian. It is clear that this is a canonical transformation both classically and quantum mechanically [@And]. Since going back from the new set of variables to the old ones in a classical canonical transformation can be made without any ambiguity, applying this transformation is not important for the classical dynamics presented in the previous section. In the context of quantum mechanics, on the other hand, the situation is somewhat different. The transition to the quantum version of the theory is achieved by promoting observables to operators which are not necessarily commuting. Thus, by replacing the canonical variables $(a,P_a)$ by their operator counterparts $(\hat{a},\hat{P_a}=-id/da)$, we obtain the quantum Hamiltonian $$\label{U2}
\hat{{\cal
H}}=-\frac{1}{12}\hat{a}^{-1}\hat{\Pi_a}^2+...=-\frac{1}{12}\hat{a}^{-1}\left(\hat{P_a}+\frac{C_{\pm}m^2}{\sqrt{|K|}}\hat{a}^3\right)^2+...,$$where $...$ denotes the terms out of the round bracket in expressions (\[I\]) or (\[M\]). When calculating the square, it should be noted that the operators $\hat{a}$ and $\hat{P_a}$ do not commute. Although the order of these operators does not matter in the classical analysis, quantum mechanically this issue is quite crucial. Indeed, this is the operator ordering problem and, unfortunately, there is no well defined principle which specifies the order of operators in the passage from classical to quantum theory. There are, however, some simple rules which one uses conventionally. If, for instance, we order the products of $\hat{a}$ and $\hat{P_a}$ in $\hat{\Pi_a}^2$ such that the momentum stands to the right of the scale factor, we obtain $$\label{U3}
\hat{\Pi_a}^2\rightarrow
\hat{P_a}^2+\frac{C_{\pm}^2m^4}{|K|}\hat{a}^6+2\frac{C_{\pm}m^2}{\sqrt{|K|}}\hat{a}^3\hat{P_a}-3i\frac{C_{\pm}m^2}{\sqrt{|K|}}\hat{a}^2,$$in which we have used the commutation relation $[\hat{a},\hat{P_a}]=i$. With this expression at hand, there is still another factor ordering ambiguity in the terms $\hat{a}^{-1}\hat{P_a}^2$ and $\hat{a}^2\hat{P_a}$ when constructing the quantum Hamiltonian (\[U2\]). As Hawking and Page have shown [@Haw], different choices of factor ordering will not affect semiclassical calculations in quantum cosmology, so for convenience one usually adopts a particular ordering in a given model. In general, however, the behavior of the wave function depends on the chosen factor ordering [@Ste]. In what follows, as one usually does in the minisuperspace approximation to cosmological models, we work in the framework of a special factor ordering in which, in addition to the expression (\[U3\]) for $\hat{\Pi_a}^2$, we also use the orderings $\hat{a}^{-1}\hat{P_a}^2=\hat{P_a}\hat{a}^{-1}\hat{P_a}$ and $\hat{a}^2\hat{P_a}=\hat{a}\hat{P_a}\hat{a}$ to make the Hamiltonian hermitian[^3].
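As a cross-check of the ordered expression (\[U3\]), one may realize $\hat{P_a}=-id/da$ on an arbitrary test function and expand $\hat{\Pi_a}^2$ symbolically. A minimal SymPy sketch, where the symbol `kappa` is our shorthand for $C_{\pm}m^2/\sqrt{|K|}$:

```python
import sympy as sp

a = sp.symbols('a', real=True)
kappa = sp.symbols('kappa', positive=True)   # shorthand for C_pm m^2 / sqrt(|K|)
f = sp.Function('f')(a)                      # arbitrary test function

def P(g):                                    # momentum operator P_a = -i d/da
    return -sp.I * sp.diff(g, a)

def Pi(g):                                   # Pi_a = P_a + kappa a^3
    return P(g) + kappa * a**3 * g

# direct expansion of Pi_a^2 applied to f
lhs = Pi(Pi(f))
# the ordered form (U3): P^2 + kappa^2 a^6 + 2 kappa a^3 P - 3 i kappa a^2
rhs = (P(P(f)) + kappa**2 * a**6 * f
       + 2 * kappa * a**3 * P(f) - 3 * sp.I * kappa * a**2 * f)

assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

The vanishing difference confirms that the $-3i\kappa\hat{a}^2$ term is exactly the commutator contribution generated by $[\hat{a},\hat{P_a}]=i$.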
The vacuum quantum cosmology
----------------------------
In this case, with the help of the Hamiltonian (\[I\]) and the above-mentioned choice of ordering, the Wheeler-DeWitt equation reads $$\label{V}
\left\{\frac{d^2}{da^2}+\left(-a^{-1}+2i\frac{C_{\pm}}{c_{\pm}}\Lambda
a^3\right)\frac{d}{da}+
\left[\left(36+2i\frac{C_{\pm}}{c_{\pm}}\Lambda
\right)a^2+12\Lambda
a^4-\frac{C_{\pm}^2}{c_{\pm}^2}\Lambda^2a^6\right]
\right\}\Psi(a)=0.$$This equation does not seem to have analytical solutions. However, we can extract some properties of its solutions in the regions of interest, namely the classical and quantum regimes. First of all, let us rewrite this equation in the form $$\label{X}
\left\{\frac{d^2}{da^2}-\left(a^{-1}+6i\Lambda
a^3\right)\frac{d}{da}+
\left[\left(36-6i\Lambda\right)a^2+12\Lambda a^4-9\Lambda^2
a^6\right] \right\}\Psi(a)=0,$$in which we have used the numerical values $C_{-}=-9/4$ and $c_{-}=3/4$ [@Emir]. For large values of $a$, the solution to this equation can easily be obtained in the Wentzel-Kramers-Brillouin (WKB) (semiclassical) approximation. In this regime we can neglect the term $a^{-1}$ in equation (\[X\]). Then, substituting $\Psi(a)=\Omega(a)e^{iS(a)}$ in this equation leads to the modified Hamilton-Jacobi equation $$\label{Y}
-\left(\frac{dS}{da}\right)^2+6\Lambda a^3
\frac{dS}{da}+\left(36a^2+12\Lambda a^4-9\Lambda^2
a^6\right)+{\cal Q}=0,$$in which the quantum potential is defined as ${\cal Q}=\frac{1}{\Omega}\frac{d^2\Omega}{da^2}$. It is well known that the quantum effects are important for small values of the scale factor and can be neglected in the limit of large scale factor. Therefore, in the semiclassical approximation region we can omit the ${\cal Q}$ term in (\[Y\]) and obtain $$\label{Z}
\frac{dS}{da}=3\Lambda a^3\pm a \sqrt{36+12\Lambda
a^2}.$$ In the WKB method, the connection between classical and quantum solutions is given by the relation $P_a=\frac{\partial S}{\partial a}$. Thus, using the definition of $P_a$ in (\[H\]), the equation for the classical trajectories becomes $$\label{AB}
\dot{a}=\pm \sqrt{1+\frac{\Lambda}{3}a^2},$$from which one finds $$\label{AC}
a(t)=\sqrt{\frac{3}{\Lambda}}\sinh \left(\pm
\sqrt{\frac{\Lambda}{3}}(t-\delta)\right),$$which shows that the late time behavior of the classical cosmology (\[P\]) is exactly recovered. The meaning of this result is that for large values of the scale factor, the effective action corresponding to the expanding and contracting universes is very large and the universe can be described classically. On the other hand, for small values of the scale factor we cannot neglect the quantum effects, and the classical description breaks down. Since the WKB approximation is no longer valid in this regime, one should go beyond the semiclassical approximation. In the quantum regime, if we neglect the term $\Lambda^2 a^6$ in (\[X\]), the two linearly independent solutions to this equation can be expressed in terms of the Hermite functions $H_{\nu}(x)$ and the confluent hypergeometric functions ${}_1F_1(a,b;z)$, leading to the following general solution: $$\label{AD}
\Psi(a)=e^{-ia^2}\left[c_1H_{-\frac{1}{2}-\frac{8}{3\Lambda}i}\left(\frac{(1+i)(2+3\Lambda
a^2)}{2\sqrt{3\Lambda}}\right)+c_2 \,\,\,
{}_1F_1\left(\frac{1}{4}+\frac{4}{3\Lambda}i,\frac{1}{2};\frac{i(2+3\Lambda
a^2)^2}{6\Lambda}\right)\right].$$At this point we briefly consider the question of the boundary conditions on the solutions to the Wheeler-DeWitt equation. Note that the minisuperspace of the above model has only one degree of freedom, denoted by the scale factor $a$ in the range $0<a<\infty$. According to [@Vil], its nonsingular boundary is the line $a=0$, while at the singular boundary this variable is infinite. Since the minisuperspace variable is restricted to the above-mentioned domain, the minisuperspace quantization deals only with wave functions defined on this region. Therefore, to construct the quantum version of the model, one should take this issue into account. This is because in such cases one usually has to impose boundary conditions on the allowed wave functions; otherwise the relevant operators, especially the Hamiltonian, will not be self-adjoint. The condition for the Hamiltonian operator $\hat{{\cal H}}$ associated with the classical Hamiltonian functions (\[I\]) and (\[M\]) to be self-adjoint is $(\psi_1,\hat{{\cal H}}\psi_2)=(\hat{{\cal H}}\psi_1,\psi_2)$ or $$\begin{aligned}
\label{AD2}
\int_0^\infty \psi_1^*(a)\hat{{\cal H}}\psi_2(a)da= \int_0^\infty
\psi_2(a)\left(\hat{{\cal H}}\psi_1(a)\right)^*da.\end{aligned}$$ Following the calculations in [@Lem] and dealing only with square-integrable wave functions, this condition yields a vanishing wave function at the nonsingular boundary of the minisuperspace. Hence, we impose the boundary condition on the solutions (\[AD\]) that the wave function vanishes at the nonsingular boundary (at $a=0$). This makes the Hamiltonian hermitian and self-adjoint and can avoid the singularities of the classical theory, i.e., there is zero probability of observing a singularity corresponding to $a=0$.[^4] Therefore, we require $$\label{AD1}
\Psi(a=0)=0\Rightarrow
\frac{c_2}{c_1}=-\frac{H_{-\frac{1}{2}-\frac{8}{3\Lambda}i}\left(\frac{1+i}{\sqrt{3\Lambda}}\right)}{{}_1F_1
\left(\frac{1}{4}+\frac{4i}{3\Lambda},\frac{1}{2};\frac{2i}{3\Lambda}\right)}.$$Note that equation (\[X\]) is a Schrödinger-like equation for a fictitious particle with zero energy moving in the field of the superpotential with real part $U(a)=-(36a^2+12\Lambda a^4)$. Usually, in the presence of such a potential the minisuperspace can be divided into two regions, $U>0$ and $U<0$, which may be termed the classically forbidden and classically allowed regions, respectively. In the classically forbidden region the behavior of the wave function is exponential, while in the classically allowed region the wave function oscillates. In the quantum tunneling approach [@Vil], the wave function is constructed so as to create a universe emerging from [*nothing*]{} by a tunneling procedure through a potential barrier, in the sense of usual quantum mechanics. Now, in our model, the superpotential is always negative, which means that there is no possibility of tunneling anymore, since a zero energy system is always above the superpotential. In such a case, tunneling is no longer required, as classical evolution is possible. As a consequence the wave function always exhibits oscillatory behavior. In figure \[fig2\], we have plotted the square of the wave functions for typical values of the parameters. It is seen from this figure that the wave function has a well-defined behavior near $a=0$ and describes a universe emerging out of nothing without any tunneling. (See [@Coul], in which such wave functions also appeared in the case study of the probability of quantum creation of compact, flat, and open de Sitter universes.) On the other hand, the emergence of several peaks in the wave function may be interpreted as representing different quantum states that may communicate with each other through tunneling.
This means that there are different possible universes (states) from which the present universe could have evolved, tunneling in the past from one universe (state) to another.
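The consistency of the WKB trajectory (\[AC\]) with the semiclassical equation of motion (\[AB\]) can also be verified symbolically; a minimal SymPy sketch for the upper sign (comparing squares to avoid branch issues with the square root):

```python
import sympy as sp

t, delta = sp.symbols('t delta', real=True)
Lam = sp.symbols('Lambda', positive=True)

# classical trajectory (AC), upper sign
a = sp.sqrt(3 / Lam) * sp.sinh(sp.sqrt(Lam / 3) * (t - delta))

# (AB) squared: adot^2 = 1 + (Lambda/3) a^2
residual = sp.diff(a, t)**2 - (1 + Lam / 3 * a**2)
assert sp.simplify(residual) == 0
```

The check reduces to the hyperbolic identity $\cosh^2 x-\sinh^2 x=1$, confirming that the late time classical cosmology is recovered from the WKB phase (\[Z\]).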
Perfect fluid quantum cosmology
-------------------------------
In this case, the Wheeler-DeWitt equation can be constructed by means of the Hamiltonian (\[M\]). With the same approximations as we used in the previous subsection, we obtain $$\label{AE}
\left\{\frac{\partial^2}{\partial a^2}-\left(a^{-1}+6i\Lambda
a^3\right)\frac{\partial}{\partial a}+
\left[\left(36-6i\Lambda\right)a^2+12\Lambda
a^4\right]-ia^{1-3\omega}\frac{\partial}{\partial T}
\right\}\Psi(a,T)=0.$$We separate the variables in this equation as $$\label{AF}
\Psi(a,T)=e^{iET}\psi(a),$$leading to $$\label{AG}
\left\{\frac{d^2}{d a^2}-\left(a^{-1}+6i\Lambda
a^3\right)\frac{d}{d a}+
\left[\left(36-6i\Lambda\right)a^2+12\Lambda
a^4+Ea^{1-3\omega}\right]\right\}\psi(a)=0.$$The solutions of the above differential equation may be written in the form $$\label{AH}
\psi_E(a)=e^{-ia^2}\left[c_1H_{-\frac{1}{2}-\frac{32+E}{12\Lambda}i}\left(\frac{(1+i)(2+3\Lambda
a^2)}{2\sqrt{3\Lambda}}\right)+c_2 \,\,\,
{}_1F_1\left(\frac{1}{4}+\frac{32+E}{24\Lambda}i,\frac{1}{2};\frac{i(2+3\Lambda
a^2)^2}{6\Lambda}\right)\right],$$for $\omega=-1/3$ and $$\begin{aligned}
\label{AI}
\psi_E(a)=e^{-i(1+\frac{E}{12
\Lambda})a^2}\left[c_1H_{-\frac{1}{2}-\frac{1152\Lambda^2-24E\Lambda-E^2}{432\Lambda^3}i}\left(\frac{(1+i)\left[E+6\Lambda(2+3\Lambda
a^2)\right]}{12\Lambda\sqrt{3\Lambda}}\right)+\nonumber \right.\\
\left.
c_2\,\,\,{}_1F_1\left(\frac{1}{4}+\frac{1152\Lambda^2-24E\Lambda-E^2}{864\Lambda^3}i,\frac{1}{2};\frac{i\left[E+6\Lambda(2+3\Lambda
a^2)\right]^2}{216\Lambda^3}\right)\right],\end{aligned}$$for $\omega=-1$. Now the eigenfunctions of the Wheeler-DeWitt equation can be written as $$\label{AJ}
\Psi_E(a,T)=e^{iET}\psi_E(a).$$We may now write the general solution to the Wheeler-DeWitt equation as a superposition of its eigenfunctions; that is, $$\label{AK}
\Psi(a,T)=\int_0^\infty A(E)\Psi_E(a,T)dE,$$where $A(E)$ is a suitable weight function used to construct the wave packets. The above relations seem too complicated to allow the extraction of an analytical expression for the wave function. Therefore, in the following (for the case $\omega=-1/3$), we present an approximate analytic method which is valid for very small values of the scale factor, i.e., in the range where we expect the quantum effects to be important. In this regime, if we keep only the $a^{-1}$ and $a^2$ terms in the second and third terms of (\[AG\]), the solutions to this equation can be viewed as a superposition of the functions $\sin\left(\frac{\sqrt{36+E-6i\Lambda}}{2}a^2\right)$ and $\cos\left(\frac{\sqrt{36+E-6i\Lambda}}{2}a^2\right)$. If we impose the boundary condition $\psi(a=0)=0$ on these solutions, we are led to the following eigenfunctions: $$\label{AL}
\Psi_E(a,T)=e^{iET}\sin\left(\frac{\sqrt{36+E-6i\Lambda}}{2}a^2\right).$$ Now, by using the equality $$\label{AM}
\int_0^\infty e^{-\gamma x}\sin \sqrt{mx}dx=\frac{\sqrt{\pi
m}}{2\gamma^{3/2}}e^{-(m/4\gamma)},$$we can evaluate the integral over $E$ in (\[AK\]); a simple analytical expression for this integral is found if we choose the function $A(E)$ to be the quasi-Gaussian weight factor $A({\cal
E})=e^{-\gamma {\cal E}}$ ($\gamma$ is an arbitrary positive constant and ${\cal E}=36+E-6i\Lambda$), which results in $$\label{AN}
\Psi(a,T)=e^{-6(\Lambda+6i)T}\int_0^\infty e^{-\gamma {\cal
E}}e^{i{\cal E}T}\sin\left(\frac{\sqrt{{\cal
E}}}{2}a^2\right)d{\cal E}.$$Using the relation (\[AM\]) yields the following expression for the wave function $$\label{AO}
\Psi(a,T)={\cal
N}e^{-6(\Lambda+6i)T}\frac{a^2}{(\gamma-iT)^{3/2}}\exp
\left(-\frac{a^2}{8(\gamma-iT)}\right),$$where ${\cal
N}$ is a numerical factor. Now, having this expression for the wave function of the universe, we obtain the predictions for the behavior of the dynamical variables in the corresponding cosmological model. To do this, one may calculate the time dependence of the expectation value of a dynamical variable $q$ as $$\label{AP}
<q>(T)=\frac{<\Psi|q|\Psi>}{<\Psi|\Psi>}.$$Following this approach, we may write the expectation value for the scale factor as $$\label{AR}
<a>(T)=\frac{\int_0^\infty
\Psi^{*}(a,T)a\Psi(a,T)da}{\int_0^\infty
\Psi^{*}(a,T)\Psi(a,T)da},$$which yields $$\label{AS}
<a>(T)=\sqrt{\frac{\Lambda}{3}}\left(\gamma^2+T^2\right)^{1/2}.$$This relation may be interpreted as the quantum counterpart of the classical solutions (\[T\]). However, in contrast to the classical solutions, the expectation value (\[AS\]) of $a$ for the wave function (\[AO\]) never vanishes, showing that these states are nonsingular. Indeed, in (\[AS\]) $T$ varies from $-\infty$ to $+\infty$, and any $T_0$ is just a specific moment without any particular physical meaning such as a big-bang singularity. The above result may be written in terms of the cosmic time $\tau$. By the definition $d\tau=a^{-1}(T)dT$, we obtain the quantum version of the relations (\[T1\]) as $$\label{AT}
<a>(\tau)=\frac{1}{2}\left(
e^{\sqrt{\frac{\Lambda}{3}}\tau}+\gamma^2
e^{-\sqrt{\frac{\Lambda}{3}}\tau}\right).$$ In figure \[fig3\], we have plotted the classical scale factors (\[T\]) and (\[T1\]) and their quantum counterparts (\[AS\]) and (\[AT\]). As is clear from this figure, for a perfect fluid with $\omega=-1/3$ the corresponding classical cosmology admits two separate solutions which are disconnected from each other by a classically forbidden region. One of these solutions represents a contracting universe ending in a singularity, while the other describes an expanding universe which begins its evolution with a big-bang singularity. On the other hand, the evolution of the scale factor based on quantum-mechanical considerations shows a bouncing behavior in which the universe bounces from a contraction epoch to a reexpansion era. Indeed, the classically forbidden region is where the quantum bounce occurs. We see that at late times of cosmic evolution, when the quantum effects are negligible, these two behaviors coincide with each other. This means that the quantum structure we have constructed correlates well with its classical counterpart.
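The $T$-dependence of (\[AS\]) can be checked directly from the wave function (\[AO\]): the $a$-dependent part of $|\Psi|^2$ is $a^4\exp\left[-\gamma a^2/4(\gamma^2+T^2)\right]$, and the resulting $\langle a\rangle$ comes out proportional to $(\gamma^2+T^2)^{1/2}$ (the overall prefactor depends on the normalization, which we do not track here). A SymPy sketch:

```python
import sympy as sp

a, T, gamma = sp.symbols('a T gamma', positive=True)  # T > 0 suffices: <a> is even in T

# a-dependent part of |Psi|^2 from (AO):
# |a^2 exp(-a^2/(8(gamma - i T)))|^2 = a^4 exp(-gamma a^2 / (4 (gamma^2 + T^2)))
rho = a**4 * sp.exp(-gamma * a**2 / (4 * (gamma**2 + T**2)))

# expectation value (AR): <a> = int a rho da / int rho da
mean_a = sp.integrate(a * rho, (a, 0, sp.oo)) / sp.integrate(rho, (a, 0, sp.oo))

# <a>(T) is proportional to sqrt(gamma^2 + T^2), as in (AS)
ratio = sp.simplify(mean_a / sp.sqrt(gamma**2 + T**2))
assert sp.simplify(sp.diff(ratio, T)) == 0
```

The $T$-independent ratio confirms the bouncing profile $\langle a\rangle\propto(\gamma^2+T^2)^{1/2}$, with minimum $\propto\gamma$ at $T=0$, so the expectation value indeed never vanishes.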
Bohmian trajectories
====================
In the previous sections, we saw how the classical singular behavior of the universe was replaced with a bouncing one in the quantum picture. Now, a natural question arises: why does the bounce occur? Clearly, it is due to the quantum mechanical effects, which show themselves when the size of the universe tends to very small values. However, we would like to know whether the massive correction to the underlying gravity theory contributes to this phenomenon. To deal with this question, let us return to the wave function (\[AO\]) and write it in the polar form $\Psi(a,T)=\Omega(a,T)e^{iS(a,T)}$, where $\Omega(a,T)$ and $S(a,T)$ are real functions, which simple algebra gives as $$\label{AU}
\Omega(a,T)=e^{-6\Lambda
T}\frac{a^2}{(\gamma^2+T^2)^{3/4}}\exp\left[-\frac{\gamma
a^2}{8(\gamma^2+T^2)}\right],$$ $$\label{AV}
S(a,T)=-36T+\frac{3}{2}\arctan
\frac{T}{\gamma}-\frac{Ta^2}{8(\gamma^2+T^2)}.$$According to the Bohm-de Broglie interpretation of quantum mechanics [@Bohm] and also its usage in quantum cosmology [@Pin], upon using this form of the wave function in the corresponding wave equation, we arrive at the modified Hamilton-Jacobi equation as $$\label{AX}
{\cal H}\left(q_i,P_i=\frac{\partial S}{\partial q_i}\right)+{\cal
Q}=0,$$where $P_i$ are the momenta conjugate to the dynamical variables $q_i$ and ${\cal Q}$ is the quantum potential. Starting from the wave equation (\[AE\]), with the same approximations as in the previous section, the above-mentioned procedure gives the quantum potential as $$\label{AY}
{\cal Q}=\frac{1}{\Omega}\frac{\partial^2 \Omega}{\partial
a^2}-\frac{1}{a\Omega}\frac{\partial \Omega}{\partial
a}.$$On the other hand, the Bohmian equations of motion can be obtained from $P_a=\frac{\partial S}{\partial a}$, which, by means of the relation (\[H\]), reads $$\label{AZ}
-6a\dot{a}+3\Lambda
a^2=-\frac{T}{4(\gamma^2+T^2)}.$$The solution to this equation denotes the Bohmian representation of the scale factor; that is $$\label{BA}
a(T)=\sqrt{ce^{\Lambda T}+\frac{1}{24}e^{\Lambda T-i\gamma
\Lambda}\left[e^{2i\gamma \Lambda}\mbox{ Ei}(1;-\Lambda T-i\gamma
\Lambda)+\mbox{Ei}(1;-\Lambda T+i\gamma
\Lambda)\right]},$$where $c$ is an integration constant and $\mbox{Ei}(b;z)$ is the exponential integral function defined by $$\label{BC}
\mbox{Ei}(b;z)=\int_1^\infty e^{-kz}k^{-b}dk.$$The bouncing behavior of the scale factor near the classical singularities is again its main property, as shown in figure \[fig4\]. To obtain an expression for the quantum potential in terms of the scale factor, we note that all of our above calculations are in the vicinity of $T\sim 0$, where the scale factor is small. In this regime, a numerical analysis shows that the Bohmian scale factor (\[BA\]) behaves as $a(T)\sim
(\gamma^2+T^2)^{1/2}$, in agreement with the expectation value (\[AS\]). Thus, substituting in (\[AU\]), we get the quantum potential from (\[AY\]) as $$\label{BD}
{\cal Q}(a)=\frac{3}{4}\left[\gamma^2 \left(\frac{1}{a^4 -
\gamma^2 a^2}+\frac{8\Lambda}{(a^2 - \gamma^2)^{3/2}}\right)+
\frac{-1+48 \Lambda ^2 a^2}{a^2-\gamma^2}\right].$$
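The function $\mbox{Ei}(b;z)$ defined in (\[BC\]) is the generalized exponential integral $E_b(z)$, available in mpmath as `expint`; a quick numerical cross-check against the defining integral (restricted here to real $z>0$, where the integral converges):

```python
import mpmath as mp

def Ei_def(b, z):
    # defining integral (BC): Ei(b; z) = int_1^inf exp(-k z) k^(-b) dk
    return mp.quad(lambda k: mp.exp(-k * z) * k**(-b), [1, mp.inf])

# mpmath's expint(b, z) implements the same generalized exponential integral
for b, z in [(1, mp.mpf(2)), (2, mp.mpf('1.5')), (mp.mpf('0.5'), mp.mpf(3))]:
    assert abs(Ei_def(b, z) - mp.expint(b, z)) < mp.mpf('1e-10')
```

This identification makes the Bohmian scale factor (\[BA\]) straightforward to evaluate numerically when studying the bounce.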
In figure \[fig4\] we have also plotted the qualitative behavior of the quantum potential versus the scale factor. As this figure shows, the potential goes to zero for large values of the scale factor. This behavior is expected, since in this regime the quantum effects can be neglected and the universe evolves classically. On the other hand, for small values of the scale factor the potential takes on a large magnitude and quantum-mechanical considerations enter the scenario. This is where the quantum potential can produce a huge repulsive force which may be interpreted as being responsible for the avoidance of the singularity. In figure \[fig4\] the horizontal line represents a constant energy level whose intersections with the potential curves give the turning points at which the bounce occurs. The solid curve in this figure is plotted for the case $\Lambda
\neq 0$, i.e., for the massive theory, while the dashed curve is for $\Lambda=0$, i.e., for when the massive corrections are absent. It is seen that, although the mass term $\Lambda$ is not the only reason for the bouncing behavior in the vicinity of the classical singularity, it shifts the bouncing point toward smaller values of the scale factor. This means that if we consider the bouncing point as the minimum size of the universe (as suggested by quantum cosmology), then the massive version of the underlying gravity theory predicts a smaller value for this minimal size in comparison with the usual Einstein-Hilbert model. These facts, together with other notable possibilities such as quantum tunneling through the potential barrier between different classically allowed regions (as can be seen from figure \[fig4\]), support the idea that the massive corrections to the classical cosmology are signals from quantum gravity.
Conclusions
===========
In this paper we have applied the recently proposed nonlinear massive theory of gravity to an open FRW cosmological setting. Although the absence of homogeneous and isotropic solutions is one of the main challenges related to this kind of gravitational theory, we moved along the lines of [@Emir; @Emir1], in which the existence of open FRW cosmologies is investigated. By using the constraint corresponding to the Stückelberg scalars, we reduced the number of degrees of freedom, from which the total Hamiltonian of the model is deduced. We then presented in detail the classical cosmological solutions, either for the empty universe or in the case where the universe is filled with a perfect fluid (in its Schutz representation) with equation of state parameter $\omega=-1/3,-1$. We saw that in both cases the solutions consist of a contracting universe which ends its evolution at a singular point and an expanding universe which begins its dynamics with a big-bang singularity. These two branches of solutions are disconnected from each other by a classically forbidden region. A common feature of the vacuum and matter classical solutions is that the mass term plays a role which resembles that of the cosmological constant in the usual de Sitter universe. In this sense we may relate the massive corrections of GR to the problem of dark energy.
In another part of the paper, we dealt with the quantization of the model described above via the method of canonical quantization. For an empty universe, we have shown that by applying the WKB approximation to the Wheeler-DeWitt equation, one can recover the late time behavior of the classical solutions. For the early universe, we obtained oscillatory quantum states free of classical singularities, by which the two branches of classical solutions may communicate with each other. In the presence of matter, we focused our attention on the approximate analytical solutions to the Wheeler-DeWitt equation in the domain of small scale factor, i.e. in the region where quantum cosmology is expected to dominate. Using Schutz’s representation for the perfect fluid, under a particular gauge choice, we were led to the identification of a time parameter which allowed us to study the time evolution of the resulting wave function. Investigation of the expectation value of the scale factor shows a bouncing behavior near the classical singularity. In addition to singularity avoidance, the appearance of a bounce in the quantum model is also interesting because it predicts a minimal size for the corresponding universe. The idea of the existence of a minimal length in nature is supported by almost all candidates for quantum gravity. Finally, we repeated the quantum calculations by means of the Bohmian approach to quantum mechanics. The analysis of the quantum potential shows the importance of the mass term in the action of the model. Indeed, we have shown that in the presence of the massive graviton, the quantum potential changes its behavior from an infinite barrier to a finite one, and hence the minimal size of the universe, from which the bounce occurs, is shifted to smaller values.
The massive theory of quantum cosmology also exhibits other possibilities, for example tunneling between different classically allowed regions during the early universe epoch of cosmic evolution.
[99]{} M. Fierz and W. Pauli, [*Proc. R. Soc.*]{} A [**173**]{} (1939) 211
D.G. Boulware and S. Deser, [*Phys. Rev.*]{} D [**6**]{} (1972) 3368
C. de Rham and G. Gabadadze, [*Phys. Rev.*]{} D [**82**]{} (2010) 4 (arXiv: 1007.0443 \[hep-th\])
K. Koyama, G. Niz and G. Tasinato, [*Phys. Rev. Lett.*]{} [**107**]{} (2011) 131101 (arXiv: 1103.4708 \[hep-th\])\
K. Koyama, G. Niz and G. Tasinato, [*Phys. Rev.*]{} D [**84**]{} (2011) 064033 (arXiv: 1104.2143 \[hep-th\])\
T.M. Nieuwenhuizen, [*Phys. Rev.*]{} D [**84**]{} (2011) 024038 (arXiv: 1103.5912 \[gr-qc\])\
S.F. Hassan and R.A. Rosen, [*Phys. Rev. Lett.*]{} [**108**]{} (2012) 041101 (arXiv: 1106.3344 \[hep-th\])\
C. de Rham, G. Gabadadze and A.J. Tolley, [*Phys. Lett.*]{} B [**711**]{} (2012) 190 (arXiv: 1107.3820 \[hep-th\])\
C. de Rham, G. Gabadadze and A.J. Tolley, [*Helicity Decomposition of Ghost-free Massive Gravity*]{}, (arXiv: 1108.4521 \[hep-th\])\
L. Berezhiani, G. Chkareuli, C. de Rham, G. Gabadadze, A.J. Tolley, [*On Black Holes in Massive Gravity*]{} (arXiv: 1111.3613 \[hep-th\])
G. D’Amico, C. de Rham, S. Dubovsky, G. Gabadadze, D. Pirtskhalava and A.J. Tolley, [*Phys. Rev.*]{} D [**84**]{} (2011) 124046 (arXiv: 1108.5231 \[hep-th\])
A.E. Gümrükçüoğlu, C. Lin and S. Mukohyama, [*J. Cosmol. Astropart. Phys.*]{} [**11**]{} (2011) 030 (arXiv: 1109.3845 \[hep-th\])
A.E. Gümrükçüoğlu, C. Lin and S. Mukohyama, [*J. Cosmol. Astropart. Phys.*]{} [**03**]{} (2012) 006 (arXiv: 1111.4107 \[hep-th\])
M.S. Volkov, [*J. High Energy Phys.*]{} [**01**]{} (2012) 035 (arXiv: 1110.6153 \[hep-th\]),\
D. Comelli, M. Crisostomi, F. Nesti and L. Pilo, [*J. High Energy Phys.*]{} [**03**]{} (2012) 065 (arXiv: 1111.1983 \[hep-th\])\
M. von Strauss, A. Schmidt-May, J. Enander, E. Mortsell and S.F. Hassan, *Cosmological Solutions in Bimetric Gravity and Their Observational Tests*, (arXiv: 1111.1655 \[gr-qc\])\
N. Khosravi, N. Rahmanpour, H.R. Sepangi and S. Shahidi, [*Phys. Rev.*]{} D [**85**]{} (2012) 024049 (arXiv: 1111.5346 \[hep-th\])\
N. Khosravi, H.R. Sepangi and S. Shahidi, [*On massive cosmological scalar perturbations*]{}, (arXiv: 1202.2767 \[gr-qc\])
S.F. Hassan and R.A. Rosen, [*J. High Energy Phys.*]{} [**01**]{} (2012) 126 (arXiv: 1109.3515 \[hep-th\])\
S.F. Hassan and R.A. Rosen, *Confirmation of the Secondary Constraint and Absence of Ghost in Massive Gravity and Bimetric Gravity*, (arXiv: 1111.2070 \[hep-th\])\
S.F. Hassan and R.A. Rosen, *On Non-Linear Actions for Massive Gravity*, (arXiv: 1103.6055 \[hep-th\])
C. de Rham, G. Gabadadze and A.J. Tolley, [*Phys. Rev. Lett.*]{} [**106**]{} (2011) 231101 (arXiv: 1011.1232 \[hep-th\])
N. Arkani-Hamed, H. Georgi and M.D. Schwartz, [*Ann. Phys.*]{} [**305**]{} (2003) 96 (arXiv: hep-th/0210184)\
S.L. Dubovsky, [*J. High Energy Phys.*]{} [**10**]{} (2004) 076 (arXiv: hep-th/0409124)
B.F. Schutz, [*Phys. Rev.*]{} D [**2**]{} (1970) 2762\
B.F. Schutz, [*Phys. Rev.*]{} D [**4**]{} (1971) 3559\
V.G. Lapchinskii, V.A. Rubakov, [*Theor. Math. Phys.*]{} [**33**]{} (1977) 1076
A.B. Batista, J.C. Fabris, S.V.B. Goncalves and J. Tossa, [*Phys. Lett.*]{} A [**283**]{} (2001) 62 (arXiv: gr-qc/0011102)\
F.G. Alvarenga, J.C. Fabris, N.A. Lemos and G.A. Monerat, [*Gen. Rel. Grav.*]{} [**34**]{} (2002) 651 (arXiv: gr-qc/0106051)\
A.B. Batista, J.C. Fabris, S.V.B. Goncalves and J. Tossa, [*Phys. Rev.*]{} D [**65**]{} (2002) 063519 (arXiv: gr-qc/0108053)\
B. Vakili, [*Phys. Lett.*]{} B [**688**]{} (2010) 129 (arXiv: 1004.0306 \[gr-qc\])\
B. Vakili, [*Class. Quantum Grav.*]{} [**27**]{} (2010) 025008 (arXiv: 0908.0998 \[gr-qc\])
[Ø]{}. Gr[ø]{}n and S. Hervik, [*Einstein’s General Theory of Relativity*]{} (Springer, New York, 2007)
A. Anderson, [*Ann. Phys.*]{} [**232**]{} (1994) 292 (arXiv: hep-th/9305054)
S.W. Hawking and D.N. Page, [*Nucl. Phys.*]{} B [**264**]{} (1986) 185
R. Steigl and F. Hinterleitner, [*Class. Quantum Grav.*]{} [**23**]{} (2006) 3879
A. Vilenkin, [*Phys. Rev.*]{} D [**37**]{} (1988) 888\
A. Vilenkin, [*Phys. Rev.*]{} D [**33**]{} (1986) 3560
N.A. Lemos, [*J. Math. Phys.*]{} [**37**]{} (1996) 1449 (arXiv: gr-qc/9511082)
B.S. DeWitt, [*Phys. Rev.*]{} [**160**]{} (1967) 1113
C. Kiefer, [*Quantum Gravity*]{} (Oxford University Press, New York, 2007).
D.H. Coule and J. Martin, [*Phys. Rev.*]{} D [**61**]{} (2000) 063501 (arXiv: gr-qc/9905056)\
A. Linde, [*J. Cosmol. Astropart. Phys.*]{} [**10**]{} (2004) 004 (arXiv: hep-th/0408164)
D. Bohm, [*Phys. Rev.*]{} [**85**]{} (1952) 166\
D. Bohm, [*Phys. Rev.*]{} [**85**]{} (1952) 180\
P.R. Holland, [*The Quantum Theory of Motion: An Account of the de Broglie-Bohm Interpretation of Quantum Mechanics*]{}, (Cambridge University Press, Cambridge, 1993)
F.T. Falciano and N. Pinto-Neto, [*Phys. Rev.*]{} D [**79**]{} (2009) 023507 (arXiv: 0810.3542 \[gr-qc\])\
A. Shojai and F. Shojai, [*Europhys. Lett.*]{} [**71**]{} (2005) 886 (arXiv: gr-qc/0409020)
[^1]: [email protected]
[^2]: [email protected]
[^3]: With the canonical transformation (\[U1\]) at hand, one may use the transformed Hamiltonian ${\cal H}=-\frac{1}{12a}\Pi_a^2+...,$ to quantize the system, where again $...$ denotes the terms outside the round bracket in expressions (\[I\]) or (\[M\]). Using this Hamiltonian in the hermitian form $a^{-1}\Pi_a^2=\Pi_a a^{-1}\Pi_a$ and representing $\Pi_a$ by $-i\partial_a$, this is equivalent to our treatment above, in which the last term in (\[U3\]) is absent. Therefore, one may have some doubts about the validity of the following main results due to the effects of the chosen factor ordering. To address this concern, we have performed calculations based on the above-mentioned transformed Hamiltonian and have verified that the general patterns of the resulting wave functions follow the behavior shown in the following sections.
[^4]: Such a boundary condition is also suggested by DeWitt in the form $\Psi[{\cal G}^{(3)}]=0$ [@Dew], where ${\cal G}^{(3)}$ denotes all three-geometries which may play the role of barriers, for instance singular three-geometries. As argued in [@Dew], with this boundary condition some kinds of classical singularities can be removed and a unique solution to the Wheeler-DeWitt equation may be obtained. Although, in view of more fundamental proposals for the boundary condition in quantum cosmology (for example, Vilenkin’s tunneling or Hawking’s no-boundary proposals), it is not clear that the above-mentioned boundary condition is correct, there is some evidence from quantum gravity models in which suitable wave packets obey this kind of boundary condition; see [@Kief].
**Simple Simulational Model for**
**Stocks Markets**
JUAN R. SANCHEZ [^1]
*Departamento de Física, Facultad de Ingeniería*
*Universidad Nacional de Mar del Plata*
*Av. J.B. Justo 4302, 7600 Mar del Plata, Argentina*
Recently, several computational models trying to represent the behavior of actual stock markets have been presented. [@1; @2; @3] From these results it seems to be a well-established fact that, in order to obtain a good representation of the time evolution of actual markets, two types of trader agents must be included. On one side there are the so-called [*noisy*]{} traders, which are supposed to follow the [*local*]{} (in space and time) trend of the market. The noisy traders, also called followers, place their buy or sell orders on a given stock following the behavior of other (related) stocks. This kind of attitude seems to be an almost evident practice for someone trying to operate within a market. But, on the other hand, there are also [*fundamentalist*]{} traders, which are considered to be responsible for the market turnarounds. This kind of trader is supposed to know something more about the market and is therefore able to develop a more sophisticated operating strategy. They can take actions to buy or sell according to other indicators: the market [*fundamentals*]{}. These indicators could depend on other types of information, which usually come from [*outside*]{} the system. In previous models the two types of agents are represented in a variety of ways. [@2; @3] For instance, in reference 2 the influence of the fundamentalists is modeled by including a term that takes into account the [*tendency*]{} in the price changes. This effect stabilizes the prices. On the other hand, in reference 3 the two types of traders are included as separate entities. It is therefore realistic to think that both types of behavior act at the same time and influence the way in which a specific stock price changes. From the point of view of simulations, another common characteristic of many of the models just presented is the use of Ising-like variables to represent the buy or sell attitude of market agents.
Taking into account all the above-mentioned characteristics, a somewhat different modeling approach is presented here. In principle, the model is based on previously reported models of opinion evolution in a closed community. [@1] However, instead of using Ising spins, here the community is modeled by a vector of stock prices $\mathbf{x}$ having $N$ integer-valued “Potts” components $x_i$, each one representing the price of a market asset (in arbitrary units). Associated with each $x_i$ is a [*direction of movement*]{} value $v_i$. The components $v_i$ form a vector $\mathbf{v}$. These components are of Ising type, i.e., they can take the two values $v_i=+1$ and $v_i=-1$. The vectors $\mathbf{x}$ and $\mathbf{v}$ evolve in time according to the following dynamical rules. A randomly chosen component $x_i$ is updated according to the equation $$\label{eq1}
x_i(t+\Delta t)=x_i(t)+v_k(t) \:;$$ while for the corresponding $v_i$ components the following equation is valid $$\label{eq2}
v_i(t+\Delta t) = \left\{
\begin{array}{rl}
v_k(t) & {\mathrm {if}} \:\: |x_i(t)| < X_{th} \\
& \\
-v_i(t) & {\mathrm {if}} \:\: |x_i(t)| > X_{th} \: .
\end{array} \right .$$ The value of $v_k$ is obtained by choosing at random among the direction values of the two [*neighbors*]{}, i.e., $v_k = v_{i-1}$ or $v_k = v_{i+1}$ with equal probability. In principle, the algorithm described by equations \[eq1\] and \[eq2\] takes into account the influence of the noisy traders, who follow the trend of related stocks in order to buy ($v_i = +1$) or to sell ($v_i = -1$) a specific asset. However, it is not reasonable to think that the prices $\mathbf{x}$ could take arbitrary positive or negative values: no actual market follows a given trend forever. Thus, in order to take into account the influence of the fundamentalist traders, a threshold $X_{th}$ is established for the [*absolute*]{} value of each $x_i$. As can be seen in equation \[eq2\], if at any time $|x_i| > X_{th}$ the corresponding direction of movement $v_i$ is [*reversed*]{}, $v_i \rightarrow -v_i$. This reversal procedure simulates the influence of the fundamentalist traders who, when the absolute value of a stock reaches the value $X_{th}$, consider either that the price is low enough so it is time to buy, or that it is high enough and then it is time to sell. In equations \[eq1\] and \[eq2\], $\Delta t$ is proportional to $N^{-1}$, so the simulation time is incremented by one when all the stocks have had, on average, the chance to evolve once. In order to analyze the behavior of the model, two representative indexes of the market, the mean value time series $$\label{eq3}
x_M(t) = \frac{1}{N} \: \sum_{i=1}^{N} x_i(t)$$ and the corresponding [*returns*]{} or change of price, defined here as $$\label{eq4}
r(t) = x_M(t) - x_M(t-1)$$ are investigated by Monte Carlo simulations. $N=1024$ and $X_{th}=30$ were used as typical parameters, and $T \cong 20000$ Monte Carlo steps (MCS) were used in order to obtain most of the reported results. To avoid initial correlations, the simulations are started with the components of the vector $\mathbf{x}$ distributed randomly between $-10$ and $10$ and with the vector $\mathbf{v}$ in a complete antiferromagnetic state.
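The dynamics of equations \[eq1\] and \[eq2\] can be sketched in a few lines of Python. This is a minimal sketch, not the authors' original code; in particular, equation \[eq2\] does not specify whether the threshold test uses the price before or after the update of equation \[eq1\], so testing the pre-update value $|x_i(t)|$, as written in equation \[eq2\], is the reading adopted here (the boundary case $|x_i| = X_{th}$ is left unchanged, another assumption).

```python
import numpy as np

def simulate(N=1024, X_th=30, T=200, seed=0):
    """Monte Carlo sketch of the model of Eqs. (1)-(2).

    Returns the index series x_M(t) of Eq. (3) and the returns r(t)
    of Eq. (4). One time unit = N single-site updates (one MCS).
    The paper uses N=1024, X_th=30 and T of order 20000.
    """
    rng = np.random.default_rng(seed)
    x = rng.integers(-10, 11, size=N)            # random initial prices
    v = np.where(np.arange(N) % 2 == 0, 1, -1)   # antiferromagnetic start
    x_M = np.empty(T)
    for t in range(T):
        for _ in range(N):
            i = rng.integers(N)
            k = (i + rng.choice((-1, 1))) % N    # random neighbour (periodic)
            old = abs(x[i])                      # Eq. (2) tests |x_i(t)|
            x[i] += v[k]                         # Eq. (1): follow the trend
            if old < X_th:
                v[i] = v[k]                      # noisy trader: imitate
            elif old > X_th:
                v[i] = -v[i]                     # fundamentalist: reverse
        x_M[t] = x.mean()                        # Eq. (3)
    return x_M, np.diff(x_M)                     # Eq. (4)
```

The inner loop is written for clarity rather than speed; the full $T \cong 20000$, $N=1024$ runs of the paper would call for a compiled or vectorized implementation.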
A typical path of the simulated price series $x_M(t)$ is shown in Fig. 1. The sequence shows the well known noisy shape. The market turn-offs have been indicated in the figure. These turn-offs have a self-organized character, since they result from the time evolution of the model and their occurrence cannot be explicitly predicted from the dynamic equations. Previous analyses of actual market time series suggest that the probability distribution of daily returns is not of Gaussian type, but is [*fat-tailed*]{}. [@4] This characteristic can be noticed if the probability distribution function (PDF) of the returns is plotted on the scale of the cumulative Gaussian distribution function: normal distributions appear as straight lines in such a representation, while the fat tails of other types of distributions result in a departure from the straight line. For the model presented here, the PDF of the price returns is plotted in Fig. 2 and the tails are clearly visible. According to previous results, it is known that actual market time series cannot be modeled by series of independent and identically distributed realizations of random variables with a given distribution (fat-tailed or not). This is a consequence of the existence of non-stationarity in the process that generates the series. A direct method to detect this non-stationarity is the calculation of the autocorrelation function [@3; @5] $$\label{eq5}
{\mathbf{acf}}(r,t') =
\frac{\sum_{t=t'+1}^{T}(r_t - \overline{r})
(r_{t-t'} - \overline{r})}{\sum_{t=t'+1}^{T}(r_t - \overline{r})^2} \:\:,$$ of the [*absolute*]{} value of the returns $|r(t)|$. For Brownian motion, the ${\mathbf{acf}}(r,t')$ of $r(t)$, $r^2(t)$ and $|r(t)|$ fluctuates around zero for $t' > 1$. In Fig. 3 the ${\mathbf{acf}}(r,t')$ for the $r(t)$ time series generated by the model is plotted up to $t'=50$. The horizontal lines represent the $0.95$ confidence interval of a Brownian random walk. Clearly, the simulated series has a certain degree of non-stationarity, very similar to that reported for actual market time series.
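Equation \[eq5\] translates directly into code. As a sketch, the snippet below also computes the usual $\pm 1.96/\sqrt{T}$ band for an uncorrelated series; the paper does not state which band it plots, so this choice is an assumption.

```python
import numpy as np

def acf(series, max_lag=50):
    """Sample autocorrelation of Eq. (5), evaluated at lags 1..max_lag."""
    s = np.asarray(series, dtype=float)
    s = s - s.mean()
    denom = np.sum(s * s)
    return np.array([np.sum(s[lag:] * s[:-lag]) / denom
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
r = rng.normal(size=2000)          # stand-in for a returns series
band = 1.96 / np.sqrt(len(r))      # approximate 0.95 band, uncorrelated case
a = acf(np.abs(r))                 # acf of the absolute returns
```

For an uncorrelated series the values of `a` fluctuate inside the band; persistent correlations in $|r(t)|$, as in Fig. 3, show up as values staying well above it.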
Another parameter that can reflect the deviation of a time series from the Gaussian distribution is the excess kurtosis, defined as $$\label{eq6}
\kappa = \frac{\mu_4}{\sigma^4} - 3$$ where $\mu_4$ is the fourth central moment and $\sigma$ is the standard deviation of the series under study. $\kappa$ is zero for a normal distribution, but values ranging between $2$ and $50$ have been reported for the daily returns of actual stock market data. [@6; @7] For the model presented here, an average value of $\overline \kappa = 2.69$ was obtained when the kurtosis is calculated on the absolute values of the returns. The value $\overline \kappa$ was obtained by averaging over $100$ independent runs of $T=10000$ MCS.
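Equation \[eq6\] is a one-line computation. As a sanity check of ours (not from the paper), a Gaussian sample should give a value near zero, while a heavier-tailed sample, e.g. Laplace, should give a clearly positive value:

```python
import numpy as np

def excess_kurtosis(series):
    """Excess kurtosis of Eq. (6): mu_4 / sigma^4 - 3."""
    d = np.asarray(series, dtype=float)
    d = d - d.mean()
    return np.mean(d ** 4) / np.mean(d ** 2) ** 2 - 3.0

rng = np.random.default_rng(0)
k_gauss = excess_kurtosis(rng.normal(size=200_000))    # close to 0
k_heavy = excess_kurtosis(rng.laplace(size=200_000))   # close to 3
```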
Finally, the simulated series were analyzed using the Hurst R/S method. [@4] The R/S analysis begins by dividing the time series into segments of equal length and normalizing the data in each segment by subtracting the sample mean. Then, the rescaled range (range/standard deviation) is log-log plotted against the segment size. The Hurst exponent $H$ is obtained by linear regression of this plot. The value of the exponent reflects some characteristics of the series: for $H > 0.5$ the series is said to be persistent, if $H=0.5$ the series represents a normally distributed random walk, while for $H < 0.5$ the series is considered to have anti-persistent characteristics. The R/S analysis for the price and return series is presented in Fig. 4. The straight lines through the points indicate that the Hurst exponent is $H_x = 0.885 \pm 0.01$ for the $x_M(t)$ series and $H_r = 0.569 \pm 0.01$ for the $r(t)$ series. Since the values are greater than $0.5$, both series show a persistent character, which results from the existence of long-term correlations in both distributions. In particular, the value of $H_r$ is very close to the value of the Hurst exponent calculated on the returns of the USD/DEM exchange rate.
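The R/S procedure described above can be sketched as follows. This is a minimal version with dyadic segment sizes; the paper's exact windowing and regression range are not specified, so estimates obtained this way will differ in detail from the reported $H_x$ and $H_r$.

```python
import numpy as np

def hurst_rs(series, min_n=8):
    """Hurst exponent via a minimal R/S analysis: for each segment length n
    (doubling from min_n), average the rescaled range R/S over
    non-overlapping segments, then fit log(R/S) against log(n)."""
    s = np.asarray(series, dtype=float)
    N = len(s)
    sizes, rs = [], []
    n = min_n
    while n <= N // 2:
        vals = []
        for start in range(0, N - n + 1, n):
            seg = s[start:start + n]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviations
            R = dev.max() - dev.min()           # range
            S = seg.std()                       # standard deviation
            if S > 0:
                vals.append(R / S)
        sizes.append(n)
        rs.append(np.mean(vals))
        n *= 2
    H, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return H
```

Note that simple R/S estimates carry a well-known small-sample bias, so a white-noise series typically yields a value somewhat above $0.5$ with this sketch.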
Although several other analyses could be made, the results presented above show that the simulated time series obtained from the operation of the dynamic rules \[eq1\] and \[eq2\] can be considered a good approximation of the behavior of actual stock markets. Also, the discrete nature of the price variables follows closely the same characteristic of actual stock prices. Further refinements could be made to the model in order to reproduce more closely some particular characteristic of a given market.
This work was partially supported by a research grant from Universidad Nacional de Mar del Plata (Mar del Plata, Argentina).
[000]{}
H. Levy, M. Levy and S. Solomon, [*Microscopic Simulation of Financial Markets*]{}, Academic Press, San Diego (2000). G.W. Kim and H.M. Markowitz, J. Portfolio Management [**16**]{}, 45 (1989). P. Bak, M. Paczuski and M. Shubik, Physica A [**246**]{}, 430 (1997). T. Lux and M. Marchesi, Nature [**397**]{}, 498 (1999).
D. Sornette and K. Ide, cond-mat/0106054. K. Ide and D. Sornette, cond-mat/0106047.
K. Sznajd-Weron, J. Sznajd, cond-mat/0101001 (2000).
E.E. Peters, [*Fractal Market Analysis*]{} (Wiley, New York, 1994).
J. Beran, [*Statistics for Long-Memory Processes*]{} (Chapman & Hall, New York, 1997).
J. Campbell, A.H. Lo, C. McKinlay, [*The Econometrics of Financial Markets*]{} (Princeton Univ. Press, 1997).
A. Pagan, J. of Empirical Finance, [**3**]{}, 15 (1996).
[^1]: Email: [email protected]
---
abstract: 'We prove the existence of non-negative measure- and $H^{-1}$-valued vorticity solutions to the stochastic 2D Euler equations with transport vorticity noise, starting from any non-negative vortex sheet. This extends the result by Delort [@Del1991] to the stochastic case.'
author:
- 'Zdzisław Brzeźniak[^1], Mario Maurelli[^2]'
bibliography:
- 'my\_bib3.bib'
title: 'Existence for stochastic 2D Euler equations with positive $H^{-1}$ vorticity'
---
Introduction
============
In this paper we consider the two-dimensional (2D) stochastic Euler equations, in vorticity form, with transport noise, namely, on $[0,T]\times {\mathbb{T}}^2$, $$\begin{aligned}
\begin{aligned}\label{eq:stochEuler_intro}
&\partial_t \xi + u\cdot \nabla \xi + \sum_k \sigma_k\cdot \nabla \xi \circ \dot{W}^k =0,\\
&u = K*\xi.
\end{aligned}\end{aligned}$$ This equation represents the motion of an incompressible fluid in a periodic domain, perturbed by a noise of transport type. We show in our main result, Theorem \[thm:main\], the weak existence (weak in both the probabilistic and the analytic sense) of certain measure-valued solutions of this equation: more precisely, for any initial datum $\xi_0$ which is a non-negative measure and in $H^{-1}$, there exists a weak, measure- and $H^{-1}$-valued solution to \[eq:stochEuler\_intro\]. This result extends the existence result by Delort [@Del1991] to the stochastic case.
The deterministic (incompressible, $d$-dimensional) Euler equations $$\begin{aligned}
&\partial_t u +(u\cdot\nabla) u +\nabla p = 0,\\
&{\mathrm{div}}u =0,\end{aligned}$$ describe the evolution of the velocity $u(t,x)\in {\mathbb{R}}^d$ and the pressure $p(t,x)\in {\mathbb{R}}$ of an incompressible fluid; for this discussion, for simplicity, we assume $x$ in ${\mathbb{T}}^d$, the $d$-dimensional torus. The local (in time) well-posedness for smooth solutions has been established since [@EbiMar1970]; a general description of the Euler equations can be found, for example, in [@MarPul1994], [@Lio1996], [@MajBer2002]. In the two-dimensional case, the Euler equations are formally equivalent to a non-linear transport equation, namely the vorticity equation $$\begin{aligned}
\begin{aligned}\label{eq:Eulervort_intro}
&\partial_t \xi + u\cdot \nabla \xi =0,\\
&u = K*\xi.
\end{aligned}\end{aligned}$$ Here $\xi(t,x):=\text{curl}\, u(t,x) = \partial_{x_1}u^2(t,x) -\partial_{x_2}u^1(t,x)$ is the scalar vorticity, which expresses how fast the fluid rotates around a point $x$, and the kernel $K:{\mathbb{T}}^2\rightarrow{\mathbb{R}}^2$ is given by $K=\nabla^\perp G$, where $G:{\mathbb{T}}^2\rightarrow {\mathbb{R}}$ is the Green function of the Laplacian on ${\mathbb{T}}^2$, restricted to the $L^2$ functions with zero mean. Global well-posedness among essentially bounded solutions has been proved in [@Wol1933] and [@Yud1963]; see also [@Yud1995] for an extension to almost bounded functions and [@MarPul1994 Section 2.3] for a different proof using flows. Beyond bounded solutions, a global existence result has been given in [@DiPMaj1987] for vorticity in $L^p({\mathbb{T}}^2)$, with $1<p<\infty$. The case that we are interested in here is when the vorticity is measure-valued. One of the main results in this context is a global existence result by Delort [@Del1991], where the vorticity has a distinguished sign and is in $H^{-1}({\mathbb{T}}^2)$: precisely, for any non-negative measure $H^{-1}({\mathbb{T}}^2)$ initial datum $\xi_0$, there exists a non-negative measure- and $H^{-1}({\mathbb{T}}^2)$-valued solution. This includes the case of an initial vortex sheet, that is, when $\xi_0$ is concentrated on a line. Later papers by Schochet [@Sch1995] and Poupaud [@Pou2002] gave a somewhat clearer argument, which we will mostly use here. A more general existence result, where the vorticity is the sum of a non-negative $H^{-1}({\mathbb{T}}^2)$ measure and an $L^p({\mathbb{T}}^2)$ function, for $p\ge 1$, has been given in [@VecWu1993]. The study of such irregular vorticity solutions has several motivations: it represents physically relevant situations, where the vorticity is concentrated on sets with zero Lebesgue measure (see e.g. 
[@MarPul1994 Chapter 6]); it is also related to possible anomalous energy dissipation (to our knowledge, energy conservation or dissipation remains an open problem for Delort solutions, see [@CLNS2016] for energy conservation for unbounded vorticity) and to boundary layers (see e.g. [@Cho1978]), though we will not explore these aspects here.
Before passing to the stochastic case, we give a short idea of the proof of Delort’s result. The strategy goes through the usual compactness and convergence argument: take a sequence of approximants, show a priori uniform bounds, derive compactness, show the stability of the equation in the limit. The main problem comes from the stability, in particular from the stability of the non-linear term. The main point of Delort (and later Schochet) is that the non-linear term is continuous among non-negative, or more generally bounded-from-below, $H^{-1}({\mathbb{T}}^2)$ measures, with respect to the weak topologies (the precise topologies will be given later). Hence it suffices to show preservation of the non-negativity, or at least a bound from below, and uniform a priori bounds on the total variation norm and the $H^{-1}({\mathbb{T}}^2)$ norm of a solution $\xi$. The transport and divergence-free nature of the vorticity equation easily implies the non-negativity and the a priori bound on the total variation norm. The $H^{-1}({\mathbb{T}}^2)$ norm of $\xi$ is equivalent to the $L^2({\mathbb{T}}^2)$ norm of $u=K*\xi$, that is, the energy, so the a priori $H^{-1}({\mathbb{T}}^2)$ bound is equivalent to an a priori bound on the energy of $u$, which is also classical and available. This yields the required compactness and stability.
Concerning the stochastic case, Euler equations with noise have been investigated in a large number of papers; here we only review a small selection of them. Many works take additive or multiplicative noise, with the noise depending on $u$ but not on its gradient. The first works are [@BesFla1999], which shows the global well-posedness of strong solutions, with additive noise, for bounded vorticity in a 2D domain (though adaptedness is not proven there), and [@Bes1999] and [@BrzPes2001], which show the global existence of martingale solutions, with multiplicative noise, for $L^2({\mathbb{T}}^2)$ vorticity in 2D. We also mention [@MikVal2000] for a geometric approach in the case of finite-dimensional additive noise and [@CapCut1999] for an approach via nonstandard analysis. The works [@Kim2009], [@GlaVic2014] prove local strong well-posedness, with additive and, in the second work, multiplicative noise, in general dimension for smooth solutions. The second paper [@GlaVic2014] also shows global well-posedness among smooth solutions in 2D for additive and linear multiplicative noise.
The transport noise was first considered in [@Yok2014], [@StaYok2014], [@CruTor2015], where the transport term acts on the velocity and not on the vorticity; these works show the global existence of martingale solutions for $L^2({\mathbb{T}}^2)$ vorticity in 2D. The model \[eq:stochEuler\_intro\] is considered in [@BrzFlaMau2016], which proves global strong well-posedness for bounded vorticity on the two-dimensional torus. In the papers [@FlaLuo2019] and [@FlaLuo2019_2], the authors also consider the same model, with very irregular vorticity (not in $H^{-1}({\mathbb{T}}^2)$), and they show an existence result putting a measure on initial conditions which is invariant for the dynamics or almost invariant (that is, absolutely continuous with respect to the invariant measure). The analogue of this model in 3D, where a stochastic advection term also appears, is considered in [@CriFlaHol2019], which shows local strong well-posedness and a Beale-Kato-Majda criterion for non-explosion. We also point out the work [@FlaGubPri2011], which considers \[eq:stochEuler\_intro\] with initial mass concentrated in a finite number of point vortices and shows a regularization by noise phenomenon: precisely that, with full probability, no collapse of vortices happens with a certain noise, while it does happen without noise.
There are several reasons to use transport noise. The addition of a transport noise preserves the transport structure of the vorticity equation: at a formal level, a solution $\xi(t,x,\omega)$ follows the characteristics of the associated SDE, that is, $\xi(t,X(t,x,\omega),\omega)=\xi(0,x)$, where $X(t,x,\omega)$ is the stochastic flow solution to $$\begin{aligned}
dX(t) = u(t,X(t))dt +\sum_k\sigma_k(X(t)) \circ dW^k(t).\end{aligned}$$ This fact follows from the Itô formula, and here the Stratonovich noise is essential, because the Itô formula for this noise works as the chain rule, without second-order corrections. Furthermore, there is a derivation of models with a stochastic transport term in [@Hol2015], [@DriHol2018]; there are applications using the transport noise to model uncertainties, e.g. [@CCHPS2018]; also the linear stochastic transport equation has been used as a toy model for turbulence, see e.g. [@Gaw2008], though we will not go in these directions here. The main feature of interest here is the fact that the transport noise preserves, at least formally, the $L^\infty({\mathbb{T}}^2)$ norm of the solution and also, when the coefficients $\sigma_k$ are divergence-free as here, the $L^p({\mathbb{T}}^2)$ norm for any $1\le p\le\infty$ (this can be seen for example via a priori bounds on the $L^p$ norm). As a consequence, the transport noise preserves the total mass of the vorticity and its positivity (if $\xi_0\ge0$, then also $\xi_t\ge0$), two properties that are crucial to apply the argument by Schochet. This is not the case for a generic additive or multiplicative noise as considered in the above-mentioned papers: indeed an additive noise would not guarantee positivity of the solution. We should say, though, that some multiplicative noises may still be used for our purposes: for example one can take a noise linear and multiplicative in $\xi$, $+\sigma_k\xi \circ \dot{W}^k$; this gives preservation of the positivity and may also give some uniform bound on the mass (for this example, the Itô noise may work as well). 
Moreover, what we really need for Delort’s result is some uniform $\mathcal{M}({\mathbb{T}}^2)$ bound on the vorticity and some uniform $L^\infty$ bound, or even an $L^p$ bound in line with the more general result in [@VecWu1993], on the negative part of the vorticity; such uniform bounds may also hold for an additive noise, though the arguments of the proof would then be more complicated.
We remark here that the stochastic vorticity equation preserves in particular the enstrophy, that is the $L^2({\mathbb{T}}^2)$ norm of the solution $\xi$, but not the energy, that is the $L^2({\mathbb{T}}^2)$ norm of the velocity $u=K*\xi$. Indeed, the equation for the velocity is formally $$\begin{aligned}
\begin{aligned}\label{eq:stochEulervel_intro}
&\partial_t u +(u\cdot\nabla) u +\sum_k(\sigma_k\cdot\nabla +D\sigma_k)u \circ \dot{W}^k = -\nabla p -\gamma,\\
&\text{div}u=0,
\end{aligned}\end{aligned}$$ where the velocity $u(t,x,\omega)$, the pressure $p(t,x,\omega)$ and the constant-in-space $\gamma(t,\omega)$ are unknown; $\gamma$ is needed here to keep $u$ with zero mean. In this equation for $u$, the zero-order term $(D\sigma_k) u \circ \dot{W}^k$ appears and causes the energy of the velocity not to be preserved anymore.
Let us also mention that the transport noise has been used to show regularization by noise phenomena for the transport equations, though this is mostly limited to the linear case (see [@FlaGubPri2010], [@FedFla2013], [@BFGM2014], [@FlaMauNek2014] as examples among many others), while the extension of [@FlaGubPri2011] to the case of a more general, measure-valued vorticity meets relevant difficulties; in other nonlinear hyperbolic cases, only a few results are available (e.g. [@GesMau2018], [@DelFlaVin2014]).
Before passing to our case, we point out two peculiarities of the strategy that one uses in the stochastic case to get martingale solutions, see for example [@BrzPes2001]. Analogously to the deterministic case, the strategy goes through the usual tightness and convergence argument: take a sequence of approximants, show a priori uniform bounds for suitable moments, derive tightness, show the stability of the equation in the limit. We point out two facts, which we will use in our proof as well. Firstly, to derive tightness, say, in $C([0,T];X)$, where $X$ is a functional space on ${\mathbb{T}}^2$, one needs a uniform bound on the marginals and also a uniform bound in $C^\alpha([0,T];Y)$, where $Y$ is another functional space, typically a negative-order Sobolev space, containing $X$. The latter is the stochastic counterpart of the Aubin-Lions lemma, used since at least [@FlaGat1995]. Secondly, for the stability, one option is to exploit the Skorokhod representation theorem to pass from convergence in law to a.s. convergence.
Now we come back to our contribution. Our main result, Theorem \[thm:main\], extends the result by Delort [@Del1991] to the stochastic setting with transport noise. To our knowledge, no such extension has been studied before in the stochastic setting. Our proof combines the argument by Schochet [@Sch1995] to deal with the nonlinear term and the arguments of e.g. [@BrzPes2001] to deal with the stochasticity, as explained before; more details are given in Section \[sec:main\_strategy\].
Two comments are in order. Firstly, as mentioned, the transport noise preserves mass and positivity and thus allows the Schochet argument to be extended easily in these respects. On the contrary, an additive noise would fail at this point, while some multiplicative noises may still work and the additive noise might work with a more complicated argument.
Secondly, the Schochet argument does not immediately go through for the uniform $L^2({\mathbb{T}}^2)$ bound on the velocity $u$: as we have seen, the equation for the velocity does not preserve the energy. To handle this problem, we assume a certain structure of the $\sigma_k$ so that a uniform $L^2({\mathbb{T}}^2)$ bound on $u$ can still be obtained. These assumptions still allow us to deal with relevant cases, for example “locally isotropic” covariance matrices; see the discussion in Section \[sec:hp\_noise\].
We make one last comment on an alternative possible strategy. In [@BrzFlaMau2016 Section 7], we give an alternative proof of well-posedness among bounded solutions, which is based on the Doss-Sussmann transformation: if $\psi$ is the stochastic flow solution to $$\begin{aligned}
d\psi_t = \sum_k\sigma_k(\psi_t) \circ dW^k_t,\end{aligned}$$ then $\tilde{\xi}(t,x,\omega):= \xi(t,\psi(t,x,\omega),\omega)$ satisfies a random Euler-type PDE, precisely $$\begin{aligned}
\begin{aligned}\label{eq:random_PDE}
&\partial_t \tilde{\xi} +\tilde{u} \cdot\nabla \tilde{\xi} =0,\\
&\tilde{u}(t,x,\omega) = D\psi^{-1}(t,x,\omega) \int_{{\mathbb{T}}^2} K(\psi(t,x,\omega)-\psi(t,y,\omega)) \tilde{\xi}(t,y,\omega)dy,
\end{aligned}\end{aligned}$$ where $\psi^{-1}$ is the inverse flow. This is a nonlinear transport equation, where the kernel $K$ has been replaced by the above random kernel. In the case that $\sigma_k=e_k 1_{k\in\{1,2\}}$, where $e_k$ is the canonical basis of ${\mathbb{R}}^2$, the PDE is exactly the Euler vorticity equation: indeed in this case the stochastic Euler equations correspond simply to a random shift in Lagrangian coordinates. In the general case, however, this is not the Euler vorticity equation, but it enjoys similar properties, since the random kernel has similar regularity properties; hence the well-posedness among bounded solutions can be derived by applying the deterministic arguments to the random PDE \[eq:random\_PDE\]. One may then wonder if a similar argument is possible here, namely using Schochet’s arguments for the random PDE \[eq:random\_PDE\]. In this context, however, there are at least two problems. Firstly, even if we could just apply Delort’s or Schochet’s result to \[eq:random\_PDE\] at $\omega$ fixed, this would only give the existence of a measure-valued and $H^{-1}$-valued solution at $\omega$ fixed, with no adaptedness property; so a compactness argument would probably be needed anyway. Secondly, from \[eq:random\_PDE\] it is not immediately clear how to get a uniform $H^{-1}({\mathbb{T}}^2)$ bound on $\tilde{\xi}$. Moreover this type of argument à la Doss-Sussmann does not work intrinsically on the stochastic PDE. Let us remark anyway that the argument may still give some hints: for example, it would be interesting to see whether, using the equation for $\tilde{u}$ instead of the equation for $u$, one can get a uniform $H^{-1}({\mathbb{T}}^2)$ bound without the additional assumptions on $\sigma$ that are needed here. We leave this point for future research.
The setting
===========
Notation
--------
We recall some notation frequently used in the paper. We use the letters $t$, $x$, $\omega$ for a generic element of $[0,T]$, ${\mathbb{T}}^2$, $\Omega$ resp.; the coordinates of $x$ are denoted by $(x^1,x^2)$, while the partial derivatives are denoted by $\partial_{x_1},\partial_{x_2}$. Unless differently specified, the derivatives $\nabla$, $D$, $\Delta$ are intended with respect to the space variable $x$. The Sobolev spaces are denoted by $W^{s,p}$ or, for $p=2$, $H^s = W^{s,2}$. For a map $f:{\mathbb{T}}^2\rightarrow {\mathbb{R}}^2$, recall that $Df =(\partial_{x_j}f^i)$ and $\nabla f=(Df)^T$.
For functional spaces, we often put the input variables ($t$, $x$, $\omega$) as subscripts: for example, the spaces $L^2({\mathbb{T}}^2)$, $C([0,T])$ are denoted in short as $L^2_x$, $C_t$ resp.; as another example, the space $C([0,T];(H^{-4}({\mathbb{T}}^2),w))$ of continuous functions on $[0,T]$, which take values in $H^{-4}({\mathbb{T}}^2)$ with the weak topology, is denoted in short as $C_t(H^{-4}_x,w)$. The symbol $f*g$ stands for the convolution in the space variable (that is, on ${\mathbb{T}}^2$) between two functions or distributions $f$ and $g$ on ${\mathbb{T}}^2$.
The space $\mathcal{M}_x=\mathcal{M}({\mathbb{T}}^2)$ is the space of finite signed Radon measures on ${\mathbb{T}}^2$; it is a Banach space, endowed with the total variation norm $\|\cdot\|_{\mathcal{M}_x}$, and is the dual of the space $C_x= C({\mathbb{T}}^2)$ of continuous functions on ${\mathbb{T}}^2$; the notation ${\langle}\mu,\varphi{\rangle}$ will be used to denote the duality product between a measure $\mu$ in $\mathcal{M}_x$ and a function $\varphi$ in $C_x$. The closed ball of center $0$ and radius $M$ in $\mathcal{M}_x$ is denoted by $\mathcal{M}_{x,M}$ (the radius here refers to the strong norm $\|\cdot\|_{\mathcal{M}_x}$). Following the notation above, the space $C([0,T];(\mathcal{M}_{x,M},w*))$ of continuous functions on $[0,T]$, which take values in the closed ball of radius $M$ of $\mathcal{M}_x$ with the weak-\* topology, is denoted in short as $C_t(\mathcal{M}_{x,M},w*)$. We will also use $\mathcal{M}_{x,+}$ for the set of non-negative finite Radon measures on ${\mathbb{T}}^2$, $\mathcal{M}_{x,M,+,no-atom}$ for the set of non-negative non-atomic Radon measures with total mass $\le M$, and $\mathcal{M}_{x,y}$ for the space of finite Radon measures on ${\mathbb{T}}^2\times {\mathbb{T}}^2$.
For $x=(x^1,x^2)$ in ${\mathbb{R}}^2$, we call $x^\perp:=(-x^2,x^1)$. Similarly, for a function $f:{\mathbb{T}}^2\rightarrow {\mathbb{R}}$, we call $\nabla^\perp f := (\nabla f)^\perp = (-\partial_{x^2}f,\partial_{x^1}f)$.
Given a probability space $(\Omega,\mathcal{A},P)$ with a filtration $(\mathcal{F}_t)_t$, the symbol $\mathcal{P}$ will be used for the progressively measurable $\sigma$-algebra associated with $(\mathcal{F}_t)_t$ (not the predictable $\sigma$-algebra).
The letter $C$ will be used for constants which may change from one line to another.
Assumptions on the noise {#sec:hp_noise}
------------------------
Here we give the assumptions on the noise coefficients $\sigma_k$:
\[assumption\_sigma\] We assume that:
- The vector fields $\sigma_k:{\mathbb{T}}^2\to{\mathbb{R}}^2$, $k\in\mathbb{N}$, are of class $C^1_x$, divergence-free and satisfy $$\begin{aligned}
\sum_k\|\sigma_k\|_{C^1_x}^2<\infty.\end{aligned}$$ In particular, there exists a continuous function $a:{\mathbb{T}}^2\times{\mathbb{T}}^2\rightarrow {\mathbb{R}}^{2\times 2}$, called infinitesimal covariance matrix, such that $$\begin{aligned}
\sum_k \sigma_k(x)\sigma_k(y)^T = a(x,y),\end{aligned}$$ with convergence in $C_{x,y}$ (that is uniformly in $(x,y)$). Moreover $a$ is differentiable in $x$ and in $y$ and $$\begin{aligned}
\sum_k \partial_{x_i}\sigma_k(x)\sigma_k(y)^T = \partial_{x_i}a(x,y),\quad i=1,2,\end{aligned}$$ with convergence in $C_{x,y}$, and analogously for the derivative in $y$.
- The function $a$ defined above satisfies, for some $c\ge 0$, $$\begin{aligned}
a(x,x)=cI_2,\end{aligned}$$ where $I_2$ is the $2\times 2$ identity matrix.
- The function $a$ defined above satisfies $$\begin{aligned}
\partial_{y_i}a(x,y)\mid_{y=x} =0,\quad i=1,2,\quad \forall x\in{\mathbb{T}}^2.\end{aligned}$$
The second assumption reads $$\begin{aligned}
&\sum_k\sigma_k^i(x)\sigma_k^j(x) = c\delta_{ij},\quad \forall x,\quad \forall i,j=1,2.\end{aligned}$$ The third assumption reads $$\begin{aligned}
&\sum_k\sigma_k^i(x) \partial_{x_j} \sigma_k^h(x) = \partial_{y_i}a(x,y)\mid_{y=x} = 0,\quad \forall x,\quad \forall i,j,h=1,2.\end{aligned}$$
The second assumption allows one to simplify the Itô-Stratonovich correction, because it cancels the first order term $\frac12\sum_k (\sigma_k\cdot\nabla)\sigma_k\cdot\nabla \xi$ in the correction and it makes the second order term $\sum_k \text{Tr}[\sigma_k\sigma_k^T D^2\xi]$ a constant-coefficient operator: indeed we have, for every $j=1,2$, $$\begin{aligned}
\sum_k (\sigma_k(x)\cdot\nabla) \sigma_k^j (x) = \sum_k \sum_i \partial_{x_i}[\sigma_k^i\sigma_k^j](x) = \sum_i \partial_{x_i}a^{ij}(\cdot,\cdot) (x) =0,\\
\sum_k {\mathrm{tr}}[\sigma_k(x)\sigma_k(x)^T D^2\xi(x)] = c\Delta \xi(x).\end{aligned}$$ Hence the Itô-Stratonovich correction reads formally $$\begin{aligned}
\sum_k [\sigma_k\cdot\nabla \xi(x),W^k]_t = - \int^t_0 c\Delta \xi(x) dr\end{aligned}$$ and the vorticity equation reads formally in Itô form $$\begin{aligned}
\begin{aligned}\label{eq:stochEuler_intro_Ito}
&\partial_t \xi + u\cdot \nabla \xi + \sum_k \sigma_k\cdot \nabla \xi \dot{W}^k = \frac12 c\Delta\xi,\\
&u = K*\xi.
\end{aligned}\end{aligned}$$
The assumptions in Condition \[assumption\_sigma\] are stronger than those in [@BrzFlaMau2016] for well-posedness of bounded vorticity solutions. Precisely, in [@BrzFlaMau2016] only the first two assumptions are made (the second in a slightly weaker form), and the second assumption is not essential (a first-order Itô correction can be treated via lengthy computations). Here, on the contrary, it seems unclear from the proof whether the second and third assumptions can be removed. Indeed these two assumptions guarantee that the equation for the velocity $u$ has no first-order term other than the transport one, which in turn implies a key energy bound on the solution (that is, on the $L^2_x$ norm of $u$).
There is a relevant class of examples of non-trivial (that is, non-constant) $\sigma_k$ satisfying Condition \[assumption\_sigma\]:
Let $\beta>3$ and define $$\begin{aligned}
\sigma_k(x) = (\cos(k\cdot x)+\sin(k\cdot x)) \frac{k^\perp}{|k|^\beta}, \quad k \in\mathbb{Z}^2\setminus \{0\}.\end{aligned}$$ Since $\beta>3$, we infer that $\sum_k\|\sigma_k\|_{C^1_x}^2 <\infty$, and we can calculate that $$\begin{aligned}
&a(x,y) = \sum_{k\in \mathbb{Z}^2,k\neq 0} \cos(k\cdot (x-y)) \frac{k^\perp(k^\perp)^T}{|k|^{2\beta}} =:a(x-y),\\
&a(x,x) = 2\sum_{k\in\mathbb{Z}^2,k^1\ge0, k^2>0} \frac{1}{|k|^{2\beta-2}} I_2.\end{aligned}$$
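The identity $a(x,x)=cI_2$ for this family can also be checked numerically by truncating the sum over $k$. The following is a sanity check of ours, not part of the paper; the truncation radius $K$ and the value $\beta=3.5$ are arbitrary choices.

```python
import numpy as np

def a_matrix(x, y, beta=3.5, K=15):
    """Truncated covariance a(x,y) = sum_k sigma_k(x) sigma_k(y)^T
    for the example family, summing over k in Z^2 \ {0}, |k_i| <= K."""
    a = np.zeros((2, 2))
    for k1 in range(-K, K + 1):
        for k2 in range(-K, K + 1):
            if (k1, k2) == (0, 0):
                continue
            k = np.array([k1, k2], dtype=float)
            kp = np.array([-k2, k1], dtype=float)       # k-perp
            fx = np.cos(k @ x) + np.sin(k @ x)
            fy = np.cos(k @ y) + np.sin(k @ y)
            a += fx * fy * np.outer(kp, kp) / np.linalg.norm(k) ** (2 * beta)
    return a

axx = a_matrix(np.array([0.3, -1.1]), np.array([0.3, -1.1]))
# Pairing k with -k, the cross terms cancel: the off-diagonal entries
# vanish and the diagonal entries coincide, independently of x, so
# a(x,x) = c I_2 as required; moreover a(x,y) depends only on x - y.
```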
Note that, in this example, the infinitesimal covariance matrix $a$ is translation-invariant (that is, $a(x,y) = a(x-y)$) and even (that is, $a(x)=a(-x)$). More generally, if $a$ is translation-invariant and even, then it satisfies the third assumption in Condition \[assumption\_sigma\]: indeed $$\begin{aligned}
\partial_{y_i}a(x,y)\mid_{y=x} = -\partial_{z_i}a(z)\mid_{z=0} =0.\end{aligned}$$
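The structure of the example above can also be checked numerically. The sketch below (an illustration of ours, not part of the paper) truncates the sum over modes to a symmetric box; within such a box the $x$-dependent parts cancel exactly in $(k,-k)$ pairs, so the truncated $a(x,x)$ is a multiple of $I_2$, independent of $x$. The cutoff and the value $\beta=3.5$ are arbitrary choices.

```python
import numpy as np

def sigma_k(k, x, beta):
    # sigma_k(x) = (cos(k.x) + sin(k.x)) * k_perp / |k|^beta
    k_perp = np.array([-k[1], k[0]], dtype=float)
    return (np.cos(k @ x) + np.sin(k @ x)) * k_perp / np.linalg.norm(k) ** beta

def a_diag(x, beta=3.5, cutoff=15):
    # Truncated a(x, x) = sum of sigma_k(x) sigma_k(x)^T over 0 < |k|_inf <= cutoff
    A = np.zeros((2, 2))
    for k1 in range(-cutoff, cutoff + 1):
        for k2 in range(-cutoff, cutoff + 1):
            if (k1, k2) == (0, 0):
                continue
            s = sigma_k(np.array([k1, k2], dtype=float), x, beta)
            A += np.outer(s, s)
    return A

A1 = a_diag(np.array([0.3, 1.1]))
A2 = a_diag(np.array([-2.0, 0.7]))
# a(x, x) = c * I_2 with c independent of x, already at the truncated level
assert np.allclose(A1, A2) and np.allclose(A1, A1[0, 0] * np.eye(2))
```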
We could *morally* include the class of isotropic infinitesimal covariance matrices in our setting. On ${\mathbb{R}}^2$, an infinitesimal covariance matrix $a$ is called isotropic if it is translation- and rotation-invariant, that is $a(x,y)=a(x-y)$ for all $(x,y)$ and $R^Ta(Rx)R =a(x)$ for every rotation matrix $R$, and $a(0)=cI$ for some $c>0$; rotation invariance implies that $a$ is even (take $Rx=-x$), hence a sufficiently regular isotropic matrix $a$ satisfies Condition \[assumption\_sigma\] (precisely, for $a$ regular one can find $\sigma_k$ regular satisfying Condition \[assumption\_sigma\]).
The isotropic condition on $a$ means morally that the noise $\sum_k \sigma_k(x)\circ \dot{W}_t$ is Gaussian, white in time, coloured and isotropic in space. Such a class has been considered in the mathematical and physical literature, see e.g. [@BaxHar1986], [@Gaw2008], even without the nonlinear term: for example, the same type of noise, but irregular in space, provides a simplified model for the study of passive scalars in a turbulent motion (see [@Gaw2008], [@FalGawVer2001], [@LeJRai2002]). Strictly speaking, we are not allowed to take an isotropic matrix as infinitesimal covariance matrix $a$ here, for the simple reason that the torus itself (considered as $[-1,1]^2$ with periodic boundary conditions) is not rotation-invariant. However, one may still take $a$ translation-invariant and rotation-invariant on a neighborhood of the diagonal $\{x=y\}$. Moreover the torus setting here is taken to avoid technicalities at infinity, but we believe a similar construction, including isotropic vector fields, would also go through in the full-space case.
The nonlinear term
------------------
We focus now on the definition and properties of the nonlinear term ${\langle}\xi,u^\xi\cdot\nabla \varphi{\rangle}$. For this, we introduce some notation. The space ${\mathcal}{M}_x={\mathcal}{M}({\mathbb{T}}^2)$ is the space of finite signed Radon measures on ${\mathbb{T}}^2$; it is a Banach space, endowed with the total variation norm $\|\cdot\|_{{\mathcal}{M}_x}$, and is the dual of the space $C_x= C({\mathbb{T}}^2)$ of continuous functions on ${\mathbb{T}}^2$. We endow ${\mathcal}{M}_x$ with the Borel $\sigma$-algebra generated by the *weak-\** topology. The notation ${\langle}f, g {\rangle}$ denotes the $L^2$ duality product between two functions or distributions on ${\mathbb{T}}^2$; in particular, ${\langle}\mu,\varphi{\rangle}$ will be used to denote the duality product between a measure $\mu$ in ${\mathcal}{M}_x$ and a function $\varphi$ in $C_x$.
For $\xi$ in $L^{4/3}_x$, $u^\xi$ is in $W^{1,4/3}_x$ by Lemma \[lem:Green\_function\] and so in $L^4_x$ by Sobolev embedding; hence the product $\xi u^\xi$ is in $L^1_x$ and the nonlinear term makes sense for regular $\varphi$. However, if $\xi$ is not in $L^{4/3}_x$, the product $\xi u^\xi$ is not defined in general. To overcome this difficulty, we use the following Lemma, due to Schochet [@Sch1995 Lemma 3.2 and discussion thereafter]. For $\varphi$ in $C^2_x$, we define the function $F_\varphi:{\mathbb{T}}^2\times{\mathbb{T}}^2\rightarrow{\mathbb{R}}^{2\times 2}$ by $$\begin{aligned}
F_\varphi(x,y) := \frac12 K(x-y) \cdot (\nabla\varphi(x)-\nabla\varphi(y))1_{x\neq y}.\end{aligned}$$
\[rmk:Poupaud\_trick\] The following hold:
- For every $\varphi$ in $C^2_x$, $F_\varphi$ is bounded everywhere by $C\|\varphi\|_{C^2_x}$ and continuous outside the diagonal $\{(x,y)\mid x=y\}$.
- For any measure $\xi$ in ${\mathcal}{M}_x$, the formula $$\begin{aligned}
{\langle}N(\xi), \varphi {\rangle}:= \int_{{\mathbb{T}}^2}\int_{{\mathbb{T}}^2} F_\varphi(x,y) \xi({\mathrm{d}}x)\xi({\mathrm{d}}y),\quad \varphi\in C^{\infty}_x,\end{aligned}$$ defines a distribution in $H^{-4}_x$ and we have $$\begin{aligned}
\|N(\xi)\|_{H^{-4}_x} \le C\|\xi\|_{{\mathcal}{M}_x}^2.\end{aligned}$$
- The map $N:{\mathcal}{M}_x\rightarrow H^{-4}_x$ is Borel, where ${\mathcal}{M}_x$ is endowed with the Borel $\sigma$-algebra generated by the weak-\* topology.
- The map $N$ coincides with the nonlinear term in equation for $\xi$ in $L^{4/3}_x$: precisely, for $\xi$ in $L^{4/3}_x$, $$\begin{aligned}
{\langle}\xi,u^\xi\cdot\nabla \varphi{\rangle}= \int_{{\mathbb{T}}^2}\int_{{\mathbb{T}}^2} F_\varphi(x,y) \xi({\mathrm{d}}x)\xi({\mathrm{d}}y). \label{eq:nonlin_poupaud}\end{aligned}$$
- For every $M>0$, the map $N$, restricted to the set ${\mathcal}{M}_{x,M,+,no-atom}$ of non-negative non-atomic measures on ${\mathbb{T}}^2$ with total mass $\le M$, is continuous (that is $\xi\mapsto {\langle}N(\xi),\varphi{\rangle}$ is continuous for every $\varphi$ in $H^4_x$); hence $N$ is the only continuous extension of the nonlinear term within ${\mathcal}{M}_{x,M,+,no-atom}$.
Thanks to this Lemma, we use the right-hand side of as the definition for the nonlinear term in . Note that, by Lemma \[lem:no\_atoms\] stated later, the set ${\mathcal}{M}_{x,M,+,no-atom}$ includes the case of $\xi$ in ${\mathcal}{M}_{x,+}\cap H^{-1}_x$ we are interested in here. The last assertion of Lemma \[rmk:Poupaud\_trick\] is a consequence of Lemma \[lem:continuity\_mass\], while the other assertions are proved in the Appendix.
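The first assertion of the Lemma, the uniform bound on $F_\varphi$, rests on the cancellation $|\nabla\varphi(x)-\nabla\varphi(y)|\le \|\varphi\|_{C^2_x}|x-y|$ compensating the $|x-y|^{-1}$ singularity of $K$. The sketch below (ours, for illustration) checks this numerically, using the full-plane Biot-Savart kernel $K(z)=z^\perp/(2\pi|z|^2)$ as a stand-in for the torus kernel (same singularity) and an arbitrary smooth test function $\varphi$.

```python
import numpy as np

def K(z):
    # Full-plane Biot-Savart kernel z_perp / (2*pi*|z|^2); an illustrative
    # stand-in for the torus kernel, which has the same |z|^-1 singularity.
    return np.array([-z[1], z[0]]) / (2 * np.pi * (z @ z))

def grad_phi(p):
    # Gradient of phi(p) = sin(p1) * cos(p2), an arbitrary smooth test function
    return np.array([np.cos(p[0]) * np.cos(p[1]), -np.sin(p[0]) * np.sin(p[1])])

def F_phi(x, y):
    # Symmetrized integrand: the Lipschitz bound on grad_phi cancels the
    # 1/|x - y| blow-up of K near the diagonal.
    return 0.5 * K(x - y) @ (grad_phi(x) - grad_phi(y))

x = np.array([0.4, -1.2])
vals = []
for r in np.logspace(-7, 0, 50):            # approach the diagonal: |x - y| -> 0
    for theta in np.linspace(0, 2 * np.pi, 16, endpoint=False):
        y = x + r * np.array([np.cos(theta), np.sin(theta)])
        vals.append(abs(F_phi(x, y)))
assert max(vals) < 1.0  # uniformly bounded, unlike K itself (|K| ~ 1/(2*pi*r))
```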
Definition of measure-valued solutions
--------------------------------------
Now we can give the definition of a measure-valued solution to the stochastic vorticity equation. Again, we first give some notation. For any fixed $M>0$, ${\mathcal}{M}_{x,M}$ is the set of all finite signed Radon measures $\mu$ on ${\mathbb{T}}^2$ with total variation $\|\mu\|_{{\mathcal}{M}_x}\le M$. We consider ${\mathcal}{M}_{x,M}$ endowed with the weak-\* topology induced by ${\mathcal}{M}_x$, which makes it a compact Polish space (by the Banach-Alaoglu theorem and by e.g. [@Bre2011 Theorem 3.28]), and with the Borel $\sigma$-algebra generated by this topology. Given a filtration $({\mathcal}{F}_t)_t$, we call ${\mathcal}{P}$ the associated progressive $\sigma$-algebra (not the predictable $\sigma$-algebra). A cylindrical Brownian motion on $({\mathcal}{F}_t)_t$ is a sequence $W=(W^k)_k$ of independent $({\mathcal}{F}_t)_t$-Brownian motions.
\[def:sol\] Fix $M>0$. Assume that the vector fields $\sigma_k$ satisfy the Condition \[assumption\_sigma\]. A weak distributional ${\mathcal}{M}_{x,M}$-valued solution to the vorticity equation is an object $(\Omega,{\mathcal}{A},({\mathcal}{F}_t)_t,P,(W^k)_k,\xi)$, where $(\Omega,{\mathcal}{A},({\mathcal}{F}_t)_t,P)$ is a filtered probability space satisfying the usual assumptions, $(W^k)_k$ is a cylindrical Brownian motion on $({\mathcal}{F}_t)_t$, $\xi:[0,T]\times\Omega\rightarrow{\mathcal}{M}_{x,M}$ is ${\mathcal}{P}$ Borel measurable (with ${\mathcal}{P}$ the progressive $\sigma$-algebra associated with $({\mathcal}{F}_t)_t$) and it holds $$\begin{aligned}
\begin{aligned}\label{eq:stochEulervort}
\xi_t = \xi_0 -\int^t_0 N(\xi_r) {\mathrm{d}}r -\sum_k \int^t_0 \sigma_k\cdot\nabla \xi_r {\mathrm{d}}W^k_r +\frac12 \int^t_0 c\Delta \xi_r {\mathrm{d}}r,\quad \forall t,\quad P-\text{a.s.}
\end{aligned}\end{aligned}$$ (the $P$-exceptional set being independent of $t$), as equality in $H^{-4}_x$.
This definition is the rigorous formulation of in the Itô form , under the Condition \[assumption\_sigma\] on $\sigma_k$.
Lemmas \[rmk:Poupaud\_trick\] and \[lem:H\_Borel\] imply that $\xi$ and the integrands in are ${\mathcal}{P}$ Borel measurable as $H^{-4}_x$-valued maps, moreover $$\begin{aligned}
{\mathrm{E}}\sum_k \int^T_0 \|\sigma_k\cdot\nabla \xi_r\|^2_{H^{-4}_x} {\mathrm{d}}r \le M^2 \sum_k\|\sigma_k\|_{C_x}^2.\label{eq:welldef_stoch_int}\end{aligned}$$ Hence the deterministic integrals and the stochastic Itô integral make sense respectively as Bochner and stochastic integrals in $H^{-4}_x$.
Moreover, Lemma \[lem:stochEulervort\_H\] shows that is equivalent to the formulation with test functions, namely, for every $\varphi$ in $C^\infty_x$, $$\begin{aligned}
\begin{aligned}\label{eq:stochEulervort_test}
{\langle}\xi_t,\varphi {\rangle}&= {\langle}\xi_0,\varphi{\rangle}+\int^t_0 {\langle}N(\xi_r),\varphi {\rangle}{\mathrm{d}}r\\
&\ \ \ +\sum_k \int^t_0 {\langle}\xi_r,\sigma_k\cdot\nabla \varphi {\rangle}{\mathrm{d}}W^k_r \\
&\ \ \ +\frac12 \int^t_0 {\langle}\xi_r, c\Delta \varphi {\rangle}{\mathrm{d}}r,\quad\text{for every }t,\quad P-\text{a.s.},
\end{aligned}\end{aligned}$$ (the $P$-exceptional set being independent of $t$).
We sometimes say that $\xi$ is an $L^p_x$-valued solution if $\xi$ has finite $L^m_{t,\omega}(L^p_x)$ norm for some $1\le m\le \infty$, where we identify a measure with its density if the density exists. Similarly for $H^{-1}_x$-valued solutions.
Global existence for vorticity in ${\mathcal}{M}_{x,+}\cap H^{-1}_x$ {#sec:main_strategy}
====================================================================
We give the main result of this paper. Here ${\mathcal}{M}_{x,+}$, ${\mathcal}{M}_{x,M,+}$ are the subsets of ${\mathcal}{M}_x$ resp. of non-negative measures and of non-negative measures with total variation $\le M$.
\[thm:main\] Assume the Condition \[assumption\_sigma\] on $\sigma_k$, fix $M>0$. For every $\xi_0$ in ${\mathcal}{M}_{x,M,+}\cap H^{-1}_x$, there exists a weak distributional ${\mathcal}{M}_{x,M}$-valued solution $(\Omega,{\mathcal}{A},({\mathcal}{F}_t)_t,P,(W^k)_k,\xi)$ to the vorticity equation , with $\xi$ in $C_t(\mathcal{M}_{x,M,+},w*)\cap L^2_{t,\omega}(H^{-1}_x)$.
Note that, if $\xi$ is a solution and $\alpha$ is a real constant (in space, time and $\Omega$), then $\xi+\alpha$ is also a solution (this follows from $K*\alpha =0$). Hence the result can be generalized as follows: for every $\xi_0$ in ${\mathcal}{M}_{x,M}\cap H^{-1}_x$ with negative part bounded by a constant $\alpha$, there exists a weak distributional ${\mathcal}{M}_{x,M}$-valued solution $\xi$ to , in $C_t(\mathcal{M}_{x,M,+},w*)\cap L^2_{t,\omega}(H^{-1}_x)$, with negative part bounded by $\alpha$ for all $t$, $P$-a.s.
This is relevant in particular because, if we start from a velocity $u$, then $\text{curl} [u]$ cannot be non-negative unless $\text{curl} [u] =0$ (its space average on the torus vanishes), but it can be bounded from below.
The strategy of the proof goes as follows:
**Compactness argument**: We take $\xi^\epsilon$ solutions with regular bounded initial datum $\xi^\epsilon_0$ and we show that $\xi^\epsilon$ are tight via suitable a priori bounds. The steps are:
1. Prove the non-negativity and a uniform $L^\infty_{t,\omega}(\mathcal{M}_{x,+})$ bound on $\xi^{\varepsilon}$: This follows from the conservation of non-negativity and from the conservation of mass for the vorticity equation .
2. Prove a uniform $L^2_{t,\omega}(H^{-1}_x)$ bound on $\xi^{\varepsilon}$: Since the $L^2_{t,\omega}(H^{-1}_x)$ norm of $\xi^{\varepsilon}$ is equivalent to the $L^2_{t,\omega}(L^2_x)$ norm of the velocity $u^{\varepsilon}:=u^{\xi^{\varepsilon}}$, we write the equation for $u^{\varepsilon}$ and prove a uniform energy bound on $u^{\varepsilon}$. Note that, in contrast with the deterministic case, the energy (the $L^2$ norm of $u^{\varepsilon}$) is not preserved, due to the additional term $(\nabla \sigma_k)\cdot u \circ \dot{W}^k $ in the equation for the velocity $u^{\varepsilon}$. To prove the energy bound, the assumptions on $\sigma_k$ play a crucial role.
3. Prove a uniform $L^2_\omega(C^\alpha_t(H^{-4}_x))$ bound on $\xi^{\varepsilon}$, for $\alpha<1/2$: This follows from the Lipschitz bounds in time on the deterministic integrals in the vorticity equation and the Hölder bound in time on the stochastic integral as $H^{-4}$-valued object.
4. Show tightness in $C_t(\mathcal{M}_{x,M},w*)\cap (L^2_t(H^{-1}_x),w)$ (where $w^*$ refers to the weak-\* topology on $\mathcal{M}_{x,M}$ and $w$ refers to the weak topology on $L^2_t(H^{-1}_x)$): This follows from the previous uniform bounds. Actually, one could have tightness simply in $C_t(\mathcal{M}_{x,M},w*)$, without using the $L^2_t(H^{-1}_x)$ bound, but this will be useful in the convergence part.
**Convergence argument**: We show that any limit point $\xi$ of $\xi^\epsilon$ solves the vorticity equation . The steps are:
1. Pass to an a.s. convergence: By the Skorokhod-Jakubowski theorem we have, up to subsequences, an a.s. convergence on a larger probability space of $\xi^{\varepsilon}$ to some $\xi$ in $C_t(\mathcal{M}_{x,M},w*)\cap (L^2_t(H^{-1}_x),w)$.
2. Show that the a.s. limit $\xi$ of any subsequence satisfies : For the nonlinear term, we use the Schochet approach and in particular: the continuity of the nonlinear term among non-negative non-atomic measures as in Lemma \[lem:continuity\_mass\] and the fact that $H^{-1}_x$ measures are non-atomic. For the stochastic term, we use the approximation of the stochastic integral via Riemann sums as in [@BrzGolJeg2013].
The main result will follow from Lemma \[lem:limit\_sol\], which will show that the limit $\xi$ of any subsequence satisfies .
Proof of the main result
========================
A priori bounds {#sec:apriori_bd}
---------------
We fix $M>0$ and the initial condition $\xi_0$ in ${\mathcal}{M}_{x,M,+}\cap H^{-1}_x$.
We take $\xi^{\varepsilon}$ to be the $L^\infty_{t,x,\omega}$ solution to the stochastic vorticity equation , with initial condition $\xi_0^{\varepsilon}= \xi_0*\rho_{\varepsilon}$, where $\rho_{\varepsilon}$ are standard mollifiers on ${\mathbb{T}}^2$ (precisely, we take a $C^\infty_c$ non-negative even function $\rho$ on ${\mathbb{R}}^2$, we define $\rho_{\varepsilon}= {\varepsilon}^{-2}\rho({\varepsilon}^{-1}\cdot)$ and make it periodic). The existence (and the uniqueness) of such an $L^\infty_{t,x,\omega}$ solution is proved in [@BrzFlaMau2016]. Precisely, for any ${\varepsilon}>0$, [@BrzFlaMau2016 Theorem 2.14] implies the existence and the strong uniqueness of the stochastic flow $\Phi^{\varepsilon}=\Phi^{\varepsilon}(t,x,\omega)$, Lebesgue-measure-preserving and continuous with respect to $(t,x)$, solution to the SDE $$\begin{aligned}
{\mathrm{d}}\Phi^{\varepsilon}(t,x) = \int_{{\mathbb{T}}^2} K(\Phi^{\varepsilon}(t,x)-\Phi^{\varepsilon}(t,y)) \xi^{\varepsilon}_0(y) {\mathrm{d}}y {\mathrm{d}}t +\sum_k \sigma_k(\Phi^{\varepsilon}(t,x)) {\mathrm{d}}W^k_t.\end{aligned}$$ As a consequence of [@BrzFlaMau2016 Proposition 5.1], if we define, for every $t$, $$\begin{aligned}
\xi^{\varepsilon}_t(\omega)=(\Phi^{\varepsilon}(t,\cdot,\omega))_\# \xi^{\varepsilon}_0\label{eq:repr_formula}\end{aligned}$$ (the image measure of $\xi_0^{\varepsilon}$ under $\Phi^{\varepsilon}(t,\cdot,\omega)$), then, for a.e. $\omega$, for every $t$, $\xi^{\varepsilon}_t(\omega)$ admits a density with respect to the Lebesgue measure, this density is in $L^\infty_{t,x,\omega}$ and, for every $\varphi$ in $C^\infty_x$, ${\langle}\xi^{\varepsilon}_t,\varphi{\rangle}$ is progressively measurable and satisfies . It follows, see Remark \[rmk:Linfty\_sol\], that, up to taking an indistinguishable version, such $\xi^{\varepsilon}$ is a weak distributional ${\mathcal}{M}_{x,M^{\varepsilon}}$-valued solution in the sense of Definition \[def:sol\], for some $M^{\varepsilon}\ge \|\xi_0^{\varepsilon}\|_{{\mathcal}{M}_x}$.
### Bound in $L^\infty_{t,\omega}({\mathcal}{M}_x)$
We start with a uniform $L^\infty_{t,\omega}({\mathcal}{M}_x)$ bound:
\[lem:cons\_mass\] For every ${\varepsilon}>0$ fixed, for a.e. $\omega$, we have $$\begin{aligned}
\sup_{t\in[0,T]}\|\xi^{\varepsilon}_t\|_{{\mathcal}{M}_x} \le \|\xi_0\|_{{\mathcal}{M}_x}\le M.\end{aligned}$$ In particular, up to taking an indistinguishable version, $\xi^{\varepsilon}$ is a ${\mathcal}{M}_{x,M}$-valued solution (in the sense of Definition \[def:sol\]). Moreover, for a.e. $\omega$, we have: for every $t$, $\xi^{\varepsilon}_t\ge 0$ (that is, it is a non-negative measure).
The non-negativity follows directly from the fact that $\xi_0$, and so $\xi_0^{\varepsilon}$, is non-negative and from the representation formula . Concerning the bound, this also follows from the representation formula , but we prefer giving a (short) PDE proof. Using $\varphi\equiv 1$ as test function in , we get, for a.e. $\omega$: for every $t$, $$\begin{aligned}
\int_{{\mathbb{T}}^2}\xi^{\varepsilon}_t {\mathrm{d}}x = \int_{{\mathbb{T}}^2}\xi^{\varepsilon}_0 {\mathrm{d}}x.\end{aligned}$$ Since $\xi^{\varepsilon}_t\ge 0$, we get that $\|\xi^{\varepsilon}_t\|_{{\mathcal}{M}_x}\le \|\xi_0^{\varepsilon}\|_{{\mathcal}{M}_x}\le \|\xi_0\|_{{\mathcal}{M}_x}$ for every $t$, for a.e. $\omega$. Defining $\xi^{\varepsilon}=0$ outside the exceptional set where the bound is not satisfied, we get that $\xi^{\varepsilon}$ is in $C_t({\mathcal}{M}_{x,M})$ and so is a ${\mathcal}{M}_{x,M}$-valued solution.
In the proof of the previous Lemma, we used the fact that $\xi^{\varepsilon}$ is positive (and so $\int_{{\mathbb{T}}^2} \xi^{\varepsilon}\,{\mathrm{d}}x$ is the ${\mathcal}{M}_x$ norm of $\xi^{\varepsilon}$), but this is not essential. Indeed, the vorticity equation is a transport equation with divergence-free velocity field, therefore the mass is conserved at least under suitable regularity assumptions on the velocity, which are satisfied here.
### Equation for the velocity and for its energy
In view of a uniform $H^{-1}_x$ bound on $\xi^{\varepsilon}$, we will: 1) get an equation for the velocity $u^{\varepsilon}=u^{\xi^{\varepsilon}}=K*\xi^{\varepsilon}$, then 2) get an equation for the energy of $u^{\varepsilon}$, that is the $L^2_x$ norm of $u^{\varepsilon}$, then 3) conclude a uniform $L^2_x$ bound on $u^{\varepsilon}$; by Lemma \[lem:Green\_function\], this bound is equivalent to a uniform $H^{-1}_x$ bound on $\xi^{\varepsilon}$, up to the space average of $\xi^{\varepsilon}$.
Here we consider a solution $\xi$ to the vorticity equation , with sufficient integrability to include the (bounded) approximants $\xi^\epsilon$.
As we have seen in the introduction (formula ) $$\begin{aligned}
\begin{aligned}\label{eq:stochEulervel_formal}
&\partial_t u +(u\cdot\nabla) u +\sum_k(\sigma_k\cdot\nabla +(D\sigma_k)^T)u \circ \dot{W}^k = -\nabla p -\gamma,\\
&\text{div}u=0,
\end{aligned}\end{aligned}$$ where $p:[0,T]\times{\mathbb{T}}^2\times \Omega\rightarrow{\mathbb{R}}$ and $\gamma:[0,T]\times\Omega \rightarrow {\mathbb{R}}^2$ are unknown (and random). The rigorous result is as follows.
\[lem:velocity\_eq\] Assume Condition \[assumption\_sigma\] on $\sigma_k$. Let $\xi$ be a ${\mathcal}{M}_{x,M}$-valued distributional solution to the stochastic vorticity equation, and assume that $\xi$ is also in $L^p_{t,\omega}(L^p_x)$ for some $2<p<\infty$ and define $u=K*\xi$. Then $u$ is in $L^p_{t,\omega}(W^{1,p}_x)$ and is a distributional solution to the stochastic Euler equation, that is, it holds $$\begin{aligned}
\begin{aligned}\label{eq:stochEulervel}
u_t &= u_0 -\int^t_0 \Pi[(u_r\cdot\nabla) u_r] {\mathrm{d}}r\\
&\ \ \ -\sum_k \int^t_0 \Pi[\sigma_k\cdot\nabla u_r +(D\sigma_k)^T u_r] {\mathrm{d}}W^k \\
&\ \ \ +\frac12 \int^t_0 c\Delta u_r {\mathrm{d}}r,\quad \text{for every }t,\quad P-\text{a.s.},
\end{aligned}\end{aligned}$$ as equality among $H^{-1}_x$-valued processes, where $\Pi$ is the Leray projector on the divergence-free zero-mean $H^{-1}_x$ distributions.
The proof is essentially based on the following formal equality, for $v:{\mathbb{T}}^2\rightarrow{\mathbb{R}}^2$ divergence-free and $w:{\mathbb{T}}^2\rightarrow{\mathbb{R}}^2$: $$\begin{aligned}
\text{curl}[v\cdot\nabla w +(Dv)^T w] = v\cdot\nabla \text{curl}[w].\end{aligned}$$ Using this equality, one can formally pass from the velocity equation to the vorticity equation . The rigorous proof is given in the Appendix.
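The formal curl identity can also be checked symbolically. The sketch below (ours, for illustration) picks an arbitrary divergence-free $v=\nabla^\perp\psi$ and an arbitrary smooth $w$ and verifies the equality with SymPy; the residual in the general case is $(\operatorname{div} v)\,\mathrm{curl}[w]$, which vanishes here.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
psi = sp.sin(x1) * sp.cos(2 * x2)                     # arbitrary stream function
v = sp.Matrix([-sp.diff(psi, x2), sp.diff(psi, x1)])  # v = grad^perp psi, div v = 0
w = sp.Matrix([sp.exp(x1) * sp.sin(x2), sp.cos(x1) * sp.cos(x2)])  # arbitrary smooth w

def curl(f):
    # 2D curl of a vector field: d_1 f^2 - d_2 f^1
    return sp.diff(f[1], x1) - sp.diff(f[0], x2)

Dv = v.jacobian([x1, x2])
transport = sp.Matrix([v[0] * sp.diff(w[i], x1) + v[1] * sp.diff(w[i], x2)
                       for i in range(2)])            # (v . grad) w
lhs = curl(transport + Dv.T * w)
rhs = v[0] * sp.diff(curl(w), x1) + v[1] * sp.diff(curl(w), x2)  # v . grad curl w
assert sp.simplify(lhs - rhs) == 0
```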
From the equation of the velocity we get the equation for the expected value of the energy:
\[lem:u\_norm\] Under the assumptions of Lemma \[lem:velocity\_eq\], we have $$\begin{aligned}
\begin{aligned}\label{eq:u_norm}
{\mathrm{E}}\|u_t\|_{L^2_x}^2 &=
{\mathrm{E}}\|u_0\|_{L^2_x}^2 -2{\mathrm{E}}\int^t_0 \int_{{\mathbb{T}}^2} u\cdot \Pi[(u \cdot\nabla) u] {\mathrm{d}}x {\mathrm{d}}r \\
&\ \ \ -{\mathrm{E}}\int^t_0 \int_{{\mathbb{T}}^2} c|\nabla u|^2 {\mathrm{d}}x {\mathrm{d}}r \\
&\ \ \ +{\mathrm{E}}\int^t_0 \sum_k \int_{{\mathbb{T}}^2} |\Pi[(\sigma_k\cdot\nabla +(D\sigma_k)^T) u]|^2 {\mathrm{d}}x {\mathrm{d}}r,\quad \text{for every }t.
\end{aligned}\end{aligned}$$
One can get this equation formally from by applying the Itô formula to $\|u\|_{L^2_x}^2$. However this is not possible rigorously, because the rigorous equation holds in $H^{-1}_x$ and the square of the $L^2_x$ norm is not continuous on $H^{-1}_x$. The rigorous proof of is based on a regularization argument and is postponed to the appendix.
### Bound in $L^2_{t,\omega}(H^{-1}_x)$
Now we give a uniform $L^2_{t,\omega}(H^{-1}_x)$ bound on the approximants $\xi^{\varepsilon}$. This bound is not essential for the compactness argument, but it is essential for the convergence argument, as we will see.
\[lem:Hm1\_bound\] It holds $$\begin{aligned}
{\mathrm{E}}\|\xi^{\varepsilon}_t\|_{H^{-1}_x}^2\le C (\|\xi_0\|_{H^{-1}_x}^2 +\|\xi_0\|_{{\mathcal}{M}_x}^2),\quad \text{for every }t.\end{aligned}$$
In the proof of this lemma, we crucially use the assumptions \[assumption\_sigma\] on $\sigma_k$. We recall that $u^{\varepsilon}=u^{\xi^{\varepsilon}}=K*\xi^{\varepsilon}$.
By Remark \[rmk:Borel\_norm\], the $H^{-1}_x$ norm is a Borel function on $({\mathcal}{M},w*)$, therefore $\|\xi^{\varepsilon}\|_{H^{-1}_x}$ and $\|u^{\varepsilon}\|_{L^2_x}$ are progressively measurable and the expectations of their moments make sense. By Lemma \[lem:Green\_function\], applied to $\xi^{\varepsilon}_t-\int \xi^{\varepsilon}_t(y) dy$, we have, for every $t$, $$\begin{aligned}
&\|\xi^{\varepsilon}_t\|_{H^{-1}_x} \le C\|u^{\varepsilon}_t\|_{L^2_x} + \left|\int_{{\mathbb{T}}^2}\xi^{\varepsilon}_t dx\right|,\\
&\|u^{\varepsilon}_t\|_{L^2_x}\le C\|\xi^{\varepsilon}_t\|_{H^{-1}_x}.\end{aligned}$$ By Lemma \[lem:cons\_mass\], the $L^1$ norm of $\xi^{\varepsilon}$ is uniformly bounded by $\|\xi_0\|_{{\mathcal}{M}_x}$. Hence it is enough to show, for every $t$, $$\begin{aligned}
{\mathrm{E}}\|u^{\varepsilon}_t\|_{L^2_x}^2\le C\|u_0\|_{L^2_x}^2.\end{aligned}$$ We will show the above bound using the velocity equation .
We start with applied to $u^{\varepsilon}$. Since $u^{\varepsilon}$ is divergence-free, the nonlinear term in vanishes: indeed $$\begin{aligned}
&\int_{{\mathbb{T}}^2} u^{\varepsilon}\cdot \Pi[(u^{\varepsilon}\cdot\nabla) u^{\varepsilon}] {\mathrm{d}}x = \int_{{\mathbb{T}}^2} \Pi[u^{\varepsilon}]\cdot (u^{\varepsilon}\cdot\nabla) u^{\varepsilon}{\mathrm{d}}x\\
&= \int_{{\mathbb{T}}^2} u^{\varepsilon}\cdot (u^{\varepsilon}\cdot\nabla) u^{\varepsilon}{\mathrm{d}}x = 0.\end{aligned}$$ For the term with $\sigma_k$, since $\Pi$ is a projector in $L^2_x$, we have $$\begin{aligned}
&\sum_k \int_{{\mathbb{T}}^2} |\Pi[(\sigma_k\cdot\nabla +(D\sigma_k)^T) u^{\varepsilon}]|^2 {\mathrm{d}}x \le \sum_k \int_{{\mathbb{T}}^2} |(\sigma_k\cdot\nabla +(D\sigma_k)^T) u^{\varepsilon}|^2 {\mathrm{d}}x\\
&= \int_{{\mathbb{T}}^2} [\sum_k|\sigma_k \cdot\nabla u^{\varepsilon}|^2 +\sum_k|(D\sigma_k)^T u^{\varepsilon}|^2 +2\sum_{i,j,h} \sum_k\sigma_k^i \partial_{x_j} \sigma_k^h (u^{\varepsilon})^h \partial_{x_i} (u^{\varepsilon})^j ] {\mathrm{d}}x.\end{aligned}$$ Now we use the assumptions \[assumption\_sigma\], precisely that $\sum_k\sigma_k^i\sigma_k^j = c\delta_{ij}$ and that $\sum_k\sigma_k^i \partial_{x_j} \sigma_k^h = 0$ for all $i,j,h$, with uniform (with respect to $x$) convergence in the series over $k$: we get $$\begin{aligned}
&\sum_k \int_{{\mathbb{T}}^2} |\Pi[(\sigma_k\cdot\nabla +(D\sigma_k)^T) u^{\varepsilon}]|^2 {\mathrm{d}}x \le \int_{{\mathbb{T}}^2} c|\nabla u^{\varepsilon}|^2 {\mathrm{d}}x +\sum_k\|\sigma_k\|_{C^1_x}^2 \int_{{\mathbb{T}}^2} |u^{\varepsilon}|^2 {\mathrm{d}}x.\end{aligned}$$ Putting everything together, we obtain for every $t$ $$\begin{aligned}
{\mathrm{E}}\|u^{\varepsilon}_t\|_{L^2_x}^2 \le {\mathrm{E}}\|u^{\varepsilon}_0\|_{L^2_x}^2 +\sum_k\|\sigma_k\|_{C^1_x}^2 \int^t_0 {\mathrm{E}}\|u^{\varepsilon}_r\|_{L^2_x}^2 {\mathrm{d}}r.\end{aligned}$$ We apply the Gronwall lemma to ${\mathrm{E}}\|u^{\varepsilon}_t\|_{L^2_x}^2$: we conclude, for every $t$, $$\begin{aligned}
{\mathrm{E}}\|u^{\varepsilon}_t\|_{L^2_x}^2 \le {\mathrm{E}}\|u^{\varepsilon}_0\|_{L^2_x}^2 \exp[t\sum_k\|\sigma_k\|_{C^1_x}^2],\end{aligned}$$ which implies the desired bound since $u^{\varepsilon}_0$ is deterministic and $\|u^{\varepsilon}_0\|_{L^2_x}\le C\|u_0\|_{L^2_x}$. The proof is complete.
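The cancellation of the nonlinear term used in the proof above, $\int_{\mathbb{T}^2} u\cdot(u\cdot\nabla)u\,\mathrm{d}x = 0$ for a divergence-free periodic $u$, can be verified symbolically on a concrete field; the stream function below is an arbitrary choice of ours, for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
psi = sp.sin(x) * sp.cos(2 * y)                     # arbitrary periodic stream function
u = sp.Matrix([-sp.diff(psi, y), sp.diff(psi, x)])  # u = grad^perp psi => div u = 0

# Advection term (u . grad) u, componentwise
adv = sp.Matrix([u[0] * sp.diff(u[i], x) + u[1] * sp.diff(u[i], y) for i in range(2)])
integrand = u.dot(adv)                              # u . (u . grad) u = (1/2) u . grad |u|^2
val = sp.integrate(integrand, (x, 0, 2 * sp.pi), (y, 0, 2 * sp.pi))
assert sp.simplify(val) == 0                        # the energy production of the transport term vanishes
```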
### Bound in $L^m_\omega(C_t^\alpha(H^{-4}_x))$
Now we prove a uniform $L^m_\omega(C_t^\alpha(H^{-4}_x))$ bound, for $m\ge 2$:
\[lem:Hm4\_bound\] Fix $2\le m<\infty$. For every $0<\alpha<1/2$, we have $$\begin{aligned}
{\mathrm{E}}\| \xi^{\varepsilon}\|_{C^\alpha_t(H^{-4}_x)}^m \le C (\|\xi_0\|_{{\mathcal}{M}_x}^{2m} +\|\xi_0\|_{{\mathcal}{M}_x}^m).\end{aligned}$$
Note that, by Remark \[rmk:Borel\_norm\], $\omega\mapsto \|\xi\|_{C_t^\alpha(H^{-4}_x)}$ is measurable. Using the equation , we get, for every $t$, $$\begin{aligned}
&{\mathrm{E}}\| \xi^{\varepsilon}_t - \xi^{\varepsilon}_s \|_{H^{-4}_x}^m\\
&\le C{\mathrm{E}}\left\| \int^t_s N(\xi^{\varepsilon}_r) {\mathrm{d}}r \right\|_{H^{-4}_x}^m\\
&\ \ \ +C{\mathrm{E}}\left\| \sum_k \int^t_s \sigma_k\cdot\nabla \xi^{\varepsilon}_r {\mathrm{d}}W^k \right\|_{H^{-4}_x}^m\\
&\ \ \ +Cc^m{\mathrm{E}}\left\| \int^t_s \Delta \xi^{\varepsilon}_r {\mathrm{d}}r \right\|_{H^{-4}_x}^m.\end{aligned}$$ By Lemma \[rmk:Poupaud\_trick\], we have for the nonlinear term $$\begin{aligned}
{\mathrm{E}}\left\| \int^t_s N(\xi^{\varepsilon}_r) {\mathrm{d}}r \right\|_{H^{-4}_x}^m \le C(t-s)^m \|\xi^{\varepsilon}\|_{L^\infty_{t,\omega}({\mathcal}{M}_x)}^{2m}.\end{aligned}$$ For the stochastic integral, the Burkholder-Davis-Gundy inequality and Lemma \[lem:H\_Borel\] give $$\begin{aligned}
&{\mathrm{E}}\left\| \sum_k \int^t_s \sigma_k\cdot\nabla \xi^{\varepsilon}_r {\mathrm{d}}W^k \right\|_{H^{-4}_x}^m\\
&\le C {\mathrm{E}}\left( \sum_k \int^t_s \|\sigma_k\cdot\nabla \xi^{\varepsilon}_r \|_{H^{-4}_x}^2 {\mathrm{d}}r \right)^{m/2}\\
&\le C(t-s)^{m/2} \left(\sum_k\|\sigma_k\|_{C_x}^2\right)^{m/2} \|\xi^{\varepsilon}\|_{L^\infty_{t,\omega}({\mathcal}{M}_x)}^m\end{aligned}$$ Finally for the second order term, again Lemma \[lem:H\_Borel\] gives $$\begin{aligned}
{\mathrm{E}}\left\| \int^t_s \Delta \xi^{\varepsilon}_r {\mathrm{d}}r \right\|_{H^{-4}_x}^m\le C (t-s)^m \|\xi^{\varepsilon}\|_{L^\infty_{t,\omega}({\mathcal}{M}_x)}^m.\end{aligned}$$
{\mathrm{E}}\| \xi^{\varepsilon}_t - \xi^{\varepsilon}_s \|_{H^{-4}_x}^m \le C (t-s)^{m/2} (\|\xi_0\|_{{\mathcal}{M}_x}^m +\|\xi_0\|_{{\mathcal}{M}_x}^{2m}),\end{aligned}$$ where the constant $C$ depends on $\sum_k\|\sigma_k\|_{C_x}^2$ and on $c$. By the Kolmogorov criterion (or the Sobolev embedding in $t$), recalling that $\xi^{\varepsilon}$ is already continuous as $H^{-4}_x$-valued process, we get, for every $0<\alpha<1/2$, $$\begin{aligned}
{\mathrm{E}}\| \xi^{\varepsilon}\|_{C^\alpha_t(H^{-4}_x)}^m \le C(\|\xi_0\|_{{\mathcal}{M}_x}^m +\|\xi_0\|_{{\mathcal}{M}_x}^{2m}).\end{aligned}$$ The proof is complete.
Tightness
---------
In this section we prove the tightness of $\xi^{\varepsilon}$ on $C_t({\mathcal}{M}_{x,M},w*) \cap (L^2_t(H^{-1}_x),w)$ (recall that $M>0$ is fixed such that $\|\xi_0\|_{{\mathcal}{M}_x}\le M$).
We recall that $({\mathcal}{M}_{x,M},w*)$ is metrizable with the distance $d_{{\mathcal}{M}_{x,M}}(\mu,\nu) = \sum_j 2^{-j} |{\langle}\mu-\nu, \varphi_j {\rangle}|$, therefore $C_t({\mathcal}{M}_{x,M},w*)$ is metrizable as well: see Remark \[rmk:continuity\_weak\*\] with $X=C_x$. Here the space $C_t({\mathcal}{M}_x,w*)\cap (L^2_t(H^{-1}_x),w)$ is defined as the subspace of $C_t({\mathcal}{M}_x,w*)$ whose paths have finite $L^2_t(H^{-1}_x)$ norm. On this subspace, the topology is the one generated by the $C_t({\mathcal}{M}_x,w*)$ topology and the $(L^2_t(H^{-1}_x),w)$ topology, that is, a base of open sets is given by the intersections of a $C_t({\mathcal}{M}_x,w*)$-open set and a $(L^2_t(H^{-1}_x),w)$-open set. The Borel $\sigma$-algebra on $C_t({\mathcal}{M}_x,w*)\cap (L^2_t(H^{-1}_x),w)$ is then generated by the Borel $\sigma$-algebras related to the $C_t({\mathcal}{M}_x,w*)$ topology and the $(L^2_t(H^{-1}_x),w)$ topology. Actually, the Borel $\sigma$-algebra generated by $(L^2_t(H^{-1}_x),w)$ coincides with the Borel $\sigma$-algebra generated by the strong topology on $L^2_t(H^{-1}_x)$, by Lemma \[lem:equiv\_topol\].
Note that, by Lemmas \[lem:cons\_mass\] and \[lem:Hm1\_bound\], for any ${\varepsilon}>0$, $\xi^{\varepsilon}_t$ takes values in ${\mathcal}{M}_{x,M}\cap H^{-1}_x$ for every $t$, $P$-a.s. and so for every $\omega$ up to taking an indistinguishable version. Hence, by Lemma \[lem:C\_Lebesgue\_meas\], $\xi^{\varepsilon}$ is measurable as $C_t({\mathcal}{M}_{x,M},w*)$-valued map and as $(L^2_t(H^{-1}_x),w)$-valued map (both spaces being endowed with their Borel $\sigma$-algebras), that is $\xi^{\varepsilon}$ is a random variable with values in $C_t({\mathcal}{M}_{x,M},w*) \cap (L^2_t(H^{-1}_x),w)$.
\[lem:tightness\] Fix $M>0$ such that $\|\xi_0\|_{{\mathcal}{M}_x}\le M$. The family $(\xi^{\varepsilon})_{\varepsilon}$ is tight on $C_t({\mathcal}{M}_{x,M},w*) \cap (L^2_t(H^{-1}_x),w)$.
We start with a generalization of [@BrzMot2013 Lemma 3.1]. The latter is a refined version of the compactness argument in [@FlaGat1995], which can be seen as a stochastic version of the Aubin-Lions lemma. Given a Banach space $X$, we call $B_M^X$ the closed ball in $X$ of radius $M$.
Let $X$, $Y$ be separable Banach spaces with $Y$ densely embedded in $X$. Then, for every $M\ge0$, $\alpha>0$, $a\ge0 $, the set $$\begin{aligned}
A_a = \{ z \in C_t(B_M^{X^*},w*) \mid \|z\|_{C^\alpha_t(Y^*)}\le a \}\end{aligned}$$ is compact in $C_t(B_M^{X^*},w*)$.
\[rmk:continuity\_weak\*\] For the proof, we recall the following facts. First, the ball $B_M^{X^*}$ endowed with the weak-\* topology is metrizable with the distance $d_{B_M^{X^*}}(w,w') = \sum_j 2^{-j} |{\langle}w-w', \varphi_j {\rangle}|$, where $(\varphi_j)_j$ is a dense sequence in $B^X_1$, see [@Bre2011 Theorem 3.28]. Hence the set $C_t(B_M^{X^*},w*)$ is metrizable with the distance $$\begin{aligned}
d(z,z') = \sup_{t\in[0,T]} \sum_j 2^{-j} |{\langle}z-z', \varphi_j {\rangle}|,\quad z,z'\in C_t(B_M^{X^*},w*).\label{eq:dist_CM}\end{aligned}$$ Moreover, given a sequence $(z^n)_n$ and $z$ in $C_t(B_M^{X^*},w*)$ and a set $D$, dense in $X$, the following three conditions are equivalent:
- $(z^n)_n$ converges to $z$ in $C_t(B_M^{X^*},w*)$;
- for every $\varphi$ in $X$, ${\langle}z^n,\varphi {\rangle}$ converges uniformly (that is, in $C_t$) to ${\langle}z,\varphi {\rangle}$;
- for every $\varphi$ in $D$, ${\langle}z^n,\varphi {\rangle}$ converges uniformly (that is, in $C_t$) to ${\langle}z,\varphi {\rangle}$.
The equivalence between the first two conditions can be seen using the distance . The equivalence between the last two points can be seen by approximating a generic $\varphi$ in $X$ with elements in $D$ and using the uniform bound $\|z_n\|_{X^*}\le M$.
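To make the metric concrete: the toy computation below (an illustration of ours, in one space dimension, with a hypothetical truncated test family $\varphi_j = \cos(j\,\cdot)$) shows how $d(\mu,\nu)=\sum_j 2^{-j}|\langle\mu-\nu,\varphi_j\rangle|$ detects weak-\* convergence that is not strong: the oscillating densities $\cos(nx)$ keep total variation of order one, while their weak-\* distance to $0$ vanishes as $n\to\infty$.

```python
import numpy as np

# Periodic trapezoidal quadrature on [0, 2*pi): spectrally accurate for trig polynomials
xs = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

def pair(density, phi):
    # <mu, phi> for a measure mu with the given density
    return np.mean(density(xs) * phi(xs)) * 2.0 * np.pi

phis = [(lambda x, j=j: np.cos(j * x)) for j in range(1, 9)]  # truncated test family

def d(f, g):
    # d(mu, nu) = sum_j 2^{-j} |<mu - nu, phi_j>|
    return sum(2.0 ** -j * abs(pair(lambda x: f(x) - g(x), phi))
               for j, phi in enumerate(phis, start=1))

zero = lambda x: np.zeros_like(x)
d5 = d(lambda x: np.cos(5.0 * x), zero)    # mode seen by the family: distance 2^-5 * pi
d50 = d(lambda x: np.cos(50.0 * x), zero)  # high mode, orthogonal to all phi_j: distance ~ 0
assert d50 < 1e-8 < d5
```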
Since $C_t(B_M^{X^*},w*)$ is metrizable, compactness is equivalent to sequential compactness. Let $(z^n)_n$ be a sequence in $A_a$, we have to find a subsequence $(z^{n_k})_k$ which converges in $C_t(B_M^{X^*},w*)$ to an element of $A_a$.
For fixed $t$ in $\mathbb{Q}\cap [0,T]$, $(z^n_t)_n$ is a sequence in $B^{X^*}_M$, hence, by Banach-Alaoglu theorem, there exists a subsequence $(z^{n_k}_t)_k$ converging weakly-\* to an element $\tilde{z}_t$ in $B^{X^*}_M$. By a diagonal procedure, we can make the sequence $(n_k)_k$ independent of $t$ in $\mathbb{Q}\cap [0,T]$.
On the other hand, let $D$ be a countable dense set in $Y$, and so in $X$. The fact that the $z^n$ are equicontinuous and equibounded as $Y^*$-valued functions implies that, for every $\varphi$ in $D$, the functions $t\mapsto {\langle}z^{n_k}_t,\varphi {\rangle}$ are equicontinuous and equibounded, their $C^\alpha$ norm being bounded by $a\|\varphi\|_Y$. Hence, by the Ascoli-Arzelà theorem, there exists a subsequence converging in $C_t$ to some element $t\mapsto f^\varphi_t$, which also satisfies $\|f^\varphi\|_{C^\alpha_t}\le a\|\varphi\|_Y$. By a diagonal procedure, we can choose the subsequence independent of $\varphi$ in $D$. With a small abuse of notation, we continue using $n_k$ for this subsequence. Then, for all $t$ in $\mathbb{Q}\cap [0,T]$, for all $\varphi$ in $D$, ${\langle}\tilde{z}_t,\varphi{\rangle}= f^\varphi_t$.
Fix $t$ in $[0,T]$ and let $(t_j)_j$ be a sequence in $\mathbb{Q}\cap [0,T]$ converging to $t$. The sequence $(\tilde{z}_{t_j})_j$ is in $B^{X^*}_M$, so, up to a subsequence, it converges weakly-\* to an element $z_t$ in $B^{X^*}_M$. On the other hand, for all $\varphi$ in $D$, by continuity of $t\mapsto f^\varphi_t$, we must have ${\langle}z_t,\varphi {\rangle}= f^\varphi_t$; in particular, since $D$ is dense in $X$, the limit $z_t$ does not depend on the chosen subsequence. The map $t\mapsto {\langle}z_t,\varphi {\rangle}= f^\varphi_t$ is continuous for every $\varphi$ in $D$, and actually for every $\varphi$ in $X$, by an approximation argument. Hence $z$ is in $C_t(B^{X^*}_M,w*)$. Moreover $$\begin{aligned}
&\|z_t-z_s\|_{Y^*} = \sup_{\varphi \in D,\|\varphi\|_Y\le 1} |{\langle}z_t-z_s,\varphi {\rangle}|\\
&= \sup_{\varphi \in D,\|\varphi\|_Y\le 1} |f^\varphi_t-f^\varphi_s| \le a|t-s|^\alpha,\end{aligned}$$ and similarly $\|z_t\|_{Y^*}\le a$ for every $t$. Hence $\|z\|_{C^\alpha_t(Y^*)}\le a$ and so $z$ is in $A_a$.
Finally, for every $\varphi$ in $D$, ${\langle}z^{n_k},\varphi {\rangle}$ converges uniformly to $f^\varphi = {\langle}z,\varphi {\rangle}$, therefore, by Remark \[rmk:continuity\_weak\*\], $z^{n_k}$ converges to $z$ in $C_t(B_M^{X^*},w*)$. The proof is complete.
As a consequence of the previous Lemma and the Banach-Alaoglu theorem, we get the following:
\[lem:cpt\_set\] For every $M\ge0$, $\alpha>0$, for every $a,b\ge0 $, the set $$\begin{aligned}
A_{a,b} = \{ \mu \in C_t({\mathcal}{M}_{x,M},w*) \cap L^m_t(H^{-1}_x) \mid \|\mu\|_{C^\alpha_t(H^{-4}_x)}\le a, \ \|\mu\|_{L^m_t(H^{-1}_x)}\le b \}\end{aligned}$$ is metrizable and compact in $C_t({\mathcal}{M}_{x,M},w*) \cap (L^m_t(H^{-1}_x),w)$.
Since the topologies on $C_t({\mathcal}{M}_{x,M},w*)$ and on the closed ball of radius $b$ in $(L^m_t(H^{-1}_x),w)$ are metrizable, $A_{a,b}$ is metrizable as well and compactness is equivalent to sequential compactness.
Let $(\mu^n)_n$ be a sequence in $A_{a,b}$. By the previous Lemma, applied to $X=C_x$ and $Y=H^4_x$, there exists a subsequence $(\mu^{n_k})_k$ converging to some $\mu$ in $C_t({\mathcal}{M}_{x,M},w*)$ with $\|\mu\|_{C^\alpha_t(H^{-4}_x)}\le a$. On the other hand, by the Banach-Alaoglu theorem, there exists a further subsequence, which, up to relabelling, we may assume to be $(\mu^{n_k})_k$ itself, converging to some $\nu$ in $(L^m_t(H^{-1}_x),w)$ with $\|\nu\|_{L^m_t(H^{-1}_x)}\le b$. Using these two limits, for every $g$ in $C_t$ and every $\varphi$ in $C^1_x$, we have $$\begin{aligned}
\int^T_0 g(t) {\langle}\mu_t, \varphi {\rangle}{\mathrm{d}}t = \int^T_0 g(t) {\langle}\nu_t, \varphi {\rangle}{\mathrm{d}}t.\end{aligned}$$ Hence $\mu=\nu$ and so $\mu$ is the limit in $A_{a,b}$ of the subsequence $(\mu^{n_k})_k$. The proof is complete.
We are ready to prove tightness of $\xi^{\varepsilon}$.
As we have seen at the beginning of this section, by Lemmas \[lem:cons\_mass\] and \[lem:Hm1\_bound\], for any ${\varepsilon}>0$, $\xi^{\varepsilon}$ is, up to an indistinguishable version, a $C_t({\mathcal}{M}_x,w*)\cap (L^2_t(H^{-1}_x),w)$-valued random variable. Lemma \[lem:cpt\_set\] ensures that the set $A_{a,b}$ defined in that Lemma is metrizable and compact in $C_t({\mathcal}{M}_x,w*)\cap (L^2_t(H^{-1}_x),w)$. The Markov inequality gives $$\begin{aligned}
&P\{\xi^{\varepsilon}\notin A_{a,b} \}\le P\{ \| \xi^{\varepsilon}\|_{C_t^\alpha(H^{-4}_x)}>a \} + P\{ \| \xi^{\varepsilon}\|_{L^m_t(H^{-1}_x)}>b \}\\
&\le a^{-m} {\mathrm{E}}\| \xi^{\varepsilon}\|_{C_t^\alpha(H^{-4}_x)}^m + b^{-m}{\mathrm{E}}\| \xi^{\varepsilon}\|_{L^m_t(H^{-1}_x)}^m.\end{aligned}$$ By Lemmas \[lem:Hm1\_bound\] and \[lem:Hm4\_bound\], the right-hand side above can be made arbitrarily small, uniformly in ${\varepsilon}$, taking $a$ and $b$ large enough. The tightness is proved.
As a consequence, we have actually:
\[cor:tightness\_couple\] The family $(\xi^{\varepsilon},W)_{\varepsilon}$ (where $W=(W^k)_k$) is tight on the space $\chi := [C_t({\mathcal}{M}_{x,M},w*)\cap(L^m_t(H^{-1}_x),w)] \times C_t^{\mathbb{N}}$.
The tightness of $(\xi^{\varepsilon},W)$ follows easily from the tightness of the marginals.
Convergence
-----------
We can apply the Skorohod-Jakubowski representation theorem, see [@Jak1997], to the family $(\xi^{\varepsilon},W)_{\varepsilon}$ and the space $\chi = [C_t({\mathcal}{M}_{x,M},w*) \cap (L^m_t(H^{-1}_x),w)] \times C_t^\mathbb{N}$. Indeed the family $(\xi^{\varepsilon},W)_{\varepsilon}$ is tight by Corollary \[cor:tightness\_couple\] and the space $\chi$ satisfies the assumption (10) in [@Jak1997]: for given sequences $(t_i)_i$ dense in $[0,T]$, $(\varphi_j)_j$ dense in $C_x$, the maps $f_{i,j}$ and $g_{i,k}$, defined on $\chi$ by $f_{i,j}(\mu,\gamma)={\langle}\mu_{t_i},\varphi_j{\rangle}$ and $g_{i,k}(\mu,\gamma)= \arctan(\gamma^k_{t_i})$, form a sequence of continuous, uniformly bounded maps separating points in $\chi$.
Hence, by the Skorohod-Jakubowski representation theorem, there exist an infinitesimal sequence $({\varepsilon}_j)_j$, a probability space $({\tilde}{\Omega},{\tilde}{{\mathcal}{A}},{\tilde}{P})$, a $\chi$-valued sequence $({\tilde}{\xi}^j, {\tilde}{W}^{(j)})_j$ and a $\chi$-valued random variable $({\tilde}{\xi},{\tilde}{W})$ such that $({\tilde}{\xi}^j,{\tilde}{W}^{(j)})$ has the same law as $(\xi^{{\varepsilon}_j},W)$ and $({\tilde}{\xi}^j,{\tilde}{W}^{(j)})$ converges to $({\tilde}{\xi},{\tilde}{W})$ ${\tilde}{P}$-a.s. in $\chi$. As for the notation, we write ${\tilde}{W}^{(j),k}$ and ${\tilde}{W}^k$ for the $k$-th component of the $C_t^{\mathbb{N}}$-valued random variables ${\tilde}{W}^{(j)}$ and ${\tilde}{W}$.
Call ${\tilde}{{\mathcal}{F}}_t^{0}$ the filtration generated by ${\tilde}{\xi}$, ${\tilde}{W}$ and the ${\tilde}{P}$-null sets on $({\tilde}{\Omega},{\tilde}{{\mathcal}{A}},{\tilde}{P})$ and call ${\tilde}{{\mathcal}{F}}_t = \cap_{s>t}{\tilde}{{\mathcal}{F}}^{0}_s$. Similarly, call ${\tilde}{{\mathcal}{F}}_t^{0,j}$ the filtration generated by ${\tilde}{\xi}^j$, ${\tilde}{W}^{(j)}$ and the ${\tilde}{P}$-null sets on $({\tilde}{\Omega},{\tilde}{{\mathcal}{A}},{\tilde}{P})$ and call ${\tilde}{{\mathcal}{F}}^j_t = \cap_{s>t}{\tilde}{{\mathcal}{F}}^{0,j}_s$.
\[lem:BM\_enlarged\] The filtration $({\tilde}{{\mathcal}{F}}_t)_t$ is complete and right-continuous and ${\tilde}{W}$ is a cylindrical Brownian motion with respect to it. Moreover ${\tilde}{\xi}$ is an $({\mathcal}{M}_{x,M},w*)$-valued $({\tilde}{{\mathcal}{F}}_t)_t$-progressively measurable process. Similarly for $({\tilde}{{\mathcal}{F}}^j_t)_t$, ${\tilde}{W}^{(j)}$ and ${\tilde}{\xi}^j$, for each $j$.
The proof of this Lemma is simple but technical and postponed to the appendix.
Each copy ${\tilde}{\xi}^j$ of the approximant $\xi^{{\varepsilon}_j}$ is a solution to the stochastic vorticity equation:
\[lem:eq\_enlarged\] For each fixed $j$, the object $({\tilde}{\Omega},{\tilde}{{\mathcal}{A}},({\tilde}{{\mathcal}{F}}^j_t)_t,{\tilde}{P},{\tilde}{W}^{(j)},{\tilde}{\xi}^j)$ is a ${\mathcal}{M}_{x,M}$-valued solution to the vorticity equation with the initial condition ${\tilde}{\xi}^j_0=\xi_0^{{\varepsilon}_j}$ ${\tilde}{P}$-a.s.. Moreover Lemmas \[lem:Hm1\_bound\] and \[lem:Hm4\_bound\] hold for ${\tilde}{\xi}^j$ in place of $\xi^{{\varepsilon}}$ and, ${\tilde}{P}$-a.s., ${\tilde}{\xi}^j_t$ is non-negative for every $t$.
Also the proof of this Lemma is technical and postponed to the appendix.
Limiting equation
-----------------
Now we show that ${\tilde}{\xi}$ satisfies the vorticity equation with ${\tilde}{W}$ as Brownian motion. With this Lemma, Theorem \[thm:main\] is proved.
\[lem:limit\_sol\] The object $({\tilde}{\Omega},{\tilde}{{\mathcal}{A}},({\tilde}{{\mathcal}{F}}_t)_t,{\tilde}{P},{\tilde}{W},{\tilde}{\xi})$ is a ${\mathcal}{M}_{x,M}$-valued solution to the vorticity equation , which is also in $C_t({\mathcal}{M}_{x,M},w*) \cap (L^2_t(H^{-1}_x),w)$.
To prove Lemma \[lem:limit\_sol\], we will show that $({\tilde}{\xi},{\tilde}{W})$ satisfies for every test function $\varphi$ in $C^\infty_x$. By Lemma \[lem:eq\_enlarged\], for each $j$, $({\tilde}{\xi}^j,{\tilde}{W}^{(j)})$ satisfies for every $\varphi$ in $C^\infty_x$. Hence it is enough to pass to the ${\tilde}{P}$-a.s. limit, as $j\rightarrow \infty$, in each term of for $({\tilde}{\xi}^j,{\tilde}{W}^{(j)})$, possibly choosing a subsequence, for every $t$ and every $\varphi$ in $C^\infty_x$. We fix $t$ in $[0,T]$ and $\varphi$ in $C^\infty_x$.
We start with the deterministic linear terms: the terms ${\langle}{\tilde}{\xi}^j_t, \varphi {\rangle}$, ${\langle}{\tilde}{\xi}^j_0, \varphi {\rangle}$ and $$\begin{aligned}
\int^t_0 {\langle}{\tilde}{\xi}^j_r, c\Delta \varphi {\rangle}{\mathrm{d}}r\end{aligned}$$ converge ${\tilde}{P}$-a.s. to the corresponding terms without the superscript $j$, thanks to the convergence of ${\tilde}{\xi}^j$ to ${\tilde}{\xi}$ in $C_t({\mathcal}{M}_{x,M},w*)$.
### The nonlinear term
Concerning the nonlinear term, we recall Lemma \[rmk:Poupaud\_trick\] and we follow the Schochet argument, see Schochet [@Sch1995] and Poupaud [@Pou2002 Section 2]. The first main ingredient for the convergence is the following:
\[lem:continuity\_nonlin\] Fix $M>0$. For every $\varphi$ in $C^2_x$, the map $\mu\mapsto {\langle}N(\mu),\varphi {\rangle}$ is continuous on the subset ${\mathcal}{M}_{x,M,+,\text{no-atom}}$ of ${\mathcal}{M}_{x,M}$ of non-negative non-atomic measures with total mass bounded by $M$, endowed with the weak-\* topology.
We use the following result, a version of the classical Portmanteau theorem (which deals with probability measures rather than non-negative measures):
\[lem:continuity\_mass\] Let $X$ be a compact metric space. Assume that $(\nu^k)_k$ is a sequence of non-negative bounded measures and converges to $\nu$ in $({\mathcal}{M}(X),w*)$. Let $F$ be a closed set in $X$ with $\nu(F)=0$ and let $\psi:X\rightarrow{\mathbb{R}}$ be a bounded Borel function, continuous on $X\setminus F$. Then the sequence $({\langle}\nu^k,\psi {\rangle})_k$ converges to ${\langle}\nu,\psi {\rangle}$.
Let ${\varepsilon}>0$; we have to prove that $|{\langle}\nu^k-\nu, \psi {\rangle}| <C{\varepsilon}$ for $k$ large enough, for some constant $C$ independent of ${\varepsilon}$ and $k$. Since the closed sets $\bar{B}(F,\delta):=\{x\in X\mid d(x,F)\le \delta\}$ decrease to $F$ as $\delta\rightarrow 0$ and $\nu(F)=0$, by continuity from above of $\nu$ there exists $\delta>0$ such that $\nu(\bar{B}(F,\delta))<{\varepsilon}$. As the function $1_{\bar{B}(F,\delta)}$ is bounded and upper semi-continuous, $(\nu^k)_k$ converges weakly-\* to $\nu$ and the total masses $\nu^k(X)$ converge to $\nu(X)$, there exists $\bar{k}$ such that $\nu^k(\bar{B}(F,\delta)) <{\varepsilon}$ for all $k\ge \bar{k}$. Take the continuous function $\rho(x)=\max\{0, 1-d(x,F)/\delta\}$, so that $0\le \rho\le 1$, $\rho=1$ on $F$ and $\rho=0$ outside $\bar{B}(F,\delta)$; it is easy to see that $\psi(1-\rho)$ is then continuous on all of $X$. Now we split $$\begin{aligned}
|{\langle}\nu^k-\nu, \psi {\rangle}| \le |{\langle}\nu^k, \psi\rho {\rangle}| +|{\langle}\nu^k-\nu, \psi(1-\rho) {\rangle}| +|{\langle}\nu, \psi\rho {\rangle}|.\label{eq:ineq_Pormanteau}\end{aligned}$$ For the first term in the right-hand side, we have $$\begin{aligned}
&|{\langle}\nu^k, \psi\rho {\rangle}|\nonumber\\
&\le \nu^k(\bar{B}(F,\delta)) \sup_X|\psi| \le {\varepsilon}\sup_X|\psi|, \quad k\ge \bar{k},\label{eq:positivity}\end{aligned}$$ where we used that $0\le\rho\le 1$, that $\rho$ vanishes outside $\bar{B}(F,\delta)$ and that $\nu^k$ is non-negative. The same inequality (with no restriction on $k$, since $\nu(\bar{B}(F,\delta))<{\varepsilon}$) holds for the third term in the right-hand side of . Finally, the second term in is bounded by ${\varepsilon}$ provided $k$ is large enough, since $\psi(1-\rho)$ is continuous and $(\nu^k)_k$ converges weakly-\* to $\nu$. The proof is complete.
It is only in the inequality , in the proof of Lemma \[lem:continuity\_mass\], that we need to use that the process $\xi$ takes values in the non-negative measures.
We have to show that, for every sequence $(\mu^n)_n$ converging to $\mu$ in ${\mathcal}{M}_{x,M,+,\text{no-atom}}$, ${\langle}N(\mu^n),\varphi{\rangle}$ converges to ${\langle}N(\mu),\varphi{\rangle}$. By Lemma \[rmk:cont\_prod\_meas\] in the Appendix, $(\mu^n\otimes\mu^n)_n$ converges weakly-\* to $\mu\otimes \mu$. Moreover, since $\mu$ has no atoms, $\mu \otimes \mu$ gives no mass to the diagonal $D=\{(x,y)\mid x=y\}$: indeed, by the Fubini theorem, $$\begin{aligned}
(\mu\otimes\mu)(D) = \int_{{\mathbb{T}}^2}\mu({\mathrm{d}}x) \int_{\{x\}}\mu({\mathrm{d}}y) = \int_{{\mathbb{T}}^2}\mu(\{x\})\,\mu({\mathrm{d}}x) = 0.\end{aligned}$$ We are now in a position to apply Lemma \[lem:continuity\_mass\] to the sequence $(\mu^n\otimes\mu^n)_n$, with the state space $X=({\mathbb{T}}^2)^2$, with $F=D$ and with $\psi=F_\varphi$, which is bounded and continuous outside the diagonal $D$: we get that $$\begin{aligned}
{\langle}\mu^n\otimes\mu^n , F_\varphi {\rangle}\rightarrow {\langle}\mu\otimes\mu , F_\varphi {\rangle},\end{aligned}$$ which is exactly the desired convergence. The proof is complete.
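The role of the no-atom condition can be made concrete with a toy computation (an illustrative sketch with a made-up bounded $F$, Borel and continuous off the diagonal, in the spirit of Lemma \[lem:continuity\_mass\]; the atoms and weights below are hypothetical):

```python
def F(x, y):
    # bounded, Borel, continuous OFF the diagonal D = {x = y}
    return 0.0 if x == y else 1.0

def pair_product(mu):
    # <mu (x) mu, F> for an atomic measure mu = sum_i w_i delta_{x_i}
    return sum(wi * wj * F(xi, xj) for xi, wi in mu for xj, wj in mu)

# Non-atomic limit: empirical measures of n equispaced points converge
# weakly-* to Lebesgue on [0,1]; the diagonal mass sum_i w_i^2 = 1/n
# vanishes and the pairing converges to <Leb (x) Leb, F> = 1.
emp = lambda n: [(i / n, 1.0 / n) for i in range(n)]
vals_nonatomic = [pair_product(emp(n)) for n in (10, 100, 1000)]

# Atomic limit: (delta_{1/2-eps} + delta_{1/2+eps})/2 -> delta_{1/2}
# weakly-*, but the pairing stays 1/2 while <delta (x) delta, F> = 0:
# continuity genuinely fails at atomic limit points.
mu_eps = lambda eps: [(0.5 - eps, 0.5), (0.5 + eps, 0.5)]
vals_atomic = [pair_product(mu_eps(eps)) for eps in (0.1, 0.01, 0.001)]
```

The pairing is continuous along sequences whose diagonal mass vanishes, and jumps at atomic limits: exactly the dichotomy the Lemma isolates.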
The second ingredient for the convergence of the nonlinear term is the following:
\[lem:no\_atoms\] Let $\mu$ be in ${\mathcal}{M}_x \cap H^{-1}_x$. Then $\mu$ has no atoms.
Fix $x_0$ in ${\mathbb{T}}^2$; we have to prove that $\mu(\{x_0\})=0$. Note that a plain rescaling $\rho(n(x-x_0))$ of a fixed bump function does not suffice here: in dimension two, the change of variables $y=n(x-x_0)$ shows that its Dirichlet energy is independent of $n$. We use instead the standard logarithmic cutoff, which exploits the fact that points have zero $H^1$-capacity in dimension two. For $n$ positive integer, call ($d$ being the geodesic distance on ${\mathbb{T}}^2$) $$\begin{aligned}
\rho_n(x) = \min\left\{1, \max\left\{0, \frac{\log(2^{-n}/d(x,x_0))}{n\log 2}\right\}\right\},\end{aligned}$$ so that $\rho_n=1$ on $\{d(x,x_0)\le 4^{-n}\}$, $\rho_n=0$ on $\{d(x,x_0)\ge 2^{-n}\}$ and $\rho_n$ is Lipschitz, hence in $H^1_x$. Now $0\le\rho_n\le 1$ and $(\rho_n)_n$ converges pointwise to $1_{\{x_0\}}$, so, by the dominated convergence theorem (with respect to $|\mu|$), $({\langle}\mu,\rho_n {\rangle})_n$ converges to $\mu(\{x_0\})$.

On the other hand $|{\langle}\mu,\rho_n {\rangle}|\le \|\mu\|_{H^{-1}_x}\|\rho_n\|_{H^1_x}$. For the $H^1_x$ norm of $\rho_n$, passing to polar coordinates around $x_0$ we have $$\begin{aligned}
\|\nabla \rho_n\|_{L^2_x}^2 = \int_{4^{-n}}^{2^{-n}} \frac{2\pi r}{(r\, n\log 2)^2} \,{\mathrm{d}}r = \frac{2\pi}{n\log 2}\end{aligned}$$ and so $\|\nabla \rho_n\|_{L^2}$ converges to $0$ as $n\rightarrow\infty$. Moreover $\|\rho_n\|_{L^2}^2\le \pi 4^{-n}$ also converges to $0$. So $\|\rho_n\|_{H^1}$ tends to $0$. Hence $({\langle}\mu,\rho_n {\rangle})_n$ tends also to $0$ and therefore $\mu(\{x_0\})=0$.
It is only for the previous Lemma that we need to use that the process $\xi$ is $H^{-1}_x$-valued.
We are now able to conclude the convergence of the nonlinear term in . Fix $\omega$ in a full measure set such that $({\tilde}{\xi}^j)_j$ converges to ${\tilde}{\xi}$ in $C_t({\mathcal}{M}_{x,M},w*)$. Since ${\tilde}{\xi}$ belongs to $L^2_t(H^{-1}_x)$, ${\tilde}{\xi}_r$ belongs to $H^{-1}_x$ for all $r$ in a full measure set $S$ of $[0,T]$. In particular, by Lemma \[lem:no\_atoms\], ${\tilde}{\xi}_r$ has no atoms. Hence, for all $r$ in $S$, Lemma \[lem:continuity\_nonlin\] implies the convergence of $$\begin{aligned}
{\langle}N({\tilde}{\xi}^j_r),\varphi {\rangle}= {\langle}{\tilde}{\xi}^j_r\otimes{\tilde}{\xi}^j_r, F_\varphi{\rangle}\end{aligned}$$ towards the same term without $j$. Since these terms are bounded uniformly in $r$ and $j$, by the dominated convergence theorem (in $r$), the time integral converges as well. This proves the convergence of the nonlinear term.
### The stochastic integral
It remains to prove convergence of the stochastic term. We follow the strategy in Brzezniak-Goldys-Jegaraj. We use the notation $t^l_i = 2^{-l}i$, $$\begin{aligned}
&Y_{j,k}(t) = {\langle}{\tilde}{\xi}^j_t, \sigma_k\cdot\nabla \varphi {\rangle},\\
&Y_{j,k}^{K,l}(t) = 1_{k\le K}\sum_i Y_{j,k}(t^l_i)1_{[t^l_i,t^l_{i+1}[}(t)\end{aligned}$$ and similarly without $j$. Finally we call $\rho_{j,k}$ the modulus of continuity of $Y_{j,k}$, namely $$\begin{aligned}
\rho_{j,k}(a) = \sup_{|t-s|\le a} |Y_{j,k}(t)-Y_{j,k}(s)|\end{aligned}$$ and similarly without $j$. Note that $\rho_{j,k}$ and $\rho_k$ are $\tilde{{\mathcal}{F}}_{T}$-measurable on $\tilde{\Omega}$, since the above supremum can be restricted to rational times $t,s$. We split $$\begin{aligned}
&\left|\sum_k \int^t_0 {\langle}{\tilde}{\xi}^j, \sigma_k\cdot\nabla \varphi {\rangle}{\mathrm{d}}{\tilde}{W}^{(j),k} - \int^t_0 {\langle}{\tilde}{\xi}, \sigma_k\cdot\nabla \varphi {\rangle}{\mathrm{d}}{\tilde}{W}^{k} \right|\\
&\le \left| \sum_k \int^t_0 (Y_{j,k}-Y^{K,l}_{j,k}) {\mathrm{d}}{\tilde}{W}^{(j),k} \right|\\
&\ \ \ + \left| \sum_k \int^t_0 Y^{K,l}_{j,k} {\mathrm{d}}{\tilde}{W}^{(j),k} - \int^t_0 Y^{K,l}_{k} {\mathrm{d}}{\tilde}{W}^{k} \right|\\
&\ \ \ + \left| \sum_k \int^t_0 (Y_{k}-Y^{K,l}_{k}) {\mathrm{d}}{\tilde}{W}^{k} \right|\\
&=:T1+T2+T3.\end{aligned}$$ Concerning the first addend $T1$, we have $$\begin{aligned}
{\mathrm{E}}T1^2 = \sum_k {\mathrm{E}}\int^t_0 |Y_{j,k}-Y^{K,l}_{j,k}|^2 {\mathrm{d}}r.\end{aligned}$$ In order to have uniform estimates with respect to $j$, we want to use the convergence in $C_t$ of $Y^{j,k}$ to $Y^k$. For this, we split again the right-hand side: $$\begin{aligned}
&\sum_k {\mathrm{E}}\int^t_0 |Y_{j,k}-Y^{K,l}_{j,k}|^2 {\mathrm{d}}r\\
&\le C\sum_k {\mathrm{E}}\int^t_0 |Y^{K,l}_{j,k}-Y^{K,l}_{k}|^2 {\mathrm{d}}r + C \sum_k {\mathrm{E}}\int^t_0 |Y_{k}-Y^{K,l}_{k}|^2 {\mathrm{d}}r + C \sum_k {\mathrm{E}}\int^t_0 |Y_{j,k}-Y_{k}|^2 {\mathrm{d}}r\\
&=:T11+T12+T13.\end{aligned}$$ For $T11$, we have $$\begin{aligned}
&T11= C \sum_{k\le K} {\mathrm{E}}\int^t_0 |Y^{K,l}_{j,k}-Y^{K,l}_{k}|^2 {\mathrm{d}}r\\
&\le C \sum_{k\le K} {\mathrm{E}}\sup_r |Y_{j,k}(r)-Y_{k}(r)|^2.\end{aligned}$$ For $T13$, we have similarly $$\begin{aligned}
&T13= C \sum_{k\le K} {\mathrm{E}}\int^t_0 |Y_{j,k}-Y_{k}|^2 {\mathrm{d}}r + C \sum_{k>K} {\mathrm{E}}\int^t_0 |Y_{j,k}-Y_{k}|^2 {\mathrm{d}}r\\
&\le C \sum_{k\le K} {\mathrm{E}}\sup_r |Y_{j,k}(r)-Y_{k}(r)|^2 + C \sum_{k>K} \|\sigma_k\|_{C_x}^2,\end{aligned}$$ where we have used that $\sup_r|Y_{j,k}(r)|\le C\|\sigma_k\|_{C_x}$ and $\sup_r|Y_{k}(r)|\le C\|\sigma_k\|_{C_x}$ (the constant $C$ here depending on $M$, the upper bound of $\|{\tilde}{\xi}^j\|_{{\mathcal}{M}_x}$, and on $\varphi$). For $T12$, we have $$\begin{aligned}
&T12 = C \sum_{k\le K} {\mathrm{E}}\int^t_0 |Y_{k}-Y^{K,l}_{k}|^2 {\mathrm{d}}r + C \sum_{k>K} {\mathrm{E}}\int^t_0 |Y_{k}|^2 {\mathrm{d}}r\\
&\le C \sum_{k\le K} {\mathrm{E}}\rho_{k}(2^{-l})^2 + C \sum_{k>K} \|\sigma_k\|_{C_x}^2.\end{aligned}$$ This completes the bound for $T1$. Concerning the term $T3$, we have $$\begin{aligned}
{\mathrm{E}}T3^2 = \sum_k {\mathrm{E}}\int^t_0 |Y_{k}-Y^{K,l}_{k}|^2 {\mathrm{d}}r,\end{aligned}$$ which is the term $T12$ up to a multiplicative constant and can therefore be bounded in the same way. Finally, we note that the term $T2$ can be written as $$\begin{aligned}
T2 = \left| \sum_{k\le K} \sum_i [Y_{j,k}(t^l_i) ({\tilde}{W}^{(j),k}_{t^l_{i+1}}-{\tilde}{W}^{(j),k}_{t^l_i}) - Y_{k}(t^l_i) ({\tilde}{W}^{k}_{t^l_{i+1}}-{\tilde}{W}^{k}_{t^l_i})] \right|.\end{aligned}$$ Putting everything together, we find $$\begin{aligned}
&{\mathrm{E}}\left|\sum_k \int^t_0 {\langle}{\tilde}{\xi}^j, \sigma_k\cdot\nabla \varphi {\rangle}{\mathrm{d}}{\tilde}{W}^{(j),k} - \int^t_0 {\langle}{\tilde}{\xi}, \sigma_k\cdot\nabla \varphi {\rangle}{\mathrm{d}}{\tilde}{W}^{k} \right|^2\\
&\le C \sum_{k\le K} {\mathrm{E}}\rho_{k}(2^{-l})^2 + C \sum_{k>K} \|\sigma_k\|_{C_x}^2 \\
&\ \ \ + C \sum_{k\le K} {\mathrm{E}}\sup_r |Y_{j,k}(r)-Y_{k}(r)|^2\\
&\ \ \ + C{\mathrm{E}}\left| \sum_{k\le K} \sum_i [Y_{j,k}(t^l_i) ({\tilde}{W}^{(j),k}_{t^l_{i+1}}-{\tilde}{W}^{(j),k}_{t^l_i}) - Y_{k}(t^l_i) ({\tilde}{W}^{k}_{t^l_{i+1}}-{\tilde}{W}^{k}_{t^l_i})] \right|^2.\end{aligned}$$ We first choose $K$ such that $\sum_{k>K} \|\sigma_k\|_{C_x}^2<{\varepsilon}$. For each $k$, since $Y_k$ is a continuous function, $\rho_{k}(2^{-l})$ converges to $0$ as $l\rightarrow\infty$ ${\tilde}{P}$-a.s.. Moreover $Y_k$ is also essentially bounded, therefore, by the dominated convergence theorem, ${\mathrm{E}}\rho_{k}(2^{-l})^2$ also converges to $0$. Hence, for $K$ fixed as before, we can choose $l$ such that $$\begin{aligned}
\sum_{k\le K}{\mathrm{E}}\rho_{k}(2^{-l})^2<{\varepsilon}.\end{aligned}$$ Again for each $k$, due to the convergence of ${\tilde}{\xi}^j$ in $C_t({\mathcal}{M}_{x,M},w*)$, $\sup_r |Y_{j,k}(r)-Y_{k}(r)|^2$ converges to $0$ as $j\rightarrow \infty$ ${\tilde}{P}$-a.s.. Moreover the $Y_{j,k}$ are bounded uniformly in $j$, therefore, by the dominated convergence theorem, ${\mathrm{E}}\sup_r |Y_{j,k}(r)-Y_{k}(r)|^2$ also converges to $0$. Hence, for $K$ fixed as before, we can choose $\bar{j}$ such that, for every $j\ge \bar{j}$, $$\begin{aligned}
\sum_{k\le K}{\mathrm{E}}\sup_r |Y_{j,k}(r)-Y_{k}(r)|^2<{\varepsilon}.\end{aligned}$$ Finally, for $K$, $l$ fixed as before, the term $$\begin{aligned}
\sum_{k\le K} \sum_i [Y_{j,k}(t^l_i) ({\tilde}{W}^{(j),k}_{t^l_{i+1}}-{\tilde}{W}^{(j),k}_{t^l_i}) - Y_{k}(t^l_i) ({\tilde}{W}^{k}_{t^l_{i+1}}-{\tilde}{W}^{k}_{t^l_i})]\end{aligned}$$ converges to $0$ as $j\rightarrow\infty$ ${\tilde}{P}$-a.s.; therefore, by the dominated convergence theorem, its second moment also converges to $0$. Hence, for $K$, $l$ fixed as before, we can choose a new $\bar{j}$ such that, for every $j\ge \bar{j}$, $$\begin{aligned}
{\mathrm{E}}\left| \sum_{k\le K} \sum_i [Y_{j,k}(t^l_i) ({\tilde}{W}^{(j),k}_{t^l_{i+1}}-{\tilde}{W}^{(j),k}_{t^l_i}) - Y_{k}(t^l_i) ({\tilde}{W}^{k}_{t^l_{i+1}}-{\tilde}{W}^{k}_{t^l_i})] \right|^2 <{\varepsilon}.\end{aligned}$$ This proves that the stochastic term in converges in the $L^2_\omega$ norm, and hence ${\tilde}{P}$-a.s. along a subsequence.
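The terms $T12$ and $T3$ are controlled by the modulus of continuity of the integrand at the dyadic scale $2^{-l}$, and this mechanism is already visible at a deterministic level: the $L^2_t$ distance between a continuous function and its dyadic piecewise-constant sampling is at most $\rho(2^{-l})$. A minimal sketch (the integrand $\sin(2\pi t)$ is a made-up stand-in for $t\mapsto{\langle}{\tilde}{\xi}_t,\sigma_k\cdot\nabla\varphi{\rangle}$):

```python
import math

# Y stands in for the continuous integrand; any continuous function works.
Y = lambda t: math.sin(2 * math.pi * t)

def l2_error_dyadic(l, steps_per_cell=64):
    # || Y - Y^l ||_{L^2_t}^2 where Y^l(t) = Y(t_i^l) on [t_i^l, t_{i+1}^l),
    # t_i^l = 2^{-l} i, computed by midpoint quadrature on each dyadic cell
    total, n = 0.0, 2 ** l
    for i in range(n):
        t_i = i / n
        for s in range(steps_per_cell):
            t = t_i + (s + 0.5) / (steps_per_cell * n)
            total += (Y(t) - Y(t_i)) ** 2 / (steps_per_cell * n)
    return total

errs = [l2_error_dyadic(l) for l in (2, 4, 6, 8)]
# errs shrinks roughly by a factor 16 every two dyadic levels, consistent
# with ||Y - Y^l||_{L^2_t}^2 <= rho(2^{-l})^2 and rho(a) ~ Lip(Y) * a
```

In the proof the same bound is applied $\omega$-wise to $Y_k$, and the smallness in expectation then follows from the dominated convergence theorem.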
We have proved that all the terms in pass to the ${\tilde}{P}$-a.s. limit, up to subsequences, and therefore ${\tilde}{\xi}$ is a solution to , so to with ${\tilde}{W}$ as Brownian motion. The proof of Lemma \[lem:limit\_sol\] is complete.
On the nonlinear term in Euler equations
========================================
This Lemma is essentially due to Schochet [@Sch1995 Lemma 3.2 and discussion thereafter], we use here the interpretation of Poupaud [@Pou2002 Section 2].
- Since $K$ is smooth outside the diagonal $\{x=y\}$, $F_\varphi$ is smooth outside the diagonal. Recall that, by Lemma \[lem:Green\_function\], we have $|K(x-y)|\le C|x-y|^{-1}$. Therefore, using that $\nabla \varphi$ is Lipschitz, we get $$\begin{aligned}
|F_\varphi(x,y)|\le \frac12 |K(x-y)|\|D^2\varphi\|_{C_x} |x-y|\le C\|\varphi\|_{C^2_x},\end{aligned}$$ which gives the bound on $F_\varphi$.
- For every $\xi$ in ${\mathcal}{M}_x$, for every $\varphi$ in $C^2_x$, $F_\varphi$ is bounded and so ${\langle}N(\xi),\varphi{\rangle}$ is well-defined and $$\begin{aligned}
&|{\langle}N(\xi),\varphi{\rangle}| = \left|\int\int F_\varphi(x,y) \xi({\mathrm{d}}x)\xi({\mathrm{d}}y)\right|\\
&\le C\|\varphi\|_{C^2_x} \|\xi^{\otimes 2}\|_{{\mathcal}{M}_{x,y}} \le C\|\varphi\|_{H^4_x} \|\xi\|_{{\mathcal}{M}_x}^2,\end{aligned}$$ where we used the Sobolev embedding in the last inequality. In particular $N(\xi)$ is a well-defined linear bounded functional on $H^4$.
- The map ${\mathcal}{M}_x\ni\xi\mapsto \xi\otimes \xi\in {\mathcal}{M}_{x,y}$ is Borel (with respect to the Borel $\sigma$-algebras generated by the weak-\* topologies on ${\mathcal}{M}_x$ and ${\mathcal}{M}_{x,y}$), by Lemma \[rmk:cont\_prod\_meas\]. Also, for every $\varphi$ in $C^2_x$, the map $$\begin{aligned}
{\mathcal}{M}_{x,y}\ni \mu\mapsto {\langle}\mu, F_\varphi {\rangle}\in {\mathbb{R}}\end{aligned}$$ is Borel by Lemma \[rmk:bounded\_test\_Borel\]. Therefore, for every $\varphi$ in $C^2_x$, $$\begin{aligned}
{\mathcal}{M}_x\ni\xi\mapsto {\langle}N(\xi),\varphi{\rangle}\in {\mathbb{R}}\end{aligned}$$ is Borel, that is $N$ is weakly-\* Borel as an $H^{-4}_x$-valued map. Since $H^{-4}_x$ is a separable reflexive space, $N$ is Borel by Lemma \[lem:equiv\_topol\].
- Recall that $K$ is odd by Lemma \[lem:Green\_function\], therefore $$\begin{aligned}
&\int \xi(x) u(x)\cdot \nabla \varphi(x) {\mathrm{d}}x = \int\int \xi(x)\xi(y) K(x-y) \cdot\nabla\varphi(x) {\mathrm{d}}x{\mathrm{d}}y\\
&= - \int\int \xi(x)\xi(y) K(y-x) \cdot\nabla\varphi(x) {\mathrm{d}}x{\mathrm{d}}y\\
&= - \int\int \xi(x)\xi(y) K(x-y) \cdot\nabla\varphi(y) {\mathrm{d}}x{\mathrm{d}}y,\end{aligned}$$ where in the last equality we swapped $x$ and $y$. Hence $$\begin{aligned}
&\int \xi(x) u(x)\cdot \nabla \varphi(x) {\mathrm{d}}x\\
&= \frac12 \int\int \xi(x)\xi(y) K(x-y) \cdot (\nabla\varphi(x) -\nabla\varphi(y)) {\mathrm{d}}x{\mathrm{d}}y.\end{aligned}$$
- Continuity of $N$ on ${\mathcal}{M}_{x,M,+,\text{no-atom}}$ follows from Lemma \[lem:continuity\_mass\].
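The symmetrization step above (rewriting $\int \xi\, u\cdot\nabla\varphi$ through $F_\varphi$) uses only the oddness of $K$, and it is exact at the discrete level as well, with the diagonal excluded. An illustrative sketch with the whole-plane Biot-Savart kernel $K(z)=z^\perp/(2\pi|z|^2)$, a stand-in for the torus kernel of Lemma \[lem:Green\_function\]; the points and weights are made up:

```python
import math

def K(z):
    # whole-plane Biot-Savart kernel K(z) = z^perp / (2 pi |z|^2); the
    # identity below only uses that K is odd: K(-z) = -K(z)
    x, y = z
    r2 = x * x + y * y
    return (-y / (2 * math.pi * r2), x / (2 * math.pi * r2))

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def diff(a, b):
    return (a[0] - b[0], a[1] - b[1])

grad_phi = lambda p: (math.cos(p[0]), math.sin(p[1]))  # gradient of sin(x) - cos(y)

# hypothetical discrete vorticity: distinct points with positive weights
pts = [((math.cos(k), math.sin(2 * k)), 0.1 + 0.05 * k) for k in range(6)]

# unsymmetrized and symmetrized double sums, diagonal excluded
lhs = sum(wi * wj * dot(K(diff(xi, xj)), grad_phi(xi))
          for xi, wi in pts for xj, wj in pts if xi != xj)
rhs = 0.5 * sum(wi * wj * dot(K(diff(xi, xj)), diff(grad_phi(xi), grad_phi(xj)))
                for xi, wi in pts for xj, wj in pts if xi != xj)
```

Swapping the summation indices and using $K(-z)=-K(z)$ turns the first sum into the second, exactly as in the displayed computation.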
Technical lemmas
================
\[lem:H\_Borel\] Assume Condition \[assumption\_sigma\] on $\sigma_k$. Then the maps $$\begin{aligned}
\begin{aligned}\label{eq:maps_M_H}
&{\mathcal}{M}_x \ni \mu \mapsto \mu \in H^{-4}_x,\\
&{\mathcal}{M}_x \ni \mu \mapsto \sigma_k\cdot\nabla \mu \in H^{-4}_x,\quad k\in\mathbb{N},\\
&{\mathcal}{M}_x \ni \mu \mapsto \Delta \mu \in H^{-4}_x
\end{aligned}\end{aligned}$$ are linear norm-to-norm and weak-\*-to-weak continuous, in particular Borel (where ${\mathcal}{M}_x$ is endowed with the weak-\* topology). Moreover we have the bounds $$\begin{aligned}
\|\mu\|_{H^{-4}_x} +\|\Delta \mu\|_{H^{-4}_x} \le C\|\mu\|_{{\mathcal}{M}_x},\\
\|\sigma_k\cdot\nabla \mu\|_{H^{-4}_x}\le C\|\sigma_k\|_{C_x} \|\mu\|_{{\mathcal}{M}_x}.\end{aligned}$$
The maps in , tested against a test function $\varphi$, read formally $$\begin{aligned}
&\mu \mapsto {\langle}\mu,\varphi {\rangle},\\
&\mu \mapsto -{\langle}\mu,\sigma_k\cdot\nabla \varphi{\rangle},\\
&\mu \mapsto {\langle}\mu, \Delta \varphi{\rangle}.\end{aligned}$$ Now, by Sobolev embedding, for any $\varphi$ in $H^4_x$, the functions $\varphi$, $\sigma_k\cdot\nabla \varphi$, $\Delta\varphi$ are continuous with $$\begin{aligned}
&\|\varphi\|_{C_x} \le C\|\varphi\|_{H^4_x},\\
&\|\sigma_k\cdot\nabla \varphi\|_{C_x} \le \|\sigma_k\|_{C_x} \|\nabla\varphi\|_{C_x} \le C\|\sigma_k\|_{C_x} \|\varphi\|_{H^4_x},\\
&\|\Delta\varphi\|_{C_x} \le C\|\varphi\|_{H^4_x}.\end{aligned}$$ Hence the maps in are weak-\*-to-weak continuous. Taking the supremum over $\varphi$ in the unit ball of $H^4_x$, we get also the norm-to-norm continuity and the desired bounds.
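These mapping properties are transparent on the Fourier side: for $\mu=\delta_0$ on ${\mathbb{T}}^2$ all Fourier coefficients equal $1$, so $\|\delta_0\|_{H^{-s}}^2=\sum_{k\in\mathbb{Z}^2}(1+|k|^2)^{-s}$ up to normalization; this converges for $s=4$ (consistently with the bounds above) and diverges logarithmically for $s=1$ (which is why Lemma \[lem:no\_atoms\] has content). A quick numerical check (illustrative; normalization constants are ignored):

```python
def partial_sum(s, N):
    # sum over k in Z^2 with |k_1|, |k_2| <= N of (1 + |k|^2)^{-s}: a
    # truncation of the squared H^{-s} norm of the Dirac delta on T^2
    return sum((1.0 + kx * kx + ky * ky) ** (-s)
               for kx in range(-N, N + 1) for ky in range(-N, N + 1))

s4 = [partial_sum(4, N) for N in (20, 40, 80)]  # stabilizes: delta lies in H^{-4}
s1 = [partial_sum(1, N) for N in (20, 40, 80)]  # grows like log N: delta is not in H^{-1}
```

The tail for $s=4$ is $O(N^{-6})$, while for $s=1$ each doubling of $N$ adds roughly $2\pi\log 2$ to the sum.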
\[lem:stochEulervort\_H\] Assume Condition \[assumption\_sigma\] on $\sigma_k$, fix $M>0$. For any object $(\Omega,{\mathcal}{A},({\mathcal}{F}_t)_t,P,(W^k)_k,\xi)$, where $(\Omega,{\mathcal}{A},({\mathcal}{F}_t)_t,P,(W^k)_k)$ is a cylindrical Brownian motion with the usual assumptions and $\xi:[0,T]\times\Omega\rightarrow{\mathcal}{M}_{x,M}$ is ${\mathcal}{P}$ Borel measurable, then holds if and only if holds for every $\varphi$ in $C^\infty_x$.
Assume that holds, fix $\varphi$ in $C^\infty_x$. Then follows by applying the linear continuous functional ${\langle}\cdot,\varphi{\rangle}$ to and exchanging the functional with the integrals.
Conversely, assume that holds for every $\varphi$ in $C^\infty_x$. Lemma \[lem:H\_Borel\] implies that $\xi$ and all the integrands in are progressively measurable as $H^{-4}_x$ processes and that the deterministic and stochastic integrals are well-defined (see ). Now we have for every test function $\varphi$ in $C^\infty_x$, by exchanging ${\langle}\cdot,\varphi{\rangle}$ and the integrals, $$\begin{aligned}
{\langle}\xi_t,\varphi {\rangle}&= {\langle}\xi_0,\varphi{\rangle}+{\langle}\int^t_0 N(\xi_r) {\mathrm{d}}r, \varphi {\rangle}\\
&\ \ \ -{\langle}\sum_k \int^t_0 \sigma_k\cdot\nabla \xi_r \,{\mathrm{d}}W^k_r , \varphi {\rangle}\\
&\ \ \ +\frac12 {\langle}\int^t_0 c\Delta \xi_r {\mathrm{d}}r, \varphi {\rangle}, \quad\text{for every }t,\quad P-\text{a.s.},\end{aligned}$$ that is tested against $\varphi$, where the $P$-exceptional set can depend on $\varphi$. Taking $\varphi$ in a countable set of $C^\infty_x$, dense in $H^4_x$, we deduce in $H^{-4}_x$. The proof is complete.
\[rmk:Linfty\_sol\] Let $\xi^{\varepsilon}_t(\omega)=(\Phi(t,\cdot,\omega))_\# \xi^{\varepsilon}_0$ be defined as at the beginning of Section \[sec:apriori\_bd\]. For a.e. $\omega$, for every $t$, $\xi^{\varepsilon}_t$ lies in ${\mathcal}{M}_{x,M^{\varepsilon}}$, where $M^{\varepsilon}\ge \|\xi_0^{\varepsilon}\|_{{\mathcal}{M}_x}$. Moreover, ${\langle}\xi^{\varepsilon}_t,\varphi{\rangle}$ is progressively measurable and continuous (because it satisfies ) for every $\varphi$ in $C^\infty_x$, hence, by density of $C^\infty_x$ in $C_x$, for every $\varphi$ in $C_x$. Hence, by Lemma \[rmk:Borel\_weakstar\] and Remark \[rmk:restriction\], up to redefining $\xi^{\varepsilon}=0$ on the $P$-exceptional set where $\xi^{\varepsilon}_t$ is not in ${\mathcal}{M}_{x,M^{\varepsilon}}$ for some $t$, $\xi^{\varepsilon}$ is ${\mathcal}{P}$ Borel as ${\mathcal}{M}_x$-valued map and satisfies for every $\varphi$ in $C^\infty_x$. By Lemma \[lem:stochEulervort\_H\], $\xi^{\varepsilon}$ is a ${\mathcal}{M}_{x,M^{\varepsilon}}$-valued solution in the sense of Definition \[def:sol\].
**First part**: identities with $\text{curl}$. We start proving that, for $v:{\mathbb{T}}^2\rightarrow{\mathbb{R}}^2$ regular and divergence-free and $w:{\mathbb{T}}^2\rightarrow{\mathbb{R}}^2$ regular, $$\begin{aligned}
\text{curl}[v\cdot\nabla w +(Dv)^T w] = v\cdot\nabla \text{curl}[w].\label{eq:vector_eq}\end{aligned}$$ Indeed, we have (in the following, we omit the sum symbols over $i$ and $j$) $$\begin{aligned}
&\text{curl}[v\cdot\nabla w +(Dv)^T w] - v\cdot\nabla \text{curl}[w]\\
&= \partial_{x^1} v^i\partial_{x^i}w^2 -\partial_{x^2} v^i\partial_{x^i}w^1 +\partial_{x^2}v^j\partial_{x^1}w^j -\partial_{x^1}v^j\partial_{x^2}w^j\\
&= \partial_{x^1} v^1\partial_{x^1}w^2 -\partial_{x^2} v^2\partial_{x^2}w^1 +\partial_{x^2}v^2\partial_{x^1}w^2 -\partial_{x^1}v^1\partial_{x^2}w^1\\
&= \text{div}[v]\text{curl}[w] =0.\end{aligned}$$ By Lemma \[lem:Green\_function\], implies that $$\begin{aligned}
\Pi[v\cdot\nabla w +(D v)^T w] = K*[v\cdot\nabla \text{curl}[w]],\end{aligned}$$ where $\Pi$ is the Leray projector in $L^2_x$ on the divergence-free zero-mean functions. For $v:{\mathbb{T}}^2\rightarrow{\mathbb{R}}^2$ regular and divergence-free and $\xi:{\mathbb{T}}^2\rightarrow{\mathbb{R}}$ regular, we can apply the above formula to $w=K*\xi$: we get $$\begin{aligned}
\Pi[v\cdot\nabla K*\xi +(Dv)^T K*\xi] = K*[v\cdot\nabla \xi];\label{eq:vel_vort}\end{aligned}$$ we used that, by Lemma \[lem:Green\_function\], $\text{curl}[w] = \xi -\gamma$, where $\gamma$ is a real number (precisely, the space average of $\xi$), and that the contribution of $\gamma$ to the right-hand side is zero. The equality , intended as an equality in $H^{-1}_x$, holds also for general $\xi$ in $L^p_x$ with $2<p<\infty$ and $v$ divergence-free and in $W^{1,p}_x$ (in particular continuous by the Sobolev embedding): indeed, we can approximate $\xi$ and $v$ in $L^p_x$ and $W^{1,p}_x$ respectively with regular $\xi^n$ and regular divergence-free $v^n$, and we use that, by Lemma \[lem:Green\_function\], $K*\xi^n$ converges to $K*\xi$ in $W^{1,p}_x$ and so in $L^\infty_x$ by the Sobolev embedding; hence we can pass to the limit in tested against any test function in $C^\infty_x$.
**Second part**: conclusion. Since $\xi$ is in $L^p_{t,\omega}(L^p_x)$, $u$ is in $L^p_{t,\omega}(W^{1,p}_x)$ by Lemma \[lem:Green\_function\]. We note also that, for $\xi$ in $L^p_{t,\omega}(L^p_x)$, the nonlinear term can be written as $u\cdot\nabla\xi$, by Lemma \[rmk:Poupaud\_trick\], and the equation holds actually in $H^{-2}_x$: indeed, as one can prove testing against $H^3_x$ functions and using the density of $H^3$ in $H^2$, $\xi$ and all the integrands of take values in $H^{-2}_x$ and are progressively measurable as $H^{-2}_x$-valued processes (and their deterministic and stochastic $H^{-2}_x$-valued integrals coincide with the $H^{-3}_x$-valued integrals). Now we apply to the operator $$\begin{aligned}
K*\cdot: H^{-2}_x \rightarrow H^{-1}_x,\end{aligned}$$ which is linear and bounded by Lemma \[lem:Green\_function\]. By the first part of this proof, we get, for a.e. $\omega$, as equality in $H^{-1}_x$: for every $t$, $$\begin{aligned}
u_t &= u_0 - \int^t_0 \Pi[u_r\cdot\nabla u_r +(Du_r)^T u_r] {\mathrm{d}}r\\
&- \sum_k \int^t_0 \Pi[ \sigma_k\cdot\nabla u_r +(D\sigma_k)^T u_r ] {\mathrm{d}}W^k_r\\
&+\frac12 \int^t_0 c\Delta u_r {\mathrm{d}}r.\end{aligned}$$ Now we note that $(Du_r)^T u_r = \nabla [|u_r|^2]/2$ and so its Leray projection is zero. Hence we get .
For any $\delta>0$, we call $$\begin{aligned}
R^\delta = \rho_\delta *\cdot :H^{-1}_x\rightarrow L^2_x\end{aligned}$$ the linear bounded operator given by the convolution with $\rho_\delta$, where $(\rho_\delta)_\delta$ is a standard family of mollifiers on ${\mathbb{T}}^2$ (precisely, $\rho_1$ is a non-negative, even $C^\infty_c$ function on ${\mathbb{R}}^2$, $\rho_\delta=\delta^{-2}\rho_1(\delta^{-1}\cdot)$ and is defined on the torus by periodicity); $R^\delta$ is extended componentwise to vector fields. Note that $R^\delta f \rightarrow f$ in $L^2_x$, resp. $H^{-1}_x$ for every $f$ in $L^2_x$, resp. $H^{-1}_x$. We apply $R^\delta$ to : we get $$\begin{aligned}
&R^\delta u_t = R^\delta u_0 -\int^t_0 R^\delta \Pi[u_r\cdot\nabla u_r] dr\\
&-\sum_k \int^t_0 R^\delta \Pi[ \sigma_k\cdot\nabla u_r +(D\sigma_k)^T u_r ] {\mathrm{d}}W^k_r\\
&+\frac12 \int^t_0 cR^\delta\Delta u_r {\mathrm{d}}r,\end{aligned}$$ as an equality, for every $t$, of $L^2_x$-valued processes. Now we can apply the Itô formula (from [@DaPZab2014 Theorem 4.32]) to the square of the $L^2_x$ norm, which is $C^2$ on $L^2_x$ with uniformly continuous derivatives on bounded subsets of $L^2_x$. We get, for every $t$, $$\begin{aligned}
&\|R^\delta u_t\|_{L^2_x}^2 = \|R^\delta u_0\|_{L^2_x}^2 -2\int^t_0 {\langle}R^\delta u_r, R^\delta \Pi[u_r\cdot\nabla u_r] {\rangle}dr\\
&-2\sum_k\int^t_0 {\langle}R^\delta u_r, R^\delta \Pi[ \sigma_k\cdot\nabla u_r +(D\sigma_k)^T u_r ] {\rangle}{\mathrm{d}}W^k_r\\
&+\int^t_0 {\langle}R^\delta u_r, cR^\delta\Delta u_r {\rangle}{\mathrm{d}}r + \int^t_0 \sum_k \|R^\delta \Pi[ \sigma_k\cdot\nabla u_r +(D\sigma_k)^T u_r ]\|_{L^2_x}^2 {\mathrm{d}}r.\end{aligned}$$ Since $R^\delta$ is linear and bounded also on $L^p_x$ and $W^{1,p}_x$, $R^\delta u$ is in $W^{1,p}_x$ and so it has finite $L^p_{t,\omega}(L^\infty_x)$ norm, therefore $$\begin{aligned}
&\sum_k {\mathrm{E}}\int^T_0 |{\langle}R^\delta u_r, R^\delta \Pi[ \sigma_k\cdot\nabla u_r +(D\sigma_k)^T u_r ] {\rangle}|^2 {\mathrm{d}}r\\
&\le C \|u\|_{L^2_{t,\omega}(L^\infty_x)}^2 \left(\sum_k \|\sigma_k\|_{C^1_x}^2\right) \|u\|_{L^2_{t,\omega}(W^{1,2}_x)}^2.\end{aligned}$$ Hence the stochastic integral is a martingale with zero mean. Similarly the integrands in the deterministic integrals have finite $L^1_{t,\omega}$ norm and we can take expectation: we get $$\begin{aligned}
&{\mathrm{E}}\|R^\delta u_t\|_{L^2_x}^2 = {\mathrm{E}}\|R^\delta u_0\|_{L^2_x}^2 -2{\mathrm{E}}\int^t_0 {\langle}R^\delta u_r, R^\delta \Pi[u_r\cdot\nabla u_r] {\rangle}dr\\
&-{\mathrm{E}}\int^t_0 c\|R^\delta \nabla u_r\|_{L^2_x}^2 {\mathrm{d}}r + {\mathrm{E}}\int^t_0 \sum_k \|R^\delta \Pi[ \sigma_k\cdot\nabla u_r +(D\sigma_k)^T u_r ]\|_{L^2_x}^2 {\mathrm{d}}r,\end{aligned}$$ where we have used integration by parts and that $R^\delta$ commutes with $\Delta$ and $\nabla$. Finally, we note that $R^\delta f \rightarrow f$ in $L^2_x$ for every $f$ in $L^2_x$. We exploit this fact for $f=u_r$, $f=\Pi[u_r\cdot\nabla u_r]$, $f=\sigma_k\cdot\nabla u_r +(D\sigma_k)^T u_r$ and $f=\nabla u_r$, and use the dominated convergence theorem in $r$, $\omega$ and $k$, to pass to the limit $\delta\to 0$ and obtain . The proof is complete.
\[lem:C\_Lebesgue\_meas\] Let $X$ be a closed convex subset of a topological vector space, endowed with its Borel $\sigma$-algebra; assume that $X$ is also a Polish space. Let $\zeta:[0,T]\times \Omega\rightarrow X$ be a ${\mathcal}{B}([0,T])\otimes {\mathcal}{A}$-measurable map.
- The set $C_t(X)$ is a Polish space and, if, for every $\omega$, $t\mapsto \zeta_t$ is in $C_t(X)$, then $\omega\mapsto \zeta(\cdot,\omega)$ is ${\mathcal}{A}$ Borel measurable as $C_t(X)$-valued map.
- If $X$ is a separable reflexive Banach space and, for every $\omega$, $t\mapsto \zeta_t$ is in $L^2_t(X)$ (more precisely, has finite $L^2_t(X)$ norm), then $\omega\mapsto \zeta(\cdot,\omega)$ (more precisely, its equivalence class) is ${\mathcal}{A}$ Borel measurable as $L^2_t(X)$-valued map.
For the first point, the fact that $C_t(X)$ is a Polish space is well known. Moreover the Borel $\sigma$-algebra ${\mathcal}{B}(C_t(X))$ on $C_t(X)$ is generated by the evaluation maps $\pi_t(\gamma)=\gamma_t$: indeed, ${\mathcal}{B}(C_t(X))$ is generated by the maps $$\begin{aligned}
\gamma \mapsto d(\gamma(t),g(t)),\quad t\in [0,T]\cap \mathbb{Q},\quad g\in C_t(X),\end{aligned}$$ with $d$ distance on $X$, and these maps are measurable in the $\sigma$-algebra generated by the evaluation maps (because they are composition of the evaluation maps and a Borel function on $X$). Now, for every $t$, the map $\pi_t(\zeta)=\zeta_t$ is ${\mathcal}{A}$ Borel, by the Fubini theorem, hence, if $\zeta$ is $C_t(X)$-valued, then it is ${\mathcal}{A}$ Borel measurable as $C_t(X)$-valued map. For the second point, we note that, by Lemma \[lem:equiv\_topol\], it is enough to show that $\zeta$ is weakly progressively measurable. Since the dual of $L^2_t(X)$ is $L^2_t(X^*)$ (see [@DieUhl1977 Chapter IV Section 1]), it is enough to show that, for every $\varphi$ in $L^2_t(X^*)$, $$\begin{aligned}
\omega\mapsto \int^T_0 {\langle}\zeta(t,\omega), \varphi(t){\rangle}_{X,X^*} {\mathrm{d}}t\end{aligned}$$ is measurable. But this follows from the Fubini theorem. The proof is complete.
Call ${\tilde}{{\mathcal}{F}}^{00}_t = \sigma\{{\tilde}{\xi}_s,{\tilde}{W}_s \mid 0\le s\le t\}$ the filtration generated by ${\tilde}{\xi}$ and ${\tilde}{W}$. Clearly ${\tilde}{W}$ and ${\tilde}{\xi}$ are adapted to ${\tilde}{{\mathcal}{F}}^{00}$. We claim that ${\tilde}{W}$ is a cylindrical Brownian motion with respect to ${\tilde}{{\mathcal}{F}}^{00}$. Indeed, ${\tilde}{W}$ is a cylindrical Brownian motion with respect to its natural filtration, as a.s. limit of cylindrical Brownian motions. Moreover, for every $0\le s_1\le \ldots \le s_h\le s < t$, ${\tilde}{W}^{(j)}_t-{\tilde}{W}^{(j)}_s$ is independent of $({\tilde}{\xi}^j_{s_1},{\tilde}{W}^{(j)}_{s_1},\ldots, {\tilde}{\xi}^j_{s_h},{\tilde}{W}^{(j)}_{s_h})$; passing to the a.s. limit in $j$, ${\tilde}{W}_t-{\tilde}{W}_s$ is independent of $({\tilde}{\xi}_{s_1},{\tilde}{W}_{s_1},\ldots, {\tilde}{\xi}_{s_h},{\tilde}{W}_{s_h})$. This proves our claim.
Recall that ${\tilde}{{\mathcal}{F}}_t^{0}$ is the filtration generated by ${\tilde}{{\mathcal}{F}}_t^{00}$ and the ${\tilde}{P}$-null sets on $({\tilde}{\Omega},{\tilde}{{\mathcal}{A}},{\tilde}{P})$ and that ${\tilde}{{\mathcal}{F}}_t = \cap_{s>t}{\tilde}{{\mathcal}{F}}^{0}_s$. We argue as in the proof of [@Bas2011 Proposition 2.5, Point 1] (note that the proof is valid for any filtration making $W^k$ Brownian motions) and we get that the filtration $({\tilde}{{\mathcal}{F}}_t)_t$ is complete and right-continuous and ${\tilde}{W}$ is still a cylindrical Brownian motion with respect to it. Finally ${\tilde}{\xi}$ is an $({\mathcal}{M}_{x,M},w*)$-valued $({\tilde}{{\mathcal}{F}}_t)_t$-adapted and continuous process, hence also progressively measurable.
In a similar (and easier) way, one gets the result for $({\tilde}{{\mathcal}{F}}^j_t)_t$, ${\tilde}{W}^{(j)}$ and ${\tilde}{\xi}^j$, for each $j$.
Let us fix $j$. We have to verify equation for $({\tilde}{\xi}^j,{\tilde}{W}^{(j)})$ for every $\varphi$ in $C^\infty_{t,x}$. The idea is taken from [@BrzGolJeg2013 Section 5]: it is enough to verify that, for every $\varphi$ in $C^\infty_x$, for every $t$, the random variables $$\begin{aligned}
Z_t&:= {\langle}\xi^{{\varepsilon}_j}_t,\varphi_t {\rangle}- {\langle}\xi^{{\varepsilon}_j}_0,\varphi{\rangle}-\int^t_0 {\langle}N(\xi^{{\varepsilon}_j}_r),\varphi {\rangle}{\mathrm{d}}r\nonumber\\
& -\sum_k \int^t_0 {\langle}\xi^{{\varepsilon}_j}_r,\sigma_k\cdot\nabla \varphi {\rangle}{\mathrm{d}}W^k_r -\frac12 \int^t_0 {\langle}\xi^{{\varepsilon}_j}_r, c\Delta \varphi {\rangle}{\mathrm{d}}r \label{eq:vorticity_td_approx}\end{aligned}$$ and ${\tilde}{Z}_t$, obtained as in replacing $(\xi^{{\varepsilon}_j},W)$ with $({\tilde}{\xi}^j,{\tilde}{W}^{(j)})$, have the same law. We fix $t$ and $\varphi$ in $C^\infty_x$. By Lemma \[rmk:Poupaud\_trick\] and Lemma \[lem:H\_Borel\], all the terms in but the nonlinear term and the stochastic integral are Borel functions of $\xi^{{\varepsilon}_j}$ with respect to the $C_t({\mathcal}{M}_{x,M},w*)$ topology. Concerning the stochastic integral we use an approximation argument. For every positive integers $K$ and $N$, calling $t^N_i =2^{-N}i$ for $i$ integer, the map $$\begin{aligned}
C_t({\mathcal}{M}_{x,M},w*)\times C_t^{\mathbb{N}}\ni (\xi,W)\mapsto \sum_{k=1}^K \sum_{i,t^N_{i+1}\le t} {\langle}\xi_{t^N_i}, \sigma_k\cdot\nabla \varphi_{t^N_i} {\rangle}(W_{t^N_{i+1}}-W_{t^N_i})\end{aligned}$$ is a continuous, in particular Borel, function. By the continuity of $t\mapsto {\langle}\xi_t, \sigma_k\cdot\nabla \varphi_t {\rangle}$ for every $k$, for a.e. $\omega$, and by the square-summability of $\|\sigma_k\|_{C_x}$, we get via the dominated convergence theorem that, as $(N,K)$ tends to $\infty$, $$\begin{aligned}
\sum_k {\mathrm{E}}\int^T_0 |{\langle}\xi^{{\varepsilon}_j}_t, \sigma_k\cdot\nabla \varphi_t {\rangle}- 1_{k\le K} \sum_i {\langle}\xi^{{\varepsilon}_j}_{t^N_i}, \sigma_k\cdot\nabla \varphi_{t^N_i} {\rangle}1_{[t^N_i,t^N_{i+1})}(t)|^2 {\mathrm{d}}t \rightarrow 0,\end{aligned}$$ so by the Itô isometry we obtain that, as $(N,K)\rightarrow\infty$, $$\begin{aligned}
\sum_{k=1}^K \sum_{i,t^N_{i+1}\le t} {\langle}\xi^{{\varepsilon}_j}_{t^N_i}, \sigma_k\cdot\nabla \varphi_{t^N_i} {\rangle}(W_{t^N_{i+1}}-W_{t^N_i}) \rightarrow \sum_k \int^t_0 {\langle}\xi^{{\varepsilon}_j}_r,\sigma_k\cdot\nabla \varphi_r {\rangle}{\mathrm{d}}W^k_r \quad \text{in }L^2_\omega.\end{aligned}$$ Similarly for $({\tilde}{\xi}^j,{\tilde}{W}^{(j)})$ (with convergence in $L^2_{\tilde{\omega}}$). We conclude that $$\begin{aligned}
&Z_t = F_t(\xi^{{\varepsilon}_j}) +L^2_\omega-\lim_{N,K} G_{N,K,t}(\xi^{{\varepsilon}_j},W),\\
&\tilde{Z}_t = F_t(\tilde{\xi}^j) +L^2_{\tilde{\omega}}-\lim_{N,K} G_{N,K,t}(\tilde{\xi}^j,\tilde{W}^{(j)})\end{aligned}$$ for some Borel maps $F_t$ and $G_{N,K,t}$. Since $(\xi^{{\varepsilon}_j},W)$ and $({\tilde}{\xi}^j,{\tilde}{W}^{(j)})$ have the same law, also $Z_t$ and ${\tilde}{Z}_t$ have the same law. Since, $P$-a.s., $Z_t=0$ for every $t$, also, $P$-a.s., ${\tilde}{Z}_t=0$ for every $t$, and so, by Lemma \[lem:stochEulervort\_H\], $({\tilde}{\Omega},{\tilde}{{\mathcal}{A}},({\tilde}{{\mathcal}{F}}^j_t)_t,{\tilde}{P},{\tilde}{W}^{(j)},{\tilde}{\xi}^j)$ solves .
Concerning Lemmas \[lem:Hm1\_bound\] and \[lem:Hm4\_bound\], for any integer $h$, as a consequence of Remark \[rmk:Borel\_norm\], the maps $$\begin{aligned}
&C_t({\mathcal}{M}_{x,M},w*)\ni \xi \mapsto (\|\xi_t\|_{H^h_x})_t \in C_t,\\
&C_t({\mathcal}{M}_{x,M},w*)\ni \xi \mapsto \|\xi\|_{C^\alpha_t(H^h_x)} \in {\mathbb{R}}\end{aligned}$$ are Borel. Hence $(\|\xi^{{\varepsilon}_j}_t\|_{H^h_x})_t$ and $(\|{\tilde}{\xi}^j_t\|_{H^h_x})_t$ have the same laws (as $C_t$-valued random variables) and so Lemma \[lem:Hm1\_bound\] holds for ${\tilde}{\xi}^j$. Similarly $\|\xi^{{\varepsilon}_j}\|_{C^\alpha_t(H^h_x)}$ and $\|{\tilde}{\xi}^j\|_{C^\alpha_t(H^h_x)}$ have the same laws and so Lemma \[lem:Hm4\_bound\] holds for ${\tilde}{\xi}^j$.
Finally, concerning non-negativity, we note that the set $\{\xi_t\ge 0,\, \forall t\}$ is Borel in $C_t({\mathcal}{M}_x,w*)$, because it can be written as the set where ${\langle}\xi_t,\varphi {\rangle}\ge 0$ for all rational $t$ and all non-negative $\varphi$ in a countable dense set in $C_x$. Since the law of $\xi^{{\varepsilon}_j}$ is concentrated on $\{\xi_t\ge 0,\, \forall t\}$, also the law of ${\tilde}{\xi}^j$ is concentrated on this set. The proof is complete.
The torus and the Green function
================================
We consider the torus ${\mathbb{T}}^2$ as the two-dimensional manifold obtained from $[-1,1]^2$ by identifying the opposite sides; we call $\pi: \mathbb{R}^2 \to \mathbb{T}^2$ the quotient map. A continuous ($C_x$) function is understood here as a continuous periodic function on ${\mathbb{R}}^2$, with period $2$ in both the $x_1$ and $x_2$ directions, and can be identified with a continuous function on the torus ${\mathbb{T}}^2$. For $s$ positive integer, a $C^s_x$ function on ${\mathbb{T}}^2$ is a $C^s$ periodic function on ${\mathbb{R}}^2$ (with period $2$). Similarly, for $s$ positive integer and $1\le p \le\infty$, a $W^{s,p}_x$ function on ${\mathbb{T}}^2$ is a $W^{s,p}_{loc}$ periodic function on ${\mathbb{R}}^2$ (with period $2$). One can also define a Riemannian structure on the torus via the quotient map $\pi$ so that $\pi$ is a local isometry; the local isometry implies that the gradient, the covariant derivatives, etc. transform naturally; moreover, the $C^s$ and $W^{s,p}$ spaces defined via the Riemannian structure coincide with the corresponding spaces of periodic functions as defined above.
The space of distributions $\mathcal{D}^\prime_x$ on ${\mathbb{T}}^2$ is understood as the dual space of $C^\infty$ periodic functions on ${\mathbb{R}}^2$; the spaces of functions can be identified with subspaces of distributions via the $L^2$ scalar product ${\langle}f,g{\rangle}=\int_{[-1,1[^2} f(x) g(x) {\mathrm{d}}x$. The space of measures ${\mathcal}{M}_x$ is the space of distributions on ${\mathbb{T}}^2$ which are continuous (precisely, can be extended continuously) on $C_x$; the space ${\mathcal}{M}_x$ can be identified with the space of finite signed Radon measures on ${\mathbb{T}}^2$ and with the quotient space of finite signed Radon measures on $[-1,1]^2$ under the map $\pi$, via the $L^2_x$ scalar product: $$\begin{aligned}
{\langle}f, \mu{\rangle}= \int_{[-1,1[^2} f(x) \mu({\mathrm{d}}x),\quad \forall f\in C_x.\end{aligned}$$ For $s$ positive integer and $1<p<\infty$, calling $p'$ the conjugate exponent of $p$, the space $W^{-s,p'}$ is the space of distributions on ${\mathbb{T}}^2$ which can be continuously extended to $W^{s,p}$.
The convolution on the torus is understood as $$\begin{aligned}
f*g(x) = \int_{[-1,1[^2}f(y)g(x-y) {\mathrm{d}}y\end{aligned}$$ for $f$, $g$ periodic functions on ${\mathbb{R}}^2$.
We recall here some standard facts on the Green function $G$ of the Laplacian on the zero-mean functions, that is $$\begin{aligned}
\Delta G(\cdot, y) = \delta_y,\quad \forall y\in \mathbb{T}^2.\end{aligned}$$
\[lem:Green\_function\] The following facts hold:
1. The Green function $G$ is translation invariant (that is $G(x,y)=G(x-y)$), even, regular outside $0$, with $-C^{-1}\log|x|\le G(x)\le -C\log|x|$ in a neighborhood of $0$.
2. The kernel $K=\nabla^\perp G$ (with $\nabla^\perp=(-\partial_{x_2},\partial_{x_1})$) is divergence-free (in the distributional sense), odd, regular outside $0$, with $C^{-1}|x|^{-1}\le |K(x)|\le C|x|^{-1}$ in a neighborhood of $0$.
3. Let $\xi$ be a distribution on ${\mathbb{T}}^2$ with zero mean, define $u=K*\xi$. Then ${\mathrm{div}}u =0$ and $\xi = \text{curl} u$.
4. Let $u$ be a vector-valued distribution on ${\mathbb{T}}^2$ with zero mean and with ${\mathrm{div}}u =0$, define $\xi= \text{curl} u$. Then $u=K*\xi$.
5. Let $u$ be a vector-valued distribution on ${\mathbb{T}}^2$, define $\xi= \text{curl} u$. Then $\Pi u = K*\xi$, where $\Pi$ is the Leray projector on zero-mean divergence-free distributions.
6. Let $\xi$ be a distribution on ${\mathbb{T}}^2$ with zero mean, define $u=K*\xi$. For any $1<p<\infty$, for any integer $s$, $\xi$ is in $W^{s,p}_x$ if and only if $u$ is in $W^{s+1,p}_x$ and it holds $$\begin{aligned}
C^{-1}\|\xi\|_{W^{s,p}_x}\le \|u\|_{W^{s+1,p}_x}\le C\|\xi\|_{W^{s,p}_x}.\end{aligned}$$
For the proof, we recall the following facts:
- Any distribution $f$ on ${\mathbb{T}}^2$ can be written in Fourier series as $f= \sum_k a_k e^{ik\cdot x}$ (the convergence being when tested against a smooth periodic function), see [@Tri1983 Section 9] and [@Tri1978 Section 4.11.1].
- For any integer $s$ and any $1<p<\infty$, the Sobolev space $W^{s,p}$ can also be written in terms of Fourier series, that is $$\begin{aligned}
W^{s,p}_x = \{f \in \mathcal{D}^\prime_x \mid \tilde{f}^s := \sum_k a_k (1+|k|^2)^{s/2} e^{ik\cdot x} \in L^p_x \},\label{eq:Sob_Fourier}\end{aligned}$$ with $\|\tilde{f}^s\|_{L^p_x}$ as an equivalent norm, see [@Tri1978 Section 4.11.1]. This fact is well-known for $s\ge 0$. We give a sketch of the proof for $s<0$ for completeness. We have to show that the above right-hand side is the dual space of $W^{-s,p'}_x$: indeed, for every distribution $f$ continuous on $W^{-s,p'}_x$, it holds $$\begin{aligned}
|{\langle}\tilde{\varphi}^{-s}, \tilde{f}^s {\rangle}| = |{\langle}\varphi , f {\rangle}| \le C\|\varphi\|_{W^{-s,p'}_x} \le C'\|\tilde{\varphi}^{-s}\|_{L^{p'}_x}, \quad \forall \varphi \in W^{-s,p'}_x,\end{aligned}$$ hence $\tilde{f}^s$ belongs to $L^p_x$, so $f$ belongs to the right-hand side of .
- Regularity theory: The Laplacian operator $\Delta$, intended in the sense of distributions, acts multiplying each Fourier coefficient $a_k$ by $-|k|^2$. In particular, it is invertible on the subspace of zero-mean distributions and its inverse acts multiplying each Fourier coefficient $a_k$ by $-|k|^{-2}1_{k\neq 0}$. It follows that the inverse $\Delta^{-1}$ of the Laplacian (on zero-mean distributions) maps $W^{s,p}$ into $W^{s+2,p}$, for any integer $s$ and any $1<p<\infty$.
- Hodge decomposition: if $f$ is an ${\mathbb{R}}^2$-valued distribution with $\text{div} f=0$ and $\text{curl} f=0$, then $f$ is a constant: indeed, if $a_k$ are the Fourier coefficients of $f$, we have $a_k \cdot k=0$ and $a_k\cdot k^\perp =0$ for every $k$, therefore $a_k=0$ for every $k\neq 0$.
1. The fact that $G$ is translation-invariant is due to the translation invariance of the torus: if $\varphi$ is periodic and zero-mean and solves $\Delta \varphi =\delta_0$ in the distributional sense, then $\varphi_y(x):=\varphi(x-y)$ is still periodic and zero-mean and solves $\Delta \varphi_y= \delta_y$. For the evenness, the regularity and the bounds, see e.g. [@BrzFlaMau2016 Proposition B.1] and references therein.
2. The fact that $K$ is divergence-free, odd and regular outside $0$ is a consequence of the definition of $K$ and the properties of $G$. For the bounds, see again [@BrzFlaMau2016 Proposition B.1].
3. The fact that $u$ is divergence-free follows from the same property of $K$. Call $\psi=(-\Delta)^{-1}\xi = -G*\xi$. Then $u=-\nabla^\perp \psi$ and so $$\begin{aligned}
\text{curl} u = \partial_{x_1}u^2 - \partial_{x_2}u^1 = -\Delta \psi = \xi,\end{aligned}$$ where all the computations are intended using test functions.
4. Call $\tilde{u}=K*\xi$. We deduce from the previous points that $\text{curl} (u-\tilde{u}) = 0$ and that ${\mathrm{div}}(u-\tilde{u}) = 0$. From this we conclude that $u-\tilde{u}$ is a constant, and therefore $u-\tilde{u}=0$, as both functions have zero mean.
5. This follows from the previous point, applied to $\Pi u$ in place of $u$.
6. If $u$ is in $W^{s+1,p}$ then $\xi=\text{curl} u$ is in $W^{s,p}$. Conversely, if $\xi$ is in $W^{s,p}$, then $\psi = (-\Delta)^{-1} \xi$ is in $W^{s+2,p}$ and so $u = -\nabla^{\perp}\psi$ is in $W^{s+1,p}$.
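As a numerical sanity check of points 3 and 6 (not part of the proof), the Biot-Savart map $\xi\mapsto u=K*\xi$ can be realized spectrally through $\psi=(-\Delta)^{-1}\xi$ and $u=-\nabla^\perp\psi$, with $\nabla^\perp=(-\partial_{x_2},\partial_{x_1})$. The helper name and the numpy-based discretization below are our own choices; on the period-$2$ torus the angular wavenumbers are $\pi k$, $k\in\mathbb{Z}^2$.

```python
import numpy as np

def biot_savart_torus(xi):
    """Velocity u = K * xi on the torus [-1,1)^2 (period 2), computed
    spectrally: psi = (-Delta)^{-1} xi (zero-mean convention), then
    u = -grad^perp psi with grad^perp = (-d_{x_2}, d_{x_1})."""
    N = xi.shape[0]
    k = np.pi * np.fft.fftfreq(N, d=1.0 / N)      # angular wavenumbers pi*j
    k1, k2 = np.meshgrid(k, k, indexing="ij")
    lap = k1**2 + k2**2
    lap[0, 0] = 1.0                               # placeholder; this mode is killed below
    xi_hat = np.fft.fft2(xi)
    xi_hat[0, 0] = 0.0                            # enforce zero mean
    psi_hat = xi_hat / lap                        # psi = (-Delta)^{-1} xi
    u1 = np.real(np.fft.ifft2(1j * k2 * psi_hat))   # u1 =  d_{x_2} psi
    u2 = np.real(np.fft.ifft2(-1j * k1 * psi_hat))  # u2 = -d_{x_1} psi
    return u1, u2

# Demo: for xi = sin(pi x1) cos(pi x2) one has psi = xi / (2 pi^2), hence
# u = (-sin(pi x1) sin(pi x2), -cos(pi x1) cos(pi x2)) / (2 pi).
N = 64
x = -1.0 + 2.0 * np.arange(N) / N
X1, X2 = np.meshgrid(x, x, indexing="ij")
xi = np.sin(np.pi * X1) * np.cos(np.pi * X2)
u1, u2 = biot_savart_torus(xi)
print(np.allclose(u1, -np.sin(np.pi * X1) * np.sin(np.pi * X2) / (2 * np.pi)))  # True
print(np.allclose(u2, -np.cos(np.pi * X1) * np.cos(np.pi * X2) / (2 * np.pi)))  # True
```

One recovers $\text{curl}\, u = -\Delta\psi = \xi$ and ${\mathrm{div}}\, u = \partial_{x_1}\partial_{x_2}\psi - \partial_{x_2}\partial_{x_1}\psi = 0$, matching point 3.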
Measurability
=============
We include here various standard concepts and results about measurability.
We recall the definition of strong, weak, weak-\* and Borel measurability for a Banach-space valued map. We are given a $\sigma$-finite measure space $(E,\mathcal{E},\mu)$, a Banach space $V$ and a function $f:E\rightarrow V$:
- we say that $f$ is strongly measurable if it is the pointwise (everywhere) limit of a sequence of $V$-valued simple measurable functions (i.e. of the form $\sum_{i=1}^{N}v_{i}1_{A_{i}}$ for $A_{i}$ in $\mathcal{E}$ and $v_{i}$ in $V$);
- we say that $f$ is weakly measurable if, for every $\varphi$ in $V^{*}$, $x\mapsto\langle f(x),\varphi\rangle_{V,V^{*}}$ is measurable;
- if $V=U^{*}$ is the dual space of a Banach space $U$, we say that $f$ is weakly-[\*]{} measurable if, for every $\varphi$ in $U$, $x\mapsto\langle f(x),\varphi\rangle_{V,U}$ is measurable;
- we say that $f$ is resp. strongly Borel, weakly Borel, weakly-\* Borel measurable if, for every open set $A$ in $V$ resp. in the strong, weak, weak-\* topology, $f^{-1}(A)$ is in $\mathcal{E}$. We omit strongly/weakly/weakly-\* when clear.
The following result is essentially the Pettis measurability theorem. The present version is a consequence of [@VTC87 Chapter I, Propositions 1.9 and 1.10].
\[lem:equiv\_topol\] Assume that $V$ is a separable Banach space. Then the notions of strong measurability, weak measurability, strongly Borel measurability and weakly Borel measurability coincide. They also coincide with weak-\* measurability and weakly-\* Borel measurability if in addition $V$ is reflexive.
We prove here a statement concerning weak-\* and weakly-\* Borel measurability, which applies in particular to ${\mathcal}{M}_x=(C_x)^*$. We call $\bar{B}_R$ the closed centered ball in $V$ of radius $R$ (in the strong topology).
\[rmk:Borel\_weakstar\] Assume that $V=U^{*}$ is the dual space of a separable Banach space $U$. Then the notions of weak-\* measurability and of weakly-\* Borel measurability coincide. Moreover, for any sequence $(\varphi_k)_k$ dense in the unit centered ball of $U$, the Borel $\sigma$-algebra associated to the weak-\* topology is generated by $$\begin{aligned}
{\langle}\cdot,\varphi_k{\rangle}.\end{aligned}$$
\[rmk:restriction\] We recall that, if $(E,{\mathcal}{E})$ is a measurable space, ${\mathcal}{I}$ generates the $\sigma$-algebra ${\mathcal}{E}$ and $F$ is a subset of $E$, then the $\sigma$-algebra ${\mathcal}{E}\mid_F=\{A\cap F\mid A\in {\mathcal}{E}\}$ on $F$ is the $\sigma$-algebra generated on $F$ by ${\mathcal}{I}\mid_F=\{I\cap F\mid I\in {\mathcal}{I}\}$. In particular, the Borel $\sigma$-algebra restricted to a subset $F$ is the Borel $\sigma$-algebra on $F$ (with the topology restricted on $F$) and the previous statement can be extended to subsets of $U$.
We fix the sequence $(\varphi_k)_k$. We call ${\mathcal}{B}$ the Borel $\sigma$-algebra associated to the weak-\* topology and ${\mathcal}{C}$ the $\sigma$-algebra generated by the maps ${\langle}\cdot,\varphi{\rangle}$ for $\varphi$ in $U$. Since the $\varphi_k$ are dense in the unit centered ball of $U$, ${\mathcal}{C}$ is generated by the maps ${\langle}\cdot,\varphi_k{\rangle}$. We will show that ${\mathcal}{B}={\mathcal}{C}$; this implies both statements in the lemma. Since the maps ${\langle}\cdot,\varphi{\rangle}$, for $\varphi$ in $U$, are continuous in the weak-\* topology, ${\mathcal}{C}\subseteq {\mathcal}{B}$. For the converse inclusion, it is enough to show that, for any $R>0$ and any open set $A$ in the weak-\* topology, the sets $\bar{B}_R$ and $A\cap \bar{B}_R$ are in ${\mathcal}{C}$. For any $R>0$, the ball $\bar{B}_R$ is in ${\mathcal}{C}$ because the strong norm on $V$ is ${\mathcal}{C}$-measurable: indeed it can be written as $$\begin{aligned}
\|v\| = \sup_{k} |{\langle}v,\varphi_k{\rangle}|.\end{aligned}$$ We recall that the weak-\* topology, restricted on $\bar{B}_R$ is separable and metrizable (see [@Bre2011 Theorem 3.28] for metrizability, separability follows by compactness), with the distance $$\begin{aligned}
d(v,v') = \sum_{k} 2^{-k}|{\langle}v-v',\varphi_k{\rangle}|.\end{aligned}$$ Now, for every $v$ in $\bar{B}_R$, $d(v,\cdot)$ is ${\mathcal}{C}$-measurable, hence any open ball with respect to $d$ is in ${\mathcal}{C}$. Moreover, for any open set $A$ in the weak-\* topology, $A\cap \bar{B}_R$ can be written as countable union of open balls with respect to $d$, hence $A\cap \bar{B}_R$ is in ${\mathcal}{C}$. The proof is complete.
Now we give a measurability property of testing against bounded, but not necessarily continuous, maps. Here, given a compact metric space $X$, ${\mathcal}{M}(X)$ is the set of finite Radon measures on $X$.
\[rmk:bounded\_test\_Borel\] Let $F:X\rightarrow {\mathbb{R}}$ be a bounded Borel function on a compact metric space $X$ (in particular $X={\mathbb{T}}^2$). Then the map $$\begin{aligned}
\Psi_F: {\mathcal}{M}(X) \ni \mu \mapsto \int_X F(x) \mu({\mathrm{d}}x) \in {\mathbb{R}}\end{aligned}$$ is Borel with respect to the weak-\* topology on ${\mathcal}{M}(X)$.
If $F$ is continuous, then also $\Psi_F$ is continuous in the weak-\* topology, in particular weakly-\* Borel. If $F=1_A$ is the indicator of an open set $A$ in $X$, then $1_A$ is the pointwise (everywhere) non-decreasing limit on $X$ of continuous functions $F_n$; so, by the dominated convergence theorem, $\Psi_{1_A}$ is the pointwise limit of $\Psi_{F_n}$ and so it is also weakly-\* Borel. For the case of general $F$, we use the monotone class theorem. We consider the set ${\mathcal}{W}$ of bounded Borel functions $F$ on $X$ such that $\Psi_F$ is weakly-\* Borel. Then ${\mathcal}{W}$ contains the indicators of all the open sets, it is a vector space and it is stable under monotone non-decreasing convergence: indeed, if $(F_n)_n$ is a non-decreasing sequence in ${\mathcal}{W}$ converging pointwise to a bounded function $F$, then, by the dominated convergence theorem, $\Psi_F$ is the pointwise limit of $\Psi_{F_n}$, in particular weakly-\* Borel, and so $F$ belongs also to ${\mathcal}{W}$. Then, by the monotone class theorem, ${\mathcal}{W}$ contains all bounded Borel functions $F$ on $X$, which gives the result.
We recall a classical fact for the product of measures. For a compact metric space $X$, we call ${\mathcal}{M}(X)$ the set of finite Radon measures on $X$, dual to the space $C(X)$ of continuous functions on $X$, and, for $M>0$, ${\mathcal}{M}_M(X)$ the closed centered ball in ${\mathcal}{M}(X)$ of radius $M$.
\[rmk:cont\_prod\_meas\] For any compact metric space $X$, the map $G:{\mathcal}{M}(X)\ni \mu\mapsto \mu\otimes \mu \in {\mathcal}{M}(X\times X)$ is Borel with respect to the weak-\* topologies. Moreover, for any $M>0$, the map $G$, restricted on ${\mathcal}{M}_M(X)$ with values in ${\mathcal}{M}_{M^2}(X\times X)$, is continuous with respect to the weak-\* topologies.
For $M>0$, we call $G_M:{\mathcal}{M}_M(X) \rightarrow{\mathcal}{M}_{M^2}(X\times X)$ the map $G$ restricted on ${\mathcal}{M}_M(X)$ with values in ${\mathcal}{M}_{M^2}(X\times X)$. We start showing the continuity of $G_M$. By metrizability of ${\mathcal}{M}_M(X)$ and ${\mathcal}{M}_{M^2}(X\times X)$, it is enough to show that, if $(\mu^n)_n$ is a sequence in ${\mathcal}{M}_M(X)$ converging weakly-\* to $\mu$, then $(\mu^n\otimes \mu^n)_n$ converges weakly-\* to $\mu\otimes \mu$. For every two continuous functions $\varphi$, $\psi$ on $X$, we have $$\begin{aligned}
{\langle}\varphi\otimes \psi, \mu^n\otimes \mu^n {\rangle}= {\langle}\varphi,\mu^n{\rangle}{\langle}\psi,\mu^n{\rangle}\rightarrow {\langle}\varphi\otimes \psi, \mu\otimes \mu {\rangle}.\end{aligned}$$ Now the set of all linear combinations of $\varphi\otimes \psi$, for all continuous functions $\varphi$, $\psi$, is a subalgebra of $C(X\times X)$ which separates points, hence, by the Stone-Weierstrass theorem, it is dense in $C(X\times X)$. Then, for any continuous function $\phi$ on $X\times X$, by a standard approximation argument on $\phi$ we get that $({\langle}\phi,\mu^n\otimes\mu^n {\rangle})_n$ converges to ${\langle}\phi,\mu\otimes\mu {\rangle}$. This shows the continuity of $G_M$.
For Borel measurability on the full space, take any open set $A$ in ${\mathcal}{M}(X\times X)$, then $G^{-1}(A)$ is the non-decreasing union of $G_M^{-1}(A\cap {\mathcal}{M}_{M^2}(X\times X))$ for $M$ in $\mathbb{N}$. By continuity of $G_M$, $G_M^{-1}(A\cap {\mathcal}{M}_{M^2}(X\times X))$ is open, hence Borel, in ${\mathcal}{M}_M(X)$. Moreover ${\mathcal}{M}_M(X)$ is itself a Borel set in ${\mathcal}{M}(X)$: indeed the closed centered ball $\bar{B}_R$ in a dual space $V=U^*$ is Borel, as shown in the proof of Lemma \[rmk:Borel\_weakstar\]. So $G_M^{-1}(A\cap {\mathcal}{M}_{M^2}(X\times X))$ is Borel in ${\mathcal}{M}(X)$. Therefore $A$ is Borel in ${\mathcal}{M}(X)$. The proof is complete.
We conclude on measurability of the $H^h$ norms:
\[rmk:Borel\_norm\] For any fixed integer $h$, the $H^h_x$ norm can be written as the supremum of $|{\langle}\cdot,\varphi{\rangle}|$ over a set $D$ of $\varphi$ in $C_x$, with $D$ countable and dense in the unit ball of $H^{-h}_x$. Therefore the $H^h_x$ norm is a lower semi-continuous, in particular Borel, function on $({\mathcal}{M}_x,w*)$.
The $C_t^\alpha(H^h_x)$ norm can be written as $$\begin{aligned}
\|f\|_{C^\alpha_t(H^h_x)}= \sup_{t\in \mathbb{Q}\cap[0,T]}\|f_t\|_{H^h_x} +\sup_{s,t\in \mathbb{Q}\cap[0,T],s<t} \frac{\|f_t-f_s\|_{H^h_x}}{|t-s|^\alpha}\end{aligned}$$ (note the supremum over a countable set of times). Therefore, for any fixed $M>0$, the $C^\alpha_t(H^h_x)$ norm is a lower semi-continuous function, in particular a Borel function, on $C_t({\mathcal}{M}_{x,M},w*)$.
**Acknowledgement.** We would like to thank James-Michael Leahy for pointing out the presence of the constant $\gamma$ in the equation for the velocity. We would like to thank Jasper Hoeksema and Oliver Tse for pointing out a mistake in Lemma \[rmk:bounded\_test\_Borel\], in a previous draft of this paper, and a way to correct it. This work was undertaken mostly when M.M. was at the University of York, supported by the Royal Society via the Newton International Fellowship NF170448 “Stochastic Euler Equations and the Kraichnan model”.
[^1]: Department of Mathematics, University of York, YO10 5DD Heslington, York, UK. E-mail address: [[email protected]]{}
[^2]: Dipartimento di Matematica ‘Federigo Enriques’, Università degli Studi di Milano, via Saldini 50, 20133 Milano, Italy. E-mail address: [[email protected]]{}
---
abstract: 'We revisit the problem of constructing Menon-Hadamard difference sets. In 1997, Wilson and Xiang gave a general framework for constructing Menon-Hadamard difference sets by using a combination of a spread and four projective sets of type Q in ${\mathrm{PG}}(3,q)$. They also found examples of suitable spreads and projective sets of type Q for $q=5,13,17$. Subsequently, Chen (1997) succeeded in finding a spread and four projective sets of type Q in ${\mathrm{PG}}(3,q)$ satisfying the conditions in the Wilson-Xiang construction for all odd prime powers $q$. Thus, he showed that there exists a Menon-Hadamard difference set of order $4q^4$ for all odd prime powers $q$. However, the projective sets of type Q found by Chen have automorphisms different from those of the examples constructed by Wilson and Xiang. In this paper, we first generalize Chen’s construction of projective sets of type Q by using “semi-primitive” cyclotomic classes. This demonstrates that the construction of projective sets of type Q satisfying the conditions in the Wilson-Xiang construction is much more flexible than originally thought. Secondly, we give a new construction of spreads and projective sets of type Q in ${\mathrm{PG}}(3,q)$ for all odd prime powers $q$, which generalizes the examples found by Wilson and Xiang. This solves a problem left open in Section 5 of the Wilson-Xiang paper from 1997.'
address:
- 'Faculty of Education, Kumamoto University, 2-40-1 Kurokami, Kumamoto 860-8555, Japan'
- 'Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, USA'
author:
- 'Koji Momihara$^{\dagger}$ and Qing Xiang$^{\ast}$'
title: 'Generalized constructions of Menon-Hadamard difference sets '
---
[^1]
[^2]
Introduction
============
Let $G$ be an additively written abelian group of order $v$. A $k$-subset $D$ of $G$ is called a [*$(v,k,\lambda)$ difference set*]{} if the list of differences “$x-y$, $x,y\in D, x\neq y$", represents each nonidentity element of $G$ exactly $\lambda$ times. In this paper, we revisit the problem of constructing Menon-Hadamard difference sets, namely those difference sets with parameters $(v,k,\lambda)=(4m^2,2m^2-m,m^2-m)$, where $m$ is a positive integer. It is well known that a Menon-Hadamard difference set generates a regular Hadamard matrix of order $4m^2$. So by constructing Menon-Hadamard difference sets in groups of order $4m^2$, we obtain regular Hadamard matrices of order $4m^2$. The main problem in the study of Menon-Hadamard difference sets is the following: for each positive integer $m$, determine which groups of order $4m^2$ contain a Menon-Hadamard difference set. We give a brief survey of results on this problem in the case where the group under consideration is abelian. First we mention a product theorem of Turyn [@T84]: If there are Menon-Hadamard difference sets in abelian groups $H\times G_1$ and $H\times G_2$, respectively, where $|H|=4$ and $|G_i|$, $i=1,2$, are squares, then there also exists a Menon-Hadamard difference set in $H\times G_1\times G_2$. With Turyn’s product theorem in hand, in order to construct Menon-Hadamard difference sets, one should start with the case where the order of the abelian group is $4q$ with $q$ an even power of a prime. In the case where $q$ is an even power of $2$, that is, $G$ is an abelian $2$-group, the existence problem was completely solved in [@K93] after much work was done in [@davis]; it was shown that there exists a Menon-Hadamard difference set in an abelian group $G$ of order $2^{2t+2}$ if and only if the exponent of $G$ is less than or equal to $2^{t+2}$.
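As a concrete illustration of the definition above (our own sanity check, not a construction from the papers cited here), the classical $(16,6,2)$ difference set in $\Z_4\times \Z_4$ realizes the Menon-Hadamard parameters with $m=2$: $v=4m^2=16$, $k=2m^2-m=6$, $\lambda=m^2-m=2$. The helper name `is_difference_set` is hypothetical.

```python
from itertools import product
from collections import Counter

def is_difference_set(D, group_orders, lam):
    """Brute-force check that D is a difference set with parameter lam in
    the direct product Z_{n1} x Z_{n2}, group_orders = (n1, n2): every
    nonidentity element must occur exactly lam times among the differences."""
    n1, n2 = group_orders
    diffs = Counter(
        ((x1 - y1) % n1, (x2 - y2) % n2)
        for (x1, x2) in D for (y1, y2) in D
        if (x1, x2) != (y1, y2)
    )
    nonzero = [g for g in product(range(n1), range(n2)) if g != (0, 0)]
    return all(diffs[g] == lam for g in nonzero)

# The "cross" set in Z_4 x Z_4: a (16, 6, 2) Menon-Hadamard difference set.
D = [(0, 1), (0, 2), (0, 3), (1, 0), (2, 0), (3, 0)]
print(is_difference_set(D, (4, 4), lam=2))   # True
```

The check runs over all $k(k-1)=30=\lambda(v-1)$ ordered difference pairs, which is exactly the counting constraint a $(16,6,2)$ difference set must satisfy.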
In the case where $q$ is an even power of an odd prime, Turyn [@T84] observed that there exists a Menon-Hadamard difference set in $H\times (\Z_3)^2$; hence by the product theorem, there is a Menon-Hadamard difference set in $H\times (\Z_3)^{2t}$ for any positive integer $t$. On the other hand, McFarland [@mcfarland] proved that if an abelian group of order $4p^2$, where $p$ is a prime, contains a Menon-Hadamard difference set, then $p=2$ or $3$. After McFarland’s paper [@mcfarland] was published, it was conjectured [@jung p. 287] that if an abelian group of order $4m^2$ contains a Menon-Hadamard difference set, then $m=2^r3^s$ for some nonnegative integers $r$ and $s$. So it was a great surprise when Xia [@X] constructed a Menon-Hadamard difference set in $H\times \Z_{p}^4$ for any odd prime $p$ congruent to $3$ modulo $4$. Xia’s method of construction depends on very complicated computations involving cyclotomic classes of finite fields; it was later simplified by Xiang and Chen [@XC] by using a character theoretic approach. Moreover, in [@XC], the authors also asked whether a certain family of 3-weight projective linear codes exists or not, since such projective linear codes are needed for the construction of Menon-Hadamard difference sets in the group $H\times (\Z_p)^4$, where $p$ is a prime congruent to 1 modulo 4.
Van Eupen and Tonchev [@ET] found the required 3-weight projective linear codes when $p=5$, hence constructed Menon-Hadamard difference sets in $\Z_2^2\times \Z_5^4$, which are the first examples of abelian Menon-Hadamard difference sets in groups of order $4p^4$, where $p$ is a prime congruent to $1$ modulo $4$. Inspired by these examples, Wilson and Xiang [@WX97] gave a general framework for constructing Menon-Hadamard difference sets in the groups $H\times G$, where $H$ is either group of order $4$ and $G$ is an elementary abelian group of order $q^4$, $q$ an odd prime power, using a combination of a spread and four projective sets of type Q in $\PG(3,q)$. (See Section \[sec:twonew\] for the definition of projective sets of type Q.) Wilson and Xiang [@WX97] also found examples of suitable spreads and the required projective sets of type Q when $q=5,13,17$. They used $\F_{q^2}\times \F_{q^2}$ as a model of the four-dimensional vector space $V(4,q)$ over $\F_q$, and considered projective sets of type Q with the automorphism $$T'=\begin{pmatrix}
\omega^2 &0 \\
0 &\omega^{-2}
\end{pmatrix},$$ where $\omega$ is a primitive element of $\F_{q^2}$. However, the existence of the required projective sets of type Q with this prescribed automorphism remained unsolved for $q>17$.
Immediately after [@WX97] appeared, Chen [@Ch97] succeeded in showing the existence of a combination of a spread and four projective sets of type Q in $\PG(3,q)$ satisfying the conditions in the Wilson-Xiang construction for all odd prime powers $q$. As a consequence, Chen [@Ch97] obtained the following theorem by applying Turyn’s product theorem in [@T84].
\[thm:Chen\] Let $p_i$, $i=1,2,\ldots,s$, be odd primes and $t_i$, $i=1,2,\ldots,s$, be positive integers. Furthermore, let $H$ be either group of order $4$ and $G_i$, $i=1,2,\ldots,s$, be an elementary abelian group of order $p_i^{4t_i}$. Then, there exists a Menon-Hadamard difference set in $H\times G_1\times G_2\times \cdots \times G_s$.
Here, Chen [@Ch97] found projective sets of type Q in $\PG(3,q)$ with the following automorphism $$T=\begin{pmatrix}
\omega^2 &0 \\
0 &\omega^{2}
\end{pmatrix},$$ which is obviously different from that of the projective sets of type Q found by Wilson and Xiang [@WX97]. Thus, the existence problem of projective sets of type Q in $\PG(3,q)$ with the prescribed automorphism $T'$ remained open.
The objectives of this paper are two-fold. First, we give a generalization of Chen’s construction of projective sets of type Q by using “semi-primitive” cyclotomic classes. This demonstrates that the construction of projective sets of type Q satisfying the conditions in the Wilson-Xiang construction is much more flexible than originally thought. In particular, the proof that the candidate sets are projective sets of type Q is much simpler than that in [@Ch97]. Second, we show the existence of a combination of a spread and four projective sets of type Q with automorphism $T'$ for all odd prime powers $q$. Our construction generalizes the examples found by Wilson and Xiang in [@WX97]; this solves the problem left open in Section 5 of [@WX97].
Preliminaries
=============
Characters of finite fields
---------------------------
In this subsection, we collect some auxiliary results on characters of finite fields. We assume that the reader is familiar with basic theory of characters of finite fields as in [@LN97 Chapter 5].
Let $p$ be a prime and $s,f$ be positive integers. We set $q=p^s$, and denote the finite field of order $q$ by $\F_{q}$. Let $\Tr_{q^f/q}$ be the trace map from $\F_{q^f}$ to $\F_{q}$, which is defined by $$\Tr_{q^f/q}(x)=x+x^q+\cdots+x^{q^{f-1}}, \quad x\in \F_{q^f}.$$
Let $\omega$ be a fixed primitive element of $\F_q$, $\zeta_p$ a fixed (complex) primitive $p$th root of unity, and $\zeta_{q-1}$ a fixed (complex) primitive $(q-1)$th root of unity. The character $\psi_{\F_{q}}$ of the additive group of $\F_{q}$ defined by $\psi_{\F_{q}}(x)=\zeta_p^{\Tr_{q/p}(x)}$, $x\in \F_q$, is called the [*canonical additive character*]{} of $\F_{q}$. Then, each additive character is given by $\psi_a(x)=\psi_{\F_q}(ax)$, $x\in \F_{q}$, where $a\in \F_q$. On the other hand, each multiplicative character is given by $\chi^j(\omega^\ell)=\zeta_{q-1}^{j\ell}$, $\ell=0,1,\ldots,q-2$, where $j=0,1,\ldots,q-2$.
For a multiplicative character $\chi$ of $\F_q$, the character sum defined by $$G_q(\chi)=\sum_{x\in \F_q^\ast}\chi(x)\psi_{\F_q}(x)$$ is called a [*Gauss sum*]{} of $\F_q$. Gauss sums satisfy the following basic properties: (1) $G_q(\chi)\overline{G_q(\chi)}=q$ if $\chi$ is nontrivial; (2) $G_q(\chi^{-1})=\chi(-1)\overline{G_q(\chi)}$; (3) $G_q(\chi)=-1$ if $\chi$ is trivial. In general, explicit evaluations of Gauss sums are difficult. There are only a few cases in which Gauss sums have been completely evaluated. The most well-known case is the [*quadratic case*]{}, i.e., the case where the order of the multiplicative character involved is $2$.
\[thm:quad\][*([@LN97 Theorem 5.15])*]{} Let $\eta$ be the quadratic character of $\F_{q}=\F_{p^s}$. Then, $$G_{q}(\eta)
=(-1)^{s-1}\Big(\sqrt{(-1)^{\frac{p-1}{2}}p}\Big)^s.$$
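As a quick numerical check (ours, not part of the paper) of the theorem in the prime-field case $s=1$: the formula reduces to $G_p(\eta)=\sqrt{p}$ when $p\equiv 1\,(\mod{4})$ and $G_p(\eta)=i\sqrt{p}$ when $p\equiv 3\,(\mod{4})$.

```python
import cmath

def gauss_quad(p):
    """G_p(eta) = sum over x in F_p^* of eta(x) * zeta_p^x, with eta the
    quadratic character of F_p computed via Euler's criterion."""
    zeta = cmath.exp(2j * cmath.pi / p)
    eta = lambda x: 1 if pow(x, (p - 1) // 2, p) == 1 else -1
    return sum(eta(x) * zeta ** x for x in range(1, p))

for p in (5, 13, 17):   # p ≡ 1 (mod 4): G_p(eta) = sqrt(p)
    assert abs(gauss_quad(p) - p ** 0.5) < 1e-9
for p in (3, 7, 11):    # p ≡ 3 (mod 4): G_p(eta) = i * sqrt(p)
    assert abs(gauss_quad(p) - 1j * p ** 0.5) < 1e-9
```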
The next simple case is the so-called [*semi-primitive case*]{}, where there exists an integer $\ell$ such that $p^\ell\equiv -1\,(\mod{N})$. Here, $N$ is the order of the multiplicative character involved. In particular, we give the following for later use.
\[thm:semi\][*([@LN97 Theorem 5.16])*]{} Let $\chi$ be a nontrivial multiplicative character of $\F_{q^{2}}$ of order $N$ dividing $q+1$. Then, $$\begin{aligned}
G_{q^{2}}(\chi)
=
\left\{
\begin{array}{ll}
q, & \mbox{ if $N$ is odd or $\tfrac{q+1}{N}$ is even,}\\
-q, &\mbox{ if $N$ is even and $\tfrac{q+1}{N}$ is odd.}
\end{array}
\right.\end{aligned}$$
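The semi-primitive evaluation can likewise be checked numerically. The sketch below (our own illustration) models $\F_9$ as $\F_3[i]$ with $i^2=-1$ and $\omega=1+i$ primitive, and computes $G_9(\chi)$ for a character $\chi$ of order $N=4$ dividing $q+1=4$ with $q=3$; here $N$ is even and $(q+1)/N=1$ is odd, so the theorem predicts $G_9(\chi)=-q=-3$.

```python
import cmath

# F_9 = F_3[i] with i^2 = -1; an element a + b*i is stored as the pair (a, b).
mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % 3, (x[0]*y[1] + x[1]*y[0]) % 3)
tr  = lambda x: (2 * x[0]) % 3            # Tr_{9/3}(a + b*i) = 2a

pw, w = [(1, 0)], (1, 1)                  # omega = 1 + i has order 8 (primitive)
for _ in range(7):
    pw.append(mul(pw[-1], w))             # pw[l] = omega^l

zeta3 = cmath.exp(2j * cmath.pi / 3)
chi = lambda l: 1j ** l                   # chi(omega^l) = i^l, order N = 4
G = sum(chi(l) * zeta3 ** tr(pw[l]) for l in range(8))
# N = 4 is even and (q + 1)/N = 1 is odd, so the theorem predicts G = -q = -3.
assert abs(G - (-3)) < 1e-9
```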
We will also need the [*Davenport-Hasse product formula*]{}, which is stated below.
\[thm:Stickel2\][*([@BEW97 Theorem 11.3.5])*]{} Let $\chi'$ be a multiplicative character of order $\ell>1$ of $\F_{q}$. For every nontrivial multiplicative character $\chi$ of $\F_{q}$, $$G_q(\chi)=\frac{G_q(\chi^\ell)}{\chi^\ell(\ell)}
\prod_{i=1}^{\ell-1}
\frac{G_q({\chi'}^i)}{G_q(\chi{\chi'}^i)}.$$
Let $N$ be a positive integer dividing $q-1$. We set $C_i^{(N,q)}=\omega^i\langle \omega^N\rangle$, $0\leq i\leq N-1$, which are called the $N$th [*cyclotomic classes*]{} of $\F_q$. In this paper, we need to evaluate the (additive) character values of a union of some cyclotomic classes. In particular, the character sums defined by $$\psi_{\F_q}(C_i^{(N,q)})=\sum_{x\in C_i^{(N,q)}}\psi_{\F_q}(x), \quad i=0,1,\ldots,N-1,$$ are called the $N$th [*Gauss periods*]{} of $\F_q$. By the orthogonality of characters, the Gauss period can be expressed as a linear combination of Gauss sums: $$\label{eq:ortho1}
\psi_{\F_{q}}(C_i^{(N,q)})=\frac{1}{N}\sum_{j=0}^{N-1}G_q(\chi^{j})\chi^{-j}(\omega^i), \, \quad i=0,1,\ldots,N-1,$$ where $\chi$ is any fixed multiplicative character of order $N$ of $\F_q$. For example, if $N=2$, we have the following from Theorem \[thm:quad\]: $$\label{eq:Gaussquad}
\psi_{\F_q}(C_i^{(2,q)})=\frac{-1+(-1)^iG_q(\eta)}{2}=\frac{-1+(-1)^{i+s-1+\frac{(p-1)s}{4}}p^\frac{s}{2}}{2}, \quad i=0,1,$$ where $\eta$ is the quadratic character of $\F_q$. On the other hand, the Gauss sum with respect to a multiplicative character $\chi$ of order $N$ can be expressed as a linear combination of Gauss periods: $$\label{eq:ortho2}
G_{q}(\chi)=\sum_{i=0}^{N-1}\psi_{\F_{q}}(C_i^{(N,q)})\chi(\omega^i).$$
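As a small check (ours) of the quadratic Gauss-period formula above: for $q=9$ (so $p=3$, $s=2$) the formula gives $G_9(\eta)=3$, hence $\psi_{\F_9}(C_0^{(2,9)})=1$ and $\psi_{\F_9}(C_1^{(2,9)})=-2$. The sketch below confirms this by direct summation, with $\F_9$ modeled as $\F_3[i]$, $i^2=-1$.

```python
import cmath

# F_9 = F_3[i] with i^2 = -1; omega = 1 + i is a primitive element.
mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % 3, (x[0]*y[1] + x[1]*y[0]) % 3)
tr  = lambda x: (2 * x[0]) % 3            # Tr_{9/3}(a + b*i) = 2a
pw, w = [(1, 0)], (1, 1)
for _ in range(7):
    pw.append(mul(pw[-1], w))             # pw[l] = omega^l

zeta3 = cmath.exp(2j * cmath.pi / 3)
# C_i^{(2,9)} = {omega^l : l ≡ i (mod 2)}; sum the canonical character over it.
period = lambda i: sum(zeta3 ** tr(pw[l]) for l in range(i, 8, 2))
assert abs(period(0) - 1) < 1e-9          # (-1 + G_9(eta))/2 = (-1 + 3)/2
assert abs(period(1) - (-2)) < 1e-9       # (-1 - G_9(eta))/2 = (-1 - 3)/2
```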
Known results on projective sets of type Q {#sec:twonew}
------------------------------------------
Let $\PG(k-1,q)$ denote the $(k-1)$-dimensional projective space over $\F_q$. A set ${\mathcal S}$ of $n$ points of $\PG(k-1,q)$ is called a [*projective $(n,k,h_1,h_2)$ set*]{} if every hyperplane of $\PG(k-1,q)$ meets ${\mathcal S}$ in $h_1$ or $h_2$ points. In particular, a subset ${\mathcal S}$ of the point set of $\PG(3,q)$ is called [*type Q*]{} if $$(n,k,h_1,h_2)=\Big(\frac{q^4-1}{4(q-1)},4,\frac{(q-1)^2}{4},\frac{(q+1)^2}{4}\Big).$$
In this paper, we will use the following model of $\PG(3,q)$: We view $\F_{q^2}\times \F_{q^2}$ as a $4$-dimensional vector space over $\F_q$. For a nonzero vector $(x,y)\in (\F_{q^2}\times \F_{q^2})\setminus \{(0,0)\}$, we use $\langle (x,y)\rangle$ to denote the projective point in $\PG(3,q)$ corresponding to the one-dimensional subspace over $\F_q$ spanned by $(x,y)$. Let ${\mathcal P}$ be the set of points of $\PG(3,q)$. Then, all (hyper)planes in $\PG(3,q)$ are given by $$H_{a,b}=\{\langle (x,y)\rangle\,|\,\Tr_{q^2/q}(ax+by)=0\},\quad \langle(a,b)\rangle\in {\mathcal P}.$$ Let ${\mathcal S}$ be a set of points of $\PG(3,q)$, and define $$E=\{\lambda (x,y)\,|\,\lambda\in \F_{q}^\ast,\langle (x,y)\rangle\in {\mathcal S}\}.$$ Noting that each nontrivial additive character of $\F_{q^2}\times \F_{q^2}$ is given by $$\psi_{a,b}((x,y))=\psi_{\F_{q^2}}(ax+by), \quad (x,y)\in \F_{q^2}\times \F_{q^2},$$ where $(0,0)\neq (a,b)\in \F_{q^2}\times \F_{q^2}$, we have $$\begin{aligned}
\psi_{a,b}(E)=&\,\sum_{\lambda \in \F_{q}}\sum_{\langle(x,y)\rangle \in {\mathcal S}}\psi_{\F_q}(\lambda\Tr_{q^2/q}(ax+by))-|{\mathcal S}|\\
=&\,q|H_{a,b}\cap {\mathcal S}|-|{\mathcal S}|. \end{aligned}$$ Hence, we have the following proposition.
\[prop:twoint\] The set ${\mathcal S}$ is a projective set of type Q in $\PG(3,q)$ if and only if $|E|=\frac{q^4-1}{4}$ and $\psi_{a,b}(E)\in \{\frac{q^2-1}{4},\frac{-3q^2-1}{4}\}$ for all $(0,0)\neq (a,b)\in \F_{q^2}\times \F_{q^2}$.
The set $E\subseteq \F_{q^2}\times \F_{q^2}$ is also called [*type Q*]{} if it satisfies the condition of Proposition \[prop:twoint\].
A [*spread*]{} in $\PG(3,q)$ is a collection ${\mathcal L}$ of $q^2+1$ pairwise skew lines; equivalently, ${\mathcal L}$ can be regarded as a collection ${\mathcal K}$ of $2$-dimensional subspaces of the underlying $4$-dimensional vector space $V(4,q)$ over $\F_q$, any two of which intersect at zero only. We also call such a set ${\mathcal K}$ of $2$-dimensional subspaces a [*spread*]{} of $V(4,q)$.
The following important theorem was given by Wilson and Xiang [@WX97].
\[thm:HDiff\] Let ${\mathcal L}=\{L_i\,|\,0\le i\le q^2\}$ be a spread of $\PG(3,q)$, and assume the existence of four pairwise disjoint projective sets ${\mathcal S}_i$, $i=0,1,2,3$, of type Q in $\PG(3,q)$ such that ${\mathcal S}_0\cup {\mathcal S}_2=\bigcup_{i=0}^{(q^2-1)/2}L_i$ and ${\mathcal S}_1\cup {\mathcal S}_3=\bigcup_{i=(q^2+1)/2}^{q^2}L_i$. Then there exists a Menon-Hadamard difference set in $H\times G$, where $H$ is either group of order $4$ and $G$ is an elementary abelian group of order $q^4$.
\[rem:HDiff\] From Proposition \[prop:twoint\] and Theorem \[thm:HDiff\], in order to construct a Menon-Hadamard difference set in a group of order $4q^4$, we need to find four disjoint sets $C_i\subseteq (\F_{q^2}\times \F_{q^2})\setminus \{(0,0)\}$, $i=0,1,2,3$, of type Q and a suitable spread ${\mathcal K}=\{K_i\,|\,0\le i\le q^2\}$ consisting of $2$-dimensional subspaces of $V(4,q)$ such that $C_0\cup C_2 \cup \{(0,0)\}=\bigcup_{i=0}^{(q^2-1)/2}K_i$ and $C_1\cup C_3\cup \{(0,0)\}=\bigcup_{i=(q^2+1)/2}^{q^2}K_i$.
We now review the construction of projective sets of type Q given by Chen [@Ch97]. Let $\omega$ be a primitive element of $\F_{q^2}$. Furthermore, let $$\begin{aligned}
X=\{x\in \F_{q^2}\,|\,\Tr_{q^2/q}(x)\in C_0^{(2,q)}\}, \quad
X'=\{x\omega \,|\,x\in \F_{q^2},\,\Tr_{q^2/q}(x)\in C_0^{(2,q)}\}. \end{aligned}$$ Define $$\begin{aligned}
&X_1=X\setminus (X\cap X'),\, \, X_2=X'\setminus (X\cap X'),\\
&X_3=X\cap X',\, \, X_4=\F_{q^2}\setminus (X_1\cup X_2\cup X_3),\end{aligned}$$ and $$\begin{aligned}
C_0&\,=\{(x,xy)\,|\,x\in C_0^{(2,q^2)},y\in X_1\}\cup \{(x,xy)\,|\,x\in C_1^{(2,q^2)},y\in X_2\}\cup \{(0,x)\,|\,x\in C_\tau^{(2,q^2)}\},\\
C_1&\,=\{(x,xy)\,|\,x\in C_0^{(2,q^2)},y\in X_3\}\cup \{(x,xy)\,|\,x\in C_1^{(2,q^2)},y\in X_4\},\\
C_2&\,=\{(x,xy)\,|\,x\in C_1^{(2,q^2)},y\in X_1\}\cup \{(x,xy)\,|\,x\in C_0^{(2,q^2)},y\in X_2\}\cup \{(0,x)\,|\,x\in C_{\tau+1}^{(2,q^2)}\},\\
C_3&\,=\{(x,xy)\,|\,x\in C_1^{(2,q^2)},y\in X_3\}\cup \{(x,xy)\,|\,x\in C_0^{(2,q^2)},y\in X_4\},\end{aligned}$$ where $\tau=0$ or $1$ depending on whether $q\equiv 1$ or $3\,(\mod{4})$. It is clear that these type Q sets admit the automorphism $T$.
\[thm:chen\] The sets $C_i$, $i=0,1,2,3$, are type Q. Furthermore, these sets satisfy the assumption of Remark \[rem:HDiff\] with respect to the spread ${\mathcal K}$ consisting of the following $2$-dimensional subspaces: $$K_y=\{(x,xy)\,|\,x\in \F_{q^2}\}, \, y \in \F_{q^2}, \mbox{ \, and \, }
K_{\infty}=\{(0,x)\,|\,x\in \F_{q^2}\}.$$
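Theorem \[thm:chen\] can be verified exhaustively for small $q$. The following Python sketch (our own check, with $\F_9$ modeled as $\F_3[i]$, $i^2=-1$, and $\omega=1+i$) builds the set $C_0$ for $q=3$ and confirms, via Proposition \[prop:twoint\], that the nontrivial character values take only the values $(q^2-1)/4=2$ and $(-3q^2-1)/4=-7$:

```python
import cmath
from itertools import product

q = 3
# F_9 = F_3[i] with i^2 = -1; elements a + b*i are stored as pairs (a, b).
mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % 3, (x[0]*y[1] + x[1]*y[0]) % 3)
add = lambda x, y: ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
tr  = lambda x: (2 * x[0]) % 3                       # Tr_{9/3}(a + b*i) = 2a

F9 = [(a, b) for a in range(3) for b in range(3)]
pw, w = [(1, 0)], (1, 1)                             # omega = 1 + i, primitive
for _ in range(7):
    pw.append(mul(pw[-1], w))
C0, C1 = set(pw[0::2]), set(pw[1::2])                # squares / nonsquares in F_9^*

X  = {x for x in F9 if tr(x) == 1}                   # Tr(x) in C_0^{(2,3)} = {1}
Xp = {mul(x, w) for x in X}                          # X' = X * omega
X1, X2 = X - Xp, Xp - X
# q = 3 ≡ 3 (mod 4), so tau = 1 and the last piece of C_0 uses C_1^{(2,9)}.
E = {(x, mul(x, y)) for x in C0 for y in X1} \
  | {(x, mul(x, y)) for x in C1 for y in X2} \
  | {((0, 0), x) for x in C1}
assert len(E) == (q ** 4 - 1) // 4                   # |E| = 20

zeta3 = cmath.exp(2j * cmath.pi / 3)
psi = lambda a, b: sum(zeta3 ** tr(add(mul(a, x), mul(b, y))) for x, y in E)
values = {round(psi(a, b).real)
          for a, b in product(F9, F9) if (a, b) != ((0, 0), (0, 0))}
assert values == {2, -7}                             # (q^2-1)/4 and (-3q^2-1)/4
```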
On the other hand, Wilson and Xiang [@WX97] constructed Menon-Hadamard difference sets of order $4q^4$ for $q=5,13,17$ using the following four type Q sets: $$\begin{aligned}
C_i=&\{(0,y)\,|\,y\in C_{\tau_i}^{(2,q^2)}\} \cup \{(xy,xy^{-1}\omega^j)\,|\,x\in \F_q^\ast,y\in C_0^{(2,q^2)},j\in A_i\}\\
&\quad \cup
\{(xy,xy^{-1}\omega^j)\,|\,x\in \F_q^\ast,y\in C_1^{(2,q^2)},j\in B_i\}, \quad i=0,2,\\
C_i=&\{(y,0)\,|\,y\in C_{\tau_i}^{(2,q^2)}\}\cup \{(xy,xy^{-1}\omega^j)\,|\,x\in \F_q^\ast,y\in C_0^{(2,q^2)},j\in A_i\}\\
&\quad \cup \{(xy,xy^{-1}\omega^j)\,|\,x\in \F_q^\ast,y\in C_1^{(2,q^2)},j\in B_i\}, \quad i=1,3,\end{aligned}$$ for some subsets $A_i,B_i$, $i=0,1,2,3$, of $\{0,1,\ldots,2q+1\}$, and the spread ${\mathcal K}$ consisting of the following $2$-dimensional subspaces: $$K_y=\{(x,yx^q)\,|\,x\in \F_{q^2}\}, \, y \in \F_{q^2}, \mbox{ \, and \, }
K_{\infty}=\{(0,x)\,|\,x\in \F_{q^2}\}.$$ It is clear that these type Q sets admit the automorphism $T'$.
A generalization of Chen’s construction {#sec:Chen}
=======================================
We first fix notation used in this section. Let $q=p^s$ be an odd prime power with $p$ a prime, and $m$ be a fixed positive integer satisfying $2m\,|\,(q+1)$. Then, there exists a minimal positive integer $\ell$ such that $2m\,|\,(p^\ell+1)$. Write $s=\ell t$ for some $t\ge 1$. Let $\omega$ be a primitive element of $\F_{q^2}$. Let $T_i$, $i=0,1$, be two arbitrary subsets of $\F_{q}$, and $$\label{eq:defS01}
S_0=\{x\,|\,\Tr_{q^2/q}(x)\in T_0\},\quad
S_1=\{x\,|\,\Tr_{q^2/q}(x\omega^m)\in T_1\}.$$ Furthermore, let $K$ be any $m$-subset of $\{0,1,\ldots,2m-1\}$ such that $K\cap \{x+m\,(\mod{2m})\,|\,x \in K\}=\emptyset$. Define $$\label{eq:defA01}
A_0=S_0\setminus S_1, \quad A_1=S_1\setminus S_0,\quad D_0=\bigcup_{i\in K}C_i^{(2m,q^2)},\quad
D_1=\bigcup_{i\in K}C_{i+m}^{(2m,q^2)},$$ and $$\epsilon:=\left\{
\begin{array}{ll}
1, &\mbox{ if $(p^\ell+1)/2m$ is even and $t$ is odd,}\\
0, &\mbox{ otherwise. }
\end{array}
\right.$$
\[rem:secChen\]
- The indicator function of $S_i$, $i=0,1$, is given by $$f_{S_i}(y)=\frac{1}{q}\sum_{c\in \F_q}\sum_{u\in T_i}\psi_{\F_{q^2}}(cy\omega^{mi})
\psi_{\F_q}(-cu), \quad i=0,1.$$
- The size of each $S_i$ is $q|T_i|$ since $\Tr_{q^2/q}$ is a surjective $\F_{q}$-linear mapping.
- The size of $S_0\cap S_1$ is $|T_0||T_1|$; it is clear that $$\begin{aligned}
|S_0\cap S_1|=&\,\sum_{y\in \F_{q^2}}f_{S_0}(y)f_{S_1}(y)\nonumber\\
=&\,\frac{1}{q^2}\sum_{c,d\in \F_q}\sum_{u\in T_0}\sum_{v\in T_1}\sum_{y\in \F_{q^2}}\psi_{\F_{q^2}}(y(c+d\omega^{m}))
\psi_{\F_q}(-cu-dv). \label{eq:sizes0s1}\end{aligned}$$ Since $\omega^m\not\in \F_{q}$, $c+d\omega^m=0$ if and only if $c=d=0$. Hence, the right-hand side of \[eq:sizes0s1\] is equal to $|T_0||T_1|$.
- Since $2m\,|\,(q+1)$, the character values of $D_i\subseteq \F_{q^2}$, $i=0,1$, can be evaluated by using \[eq:ortho1\] and the Gauss sums in the semi-primitive case (see, e.g., [@bwx Theorem 2]): for $b\in \F_{q^2}^\ast$, $$\sum_{x\in D_\epsilon}\psi_{\F_{q^2}}(bx)=\left\{
\begin{array}{ll}
\frac{-1-q}{2}, & \mbox{ if $b^{-1}\in D_0$,}\\
\frac{-1+q}{2}, &\mbox{ if $b^{-1}\in D_1$. }
\end{array}
\right.$$
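Item (iv) of the remark can be tested directly for small parameters. The sketch below (our own check) takes $q=3$, $m=2$, and $K=\{0,1\}$; here $p=3$, $\ell=1$, $t=1$ and $(p^\ell+1)/2m=1$ is odd, so $\epsilon=0$, and the two claimed character values are verified over all $b\in\F_9^\ast$:

```python
import cmath

# F_9 = F_3[i], omega = 1 + i primitive; q = 3, m = 2, so 2m = 4 divides q + 1.
mul = lambda x, y: ((x[0]*y[0] - x[1]*y[1]) % 3, (x[0]*y[1] + x[1]*y[0]) % 3)
tr  = lambda x: (2 * x[0]) % 3
pw, w = [(1, 0)], (1, 1)
for _ in range(7):
    pw.append(mul(pw[-1], w))
inv = {pw[l]: pw[(8 - l) % 8] for l in range(8)}     # inverses in F_9^*

# Take K = {0, 1}, so K and K + m = {2, 3} are disjoint mod 2m = 4:
D0 = {pw[l] for l in range(8) if l % 4 in (0, 1)}    # D_0 = C_0 ∪ C_1 (4th classes)
D1 = {pw[l] for l in range(8) if l % 4 in (2, 3)}    # D_1 = C_2 ∪ C_3
# epsilon = 0, so the sum in item (iv) runs over D_epsilon = D_0.

zeta3 = cmath.exp(2j * cmath.pi / 3)
for b in D0 | D1:                                    # all b in F_9^*
    s = sum(zeta3 ** tr(mul(b, x)) for x in D0)
    expected = (-1 - 3) / 2 if inv[b] in D0 else (-1 + 3) / 2
    assert abs(s - expected) < 1e-9
```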
The following is our main result in this section.
\[thm:mainf1\]
- Assume that $|T_0|=|T_1|=(q-1)/2$, and define $$E_0=\{(x,xy)\,|\,x\in D_0,y \in A_0\} \cup
\{(x,xy)\,|\,x\in D_1,y \in A_1\} \cup \{(0,x)\,|\,x \in D_\epsilon\}.$$ Then $E_0$ is a set of type Q in $\F_{q^2}\times \F_{q^2}$.
- Assume that $|T_0|=(q-1)/2$ and $|T_1|=(q+1)/2$, and define $$E_1=\{(x,xy)\,|\,x\in D_0,y \in A_0\} \cup
\{(x,xy)\,|\,x\in D_1,y \in A_1\}.$$ Then $E_1$ is a set of type Q in $\F_{q^2}\times \F_{q^2}$.
This theorem obviously generalizes the construction of type Q sets given by Chen [@Ch97]. Indeed, we used $D_i$, $i=0,1$, instead of $C_i^{(2,q^2)}$, $i=0,1$, in the definition of $X$ and $X'$ (see Subsection \[sec:twonew\]). This new construction is much more flexible than that in [@Ch97].
To prove this theorem, we will evaluate the character values $\psi_{a,b}(E_i)$, $(a,b)\in (\F_{q^2}\times \F_{q^2})\setminus \{(0,0)\}$, through the following series of lemmas. We first treat the case where $b=0$.
\[lem:b0\] For $b=0$ and $a\not=0$, it holds that $$\psi_{a,b}(E_0)=\frac{q^2-1}{4}.$$
Since $|T_0|=|T_1|=(q-1)/2$, by Remark \[rem:secChen\] (ii),(iii), we have $|A_0|=|A_1|=(q^2-1)/4$. Then, we have $$\begin{aligned}
\psi_{a,0}(E_0)=&\,\sum_{x\in D_0}\sum_{y\in A_0}\psi_{\F_{q^2}}(ax)+
\sum_{x\in D_1}\sum_{y\in A_1}\psi_{\F_{q^2}}(ax)+\frac{q^2-1}{2}\\
=&\frac{q^2-1}{4}\sum_{x\in \F_{q^2}^\ast}\psi_{\F_{q^2}}(ax)+\frac{q^2-1}{2}=
\frac{q^2-1}{4}. \end{aligned}$$ This completes the proof.
\[lem:b02\] For $b=0$ and $a\not=0$, we have $$\psi_{a,b}(E_1)=\left\{
\begin{array}{ll}
\frac{q^2-1}{4}, & \mbox{ if $a^{-1}\in D_\epsilon$,}\\
\frac{-3q^2-1}{4}, &\mbox{ otherwise.}
\end{array}
\right.$$
Since $|T_0|=(q-1)/2$ and $|T_1|=(q+1)/2$, by Remark \[rem:secChen\] (ii),(iii), we have $|A_0|=(q-1)^2/4$ and $|A_1|=(q+1)^2/4$. Then, we have $$\begin{aligned}
\psi_{a,0}(E_1)=&\,\sum_{x\in D_0}\sum_{y\in A_0}\psi_{\F_{q^2}}(ax)+
\sum_{x\in D_1}\sum_{y\in A_1}\psi_{\F_{q^2}}(ax)\nonumber\\
=&\frac{(q-1)^2}{4}\sum_{x\in \F_{q^2}^\ast}\psi_{\F_{q^2}}(ax)+q\sum_{x\in D_1}\psi_{\F_{q^2}}(ax). \label{eq:b0ane0}\end{aligned}$$ Finally, by Remark \[rem:secChen\] (iv), \[eq:b0ane0\] is reformulated as $$\psi_{a,0}(E_1)=-\frac{(q-1)^2}{4}+q
\left\{
\begin{array}{ll}
\frac{-1+q}{2}, & \mbox{ if $a^{-1}\in D_\epsilon$,}\\
\frac{-1-q}{2}, &\mbox{ otherwise.}
\end{array}
\right.$$ This completes the proof.
We next treat the case where $b\not=0$. Let $f_{S_i}$, $i=0,1$, be defined as in Remark \[rem:secChen\] (i). Define $$\begin{aligned}
U_1=&\sum_{x\in D_0}\sum_{y\in \F_{q^2}}\psi_{\F_{q^2}}(x(a+by))
f_{S_0}(y),\\
U_2=&\sum_{x\in D_1}\sum_{y\in \F_{q^2}}\psi_{\F_{q^2}}(x(a+by))
f_{S_1}(y),\\
U_3=&\sum_{x\in \F_{q^2}^\ast}\sum_{y\in \F_{q^2}}\psi_{\F_{q^2}}(x(a+by))
f_{S_0}(y)f_{S_1}(y). \end{aligned}$$ Then, the character values of $E_i$, $i=0,1$, are given by $$\label{eq:chraE0}
\psi_{a,b}(E_0)=
U_1+U_2-U_3+\sum_{x\in D_\epsilon}\psi_{\F_{q^2}}(bx)$$ and $$\label{eq:chraE1}
\psi_{a,b}(E_1)=
U_1+U_2-U_3.$$
\[lem:u1\] If $b\not=0$, it holds that $$U_1=
\left\{
\begin{array}{ll}
-q|T_0|+q^2, &\mbox{ if $-ab^{-1}\in S_0$ and $b^{-1}\in D_0$,}\\
-q|T_0|, &\mbox{ if $-ab^{-1}\not\in S_0$ and $b^{-1}\in D_0$,}\\
0, &\mbox{ if $b^{-1}\in D_1$.}\\
\end{array}
\right.$$
If $b\not=0$, we have $$\label{eq:U1}
U_1=\frac{1}{q}\sum_{x\in D_0}\sum_{y\in \F_{q^2}}
\sum_{c\in \F_q}\sum_{u\in T_0}\psi_{\F_{q^2}}(xa)
\psi_{\F_{q^2}}((xb+c)y)
\psi_{\F_q}(-cu).$$ If $b^{-1}\in D_1$, there are no $x\in D_0$ such that $xb+c=0$; hence $U_1=0$. If $b^{-1} \in D_0$, continuing from \[eq:U1\], we have $$\begin{aligned}
U_1=&\,q
\sum_{c\in \F_q^\ast}\sum_{u\in T_0}\psi_{\F_{q^2}}(-acb^{-1})
\psi_{\F_q}(-cu)\\
=&\,-q|T_0|+q
\sum_{c\in \F_q}\sum_{u\in T_0}\psi_{\F_q}(\Tr_{q^2/q}(-ab^{-1})c-cu)\\
=&\,-q|T_0|+q^2
\left\{
\begin{array}{ll}
1, & \mbox{ if $\Tr_{q^2/q}(-ab^{-1})\in T_0$,}\\
0, &\mbox{ otherwise.}
\end{array}
\right.\end{aligned}$$ This completes the proof.
\[lem:u2\] If $b\not=0$, we have $$U_2=
\left\{
\begin{array}{ll}
-q|T_1|+q^2, &\mbox{ if $-ab^{-1}\in S_1$ and $b^{-1}\in D_0$,}\\
-q|T_1|, &\mbox{ if $-ab^{-1}\not\in S_1$ and $b^{-1}\in D_0$,}\\
0, &\mbox{ if $b^{-1}\in D_1$.}
\end{array}
\right.$$
If $b\not=0$, we have $$\label{eq:U2}
U_2=\frac{1}{q}\sum_{x\in D_1}\sum_{y\in \F_{q^2}}
\sum_{c\in \F_q}\sum_{u\in T_1}\psi_{\F_{q^2}}(xa)
\psi_{\F_{q^2}}((xb+c\omega^{m})y)
\psi_{\F_q}(-cu).$$ If $b^{-1}\in D_1$, there are no $x\in D_1$ such that $xb+c\omega^{m}=0$; hence $U_2=0$. If $b^{-1}\in D_0$, continuing from \[eq:U2\], we have $$\begin{aligned}
U_2=&\,q
\sum_{c\in \F_q^\ast}\sum_{u\in T_1}\psi_{\F_{q^2}}(-acb^{-1}\omega^{m})
\psi_{\F_q}(-cu)\\
=&\,-q|T_1|+q
\sum_{c\in \F_q}\sum_{u\in T_1}\psi_{\F_q}(\Tr_{q^2/q}(-ab^{-1}\omega^{m})c-cu)\\
=&\,-q|T_1|+q^2
\left\{
\begin{array}{ll}
1, & \mbox{ if $\Tr_{q^2/q}(-ab^{-1}\omega^{m})\in T_1$,}\\
0, &\mbox{ otherwise.}
\end{array}
\right.\end{aligned}$$ This completes the proof.
\[lem:u3\] If $b\not=0$, we have $$U_3=
\left\{
\begin{array}{ll}
-|T_0||T_1|+q^2, & \mbox{ if $-ab^{-1}\in S_0\cap S_1$,}\\
-|T_0||T_1|, &\mbox{ otherwise.}
\end{array}
\right.$$
Note that $D_0\cup D_1=\F_{q^2}^\ast$ and $|S_0\cap S_1|=|T_0||T_1|$. Since $b\not=0$, we have $$\begin{aligned}
U_3=&\,\sum_{x\in \F_{q^2}}\sum_{y\in \F_{q^2}}\psi_{\F_{q^2}}(x(a+by))
f_{S_0}(y)f_{S_1}(y)-|S_0\cap S_1|\\
=&\,q^2
f_{S_0}(-ab^{-1})f_{S_1}(-ab^{-1})-|T_0||T_1|. \end{aligned}$$ This completes the proof.
[**Proof of Theorem \[thm:mainf1\]:**]{} In the case where $b=0$, the statement follows from Lemmas \[lem:b0\] and \[lem:b02\]. We now treat the case where $b\not=0$. By the evaluations for $U_1,U_2,U_3$ in Lemmas \[lem:u1\]–\[lem:u3\], we have $$\begin{aligned}
U_1+U_2-U_3=
\left\{
\begin{array}{ll}
-q(|T_0|+|T_1|-q)+|T_0||T_1|, & \mbox{if $b^{-1}\in D_0$, $-ab^{-1}\in S_0$, $-ab^{-1}\in S_1$;}\\
& \mbox{\, or $b^{-1}\in D_0$, $-ab^{-1}\not\in S_0$, $-ab^{-1}\in S_1$;}\\
& \mbox{\, or $b^{-1}\in D_0$, $-ab^{-1}\in S_0$, $-ab^{-1}\not\in S_1$,}\\
-q(|T_0|+|T_1|)+|T_0||T_1|, &\mbox{if $b^{-1}\in D_0$, $-ab^{-1}\not\in S_0$, $-ab^{-1}\not\in S_1$, }\\
-q^2+|T_0||T_1|, & \mbox{if $b^{-1}\in D_1$, $-ab^{-1}\in S_0$, $-ab^{-1}\in S_1$,}\\
|T_0||T_1|, &\mbox{if $b^{-1}\in D_1$, $-ab^{-1}\not\in S_0$, $-ab^{-1}\in S_1$;}\\
&\mbox{\, or $b^{-1}\in D_1$, $-ab^{-1}\in S_0$, $-ab^{-1}\not\in S_1$;}\\
&\mbox{\, or $b^{-1}\in D_1$, $-ab^{-1}\not\in S_0$, $-ab^{-1}\not\in S_1$.}
\end{array}
\right. \end{aligned}$$ (1) Since $|T_0|=|T_1|=(q-1)/2$, by Remark \[rem:secChen\] (iv), we have $$\begin{aligned}
\psi_{a,b}(E_0)=&\,U_1+U_2-U_3+\sum_{x\in D_\epsilon}\psi_{\F_{q^2}}(bx)\\
=&\,
\left\{
\begin{array}{ll}
\frac{-3q^2-1}{4}, & \mbox{ if $b^{-1}\in D_0$, $-ab^{-1}\not\in S_0$, and $-ab^{-1}\not\in S_1$;}\\
& \mbox{ or if $b^{-1}\in D_1$, $-ab^{-1}\in S_0$, and $-ab^{-1}\in S_1$,}\\
\frac{q^2-1}{4}, & \mbox{ otherwise.}
\end{array}
\right. \end{aligned}$$ (2) Since $|T_0|=(q-1)/2$ and $|T_1|=(q+1)/2$, we have $$\begin{aligned}
\psi_{a,b}(E_1)=&\,U_1+U_2-U_3\\
=&\,
\left\{
\begin{array}{ll}
\frac{-3q^2-1}{4}, & \mbox{ if $b^{-1}\in D_0$, $-ab^{-1}\not\in S_0$, and $-ab^{-1}\not\in S_1$;}\\
& \mbox{ or if $b^{-1}\in D_1$, $-ab^{-1}\in S_0$, and $-ab^{-1}\in S_1$,}\\
\frac{q^2-1}{4}, & \mbox{ otherwise.}
\end{array}
\right. \end{aligned}$$ This completes the proof of the theorem.
Let $T_i$, $i=0,1$, be arbitrary $(q-1)/2$-subsets of $\F_q$ and $S_0,S_1,A_0,A_1$ be the sets defined as in and . Furthermore, define $$S_1'=\{x\in \F_{q^2}\,|\,\Tr_{q^2/q}(x\omega^m)\in \F_q\setminus T_1\}, \quad
A_0'=S_0\setminus S_1', \quad A_1'=S_1'\setminus S_0.$$ Then, the sets $$\begin{aligned}
C_0=&\{(x,xy)\,|\,x\in D_0,y \in A_0\} \cup
\{(x,xy)\,|\,x\in D_1,y \in A_1\} \cup \{(0,x)\,|\,x \in D_\epsilon\},\\
C_1=&\{(x,xy)\,|\,x\in D_0,y \in A_0'\} \cup
\{(x,xy)\,|\,x\in D_1,y \in A_1'\},\\
C_2=&\{(x,xy)\,|\,x\in D_1,y \in A_0\} \cup
\{(x,xy)\,|\,x\in D_0,y \in A_1\} \cup \{(0,x)\,|\,x \in D_{\epsilon+1}\},\\
C_3=&\{(x,xy)\,|\,x\in D_1,y \in A_0'\} \cup
\{(x,xy)\,|\,x\in D_0,y \in A_1'\} \end{aligned}$$ are of type Q, where the subscript of $D_{\epsilon+1}$ is reduced modulo $2$. Furthermore, these sets satisfy the assumptions of Remark \[rem:HDiff\] with respect to the spread consisting of the following $2$-dimensional subspaces: $$K_y=\{(x,xy)\,|\,x\in \F_{q^2}\}, y\in \F_{q^2}, \mbox{\, and \, }K_\infty=\{(0,x)\,|\,x\in \F_{q^2}\}.$$
By Theorem \[thm:mainf1\], $C_0$ and $C_1$ are type Q sets. Furthermore, since $C_2=\omega^m C_0$ and $C_3=\omega^m C_1$, the sets $C_2$ and $C_3$ are also of type Q. Finally, $C_i$, $i=0,1,2,3$, satisfy the assumption of Remark \[rem:HDiff\] as $C_0\cup C_2\cup\{(0,0)\}=\big(\bigcup_{y\in A_0\cup A_1}K_y\big)\cup K_\infty$ and $C_1\cup C_3\cup\{(0,0)\}=\bigcup_{y\in A_0'\cup A_1'}K_y$.
A generalization of Wilson-Xiang’s examples {#sec:firstty}
===========================================
The Setting {#subsec:set}
-----------
We fix notation used in this section. Let $q$ be an odd prime power and $\omega$ be a primitive element of $\F_{q^2}$. Let $c$ be any fixed odd integer in $\{0,1,\ldots,2q+1\}=\Z_{2q+2}$.
Define the following subsets of $\{0,1,\ldots,2q+1\}$: $$\begin{aligned}
I_1=&\,\{i\,(\mod{2(q+1)})\,|\,\Tr_{q^2/q}(\omega^i)=0\}=\{\tfrac{q+1}{2},\tfrac{3(q+1)}{2}\},\\
I_2=&\,\{i\,(\mod{2(q+1)})\,|\,\Tr_{q^2/q}(\omega^i)\in C_0^{(2,q)}\},\\
I_3=&\,\{i\,(\mod{2(q+1)})\,|\,\Tr_{q^2/q}(\omega^i)\in C_1^{(2,q)}\},\\
J_{i}=&\,I_i-c\,(\mod{2(q+1)}),\, \, \, \, i=1,2,3. \end{aligned}$$ Then $|I_1|=2$, $|I_2|=|I_3|=q$, and $I_1\cup I_2\cup I_3=\Z_{2q+2}$. Furthermore, define $$\begin{aligned}
X_{1,c}=&\,(I_1\cap J_{2})\cup (I_2\cap J_{1}),\\
X_{2,c}=&\,(I_1\cap J_{3})\cup (I_3\cap J_{1}),\\
X_{3,c}=&\,I_2\cap J_{2},\, \, X_{4,c}=I_3\cap J_{3},\\
X_{5,c}=&\,(I_2\cap J_{3})\cup (I_3\cap J_{2}).\end{aligned}$$ It is clear that the $X_{i,c}$’s partition $\Z_{2q+2}$. In the appendix, we will show that the $X_{i,c}$’s have the following properties:
- $X_{1,c}\equiv X_{2,c}+(q+1)\,(\mod{2(q+1)})$, $X_{3,c}\equiv X_{4,c}+(q+1)\,(\mod{2(q+1)})$;
- $|X_{1,c}|=|X_{2,c}|=2$, $|X_{3,c}|=|X_{4,c}|=\frac{q-1}{2}$, $|X_{5,c}|=q-1$;
- $X_{3,c+q+1}\cup X_{4,c+q+1}=X_{5,c}$;
- $X_{1,c}+c\equiv -X_{1,c}+(q+1)\,(\mod{2(q+1)})$ or $X_{2,c}+c\equiv -X_{2,c}+(q+1)\,(\mod{2(q+1)})$ according as $q\equiv 3$ or $1\,(\mod{4})$.
- $|X_{1,c}\cap X_{1,c+q+1}|=1$;
- By the properties (P2) and (P5), we can assume that $X_{1,c}=\{\alpha,\beta\}$ and $X_{1,c+q+1}=\{\alpha,\gamma\}$. Then, $\beta\equiv \gamma+(q+1)\,(\mod{2(q+1)})$. Furthermore, $\alpha\equiv 0\,(\mod{2})$ and $\beta\equiv 1\,(\mod{2})$ or $\alpha\equiv 1\,(\mod{2})$ and $\beta\equiv 0\,(\mod{2})$ according as $q\equiv 3$ or $1\,(\mod{4})$.
- Define $R_i=\bigcup_{j\in X_{i,c}}C_{j}^{(2(q+1),q^2)}$, $i=1,2,3,4,5$. Then, $R_i$ takes the character values listed in Table \[tab\_1\]:
$R_1$ $R_2$ $R_3$ $R_4$ $R_5$
---------------- ----------------------------- ----------------------------- ------------------------------------- ------------------------------------- -------------------------------------
$a\in Y_{1,c}$ $\frac{-2+q+G_q(\eta)}{2}$ $\frac{-2+q- G_q(\eta)}{2}$ $\frac{(q-1)(-1+ G_q(\eta))}{4}$ $\frac{(q-1)(-1- G_q(\eta))}{4}$ $\frac{-q+1}{2}$
$a\in Y_{2,c}$ $\frac{-2+q- G_q(\eta)}{2}$ $\frac{-2+q+ G_q(\eta)}{2}$ $\frac{(q-1)(-1- G_q(\eta))}{4}$ $\frac{(q-1)(-1+ G_q(\eta))}{4}$ $\frac{-q+1}{2}$
$a\in Y_{3,c}$ $-1+ G_q(\eta)$ $-1- G_q(\eta)$ $\frac{(1- G_q(\eta))^2}{4}$ $\frac{(1+ G_q(\eta))^2}{4}$ $\frac{1-(-1)^{\frac{q-1}{2}}q}{2}$
$a\in Y_{4,c}$ $-1- G_q(\eta)$ $-1+ G_q(\eta)$ $\frac{(1+ G_q(\eta))^2}{4}$ $\frac{(1- G_q(\eta))^2}{4}$ $\frac{1-(-1)^{\frac{q-1}{2}}q}{2}$
$a\in Y_{5,c}$ $-1$ $-1$ $\frac{1-(-1)^{\frac{q-1}{2}}q}{4}$ $\frac{1-(-1)^{\frac{q-1}{2}}q}{4}$ $\frac{1+(-1)^{\frac{q-1}{2}}q}{2}$
: \[tab\_1\]The values of $\psi_{\F_{q^2}}(\omega^a R_i)$’s
In the language of association schemes, the Cayley graphs on $(\F_{q^2},+)$ with connection sets $R_i$’s, together with the diagonal relation arising from the connection set $R_0=\{0\}$, form a $5$-class translation association scheme. Here, $Y_{i,c}$’s are subsets of $\{0,1,\ldots,2q+1\}$ defined as the index sets of the dual association scheme.
- $-Y_{i,c}+c\equiv Y_{i,c}\,(\mod{2(q+1)})$, $i=1,2$;
- $-(Y_{3,c}\cup Y_{4,c})\equiv Y_{5,c}-c\,(\mod{2(q+1)})$;
- Define $R_i'=\bigcup_{j\in Y_{i,c}}C_{j}^{(2(q+1),q^2)}$, $i=1,2,3,4,5$. Then, $R_i'$ takes the character values listed in Table \[tab\_2\]:
$R_1'$ $R_2'$ $R_3'$ $R_4'$ $R_5'$
---------------- ----------------------------- ----------------------------- ------------------------------------- ------------------------------------- -------------------------------------
$a\in X_{1,c}$ $\frac{-2+q+G_q(\eta)}{2}$ $\frac{-2+q- G_q(\eta)}{2}$ $\frac{(q-1)(-1+ G_q(\eta))}{4}$ $\frac{(q-1)(-1- G_q(\eta))}{4}$ $\frac{-q+1}{2}$
$a\in X_{2,c}$ $\frac{-2+q- G_q(\eta)}{2}$ $\frac{-2+q+ G_q(\eta)}{2}$ $\frac{(q-1)(-1- G_q(\eta))}{4}$ $\frac{(q-1)(-1+ G_q(\eta))}{4}$ $\frac{-q+1}{2}$
$a\in X_{3,c}$ $-1+ G_q(\eta)$ $-1- G_q(\eta)$ $\frac{(1- G_q(\eta))^2}{4}$ $\frac{(1+ G_q(\eta))^2}{4}$ $\frac{1-(-1)^{\frac{q-1}{2}}q}{2}$
$a\in X_{4,c}$ $-1- G_q(\eta)$ $-1+ G_q(\eta)$ $\frac{(1+ G_q(\eta))^2}{4}$ $\frac{(1- G_q(\eta))^2}{4}$ $\frac{1-(-1)^{\frac{q-1}{2}}q}{2}$
$a\in X_{5,c}$ $-1$ $-1$ $\frac{1-(-1)^{\frac{q-1}{2}}q}{4}$ $\frac{1-(-1)^{\frac{q-1}{2}}q}{4}$ $\frac{1+(-1)^{\frac{q-1}{2}}q}{2}$
: \[tab\_2\]The values of $\psi_{\F_{q^2}}(\omega^a R_i')$’s
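Properties (P1) and (P2) can be confirmed computationally for a given $q$. The Python sketch below (our own check, with $\F_{25}$ modeled as $\F_5[r]$, $r^2=2$, and the choice $c=1$) builds the partition $X_{i,c}$ of $\Z_{12}$ for $q=5$ and verifies the claimed sizes and shift relations:

```python
q, n, c = 5, 12, 1                      # n = 2(q + 1); c an odd element of Z_12
# F_25 = F_5[r] with r^2 = 2 (a nonsquare mod 5); a + b*r stored as (a, b).
mul = lambda x, y: ((x[0]*y[0] + 2*x[1]*y[1]) % 5, (x[0]*y[1] + x[1]*y[0]) % 5)
tr  = lambda x: (2 * x[0]) % 5          # Tr_{25/5}(a + b*r) = 2a

def order(x):                           # multiplicative order in F_25^*
    y, k = x, 1
    while y != (1, 0):
        y, k = mul(y, x), k + 1
    return k

w = next(x for x in ((a, b) for a in range(5) for b in range(5))
         if x != (0, 0) and order(x) == 24)          # a primitive element
pw = [(1, 0)]
for _ in range(23):
    pw.append(mul(pw[-1], w))           # pw[i] = omega^i

sq = {x * x % 5 for x in range(1, 5)}   # C_0^{(2,5)} = {1, 4}
I1 = {i for i in range(n) if tr(pw[i]) == 0}
I2 = {i for i in range(n) if tr(pw[i]) in sq}
I3 = set(range(n)) - I1 - I2
J1, J2, J3 = ({(i - c) % n for i in I} for I in (I1, I2, I3))
X1 = (I1 & J2) | (I2 & J1)
X2 = (I1 & J3) | (I3 & J1)
X3, X4 = I2 & J2, I3 & J3
X5 = (I2 & J3) | (I3 & J2)
assert I1 == {(q + 1) // 2, 3 * (q + 1) // 2}        # {3, 9}
assert (len(X1), len(X3), len(X5)) == (2, (q - 1) // 2, q - 1)  # (P2)
assert X1 == {(x + q + 1) % n for x in X2}           # (P1), first relation
assert X3 == {(x + q + 1) % n for x in X4}           # (P1), second relation
```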
The Construction {#subsec:const}
----------------
Let $X_{i,c}$, $Y_{i,c}$, $R_i$, $R_i'$, $i=1,2,\ldots,5$, be sets defined as in Subsection \[subsec:set\]. Let $A$ and $B$ be subsets of $\{0,1,\ldots,2q+1\}$ satisfying $A\cap B=X_{3,c}$ and, as multisets, $A\cup B=X_{1,c}\cup X_{3,c}\cup X_{3,c}$. It follows that $(A\setminus B)\cup (B\setminus A)=X_{1,c}$.
Let $\tau=0$ or $1$ according as $q\equiv 3$ or $1\,(\mod{4})$. Define $$\begin{aligned}
D_0=&\{(0,y)\,|\,y\in C_\tau^{(2,q^2)}\},\nonumber\\
D_1=&\{(y,0)\,|\,y\in C_0^{(2,q^2)}\},\nonumber\\
D_2=&\{(xy,xy^{-1}\omega^i)\,|\,x\in \F_q^\ast,y\in C_0^{(2,q^2)},i\in A \},
\nonumber\\
D_3=&
\{(xy,xy^{-1}\omega^i)\,|\,x\in \F_q^\ast,y\in C_1^{(2,q^2)},i\in B \}. \label{def:Dis}\end{aligned}$$ We denote the set of even (resp. odd) elements in any subset $S$ of $\{0,1,\ldots,2q+1\}$ by $S_e$ (resp. $S_o$). The following is our main result in this section.
\[thm:mainWX1\]
- If $|A|=|B|=\frac{q+1}{2}$ and $|A_e|+|B_o|=|A_o|+|B_e|-2(-1)^{\frac{q-1}{2}}$, then $E_0=D_0\cup D_2\cup D_3$ is a type Q set in $\F_{q^2}\times \F_{q^2}$.
- If $|A|=\frac{q+3}{2}$, $|B|=\frac{q-1}{2}$ and $|A_e|+|B_o|=|A_o|+|B_e|$, then $E_1=D_1\cup D_2\cup D_3$ is a type Q set in $\F_{q^2}\times \F_{q^2}$.
This theorem generalizes the examples of type Q sets found by Wilson-Xiang [@WX97]. Indeed, these sets admit the automorphism $T'$. See Subsection \[sec:twonew\].
To prove the theorem above, we will evaluate the character values of $E_i$, $i=0,1$. Define $$\begin{aligned}
V_0=&\sum_{y\in C_\tau^{(2,q^2)}}\psi_{\F_{q^2}}(by), \quad
V_1=\sum_{y\in C_0^{(2,q^2)}}\psi_{\F_{q^2}}(ay), \\
V_2=&\frac{1}{2}\sum_{i\in A}\sum_{y\in C_0^{(2,q^2)}}\sum_{x\in \F_q^\ast}\psi_{\F_{q^2}}(axy)\psi_{\F_{q^2}}(bxy^{-1}\omega^i), \\
V_3=&\frac{1}{2}\sum_{i\in B}\sum_{y\in C_1^{(2,q^2)}}\sum_{x\in \F_q^\ast}\psi_{\F_{q^2}}(axy)\psi_{\F_{q^2}}(bxy^{-1}\omega^i).\end{aligned}$$ Noting that each element in $D_2$ (resp. $D_3$) appears exactly twice when $x$ runs through $\F_q^*$ and $y$ runs through $C_0^{(2,q^2)}$ (resp. $C_1^{(2,q^2)}$), we have $\psi_{a,b}(E_0)=V_0+V_2+V_3$ and $\psi_{a,b}(E_1)=V_1+V_2+V_3$. We will evaluate these character sums by considering two cases: (i) exactly one of $a, b$ is zero; and (ii) $a\not=0$ and $b\not=0$. We first treat Case (i).
\[lemma:ab0\] If exactly one of $a, b$ is zero, then $$\psi_{a,b}(E_0)=\left\{
\begin{array}{ll}
\frac{-3q^2-1}{4}, & \mbox{ if $a=0$ and $b\in C_{1}^{(2,q^2)}$, }\\
\frac{q^2-1}{4}, & \mbox{ otherwise.}
\end{array}
\right.$$
If $a\not=0$ and $b=0$, it is clear that $V_0=\frac{q^2-1}{2}$. Furthermore, since $|A|=|B|=\frac{q+1}{2}$, we have $$V_2+V_3=\frac{q+1}{4}\sum_{y\in \F_{q^2}^\ast}\sum_{x\in \F_q^\ast}\psi_{\F_{q^2}}(axy)=-\frac{q^2-1}{4}.$$ Hence, $\psi_{a,b}(E_0)=\frac{q^2-1}{4}$. If $a=0$ and $b\not=0$, we have $$\begin{aligned}
\label{eq:v1v2v3}
&V_0+V_2+V_3\nonumber\\
=&\,
\psi_{\F_{q^2}}(b C_\tau^{(2,q^2)})+\frac{q-1}{2}((|A_e|+|B_o|)
\psi_{\F_{q^2}}(b C_0^{(2,q^2)})+(|A_o|+ |B_e|)\psi_{\F_{q^2}}(b C_1^{(2,q^2)})).
$$ Since $|A_e|+|A_o|+|B_e|+|B_o|=|A|+|B|=q+1$ and $|A_e|+|B_o|=|A_o|+|B_e|-2(-1)^{\frac{q-1}{2}}$, we have $|A_e|+|B_o|=\frac{q+1}{2}-(-1)^{\frac{q-1}{2}}$ and $|A_o|+|B_e|=\frac{q+1}{2}+(-1)^{\frac{q-1}{2}}$. Hence, (\[eq:v1v2v3\]) is reformulated as $$V_0+V_2+V_3=q\psi_{\F_{q^2}}(b C_\tau^{(2,q^2)})-\Big(\frac{q-1}{2}\Big)^2.$$ Finally, by , the statement follows.
\[lemma:ab02\] If exactly one of $a, b$ is zero, then $$\psi_{a,b}(E_1)=\left\{
\begin{array}{ll}
\frac{-3q^2-1}{4}, & \mbox{ if $b=0$ and $a\in C_{\tau+1}^{(2,q^2)}$, }\\
\frac{q^2-1}{4}, & \mbox{ otherwise.}
\end{array}
\right.$$
If $a=0$ and $b\not=0$, it is clear that $V_1=\frac{q^2-1}{2}$. Since $|A_e|+|B_o|=|A_o|+|B_e|=\frac{q+1}{2}$, we have $$\begin{aligned}
V_2+V_3
=&\,\frac{q-1}{2}((|A_e|+|B_o|)
\psi_{\F_{q^2}}(b C_0^{(2,q^2)})+(|A_o|+ |B_e|)\psi_{\F_{q^2}}(b C_1^{(2,q^2)})) \\
=&\,-\frac{q^2-1}{4}. \end{aligned}$$ Hence, $\psi_{a,b}(E_1)=\frac{q^2-1}{4}$. If $a\not=0$ and $b=0$, $$\label{eq:v1v2v3R}
V_1+V_2+V_3=\psi_{\F_{q^2}}(aC_0^{(2,q^2)})
+\frac{q-1}{2}(|A|\psi_{\F_{q^2}}(aC_0^{(2,q^2)})+|B|\psi_{\F_{q^2}}(aC_1^{(2,q^2)})).$$ Since $|A|=\frac{q+3}{2}$ and $|B|=\frac{q-1}{2}$, (\[eq:v1v2v3R\]) is reformulated as $$V_1+V_2+V_3=q\psi_{\F_{q^2}}(a C_0^{(2,q^2)})-\Big(\frac{q-1}{2}\Big)^2.$$ Finally, by , the statement follows.
We next consider Case (ii), i.e., $a\neq 0$ and $b\not=0$.
\[lem:abnot0\] If $a\neq 0$ and $b\not=0$, then $$\begin{aligned}
V_2+V_3=&\,\frac{1}{4(q+1)}\sum_{u=0,1}\sum_{h=0}^{2q+1}G_{q^2}(\chi_{2(q+1)}^{-h}\rho^u)G_{q^2}(\chi_{2(q+1)}^{-h})\chi_{2(q+1)}^h(ab)\rho^u(a)\nonumber
\\
&\hspace{3.3cm}\times \Big(\sum_{i\in A}\chi_{2(q+1)}^{h}(\omega^i)+\sum_{i\in B}\chi_{2(q+1)}^h(\omega^{i})\rho^u(\omega)\Big), \label{eigen34}\end{aligned}$$ where $\chi_{2(q+1)}$ is a multiplicative character of order $2(q+1)$ of $\F_{q^2}$ and $\rho$ is the quadratic character of $\F_{q^2}$.
Let $\chi$ be a multiplicative character of order $q^2-1$ of $\F_{q^2}$. By , we have $$\begin{aligned}
V_2=&\,\frac{1}{2(q^2-1)^2}\sum_{i\in A}\sum_{y\in C_0^{(2,q^2)}}\sum_{x\in \F_q^\ast}\sum_{j,k=0}^{q^2-2}G_{q^2}(\chi^{-j})\chi^j(axy)G_{q^2}(\chi^{-k})\chi^k(bxy^{-1}\omega^i)\nonumber\\
=&\,\frac{1}{2(q^2-1)^2}\sum_{i\in A}\sum_{j,k=0}^{q^2-2}G_{q^2}(\chi^{-j})G_{q^2}(\chi^{-k})\chi^j(a)\chi^k(b\omega^i)\chi^{j-k}(C_0^{(2,q^2)})\Big(\sum_{x\in \F_q^\ast}\chi^{j+k}(x)\Big).
\label{eigen2}\end{aligned}$$ Since $\chi^{j-k}(C_0^{(2,q^2)})=\frac{q^2-1}{2}$ or $0$ according as $j-k\equiv 0\,(\mod{\frac{q^2-1}{2}})$ or not, continuing from (\[eigen2\]), we have [$$\begin{aligned}
V_2=&\,\frac{1}{4(q^2-1)}\sum_{i\in A}\sum_{u=0,1}\sum_{k=0}^{q^2-2}G_{q^2}(\chi^{-k-\frac{q^2-1}{2}u})G_{q^2}(\chi^{-k})\chi^{k+\frac{q^2-1}{2}u}(a)\chi^{k}(b\omega^i)\Big(\sum_{x\in \F_q^\ast}\chi^{2k+\frac{q^2-1}{2}u}(x)\Big).\label{eigen3}\end{aligned}$$]{} Let $\chi_{2(q+1)}=\chi^{\frac{q-1}{2}}$ and $\rho=\chi^{\frac{q^2-1}{2}}$. Since $\sum_{x\in \F_q^\ast}\chi^{2k+\frac{q^2-1}{2}u}(x)=q-1$ or $0$ according as $2k\equiv 0\,(\mod{q-1})$ or not, continuing from (\[eigen3\]), we have $$V_2=\frac{1}{4(q+1)}\sum_{u=0,1}\sum_{h=0}^{2q+1}G_{q^2}(\chi_{2(q+1)}^{-h}\rho^u)G_{q^2}(\chi_{2(q+1)}^{-h})\chi_{2(q+1)}^h(ab)\rho^u(a)\sum_{i\in A}\chi_{2(q+1)}^{h}(\omega^i).$$ Similarly, we have $$V_3
=\frac{1}{4(q+1)}\sum_{u=0,1}\sum_{h=0}^{2q+1}G_{q^2}(\chi_{2(q+1)}^{-h}\rho^u)G_{q^2}(\chi_{2(q+1)}^{-h})\chi_{2(q+1)}^h(ab)\rho^u(a)
\sum_{i\in B}\chi_{2(q+1)}^h(\omega^{i})\rho^u(\omega).$$ This completes the proof of the lemma.
Let $W_0$ (resp. $W_1$) be the contribution for $u=0$ (resp. $u=1$) in the summations of (\[eigen34\]); then $V_2+V_3=W_0+W_1$.
\[lem:ss1\] Let $r=ab\not=0$. Then, $$W_0=
\left\{
\begin{array}{ll}
\frac{-q^2+1}{4}, & \mbox{ if $r\in \omega^c R_1$\, or\, $r\in \omega^c R_2$,}\\
\frac{q^2+1}{4}\mbox{ or } \frac{-3q^2+1}{4}, &\mbox{ otherwise, }
\end{array}
\right.$$ depending on whether $q\equiv 3$ or $1\,(\mod{4})$.
By the definition of $W_0$, we have $$\begin{aligned}
W_0=\frac{1}{4(q+1)}\sum_{h=0}^{2q+1}G_{q^2}(\chi_{2(q+1)}^{-h})^2\chi_{2(q+1)}^h(r)\Big(\sum_{i\in A\cup B}\chi_{2(q+1)}^{h}(\omega^i)\Big). \end{aligned}$$ Since $A\cup B=X_{1,c}\cup X_{3,c}\cup X_{3,c}$ as a multiset, by the property (P7), we have $$\psi_{\F_{q^2}}(\omega^a\bigcup_{i\in A\cup B}C_i^{(2(q+1),q^2)})=\left\{
\begin{array}{ll}
\frac{q G_q(\eta)-1}{2}(=:c_1), & \mbox{ if $a\in Y_{1,c}(=:Z_1)$,}\\
\frac{- q G_q(\eta)-1}{2}(=:c_2), & \mbox{ if $a\in Y_{2,c}(=:Z_2)$,}\\
\frac{-1+(-1)^{\frac{q-1}{2}}q}{2}(=:c_3), & \mbox{ if $a\in Y_{3,c}\cup Y_{4,c}(=:Z_3)$,}\\
\frac{-1-(-1)^{\frac{q-1}{2}}q}{2}(=:c_4), & \mbox{ if $a\in Y_{5,c}(=:Z_4)$. }
\end{array}
\right.$$ Then, by , we have $$\begin{aligned}
G_{q^2}(\chi_{2(q+1)}^{-h})\sum_{i\in A\cup B}\chi_{2(q+1)}^{h}(\omega^{i})
=&\,
\sum_{a=0}^{2q+1}\psi_{\F_{q^2}}(\omega^a \bigcup_{i\in A\cup B}C_i^{(2(q+1),q^2)})
\chi_{2(q+1)}^{-h}(\omega^a)\\
=&\,\sum_{i=1}^4c_{i}\sum_{a\in Z_i}\chi_{2(q+1)}^{-h}(\omega^a). \end{aligned}$$ Then, by , we have $$\begin{aligned}
W_0=&\,\sum_{i=1}^4\frac{c_i}{4(q+1)}\sum_{a\in Z_i}\sum_{h=0}^{2q+1}G_{q^2}(\chi_{2(q+1)}^{-h})\chi_{2(q+1)}^h(r\omega^{-a})\\
=&\,\sum_{i=1}^4\frac{c_i}{2}
\sum_{a\in -Z_{i}}\psi_{\F_{q^2}}(rC_a^{(2(q+1),q^2)})
.\end{aligned}$$ Since $-Y_{i,c}\equiv Y_{i,c}-c\,(\mod{2(q+1)})$, $i=1,2$, from the property (P8), we have by the property (P10) that for $i=1,2$ $$\sum_{a\in -Z_{i}}\psi_{\F_{q^2}}(rC_a^{(2(q+1),q^2)})
=
\left\{
\begin{array}{ll}
\frac{-2+q+(-1)^{i-1} G_q(\eta)}{2}, & \mbox{ if $r\in \omega^cR_1$,}\\
\frac{-2+q-(-1)^{i-1} G_q(\eta)}{2}, &\mbox{ if $r\in \omega^cR_2$,}\\
-1+(-1)^{i-1} G_q(\eta), & \mbox{ if $r\in \omega^cR_3$,}\\
-1-(-1)^{i-1} G_q(\eta),& \mbox{ if $r\in \omega^cR_4$,}\\
-1,& \mbox{ if $r\in\omega^cR_5$.}
\end{array}
\right.$$ Furthermore, since $-(Y_{3,c}\cup Y_{4,c})\equiv Y_{5,c}-c\,(\mod{2(q+1)})$ by the property (P9), we have $$\begin{aligned}
\sum_{a\in -Z_3}\psi_{\F_{q^2}}(rC_a^{(2(q+1),q^2)})=&\, \sum_{a\in Y_{5,c}-c}\psi_{\F_{q^2}}(rC_a^{(2(q+1),q^2)})
\\
=&\,
\left\{
\begin{array}{ll}
\frac{1-q}{2}, & \mbox{ if $r\in \omega^c(R_1\cup R_2)$,}\\
\frac{1+(-1)^{\frac{q-1}{2}}q}{2}, &\mbox{ if $r\in \omega^c R_5$,}\\
\frac{1-(-1)^{\frac{q-1}{2}}q}{2}, &\mbox{ if $r\in \omega^c(R_3\cup R_4)$.}
\end{array}
\right. \end{aligned}$$ Similarly, we have $$\begin{aligned}
\sum_{a\in -Z_4}\psi_{\F_{q^2}}(rC_a^{(2(q+1),q^2)})=&\, \sum_{a\in (Y_{3}\cup Y_{4})-c}\psi_{\F_{q^2}}(rC_a^{(2(q+1),q^2)})\\
=&\,
\left\{
\begin{array}{ll}
\frac{-q+1}{2}, & \mbox{ if $r\in \omega^c(R_1\cup R_2)$,}\\
\frac{1-(-1)^{\frac{q-1}{2}}q}{2}, & \mbox{ if $r\in \omega^c R_5$,}\\
\frac{1+(-1)^{\frac{q-1}{2}}q}{2}, & \mbox{ if $r\in \omega^c(R_3\cup R_4)$.}
\end{array}
\right.\end{aligned}$$ Summing up, we have $$W_0=
\left\{
\begin{array}{ll}
\frac{-q^2+1}{4}, & \mbox{ if $r\in \omega^c R_1$\, or\, $r\in \omega^c R_2$,}\\
\frac{q^2+1}{4}, &\mbox{ if $r\in \omega^c (R_2\cup R_4\cup R_5)$\, or\, $r\in \omega^c (R_1\cup R_3\cup R_5)$,}\\
\frac{-3q^2+1}{4}, & \mbox{ if $r\in \omega^c R_3$\,
or\, $r\in \omega^c R_4$, }
\end{array}
\right.$$ according as $q\equiv 3$ or $1\,(\mod{4})$. This completes the proof.
We next evaluate $W_1$ below.
\[lem:s1\] Let $r=ab\not=0$. Then, $$\begin{aligned}
W_1=&\,
-\frac{(-1)^{\frac{q-1}{2}}\rho(a)q}{4}\Big(|A|-|B|+\rho(r)(|A_e|+|B_o|-|A_o|-|B_e|)\Big)\\
&\quad +
\frac{(-1)^{\frac{q-1}{2}}\rho(a)q^2}{2}\cdot
\left\{
\begin{array}{ll}
1,& \mbox{ if $r \in \bigcup_{i\in -(A\setminus B)+q+1}C_{i}^{(2(q+1),q^2)}$,} \\
-1,& \mbox{ if $r \in \bigcup_{i\in -(B\setminus A)+q+1}C_{i}^{(2(q+1),q^2)}$,} \\
0,& \mbox{ otherwise. }
\end{array}
\right.\end{aligned}$$
By the definition of $W_1$, we have $$\label{eq:w1}
W_1=\frac{\rho(a)}{4(q+1)}\sum_{h=0}^{2q+1}G_{q^2}(\chi_{2(q+1)}^{-h}\rho)G_{q^2}(\chi_{2(q+1)}^{-h})\chi_{2(q+1)}^h(r)\Big(\sum_{i\in A}\chi_{2(q+1)}^{h}(\omega^i)-\sum_{i\in B}\chi_{2(q+1)}^h(\omega^{i})\Big).$$ By applying the Davenport–Hasse product formula (Theorem \[thm:Stickel2\]) with $\chi=\chi_{2(q+1)}^{-h}$, $\chi'=\rho$, and $\ell=2$, we have $$G_{q^2}(\chi_{2(q+1)}^{-h})G_{q^2}(\chi_{2(q+1)}^{-h}\rho)=G_{q^2}(\rho)G_{q^2}(\chi_{q+1}^{-h}),$$ where $\chi_{q+1}=\chi_{2(q+1)}^{2}$ has order $q+1$. Then, (\[eq:w1\]) is rewritten as $$\label{eq:s0}
W_1=\frac{\rho(a)}{4(q+1)}G_{q^2}(\rho)\sum_{h=0}^{2q+1}G_{q^2}(\chi_{q+1}^{-h})\chi_{2(q+1)}^h(r)\Big(\sum_{i\in A}\chi_{2(q+1)}^{h}(\omega^i)-\sum_{i\in B}\chi_{2(q+1)}^h(\omega^{i})\Big).$$ We will compute $W_1$ by dividing it into three parts. Let $W_{1,1},W_{1,2},W_{1,3}$ denote the contributions in the sum on the right hand side of (\[eq:s0\]) when $h=0, q+1$; other even $h$; and odd $h$, respectively. Then $W_1=W_{1,1}+W_{1,2}+W_{1,3}$. For $W_{1,1}$, we have $$W_{1,1}=-\frac{\rho(a)}{4(q+1)}G_{q^2}(\rho)\Big(|A|-|B|+\rho(r)(|A_e|+|B_o|-|A_o|-|B_e|)\Big).$$ Next, by Theorem \[thm:semi\], we have $$\label{eq:w12}
W_{1,2}=\frac{\rho(a)q}{4(q+1)}G_{q^2}(\rho)\sum_{\ell=0;\ell\not=0,\frac{q+1}{2}}^{q}\chi_{q+1}^\ell(r)\Big(\sum_{i\in A}\chi_{q+1}^{\ell}(\omega^i)-\sum_{i\in B}\chi_{q+1}^\ell(\omega^{i})\Big).$$ By the property (P2), $$\{x\,(\mod{q+1})\,|\,x\in A\setminus B\}\cap \{x\,(\mod{q+1})\,|\,x\in B\setminus A\}=\emptyset.$$ Hence, continuing from , we have $$\begin{aligned}
W_{1,2}=& \,-\frac{\rho(a)q}{4(q+1)}G_{q^2}(\rho)\Big(|A|-|B|+\rho(r)(|A_e|+|B_o|-|A_o|-|B_e|)\Big)\\
&\quad +\frac{\rho(a)q}{4}G_{q^2}(\rho)
\cdot
\left\{
\begin{array}{ll}
1,& \mbox{ if $r \in \bigcup_{i\in -(A\setminus B)}C_{i}^{(q+1,q^2)}$,} \\
-1,& \mbox{ if $r \in \bigcup_{i\in -(B\setminus A)}C_{i}^{(q+1,q^2)}$,}\\
0,& \mbox{ otherwise. } \\
\end{array}
\right.
$$ Finally, by Theorem \[thm:semi\] again, we have $$\begin{aligned}
W_{1,3}=&\,-\frac{\rho(a)q}{4(q+1)}G_{q^2}(\rho)\sum_{h:\, odd}\chi_{2(q+1)}^h(r)\Big(\sum_{i\in A}\chi_{2(q+1)}^{h}(\omega^i)-\sum_{i\in B}\chi_{2(q+1)}^h(\omega^{i})\Big)\\
=&\,-\frac{\rho(a)q}{4(q+1)}G_{q^2}(\rho)\sum_{h=0}^{2q+1}\chi_{2(q+1)}^h(r)\Big(\sum_{i\in A}\chi_{2(q+1)}^{h}(\omega^i)-\sum_{i\in B}\chi_{2(q+1)}^h(\omega^{i})\Big)\\
&\, \quad +\frac{\rho(a)q}{4(q+1)}G_{q^2}(\rho)\sum_{\ell=0}^{q}\chi_{q+1}^\ell(r)\Big(\sum_{i\in A}\chi_{q+1}^{\ell}(\omega^i)-\sum_{i\in B}\chi_{q+1}^\ell(\omega^{i})\Big)\\
=&\,-\frac{\rho(a)q}{2}G_{q^2}(\rho)\cdot
\left\{
\begin{array}{ll}
1,& \mbox{ if $r \in \bigcup_{i\in -(A\setminus B)}C_{i}^{(2(q+1),q^2)}$,} \\
-1,& \mbox{ if $r \in \bigcup_{i\in -(B\setminus A)}C_{i}^{(2(q+1),q^2)}$,}\\
0,& \mbox{ otherwise, }
\end{array}
\right.\\
&\quad +\frac{\rho(a)q}{4}G_{q^2}(\rho)
\cdot \left\{
\begin{array}{ll}
1,& \mbox{ if $r \in \bigcup_{i\in -(A\setminus B)}C_{i}^{(q+1,q^2)}$,} \\
-1,& \mbox{ if $r \in \bigcup_{i\in -(B\setminus A)}C_{i}^{(q+1,q^2)}$,}\\
0,& \mbox{ otherwise. }
\end{array}
\right.
$$ Summing up, we have $$\begin{aligned}
W_1=&\,W_{1,1}+W_{1,2}+W_{1,3}\\
=&\,
-\frac{\rho(a)}{4}G_{q^2}(\rho)\Big(|A|-|B|+\rho(r)(|A_e|+|B_o|-|A_o|-|B_e|)\Big)\\
&\quad +
\frac{\rho(a)q}{2}G_{q^2}(\rho)\cdot
\left\{
\begin{array}{ll}
1,& \mbox{ if $r \in \bigcup_{i\in -(A\setminus B)+q+1}C_{i}^{(2(q+1),q^2)}$,} \\
-1,& \mbox{ if $r \in \bigcup_{i\in -(B\setminus A)+q+1}C_{i}^{(2(q+1),q^2)}$,} \\
0,& \mbox{ otherwise. }
\end{array}
\right.\end{aligned}$$ The statement now follows from $G_{q^2}(\rho)=-(-1)^{\frac{q-1}{2}}q$.
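The evaluation $G_{q^2}(\rho)=-(-1)^{\frac{q-1}{2}}q$ invoked in the last step is the classical formula for the quadratic Gauss sum of $\F_{q^2}$ (it also follows from the Davenport–Hasse lifting of $G_q(\eta)$). For prime $q$ it can be confirmed by brute force; the Python sketch below uses our own model $\F_{q^2}=\F_q[w]/(w^2-n)$ with $n$ a quadratic non-residue, so that $w^q=-w$ and $\Tr_{q^2/q}(a+bw)=2a$.

```python
import cmath

def quadratic_gauss_sum(q, n):
    """G_{q^2}(rho) = sum_{x in F_{q^2}^*} rho(x) psi(x), for q an odd prime,
    modelling F_{q^2} = F_q[w]/(w^2 - n) with n a quadratic non-residue,
    so that Tr_{q^2/q}(a + b*w) = 2a and psi(x) = exp(2*pi*i*Tr(x)/q)."""
    def mul(u, v):
        a, b = u
        c, d = v
        return ((a * c + n * b * d) % q, (a * d + b * c) % q)

    def power(x, k):                     # square-and-multiply in F_{q^2}
        r = (1, 0)
        while k:
            if k & 1:
                r = mul(r, x)
            x = mul(x, x)
            k >>= 1
        return r

    zeta = cmath.exp(2j * cmath.pi / q)  # additive character of F_q
    half = (q * q - 1) // 2
    total = 0
    for a in range(q):
        for b in range(q):
            if (a, b) == (0, 0):
                continue
            rho = 1 if power((a, b), half) == (1, 0) else -1  # quadratic character
            total += rho * zeta ** ((2 * a) % q)
    return total

for q, n in [(3, 2), (5, 2), (7, 3)]:
    expected = -(-1) ** ((q - 1) // 2) * q
    assert abs(quadratic_gauss_sum(q, n) - expected) < 1e-9
```

For $q=3$ the sum evaluates to $3$, in agreement with $-(-1)^{\frac{q-1}{2}}q$.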
By Lemmas \[lem:ss1\] and \[lem:s1\], we have $$\begin{aligned}
V_2+V_3=&\, W_0+W_1\nonumber\\
=&\,
\frac{(-1)^{\frac{q-1}{2}}\rho(a)q}{4}\Big(|A|-|B|+\rho(r)(|A_e|+|B_o|-|A_o|-|B_e|)\Big)\nonumber\\
&\, \quad -\frac{(-1)^{\frac{q-1}{2}}\rho(a)q^2}{2}\cdot \left\{
\begin{array}{ll}
1,& \mbox{ if $r \in \bigcup_{i\in -(A\setminus B)+(q+1)}C_i^{(2(q+1),q^2)}$,} \\
-1,& \mbox{ if $r \in \bigcup_{i\in -(B\setminus A)+(q+1)}C_i^{(2(q+1),q^2)}$,} \\
0,& \mbox{ otherwise,}
\end{array}
\right.\nonumber\\
&\, \, \quad +\left\{
\begin{array}{ll}
\frac{-q^2+1}{4},& \mbox{ if $r\in \omega^c R_1$ or $r\in \omega^c R_2$,} \\
\frac{-3q^2+1}{4} \mbox{\, or \,} \frac{q^2+1}{4},& \mbox{ otherwise,}
\end{array}
\right.
\label{eq:chara}\end{aligned}$$ according as $q\equiv 3$ or $1\,(\mod{4})$. By the property (P4), $X_{1,c}+c\equiv -((A\setminus B)\cup (B\setminus A))+(q+1)\,(\mod{2(q+1)})$ or $X_{2,c}+c\equiv-((A\setminus B)\cup (B\setminus A))+(q+1)\,(\mod{2(q+1)})$ depending on whether $q\equiv 3$ or $1\,(\mod{4})$. Hence, continuing from (\[eq:chara\]), we have $$\begin{aligned}
&V_2+V_3-\frac{(-1)^{\frac{q-1}{2}}\rho(a)q}{4}\Big(|A|-|B|+\rho(r)(|A_e|+|B_o|-|A_o|-|B_e|)\Big) \nonumber\\
=&\, \frac{q^2+1}{4}\, \, \mbox{or }\, \, \frac{-3q^2+1}{4}.\label{eq:v2v3last}\end{aligned}$$
We are now ready to prove our main theorem.
In the case where exactly one of $a, b$ is zero, the statement follows from Lemmas \[lemma:ab0\] and \[lemma:ab02\]. We treat the case where $a\neq 0$ and $b\not=0$.
\(1) By , $V_0=\frac{-1+\rho(b)q}{2}$. Furthermore, by $|A|=|B|=\frac{q+1}{2}$ and $|A_e|+|B_o|=|A_o|+|B_e|-2(-1)^{\frac{q-1}{2}}$, we have $$\frac{(-1)^{\frac{q-1}{2}}\rho(a)q}{4}\Big(|A|-|B|+\rho(r)(|A_e|+|B_o|-|A_o|-|B_e|)\Big)=-\frac{\rho(b)q}{2}.$$ Hence, by (\[eq:v2v3last\]), it follows that $\psi_{a,b}(E_0)=
\frac{q^2-1}{4}\, \, \mbox{or }\, \, \frac{-3q^2-1}{4}$.
\(2) By , $V_1=\frac{-1-(-1)^{\frac{q-1}{2}}\rho(a)q}{2}$. Furthermore, by $|A|=\frac{q+3}{2}$, $|B|=\frac{q-1}{2}$, and $|A_e|+|B_o|=|A_o|+|B_e|$, we have $$\frac{(-1)^{\frac{q-1}{2}}\rho(a)q}{4}\Big(|A|-|B|+\rho(r)(|A_e|+|B_o|-|A_o|-|B_e|)\Big)=\frac{(-1)^{\frac{q-1}{2}}\rho(a)q}{2}.$$ Hence, by (\[eq:v2v3last\]), it follows that $\psi_{a,b}(E_1)=
\frac{q^2-1}{4}\, \, \mbox{or }\, \, \frac{-3q^2-1}{4}$.
Let $A=\{\beta\}\cup X_{3,c}$, $B=\{\alpha\}\cup X_{3,c}$, $A'=X_{1,c+q+1}\cup X_{3,c+q+1}$, and $B'=X_{3,c+q+1}$, where $\alpha,\beta$ are defined as in the property (P6). Then, the sets $$\begin{aligned}
C_0=&\{(0,y)\,|\,y\in C_\tau^{(2,q^2)}\} \cup \{(xy,xy^{-1}\omega^i)\,|\,x\in \F_q^\ast,y\in C_0^{(2,q^2)},i\in A\}\\
&\quad \cup
\{(xy,xy^{-1}\omega^i)\,|\,x\in \F_q^\ast,y\in C_1^{(2,q^2)},i\in B\},\\
C_1=&\{(y,0)\,|\,y\in C_0^{(2,q^2)}\}\cup \{(xy,xy^{-1}\omega^i)\,|\,x\in \F_q^\ast,y\in C_0^{(2,q^2)},i\in A'\}\\
&\quad \cup \{(xy,xy^{-1}\omega^i)\,|\,x\in \F_q^\ast,y\in C_1^{(2,q^2)},i\in B'\},\\
C_2=&\{(0,y)\,|\,y\in C_{\tau+1}^{(2,q^2)}\} \cup \{(xy\omega,xy^{-1}\omega^{i+q})\,|\,x\in \F_q^\ast,y\in C_0^{(2,q^2)},i\in A\}\\
&\quad \cup
\{(xy\omega,xy^{-1}\omega^{i+q})\,|\,x\in \F_q^\ast,y\in C_1^{(2,q^2)},i\in B\},\\
C_3=&\{(y,0)\,|\,y\in C_1^{(2,q^2)}\}\cup \{(xy\omega,xy^{-1}\omega^{i+q})\,|\,x\in \F_q^\ast,y\in C_0^{(2,q^2)},i\in A'\}\\
&\quad \cup \{(xy\omega,xy^{-1}\omega^{i+q})\,|\,x\in \F_q^\ast,y\in C_1^{(2,q^2)},i\in B'\}\end{aligned}$$ are of type Q. Furthermore, these sets satisfy the assumptions of Remark \[rem:HDiff\] with respect to the spread ${\mathcal K}$ consisting of the following $2$-dimensional subspaces: $$K_y=\{ (x,yx^q) \,|\,x\in \F_{q^2}\}, y\in \F_{q^2}, \, \mbox{\, and \, }\, K_\infty=\{(0,x)\,|\,x\in \F_{q^2}\}.$$
By the property (P6), $|A_e|+|B_o|=|A_o|+|B_e|-2(-1)^{\frac{q-1}{2}}$ and $|A_e'|+|B_o'|=|A_o'|+|B_e'|$. Hence, by Theorem \[thm:mainWX1\], $C_0$ and $C_1$ are type Q sets. Since $C_2=\{(\omega x,\omega^q y)\,|\, (x,y)\in C_0\}$ and $C_3=\{(\omega x,\omega^q y)\,|\, (x,y)\in C_1\}$, the sets $C_2$ and $C_3$ are also of type Q. Furthermore, $\bigcup_{i=1}^3C_i=(\F_{q^2}\times \F_{q^2})\setminus \{(0,0)\}$ since $A\cup A'\cup (B+q+1) \cup (B'+q+1)\equiv \{0,1,\ldots,2q+1\}\,(\mod{2(q+1)})$ by the properties (P1),(P4),(P5) and (P6). Therefore, $C_i$, $i=0,1,2,3$, satisfy the assumptions of Remark \[rem:HDiff\] as $
C_0\cup C_2\cup \{(0,0)\}=\big(\bigcup_{y\in H_0}K_y\big)\cup K_\infty$ and $C_1\cup C_3\cup \{(0,0)\}=\bigcup_{y\in H_1}K_y$, where $H_0= \bigcup_{i\in -(A\cup (B+q+1))}C_i^{(2(q+1),q^2)}$ and $H_1=\bigcup_{i\in -(A'\cup (B'+q+1))}C_i^{(2(q+1),q^2)}$.
Appendix {#sec:newass .unnumbered}
========
In this appendix, we prove that the sets $X_{i,c}$ and $Y_{i,c}$, $i=1,2,3,4,5$, have the properties (P1)–(P10).
By the definition of $X_{1,c}$, we have $$\begin{aligned}
X_{1,c}=&\,(\{\tfrac{q+1}{2},\tfrac{3(q+1)}{2}\}\cap \{i \, (\mod{2(q+1)})\,|\,\Tr_{q^2/q}(\omega^{i+c})\in C_0^{(2,q)}\})\\
&\quad \cup (\{\tfrac{q+1}{2}-c,\tfrac{3(q+1)}{2}-c\}\cap
\{i \, (\mod{2(q+1)})\,|\,\Tr_{q^2/q}(\omega^{i})\in C_0^{(2,q)}\}). \end{aligned}$$ Hence, there are $\epsilon,\delta\in \{-1,1\}$ such that $X_{1,c}=\{\frac{q+1}{2}\epsilon,\frac{q+1}{2}\delta-c\}$. In particular, we have $$\label{eq:traceapp1}
\Tr_{q^2/q}(\omega^{c+\frac{q+1}{2}\epsilon})\in C_0^{(2,q)}
\mbox{\, \, and\, \, }
\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\delta-c})\in C_0^{(2,q)}.$$
\[lem:x1c\] We have $X_{1,c}=\{\tfrac{q+1}{2}\epsilon,\tfrac{q+1}{2}\delta-c\}$ for some $(\epsilon,\delta)\in \{(1,1),(-1,-1)\} $ or $\{(-1,1),(1,-1)\} $ according as $q\equiv 3\,(\mod{4})$ or $q\equiv 1\,(\mod{4})$.
By (\[eq:traceapp1\]), we have $$\label{eq:traceapp0}
\omega^{\frac{q+1}{2}\epsilon}(\omega^{c}-\omega^{cq})\in C_0^{(2,q)}\mbox{\, \, and\, \, }
\omega^{\frac{q+1}{2}\delta}(\omega^{-c}-\omega^{-cq})\in C_0^{(2,q)}.$$ Putting $\omega^d=1-\omega^{c(q-1)}$, the conditions in (\[eq:traceapp0\]) are rewritten as $$\omega^{\frac{q+1}{2}\epsilon+c+d}= \omega^{2(q+1)k}\mbox{\, \, and\, \, }
\omega^{\frac{q+1}{2}\delta-c+dq}=\omega^{2(q+1)\ell}$$ for some $k,\ell\in \Z$. Here, $d$ is odd if $q\equiv 3\,(\mod{4})$, and $d$ is even if $q\equiv 1\,(\mod{4})$. By multiplying these equations, we have $\omega^{\frac{q+1}{2}(\epsilon+\delta)+d(q+1)}=\omega^{2(q+1)(k+\ell)}$. Then, the statement immediately follows.
\[rem:fives\] For $X_{i,c}$, $i=1,2,3,4,5$, we observe the following facts:
- Since $I_2\equiv I_3+(q+1)\,(\mod{2(q+1)})$, we have $X_{1,c}\equiv X_{2,c}+(q+1)\,(\mod{2(q+1)})$, $X_{3,c}\equiv X_{4,c}+(q+1)\,(\mod{2(q+1)})$, and $X_{5,c}\equiv X_{5,c}+(q+1)\,(\mod{2(q+1)})$. Hence, the property (P1) follows.
- Since $I_2$ forms a $(q+1,2,q,\frac{q-1}{2})$ relative difference set (cf. [@ADJP]), we have $|X_{3,c}|=\frac{q-1}{2}$. Then, the property (P2) follows.
- Since $X_{3,c+q+1}=I_2\cap J_3$ and $X_{4,c+q+1}=I_3\cap J_2$, we have $X_{3,c+q+1}\cup X_{4,c+q+1}=X_{5,c}$. Then, the property (P3) follows.
- The property (P4) directly follows from Lemma \[lem:x1c\].
- By Lemma \[lem:x1c\], $X_{1,c+q+1}=\{\frac{q+1}{2}\epsilon',\frac{q+1}{2}\delta'-c+q+1\}$ for some $(\epsilon',\delta')\in \{(1,1),(-1,-1)\} $ or $\{(-1,1),(1,-1)\} $ according to whether $q\equiv 3\,(\mod{4})$ or $q\equiv 1\,(\mod{4})$. Then, it is direct to see that $|X_{1,c}\cap X_{1,c+q+1}|=1$ and $(X_{1,c}\setminus X_{1,c+q+1})\equiv (X_{1,c+q+1}\setminus X_{1,c})+q+1\,(\mod{2(q+1)})$ in all cases. More precisely, $X_{1,c+q+1}=\{\frac{q+1}{2}\epsilon+q+1,\frac{q+1}{2}\delta-c\}$ since $\frac{q+1}{2}\delta-c\in J_1\cap I_2$. Hence, $X_{1,c}\setminus X_{1,c+q+1}=\{\frac{q+1}{2}\epsilon\}$ and $X_{1,c}\cap X_{1,c+q+1}=\{\frac{q+1}{2}\delta-c\}$. Thus, the properties (P5) and (P6) follow.
Next, we show that the $X_{i,c}$’s have property (P7).
\[prop:asso\] Let $R_{i}$, $i=1,2,3,4,5$, be defined as in Subsection \[subsec:set\]. Then, $R_i$, $i=1,2,3,4,5$, take the character values listed in Table \[tab\_1\]. In particular, $Y_{i,c}$’s in Table \[tab\_1\] are determined as follows: $$\begin{aligned}
Y_{1,c}=&\{0,c\}, \, \, \, Y_{2,c}=\{q+1,c+q+1\},\\
Y_{3,c}=&\{i+c-\tfrac{q+1}{2}\delta\,|\,\Tr_{q^2/q}(\omega^i)\in C_0^{(2,q)}\}\cap \{i-\tfrac{q+1}{2}\epsilon\,|\,\Tr_{q^2/q}(\omega^i)\in C_0^{(2,q)}\}, \\
Y_{4,c}=&\{i+c-\tfrac{q+1}{2}\delta\,|\,\Tr_{q^2/q}(\omega^i)\in C_1^{(2,q)}\}\cap \{i-\tfrac{q+1}{2}\epsilon\,|\,\Tr_{q^2/q}(\omega^i)\in C_1^{(2,q)}\},\\
Y_{5,c}=& (\{i+c-\tfrac{q+1}{2}\delta\,|\,\Tr_{q^2/q}(\omega^i)\in C_0^{(2,q)}\}\cap \{i-\tfrac{q+1}{2}\epsilon\,|\,\Tr_{q^2/q}(\omega^i)\in C_1^{(2,q)}\})\\
&\, \cup (\{i+c-\tfrac{q+1}{2}\delta\,|\,\Tr_{q^2/q}(\omega^i)\in C_1^{(2,q)}\}\cap \{i-\tfrac{q+1}{2}\epsilon\,|\,\Tr_{q^2/q}(\omega^i)\in C_0^{(2,q)}\}). \end{aligned}$$
The character values $\psi_{\F_{q^2}}(\omega^aR_1)$, $a=0,1,\ldots,2q+1$, are evaluated as follows: $$\begin{aligned}
\psi_{\F_{q^2}}(\omega^aR_1)=&\,\psi_{\F_{q^2}}(\omega^{a+\frac{q+1}{2}\delta-c}C_0^{(2(q+1),q^2)})+\psi_{\F_{q^2}}(\omega^{a+\frac{q+1}{2}\epsilon}C_0^{(2(q+1),q^2)})\\
=&\,
\psi_{\F_q}(\Tr_{q^2/q}(\omega^{a+\frac{q+1}{2}\delta-c})C_0^{(2,q)})+
\psi_{\F_q}(\Tr_{q^2/q}(\omega^{a+\frac{q+1}{2}\epsilon})C_0^{(2,q)})
\\
=&\,
\left\{
\begin{array}{ll}
\frac{q-1}{2}, & \mbox{ if $a\in I_1-\frac{q+1}{2}\delta+c$,}\\
\frac{-1+G_q(\eta)}{2}, &\mbox{ if $a\in I_2-\frac{q+1}{2}\delta+c$,}\\
\frac{-1-G_q(\eta)}{2}, & \mbox{ if $a\in I_3-\frac{q+1}{2}\delta+c$,}
\end{array}
\right.
+
\left\{
\begin{array}{ll}
\frac{q-1}{2}, & \mbox{ if $a\in I_1-\frac{q+1}{2}\epsilon$,}\\
\frac{-1+G_q(\eta)}{2}, &\mbox{ if $a\in I_2-\frac{q+1}{2}\epsilon$,}\\
\frac{-1-G_q(\eta)}{2}, & \mbox{ if $a\in I_3-\frac{q+1}{2}\epsilon$, }
\end{array}
\right.\\
=&\,
\left\{
\begin{array}{ll}
\frac{-2+q+G_q(\eta)}{2}, & \mbox{ if $a\in Y_{1,c}'$,}\\
\frac{-2+q-G_q(\eta)}{2}, &\mbox{ if $a\in Y_{2,c}'$,}\\
-1+G_q(\eta), & \mbox{ if $a\in Y_{3,c}$,}\\
-1-G_q(\eta),& \mbox{ if $a\in Y_{4,c}$,}\\
-1,& \mbox{ if $a\in Y_{5,c}$, }
\end{array}
\right.\end{aligned}$$ where $$\begin{aligned}
Y_{1,c}'=&\,((I_1-\tfrac{q+1}{2}\delta +c)\cap (I_2-\tfrac{q+1}{2}
\epsilon)) \cup
((I_2-\tfrac{q+1}{2}\delta +c)\cap (I_1-\tfrac{q+1}{2}\epsilon)),\\
Y_{2,c}'=&\,((I_1-\tfrac{q+1}{2}\delta +c)\cap (I_3-\tfrac{q+1}{2}\epsilon))\cup
((I_3-\tfrac{q+1}{2}\delta +c)\cap (I_1-\tfrac{q+1}{2}\epsilon)). \end{aligned}$$ By (\[eq:traceapp1\]), it is direct to see that $$\begin{aligned}
&(I_1-\tfrac{q+1}{2}\delta +c)\cap (I_2-\tfrac{q+1}{2}
\epsilon)=\{c\},\quad
(I_2-\tfrac{q+1}{2}\delta +c)\cap (I_1-\tfrac{q+1}{2}\epsilon)=\{0\}, \\
&(I_1-\tfrac{q+1}{2}\delta +c)\cap (I_3-\tfrac{q+1}{2}
\epsilon)=\{c+q+1\},\quad
(I_3-\tfrac{q+1}{2}\delta +c)\cap (I_1-\tfrac{q+1}{2}\epsilon)=\{q+1\}. \end{aligned}$$ Hence, we have $Y_{1,c}'=Y_{1,c}$ and $Y_{2,c}'=Y_{2,c}$.
The character values of $R_2$ are determined as $\psi_{\F_{q^2}}(\omega^a R_2)=\psi_{\F_{q^2}}(\omega^{a+q+1} R_1)$.
We next evaluate $\psi_{\F_{q^2}}(\omega^a R_3)$, $a=0,1,\ldots,2q+1$. By Remark \[rem:secChen\] (i), the indicator function of $\{x\,|\,\Tr_{q^2/q}(x)\in C_0^{(2,q)}\}$ is given by $$f(x)=\frac{1}{q}\sum_{s\in \F_q}\sum_{y\in C_0^{(2,q)}}
\psi_{\F_{q^2}}(sx)\psi_{\F_{q}}(-sy).$$ Then, $$\begin{aligned}
\psi_{\F_{q^2}}(\omega^a R_3)=&\,\sum_{x\in \F_{q^2}}\psi_{\F_{q^2}}(\omega^a x)f(x)f(x\omega^c)\\
=&\,\frac{1}{q^2}\sum_{x\in \F_{q^2}}\sum_{s,t\in \F_{q}}\sum_{y,z\in C_0^{(2,q)}}
\psi_{\F_{q^2}}(x(\omega^a +s+t\omega^c))\psi_{\F_{q}}(-sy)\psi_{\F_{q}}(-tz)\\
=&\,\sum_{s,t\in \F_{q}:\omega^a=s+t\omega^{c}}\sum_{y,z\in C_0^{(2,q)}}
\psi_{\F_{q}}(sy)\psi_{\F_{q}}(tz).\end{aligned}$$ We treat the case where $a\in Y_{1,c}\cup Y_{2,c}=\{0,c,q+1,c+q+1\}$. If $a=c$, then $s=0$ and $t\in C_0^{(2,q)}$, and hence $\psi_{\F_{q^2}}(\omega^aR_3)=\frac{(q-1)(-1+G_q(\eta))}{4}$. If $a=c+q+1$, then $s=0$ and $t\in C_1^{(2,q)}$, and hence $\psi_{\F_{q^2}}(\omega^aR_3)=\frac{(q-1)(-1-G_q(\eta))}{4}$. If $a=0$, then $t=0$ and $s\in C_0^{(2,q)}$, and hence $\psi_{\F_{q^2}}(\omega^aR_3)=\frac{(q-1)(-1+G_q(\eta))}{4}$. If $a=q+1$, then $t=0$ and $s\in C_1^{(2,q)}$, and hence $\psi_{\F_{q^2}}(\omega^aR_3)=\frac{(q-1)(-1-G_q(\eta))}{4}$. Next, we treat the case where $s,t\not=0$. Define $$\begin{aligned}
G_3=&\{a\,(\mod{2(q+1)})\,|\,\omega^a=s+t\omega^c,s,t\in C_0^{(2,q)}\},\\
G_4=&\{a\,(\mod{2(q+1)})\,|\,\omega^a=s+t\omega^c,s,t\in C_1^{(2,q)}\},\\
G_5=&\{a\,(\mod{2(q+1)})\,|\,\omega^a=s+t\omega^c,(s,t)\in C_0^{(2,q)}\times C_1^{(2,q)}\mbox{ or } C_1^{(2,q)}\times C_0^{(2,q)}\}. \end{aligned}$$ Then, we have $$\psi_{\F_{q^2}}(\omega^aR_3)=
\left\{
\begin{array}{ll}
\frac{(1-G_q(\eta))^2}{4}, & \mbox{ if $a\in G_3$,}\\
\frac{(1+G_q(\eta))^2}{4}, &\mbox{ if $a\in G_4$, }\\
\frac{1-(-1)^{\frac{q-1}{2}}q}{4}, & \mbox{ if $a\in G_5$.}
\end{array}
\right.$$ We need to show that $G_i=Y_{i,c}$, $i=3,4,5$. Let $a\in G_3$. Then, there are some $s,t\in C_0^{(2,q)}$ such that $\omega^a=s+t\omega^c$. Taking trace of both sides of $\omega^{a+\frac{q+1}{2}\epsilon}=s\omega^{\frac{q+1}{2}\epsilon}+t\omega^{c+\frac{q+1}{2}\epsilon}$, we have $\Tr_{q^2/q}(\omega^{a+\frac{q+1}{2}\epsilon})=
s\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\epsilon})+t\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\epsilon+c})$. Since $\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\epsilon})=0$ and $\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\epsilon+c})\in C_0^{(2,q)}$, we obtain $\Tr_{q^2/q}(\omega^{a+\frac{q+1}{2}\epsilon})\in C_0^{(2,q)}$, i.e., $a\in I_2-\frac{q+1}{2}\epsilon$. On the other hand, taking trace of both sides of $\omega^{a+\frac{q+1}{2}\delta-c}=s\omega^{\frac{q+1}{2}\delta-c}+t\omega^{\frac{q+1}{2}\delta}$, we have $\Tr_{q^2/q}(\omega^{a+\frac{q+1}{2}\delta-c})=
s\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\delta-c})+t\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\delta})$. Since $\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\delta})=0$ and $\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\delta-c})\in C_0^{(2,q)}$, we obtain $\Tr_{q^2/q}(\omega^{a+\frac{q+1}{2}\delta-c})\in C_0^{(2,q)}$, i.e., $a\in I_2+c-\frac{q+1}{2}\delta$. Thus, $a\in (I_2-\frac{q+1}{2}\epsilon)\cap (I_2+c-\frac{q+1}{2}\delta)$, and hence $G_3\subseteq Y_{3,c}$. Noting that $|G_3|=|Y_{3,c}|$, it follows that $G_3=Y_{3,c}$. Furthermore, since $G_4\equiv G_3+(q+1)\,(\mod{2(q+1)})$ and $G_5=\{0,1,\ldots,2q+1\}\setminus (G_3\cup G_4 \cup \{0,c,q+1,c+q+1\})$, we have $G_4=Y_{4,c}$ and $G_5=Y_{5,c}$.
Finally, the character values of $R_4$ and $R_5$ are determined as $\psi_{\F_{q^2}}(\omega^a R_4)=\psi_{\F_{q^2}}(\omega^{a+q+1} R_3)$ and $\psi_{\F_{q^2}}(\omega^a R_5)=-1-\sum_{i=1}^4\psi_{\F_{q^2}}(\omega^a R_i)$. This completes the proof of the proposition.
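The averaging identity behind the indicator function $f(x)$ used in this proof — summing $\psi_{\F_{q^2}}(sx)\psi_{\F_q}(-sy)$ over $s\in\F_q$ kills every term except those with $\Tr_{q^2/q}(x)=y$ — can be tested directly for prime $q$. In the model $\F_{q^2}=\F_q[w]/(w^2-n)$ one has $\Tr_{q^2/q}(a+bw)=2a$, which the hedged sketch below exploits (our own representation; $q=5$ chosen arbitrarily):

```python
import cmath

q = 5                                    # an odd prime; Tr_{q^2/q}(a + b*w) = 2a
zeta = cmath.exp(2j * cmath.pi / q)      # additive character of F_q
squares = {(t * t) % q for t in range(1, q)}   # C_0^{(2,q)}, nonzero squares

def f(x):
    """Character-sum indicator of {x in F_{q^2} : Tr_{q^2/q}(x) in C_0^{(2,q)}}."""
    a, b = x                             # x = a + b*w
    total = 0
    for s in range(q):
        for y in squares:
            total += zeta ** ((2 * s * a) % q) * zeta ** (-s * y)
    return total / q

# f is 1 exactly when Tr(x) = 2a is a nonzero square mod q, else 0.
for a in range(q):
    for b in range(q):
        expected = 1 if (2 * a) % q in squares else 0
        assert abs(f((a, b)) - expected) < 1e-9
```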
By the definition of $Y_{i,c}$, $i=1,2$, in Proposition \[prop:asso\], it is clear that $-Y_{i,c}+c\equiv Y_{i,c}\,(\mod{2(q+1)})$, that is, the property (P8).
Next, we show that the $Y_{i,c}$’s have property (P9).
We have $$-(Y_{3,c}\cup Y_{4,c})+c\equiv Y_{5,c}\,(\mod{2(q+1)}).$$
Since $Y_{i,c}=G_i$ for $i=3,4,5$ as in the proof of Proposition \[prop:asso\], we have $$\begin{aligned}
Y_{3,c}\cup Y_{4,c}=&\, \{a\,(\mod{2(q+1)})\,|\,\omega^a=s+t\omega^c,(s,t)\in S\times S\mbox{ or }N\times N\}, \\
Y_{5,c}=&\,\{a\,(\mod{2(q+1)})\,|\,\omega^a=s+t\omega^c,(s,t)\in S\times N\mbox{ or }N\times S\}. \end{aligned}$$ Assume that $a \in -(Y_{3,c}\cup Y_{4,c})+c$. There are some $s',t'\in \F_q$ such that $\omega^a=s'+t'\omega^c$. On the other hand, since $a \in -(Y_{3,c}\cup Y_{4,c})+c$, $\omega^{-a+c}=s+t\omega^c$ for some $(s,t)\in S\times S$ or $N\times N$. Then, we have $$\label{eq:stst}
(s\omega^{-c}+t)(s'+t'\omega^c)=1.$$ By multiplying both sides of (\[eq:stst\]) by $\omega^{\frac{q+1}{2}\epsilon}$ and taking trace, we have $$\label{eq:stst2}
ss'\Tr_{q^2/q}(\omega^{-c+\frac{q+1}{2}\epsilon})+(ts'+st')\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\epsilon})+tt'\Tr_{q^2/q}(\omega^{c+\frac{q+1}{2}\epsilon})=\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\epsilon}).$$ Since $\Tr_{q^2/q}(\omega^{\frac{q+1}{2}\epsilon})=0$ by the definition of $X_{1,c}$, (\[eq:stst2\]) is reduced to $$-ss' u\omega^{\frac{q+1}{2}(\epsilon-\delta)}=tt'v,$$ where $u=\Tr_{q^2/q}(\omega^{c+\frac{q+1}{2}\epsilon})$ and $v=\Tr_{q^2/q}(\omega^{-c+\frac{q+1}{2}\delta})$. Here, $u,v\in C_0^{(2,q)}$ by (\[eq:traceapp1\]). Furthermore, $st^{-1}\in C_0^{(2,q)}$ by the definitions of $s,t$, and $-\omega^{\frac{q+1}{2}(\epsilon-\delta)}\in C_1^{(2,q)}$ by the definitions of $\epsilon,\delta$. Hence, either $(s',t')\in C_0^{(2,q)}\times C_1^{(2,q)}$ or $C_1^{(2,q)}\times C_0^{(2,q)}$ holds by noting that $(s',t')=(0,0)$ is impossible. Therefore, $a\in Y_{5,c}$, i.e., $-(Y_{3,c}\cup Y_{4,c})+c\subseteq Y_{5,c}$, follows. Finally, since $|-(Y_{3,c}\cup Y_{4,c})+c|=|Y_{3,c}\cup Y_{4,c}|=|Y_{5,c}|$, the statement of the proposition follows.
Finally, we show that the $R_{i}'$’s have property (P10).
\[re:dual\] Let $R_{i}'$, $i=1,2,3,4,5$, be defined as in Subsection \[subsec:set\]. Then, $R_i'$, $i=1,2,3,4,5$, take the character values listed in Table \[tab\_2\].
Since $Y_{i,c}-c+\frac{q+1}{2}\delta\equiv X_{i,c-\frac{q+1}{2}\delta+\frac{q+1}{2}\epsilon}$ by Lemma \[lem:x1c\], Remark \[rem:fives\] (5) and the definitions of $X_{i,c},Y_{i,c}$, $i=1,2,3,4,5$, the statement follows from Proposition \[prop:asso\].
[50]{} K. T. Arasu, J. F. Dillon, D. Jungnickel, A. Pott, The solution of the Waterloo problem, [*J. Combin. Theory, Ser. A*]{} [**71**]{} (1995), 316–331.
B. Berndt, R. Evans, K. S. Williams, [*Gauss and Jacobi Sums*]{}, Wiley, 1997.
A. E. Brouwer, R. M. Wilson, Q. Xiang, Cyclotomy and strongly regular graphs, [*J. Alg. Combin.*]{} [**10**]{} (1999), 25–28.
Y. Q. Chen, On the existence of abelian Hadamard difference sets and a new family of difference sets, [*Finite Fields Appl.*]{} [**3**]{} (1997), 234–256.
J. A. Davis, Difference sets in abelian 2-groups, [*J. Combin. Theory, Ser. A*]{} [**57**]{} (1991), 262–286.
M. van Eupen, V. D. Tonchev, Linear codes and the existence of a reversible Hadamard difference set in $\Z_2\times \Z_2\times \Z_5^4$, [*J. Combin. Theory, Ser. A*]{} [**79**]{} (1997), 161–167.
D. Jungnickel, [*Difference Sets*]{}. Contemporary Design Theory, 241–324, Wiley-Intersci. Ser. Discrete Math. Optim., Wiley-Intersci. Publ., Wiley, New York, 1992.
R. G. Kraemer, Proof of a conjecture on Hadamard $2$-groups, [*J. Combin. Theory, Ser. A*]{} [**63**]{} (1993), 1–10.
R. Lidl, H. Niederreiter, [*Finite Fields*]{}, Cambridge Univ. Press, 1997.
R. L. McFarland, Difference sets in abelian groups of order $4p^2$, [*Mitt. Math. Sem. Giessen*]{} [**192**]{} (1989), 1–70.
R. J. Turyn, A special class of Williamson matrices and difference sets, [*J. Combin. Theory, Ser. A*]{} [**36**]{} (1984), 111–115.
R. M. Wilson, Q. Xiang, Constructions of Hadamard difference sets, [*J. Combin. Theory, Ser. A*]{} [**77**]{} (1997), 148–160.
M. Y. Xia, Some infinite classes of special Williamson matrices and difference sets, [*J. Combin. Theory, Ser. A*]{} [**61**]{} (1992), 230–242.
Q. Xiang, Y. Q. Chen, On Xia’s construction of Hadamard difference sets, [*Finite Fields Appl.*]{} [**2**]{} (1996), 87–95.
[^1]: $^{\dagger}$ Koji Momihara was supported by JSPS under Grant-in-Aid for Young Scientists (B) 17K14236 and Scientific Research (B) 15H03636.
[^2]: $^{\ast}$ Qing Xiang was supported by an NSF grant DMS-1600850.
---
abstract: |
The polarization properties of the charmed $\Lambda^+_c$ baryon are investigated in weak non–leptonic four–body $\Lambda^+_c \to p + K^-
+ \pi^+ + \pi^0$ decay. The probability of this decay and the angular distribution of the probability are calculated in the effective quark model with chiral $U(3)\times U(3)$ symmetry incorporating Heavy Quark Effective Theory (HQET) and the extended Nambu–Jona–Lasinio model with a linear realization of chiral $U(3)\times U(3)$ symmetry. The theoretical value of the probability of the decay $\Lambda^+_c \to p +
K^- + \pi^+ + \pi^0$ relative to the probability of the decay $\Lambda^+_c \to p + K^- + \pi^+$ does not contain free parameters and fits well experimental data. The application of the obtained results to the analysis of the polarization of the $\Lambda^+_c$ produced in the processes of photo and hadroproduction is discussed.
author:
- |
A. Ya. Berdnikov, Ya. A. Berdnikov[^1] , A. N. Ivanov,\
V. F. Kosmach,\
M. D. Scadron[^2] , and N. I. Troitskaya
title: 'On the polarization properties of the charmed baryon $\Lambda^+_c$ in the $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$ decay'
---
[*State Technical University of St. Petersburg, Department of Nuclear Physics,\
Polytechnicheskaya 29, 195251 St. Petersburg, Russian Federation*]{}
Introduction
============
It is known that in reactions of photo- and hadroproduction the charmed baryon $\Lambda^+_c$ is produced polarized \[1\]. The analysis of the $\Lambda^+_c$ polarization via the investigation of the decay products should give an understanding of the mechanism of charmed baryon production at high energies.
Recently \[2\] we have given a theoretical analysis of the polarization properties of the $\Lambda^+_c$ in the mode $\Lambda^+_c \to p + K^- +
\pi^+$. This is the most favourable mode of the $\Lambda^+_c$ decays from the experimental point of view. From the theoretical point of view this mode is the most difficult case of the analysis of the weak non–leptonic decays of the $\Lambda^+_c$ baryon \[1,2\]. Indeed, for the calculation of the matrix element of the transition $\Lambda^+_c \to p
+ K^- + \pi^+$ the baryonic and mesonic degrees of freedom cannot be fully factorized.
In spite of these theoretical difficulties the problem of the theoretical analysis of the decay $\Lambda^+_c \to p + K^- + \pi^+$ has been successfully solved within the effective quark model with chiral $U(3)\times U(3)$ symmetry incorporating Heavy Quark Effective Theory (HQET) \[3,4\] and the extended Nambu–Jona–Lasinio (ENJL) model with a linear realization of chiral $U(3)\times U(3)$ symmetry \[5–7\][^3]. Such an effective quark model with chiral $U(3)\times U(3)$ symmetry motivated by the low–energy effective QCD with a linearly rising interquark potential responsible for a quark confinement \[9\] describes well low–energy properties of light and heavy mesons \[5,6\] as well as the octet and decuplet of light baryons \[7\].
In the effective quark model with chiral $U(3)\times U(3)$ symmetry (i) baryons are the three–quark states \[10\] and do not contain any bound diquark states, then (ii) the spinorial structure of the three–quark currents is defined as the products of the axial–vector diquark densities $[\bar{q^c}_i(x)\gamma^{\mu}q_j(x)]$ and a quark field $q_k(x)$ transforming under $SU(3)_f\times SU(3)_c$ group like $(\underline{6}_f,\tilde{\underline{3}}_c)$ and $(\underline{3}_f,\underline{3}_c)$ multiplets, respectively, where $i,j$ and $k$ are the colour indices running through $i=1,2,3$ and $q
= u,d$ or $s$ quark field. This agrees with the structure of the three–quark currents used for the investigation of the properties of baryons within the QCD sum rules approach \[11\]. As has been shown in Ref.\[9\], this structure is imposed by the dynamics of strong low–energy interactions governed by a linearly rising interquark potential. The fixed structure of the three–quark currents makes it possible to describe the whole variety of low–energy interactions of the baryon octet and decuplet in terms of the phenomenological coupling constant $g_{\rm B}$. The coupling constants $g_{\rm \pi NN}$, $g_{\rm \pi N \Delta}$ and $g_{\rm \gamma N \Delta}$ of the ${\rm \pi NN}$, ${\rm \pi N \Delta}$ and ${\rm \gamma N \Delta}$ interactions, and the $\sigma_{\rm \pi N}$–term of low–energy ${\rm \pi N}$–scattering have been calculated in good agreement with the experimental data and other phenomenological approaches based on QCD \[7,12\].
In this paper we apply the effective quark model with chiral $U(3)\times U(3)$ symmetry \[2,5–7\] to the investigation of the polarization properties of the $\Lambda^+_c$ baryon in weak non–leptonic four–body decays and treat the experimentally most favourable four–body mode $\Lambda^+_c \to p + K^- + \pi^+ +
\pi^0$. The experimental value of the probability of this decay is equal to \[13\] $$\begin{aligned}
\label{label1.1}
B(\Lambda^+_c \to p K^- \pi^+ \pi^0)_{\exp} = (3.4\pm
1.0)\,\%.\end{aligned}$$ Relative to the decay $\Lambda^+_c \to p + K^- + \pi^+$, whose experimental probability is $B(\Lambda^+_c \to p K^-
\pi^+) = 0.050\pm 0.013$ \[13\], the probability of the decay $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$ reads $$\begin{aligned}
\label{label1.2}
B(\Lambda^+_c \to p K^- \pi^+ \pi^0/\Lambda^+_c \to p K^-
\pi^+)_{\exp} = (0.68\pm 0.27).\end{aligned}$$ We would like to emphasize that the weak non–leptonic four–body mode $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$ as well as the mode $\Lambda^+_c \to p + K^- + \pi^+$ is rather difficult for the theoretical analysis \[1,2\], since baryonic and mesonic degrees of freedom cannot be fully factorized.
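As a quick arithmetic cross-check of Eq.(\[label1.2\]), the ratio and its uncertainty follow from the two measured branching fractions by error propagation in quadrature (an illustrative sketch assuming independent uncertainties):

```python
# Cross-check of Eq.(1.2): the ratio of the measured branching fractions
# B(Lambda_c -> p K- pi+ pi0) = (3.4 +/- 1.0)% and
# B(Lambda_c -> p K- pi+)    = (5.0 +/- 1.3)%.
import math

B4, dB4 = 0.034, 0.010   # four-body branching fraction and its error
B3, dB3 = 0.050, 0.013   # three-body branching fraction and its error

ratio = B4 / B3
dratio = ratio * math.sqrt((dB4 / B4) ** 2 + (dB3 / B3) ** 2)

print(f"B4/B3 = {ratio:.2f} +/- {dratio:.2f}")  # -> 0.68 +/- 0.27
```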
For the theoretical analysis of the weak non–leptonic decays of the $\Lambda^+_c$ baryon we use the effective low–energy Lagrangian \[2\] (see also Refs.\[12,14\]) $$\begin{aligned}
\label{label1.3}
{\cal L}_{\rm eff}(x) &=& -\frac{G_F}{\sqrt{2}}\,V^*_{c s}\,V_{u
d}\,\Big\{C_1(\Lambda_{\chi})\,[\bar{s}(x)\,
\gamma^{\mu}(1-\gamma^5)\,c(x)]\,[\bar{u}(x)\,
\gamma_{\mu}(1-\gamma^5)\,d(x)]\nonumber\\
&& \hspace{0.7in} +
C_2(\Lambda_{\chi})\,[\bar{u}(x)\,
\gamma^{\mu}(1-\gamma^5)\,c(x)]\,[\bar{s}(x)\,
\gamma_{\mu}(1-\gamma^5)\,d(x)]\Big\},\end{aligned}$$ where $G_F=1.166\times 10^{-5}\;{\rm GeV}^{-2}$ is the Fermi weak constant, $V^*_{c s}$ and $V_{u d}$ are the elements of the CKM–mixing matrix, $C_i(\Lambda_{\chi})\,(i=1,2)$ are the Wilson coefficients caused by the strong quark–gluon interactions at scales $p > \Lambda_{\chi}$ (short–distance contributions), where $\Lambda_{\chi} = 940\,{\rm MeV}$ is the scale of spontaneous breaking of chiral symmetry (SB$\chi$S) \[2,5–7\]. The numerical values of the coefficients $C_1(\Lambda_{\chi}) = 1.24$ and $C_2(\Lambda_{\chi}) =
-0.47$ have been calculated in Ref.\[2\].
Following Ref.\[2\], for the calculation of the probability of the decay $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$ we use the effective Lagrangian Eq.(\[label1.3\]) reduced to the form $$\begin{aligned}
\label{label1.4}
{\cal L}_{\rm eff}(x) = -\frac{G_F}{\sqrt{2}}\,V^*_{c s}\,V_{u
d}\,\bar{C}_1(\Lambda_{\chi})\,[\bar{s}(x)\,\gamma_{\mu}
(1-\gamma^5)\,c(x)]\,[\bar{u}(x)\,\gamma^{\mu}(1-\gamma^5)\,d(x)]\end{aligned}$$ by means of a Fierz transformation \[2\], where $\bar{C}_1(\Lambda_{\chi}) = C_1(\Lambda_{\chi}) + C_2
(\Lambda_{\chi})/N$ with $N=3$, the number of quark colour degrees of freedom[^4].
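The reduced Wilson coefficient $\bar{C}_1(\Lambda_{\chi})$ is a simple combination of the quoted values; a one-line check:

```python
# Effective Wilson coefficient entering Eq.(1.4):
# C1_bar = C1 + C2/N with N = 3 quark colours, using the values
# C1 = 1.24 and C2 = -0.47 quoted from Ref.[2].
C1, C2, N = 1.24, -0.47, 3
C1_bar = C1 + C2 / N
print(f"C1_bar = {C1_bar:.3f}")  # -> 1.083
```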
The paper is organized as follows. In Sect.2 we calculate the amplitude of the decay mode $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$. In Sect.3 we calculate the angular distribution of the probability and the probability of the decay $\Lambda^+_c \to p + K^- + \pi^+ +
\pi^0$ relative to the probability of the $\Lambda^+_c \to p + K^- +
\pi^+$. In Sect.4 we analyse the polarization properties of the charmed baryon $\Lambda^+_c$. In the Conclusion we discuss the obtained results.
Amplitude of the $\Lambda^+_c \to
p + K^- + \pi^+ + \pi^0$ decay
=================================
The amplitude of the decay $\Lambda^+_c \to p + K^- +
\pi^+ + \pi^0$ we define in the usual way \[2,12\] $$\begin{aligned}
\label{label2.1}
\frac{\displaystyle{\cal M}(\Lambda^+_c(Q) \to p(q)
K^-(q_1)\pi^+(q_2)\pi^0(q_3))}{\displaystyle
\sqrt{2E_{\Lambda^+_c}V\,2E_p V\,2E_{K^-} V\,2E_{\pi^+} V\,2E_{\pi^0}
V}} = \langle p(q) K^-(q_1)\pi^+(q_2)\pi^0(q_3)|{\cal L}_{\rm
eff}(0)|\Lambda^+_c(Q)\rangle ,\end{aligned}$$ where $E_i\,(i=\Lambda^+_c,p,K^-,\pi^+,\pi^0)$ are the energies of the $\Lambda^+_c$, the proton and mesons, respectively.
Since experimentally the probability of the decay mode $\Lambda^+_c
\to p + K^- + \pi^+ + \pi^0$ is measured relative to the probability of the $\Lambda^+_c \to p + K^- + \pi^+$ decay, we treat it with respect to the probability of the decay $\Lambda^+_c \to p +
K^- + \pi^+$, the partial width of which has been calculated in Ref.\[2\] and reads $$\begin{aligned}
\label{label2.2}
\hspace{-0.3in}\Gamma(\Lambda^+_c \to p\,K^- \pi^+) = |G_F\,V^*_{c
s}\,V_{u d}\,\bar{C}_1(\Lambda_{\chi})|^2 \,\Bigg[g_{\rm \pi
NN}\,\frac{4}{5}\,\frac{g_{\rm C}}{g_{\rm
B}}\,\frac{F_{\pi}\Lambda_{\chi}}{m^2}\Bigg]^2\,\times\,\Bigg[\frac{5
M^5_{\Lambda^+_c}}{512\pi^3}\Bigg]\,\times\,f(\xi).\end{aligned}$$ The function $f(\xi)$ is determined by the integral \[2\] $$\begin{aligned}
\label{label2.3}
\hspace{-0.5in}f(\xi) = \int\limits^{1 + \xi^2/4}_{\xi}\Bigg(1 -
\frac{3}{5}\,x + \frac{2}{15}\,x^2 + \frac{7}{60}\,\xi^2 -
\frac{2}{5}\,\frac{\xi^2}{x}\Bigg)\,x\,\sqrt{x^2 - \xi^2}\,dx = 0.065,\end{aligned}$$ where $\xi = 2M_p/M_{\Lambda^+_c}$. The numerical value has been obtained at $M_{\Lambda^+_c}=2285\,{\rm MeV}$ and $M_p = 938\,{\rm
MeV}$, the mass of the $\Lambda^+_c$ baryon and the proton, respectively, and in the chiral limit, i.e. at zero masses of daughter mesons. The coupling constants $g_{\rm B}$ and $g_{\rm C}$ determine the interactions of the proton and the $\Lambda^+_c$ baryon with the three–quark currents $\eta_{\rm N}(x) =
-\varepsilon^{ijk}[\bar{u^c}_i(x)\gamma^{\mu}u_j(x)]\gamma_{\mu}\gamma^5
d_k(x)$ and $\bar{\eta}_{\Lambda^+_c}(x) =
\varepsilon^{ijk}\bar{c}_i(x)\gamma_{\mu}\gamma^5[\bar{d}_j(x)
\gamma^{\mu}u^c_k(x)]$, respectively \[2,7\]: $$\begin{aligned}
\label{label2.4}
\hspace{-0.5in}{\cal L}_{\rm int}(x) = \frac{g_{\rm
B}}{\sqrt{2}}\,\bar{\psi}_p(x)\,\eta_{\rm N}(x) + \frac{g_{\rm
C}}{\sqrt{2}}\,\bar{\eta}_{\Lambda^+_c}(x)\,\psi_{\Lambda^+_c}(x) +
{\rm h.c.}.\end{aligned}$$ Here $\psi_p(x)$ and $\psi_{\Lambda^+_c}(x)$ are the interpolating fields of the proton and the $\Lambda^+_c$ baryon. The coupling constant $g_{\rm B}$ has been related in Ref.\[7\] to the quark condensate $\langle \bar{q}(0)q(0)\rangle = -\,(255\,{\rm MeV})^3$, the constituent quark mass $m = 330\,{\rm MeV}$ calculated in the chiral limit[^5], the leptonic coupling constant $F_{\pi} =
92.4\,{\rm MeV}$ of pions calculated in the chiral limit, the ${\rm
\pi NN}$ coupling constant $g_{\rm \pi NN} = 13.4$, as well as the mass of the proton $M_{\rm p}$: $$\begin{aligned}
\label{label2.5}
g_{\rm \pi NN} = g^2_{\rm B}\,\frac{2m}{3F_{\pi}}\,\frac{\langle
\bar{q}(0)q(0)\rangle^2}{M^2_p}.\end{aligned}$$ Numerically $g_{\rm B}$ is equal to $g_{\rm B} =1.34\times
10^{-4}\,{\rm MeV}$ \[7\]. The coupling constant $g_{\rm C}$ has been fixed in Ref.\[2\] through the experimental value of the partial width of the decay $\Lambda^+_c \to p + K^- + \pi^+$. The coupling constant $g_{\rm C}$ appears in all partial widths of the decay modes of the $\Lambda^+_c$ baryon and cancels itself in the ratio $$\begin{aligned}
\label{label2.6}
B(\Lambda^+_c \to pK^-\pi^+\pi^0/\Lambda^+_c \to pK^-\pi^+) =
\frac{\Gamma(\Lambda^+_c \to p K^-\pi^+\pi^0)}{\Gamma(\Lambda^+_c \to
p\,K^- \pi^+)}.\end{aligned}$$ The amplitude of the decay $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$ we calculate in the tree–meson approximation and in the chiral limit \[2\] $$\begin{aligned}
\label{label2.7}
\hspace{-0.5in}&&\frac{\displaystyle{\cal M}(\Lambda^+_c(Q) \to p(q)
K^-(q_1)\pi^+(q_2)\pi^0(q_3))}{\displaystyle
\sqrt{2E_{\Lambda^+_c}V\,2E_p V\,2E_{K^-}
V\,2E_{\pi^+}V\,2E_{\pi^0}V}} = \langle p(q)
K^-(q_1)\pi^+(q_2)\pi^0(q_3)|{\cal L}_{\rm
eff}(0)|\Lambda^+_c(Q)\rangle =\nonumber\\
\hspace{-0.5in}&&= -\frac{G_F}{\sqrt{2}}\,
V^*_{c s}\,V_{u d}\,\bar{C}_1(\Lambda_{\chi})\,
\langle p(q) K^-(q_1)|\bar{s}(0)
\gamma_{\mu}(1-\gamma^5) c(0)|\Lambda^+_c(Q)\rangle \nonumber\\
\hspace{-0.5in}&&\times \langle
\pi^+(q_2)\pi^0(q_3)|\bar{u}(0)\,\gamma^{\mu}(1-\gamma^5)\,d(0)|0\rangle
.\end{aligned}$$ The matrix element of the transition $\Lambda^+_c \to p + K^-$ has been calculated in Ref.\[2\] and reads $$\begin{aligned}
\label{label2.8}
\hspace{-0.5in}&&\sqrt{2E_{\Lambda^+_c}V\,2E_p V\,2E_{K^-} V}\,\langle
p(q)
K^-(q_1)|\bar{s}(0)\,\gamma_{\mu}(1-\gamma^5)\,c(0)|\Lambda^+_c(Q)\rangle
= \nonumber\\
\hspace{-0.5in}&&= i g_{\rm \pi NN}\,\frac{4}{5}\,\frac{g_{\rm
C}}{g_{\rm
B}}\,\frac{\Lambda_{\chi}}{m^2}\,\bar{u}_p(q,\sigma^{\prime}\,)
\,[2\,v_{\mu}(1 - \gamma^5) + \gamma_{\mu}(1 +
\gamma^5)]\,u_{\Lambda^+_c}(Q,\sigma) =\nonumber\\
\hspace{-0.5in}&&= i g_{\rm \pi NN}\,\frac{4}{5}\,
\frac{g_{\rm C}}{g_{\rm B}}\,\frac{\Lambda_{\chi}}{m^2}\,
\bar{u}_p(q,\sigma^{\prime}\,) \,(1 - \gamma^5)\,
(2\,v_{\mu} + \gamma_{\mu})\,u_{\Lambda^+_c}(Q,\sigma),\end{aligned}$$ where $\bar{u}_p(q,\sigma^{\prime}\,)$ and $u_{\Lambda^+_c}(Q,\sigma)$ are the Dirac bispinors of the proton and the $\Lambda^+_c$ baryon, $v^{\mu}$ is a 4–velocity of the $\Lambda^+_c$ baryon defined by $Q^{\mu} = M_{\Lambda^+_c}\,v^{\mu}$.
The matrix element of the transition $0 \to \pi^+ + \pi^0$ has been calculated in \[5\] and reads $$\begin{aligned}
\label{label2.9}
\sqrt{2E_{\pi^+}V\,2E_{\pi^0}V} \langle\pi^+(q_2)\pi^0(q_3)|
\bar{u}(0)\gamma^{\mu}(1-\gamma^5)\,d(0)|0\rangle = - \sqrt{2}\,(q_2 -
q_3)^{\mu}.\end{aligned}$$ Hence, the amplitude of the decay $\Lambda^+_c \to p + K^- + \pi^+ +
\pi^0$ is given by $$\begin{aligned}
\label{label2.10}
\hspace{-0.7in}&&{\cal M}(\Lambda^+_c(Q) \to p(q)
K^-(q_1)\pi^+(q_2)\pi^0(q_3)) = i\,G_F\,V^*_{c s}\,V_{u
d}\,\bar{C}_1(\Lambda_{\chi})\,\nonumber\\
\hspace{-0.7in}&&\times \,\frac{4}{5}\,\frac{g_{\rm \pi
NN}}{M_{\Lambda^+_c}}\,\Bigg[\frac{g_{\rm C}}{g_{\rm
B}}\,\frac{\Lambda_{\chi}}{m^2}\Bigg]\,\bar{u}_p(q,\sigma^{\prime}\,)
\,(1 - \gamma^5)\,[2 Q\cdot (q_2 - q_3) + M_{\Lambda^+_c}(\hat{q}_2 -
\hat{q}_3)]\,u_{\Lambda^+_c}(Q,\sigma).\end{aligned}$$ Now we can proceed to the evaluation of the probability of the $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$ decay.
Probability and angular distribution of the decay $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$
===========================================================================================
The differential partial width of the $\Lambda^+_c \to
p + K^- + \pi^+ + \pi^0$ decay is determined by $$\begin{aligned}
\label{label3.1}
\hspace{-0.5in}&&d\Gamma(\Lambda^+_c \to p K^- \pi^+ \pi^0) =
\frac{1}{2M_{\Lambda^+_c}}\,\overline{|{\cal M}(\Lambda^+_c(Q) \to p(q)
K^-(q_1)\pi^+(q_2)\pi^0(q_3))|^2}\nonumber\\
\hspace{-0.5in}&&\times\,(2\pi)^4\,\delta^{(4)}(Q - q - q_1 - q_2 -
q_3)\,\frac{d^3q}{(2\pi)^3 2 E_p}\,\frac{d^3q_1}{(2\pi)^3 2
E_{K^-}}\,\frac{d^3q_2}{(2\pi)^3 2 E_{\pi^+}}\,\frac{d^3q_3}{(2\pi)^3
2 E_{\pi^0}}.\end{aligned}$$ We calculate the quantity $\overline{|{\cal M}(\Lambda^+_c(Q) \to p(q)
K^-(q_1)\pi^+(q_2)\pi^0(q_3))|^2}$ for the polarized $\Lambda^+_c$ and unpolarized proton $$\begin{aligned}
\label{label3.2}
\hspace{-0.5in}&&\overline{|{\cal M}(\Lambda^+_c(Q) \to p(q)
K^-(q_1)\pi^+(q_2)\pi^0(q_3))|^2} = |G_F\,V^*_{c s}\,V_{u
d}\,\bar{C}_1(\Lambda_{\chi})|^2\,\left[\frac{4}{5}\,\frac{g_{\rm \pi
NN}}{M_{\Lambda^+_c}}\,\frac{g_{\rm C}}{g_{\rm
B}}\,\frac{\Lambda_{\chi}}{m^2}\right]^2\nonumber\\
\hspace{-0.5in}&&\times\,\frac{1}{2}\,{\rm tr}\{(M_{\Lambda^+_c} +
\hat{Q})(1 + \gamma^5\hat{\omega}_{\Lambda^+_c})[2Q\cdot(q_2-q_3) +
M_{\Lambda^+_c}(\hat{q}_2 - \hat{q}_3)](1+\gamma^5)(M_p +
\hat{q})(1-\gamma^5)\nonumber\\
\hspace{-0.5in}&&\times\,[2Q\cdot(q_2-q_3) + M_{\Lambda^+_c}(\hat{q}_2
- \hat{q}_3)]\},\end{aligned}$$ where $\omega^{\mu}_{\Lambda^+_c}$ is a space–like unit vector, $\omega^2_{\Lambda^+_c} = - 1$, orthogonal to the 4–momentum of the $\Lambda^+_c$, $Q\cdot \omega_{\Lambda^+_c} = 0$. It is related to the direction of the $\Lambda^+_c$ spin defined by $$\begin{aligned}
\label{label3.3}
\omega^{\mu}_{\Lambda^+_c} =\left(\frac{\displaystyle \vec{Q}\cdot
\vec{\omega}_{\Lambda^+_c}}{\displaystyle M_{\Lambda^+_c}},
\vec{\omega}_{\Lambda^+_c} + \frac{\displaystyle \vec{Q}(\vec{Q}\cdot
\vec{\omega}_{\Lambda^+_c})}{\displaystyle
M_{\Lambda^+_c}(E_{\Lambda^+_c} + M_{\Lambda^+_c})}\right),\end{aligned}$$ where $\vec{\omega}^{\,2}_{\Lambda^+_c} = 1$. At the rest frame of the $\Lambda^+_c$ we have $\omega^{\mu}_{\Lambda^+_c} = (0,
\vec{\omega}_{\Lambda^+_c})$.
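The defining properties of the polarization four-vector in Eq.(\[label3.3\]), $\omega^2_{\Lambda^+_c} = -1$ and $Q\cdot\omega_{\Lambda^+_c} = 0$, can be verified numerically for an arbitrary boost (an illustrative sketch; metric signature $(+,-,-,-)$):

```python
# Numerical check of the polarization four-vector of Eq.(3.3):
# for any momentum Q it must satisfy omega^2 = -1 and Q . omega = 0.
import math, random

M = 2285.0                                   # Lambda_c mass, MeV
random.seed(1)
Qvec = [random.uniform(-500.0, 500.0) for _ in range(3)]
E = math.sqrt(M * M + sum(q * q for q in Qvec))

w3 = [random.gauss(0.0, 1.0) for _ in range(3)]
norm = math.sqrt(sum(w * w for w in w3))
w3 = [w / norm for w in w3]                  # unit spin direction at rest

Qw = sum(q * w for q, w in zip(Qvec, w3))
omega0 = Qw / M                              # time component of Eq.(3.3)
omega = [w + q * Qw / (M * (E + M)) for w, q in zip(w3, Qvec)]

omega_sq = omega0 ** 2 - sum(o * o for o in omega)
Q_dot_omega = E * omega0 - sum(q * o for q, o in zip(Qvec, omega))
print(f"omega^2 = {omega_sq:.6f}, Q.omega = {Q_dot_omega:.6f}")
```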
For the differential branching ratio $B(\Lambda^+_c \to p K^- \pi^+
\pi^0/\Lambda^+_c \to pK^-\pi^+)$ defined by Eq.(\[label2.6\]) we get $$\begin{aligned}
\label{label3.4}
\hspace{-0.5in}&&dB(\Lambda^+_c \to p K^- \pi^+ \pi^0/\Lambda^+_c \to
pK^-\pi^+) = \frac{1024\pi^3}{1.3 M^8_{\Lambda^+_c}}\,
\frac{1}{F^2_{\pi}}\,\frac{1}{2}\,{\rm
tr}\{(M_{\Lambda^+_c} + \hat{Q})(1 +
\gamma^5\hat{\omega}_{\Lambda^+_c})\nonumber\\
\hspace{-0.5in}&&\times\,[2Q\cdot(q_2-q_3) + M_{\Lambda^+_c}(\hat{q}_2
- \hat{q}_3)](1+\gamma^5)(M_p + \hat{q})(1-\gamma^5)\nonumber\\
\hspace{-0.5in}&&\times\,[2Q\cdot(q_2-q_3) + M_{\Lambda^+_c}(\hat{q}_2
- \hat{q}_3)]\}\,(2\pi)^4\,\delta^{(4)}(Q - q - q_1 - q_2 -
q_3)\nonumber\\
\hspace{-0.5in}&&\times\,\frac{d^3q}{(2\pi)^3 2 E_p}\,\frac{d^3q_1}{(2\pi)^3 2
E_{K^-}}\,\frac{d^3q_2}{(2\pi)^3 2 E_{\pi^+}}\,\frac{d^3q_3}{(2\pi)^3
2 E_{\pi^0}}.\end{aligned}$$ The trace amounts to $$\begin{aligned}
\label{label3.5}
\frac{1}{2}\,{\rm tr}\{\ldots\}&=& 16\,Q\cdot q\,(Q\cdot
(q_2-q_3))^2\nonumber\\ &+& M_{\Lambda^+_c}[16\,Q\cdot q\,Q\cdot
(q_2-q_3)\,(q_2-q_3)\cdot \omega_{\Lambda^+_c} - 32\,(Q\cdot
(q_2-q_3))^2\,q\cdot \omega_{\Lambda^+_c}]\nonumber\\ &+&
M^2_{\Lambda^+_c}[24\,Q\cdot (q_2-q_3)\,q\cdot (q_2-q_3) -
4\,Q\cdot q\,(q_2 - q_3)^2]\nonumber\\ &+&
M^3_{\Lambda^+_c}[8\,q\cdot (q_2-q_3)\,(q_2-q_3)\cdot
\omega_{\Lambda^+_c} - 4\,(q_2 - q_3)^2\,q\cdot
\omega_{\Lambda^+_c}].\end{aligned}$$ For the integration over the momenta of $\pi$ mesons it is useful to apply the formula \[2\] $$\begin{aligned}
\label{label3.6}
\hspace{-0.5in}\int (q_2 - q_3)_{\alpha}(q_2 -
q_3)_{\beta}\,\delta^{(4)}(P -q_2 - q_3)
\frac{d^3q_2}{2E_{\pi^+}}\frac{d^3q_3}{2E_{\pi^0}}= \frac{\pi}{6}\,
\Big(- P^2\,g_{\alpha\beta} + P_{\alpha}P_{\beta}\Big),\end{aligned}$$ where $P = Q - q - q_1$. Integrating over the momenta of pions we arrive at the following expression for the differential branching ratio $B(\Lambda^+_c \to p K^- \pi^+ \pi^0/\Lambda^+_c \to
pK^-\pi^+)$: $$\begin{aligned}
\label{label3.7}
\hspace{-0.5in}&&dB(\Lambda^+_c \to p K^- \pi^+ \pi^0/\Lambda^+_c \to
pK^-\pi^+) = \frac{2.1}{4\pi^4}\,\frac{1}{M^8_{\Lambda^+_c}}\,
\frac{1}{F^2_{\pi}}\,\{4\,Q\cdot q\,((Q\cdot P)^2 - Q^2P^2)\nonumber\\
\hspace{-0.5in}&& + M_{\Lambda^+_c}\,
[4\,Q\cdot q\, Q\cdot
P\, P\cdot \omega_{\Lambda^+_c} - 8\,((Q\cdot P)^2 - Q^2P^2)\,
P\cdot \omega_{\Lambda^+_c}]+ M^2_{\Lambda^+_c}\,
(- 3\,Q\cdot q\,P^2\nonumber\\
\hspace{-0.5in}&& + 6\,Q\cdot P\,q\cdot P) +
M^3_{\Lambda^+_c}\,(P^2\,q\cdot \omega_{\Lambda^+_c} + 2\,q\cdot
P\,P\cdot \omega_{\Lambda^+_c})\}\,\frac{d^3q}{E_p}\,\frac{d^3q_1}{
E_{K^-}}.\end{aligned}$$ After the integration over the momenta of the K$^-$ meson and the energies of the proton we obtain the angular distribution of the probability of the decay mode $ \Lambda^+_c \to p + K^- + \pi^+ +
\pi^0$ relative to the probability of the decay $\Lambda^+_c \to
p + K^- + \pi^+$ in the rest frame of the $\Lambda^+_c$ baryon: $$\begin{aligned}
\label{label3.8}
\hspace{-0.5in}4\pi\,\frac{dB}{d\Omega_{\vec{n}_p}}(\Lambda^+_c \to p
K^- \pi^+ \pi^0/\Lambda^+_c \to pK^-\pi^+) = 0.87\,(1 -
0.09\,\vec{n}_p\cdot\vec{\omega}_{\Lambda^+_c}),\end{aligned}$$ where $\vec{n}_p = \vec{q}/|\vec{q}\,|$ is a unit vector directed along the momentum of the proton and $\Omega_{\vec{n}_p}$ is the solid angle of the unit vector $\vec{n}_p$.
Integrating the angular distribution Eq.(\[label3.8\]) over the solid angle $\Omega_{\vec{n}_p}$ we obtain the total branching ratio $$\begin{aligned}
\label{label3.9}
B(\Lambda^+_c \to p K^- \pi^+ \pi^0/\Lambda^+_c \to pK^-\pi^+) = 0.87.\end{aligned}$$ The theoretical value agrees well with the experimental data of Eq.(\[label1.2\]): $B(\Lambda^+_c \to p K^- \pi^+ \pi^0/\Lambda^+_c
\to p K^- \pi^+)_{\exp} = (0.68\pm 0.27)$.
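The passage from Eq.(\[label3.8\]) to Eq.(\[label3.9\]) only uses the fact that $\cos\vartheta$ averages to zero over the sphere; a minimal numerical check, together with the deviation from the measured ratio in units of the quoted error:

```python
# Eq.(3.8) integrated over the solid angle: the term linear in
# n_p . omega averages to zero, leaving the total ratio 0.87 of Eq.(3.9).
# Checked here with a simple midpoint rule in cos(theta).
n = 100000
total = 0.0
for k in range(n):
    c = -1.0 + (2.0 * k + 1.0) / n       # midpoints of cos(theta) bins
    total += 0.87 * (1.0 - 0.09 * c)
B = total / n                            # average over the sphere
print(f"B = {B:.4f}")                    # -> 0.8700

# Deviation from the measured ratio 0.68 +/- 0.27, in standard deviations:
pull = (B - 0.68) / 0.27
print(f"pull = {pull:.2f}")              # about 0.7 sigma
```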
Polarization of the charmed baryon $\Lambda^+_c$
================================================
The formula Eq.(\[label3.8\]) describes the polarization of the charmed $\Lambda^+_c$ baryon relative to the momentum of the proton in the decay mode $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$. If the spin of the $\Lambda^+_c$ is parallel to the momentum of the proton, the right–handed (R) polarization, the scalar product $\vec{\omega}_{\Lambda^+_c}\cdot \vec{n}_p$ amounts to $\vec{\omega}_{\Lambda^+_c}\cdot \vec{n}_p = \cos\vartheta$. The angular distribution of the probability reads $$\begin{aligned}
\label{label4.1}
4\pi\,\frac{dB}{d\Omega_{\vec{n}_p}}(\Lambda^+_c \to p K^- \pi^+
\pi^0/\Lambda^+_c \to pK^-\pi^+)_{\rm (R)} = 0.87\,(1 -
0.09\,\cos\vartheta).\end{aligned}$$ In turn, for the left–handed (L) polarization of the $\Lambda^+_c$, when the spin of the $\Lambda^+_c$ is anti–parallel to the momentum of the proton, the scalar product reads $(\vec{\omega}_{\Lambda^+_c}\cdot
\vec{n}_p) = -\cos\vartheta$ and the angular distribution becomes equal to $$\begin{aligned}
\label{label4.2}
4\pi\,\frac{dB}{d\Omega_{\vec{n}_p}}(\Lambda^+_c \to p K^- \pi^+
\pi^0/\Lambda^+_c \to pK^-\pi^+)_{\rm
(L)} = 0.87\,(1 + 0.09\,\cos\vartheta).\end{aligned}$$ Since the coefficient in front of $\cos\vartheta$ is rather small, the angular distribution of the probability of the decays is practically isotropic. Therefore, one can conclude that in the four–body mode $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$ the charmed baryon $\Lambda^+_c$ appears practically unpolarized.
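The smallness of the polarization effect can be quantified by the forward–backward asymmetry implied by Eq.(\[label4.1\]); a one-line estimate (the coefficient $-0.09$ is the value derived above):

```python
# Size of the polarization effect in Eq.(4.1): the forward-backward
# asymmetry implied by dB/dOmega ~ 1 + alpha*cos(theta) with
# alpha = -0.09 is A_FB = alpha/2, i.e. a 4.5% effect, which is why
# the distribution is called practically isotropic.
alpha = -0.09
N_forward = 1.0 + alpha / 2.0    # integral of (1 + alpha*c) dc over [0, 1]
N_backward = 1.0 - alpha / 2.0   # integral over [-1, 0]
A_FB = (N_forward - N_backward) / (N_forward + N_backward)
print(f"A_FB = {A_FB:.3f}")      # -> -0.045
```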
Conclusion
==========
We have considered the four–body mode of the weak non–leptonic decay of the charmed $\Lambda^+_c$ baryon: $\Lambda^+_c \to p + K^- + \pi^+
+ \pi^0$. Experimentally this is the most favourable mode among the four–body modes of the $\Lambda^+_c$ decays. From the theoretical point of view this mode is rather difficult for the calculation, since baryonic and mesonic degrees of freedom are not fully factorized. However, as has been shown in Ref.\[2\], this problem has been overcome for the three–body mode $\Lambda^+_c \to p + K^- +
\pi^+$ within the effective quark model with chiral $U(3)\times U(3)$ symmetry incorporating Heavy Quark Effective Theory (HQET) and the ENJL model \[2\].
Following \[2\] we have calculated in the chiral limit the probability and angular distribution of the probability of the mode $\Lambda^+_c
\to p + K^- + \pi^+ + \pi^0$ in the rest frame of the $\Lambda^+_c$ baryon and relative to the momentum of the daughter proton. The probability of the mode $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$ is obtained with respect to the probability of the mode $\Lambda^+_c \to
p + K^- + \pi^+$. The theoretical prediction $B(\Lambda^+_c \to p K^-
\pi^+ \pi^0/\Lambda^+_c \to pK^-\pi^+)=0.87$ fits well the experimental data $B(\Lambda^+_c \to p K^- \pi^+ \pi^0/\Lambda^+_c \to
pK^-\pi^+)_{\exp}= (0.68\pm 0.27)$. We would like to accentuate that in our approach the probability $B(\Lambda^+_c \to p K^- \pi^+
\pi^0/\Lambda^+_c \to pK^-\pi^+)$ does not contain free parameters. Hence, such an agreement with the experimental data testifies to a correct description of the low–energy dynamics of strong interactions in our approach.
The theoretical angular distribution of the probability of the decay mode $\Lambda^+_c \to p + K^- + \pi^+ + \pi^0$ exhibits only a weak dependence on the polarization of the charmed baryon $\Lambda^+_c$. This means that for the experimental analysis of the polarization properties of the $\Lambda^+_c$ produced in reactions of photo- and hadroproduction the three–body decay mode $\Lambda^+_c \to p + K^- + \pi^+$ seems preferable to the four–body mode $\Lambda^+_c \to p + K^- +
\pi^+ + \pi^0$. Nevertheless, the theoretical analysis of polarization properties of the charmed baryon $\Lambda^+_c$ in the weak non–leptonic four–body modes like (1) $\Lambda^+_c \to \Lambda +
\pi^+ + \pi^+ + \pi^-$, (2) $\Lambda^+_c \to \Sigma^0 + \pi^+ + \pi^+
+ \pi^-$ and (3) $\Lambda^+_c \to p + \bar{K}^0 + \pi^+ + \pi^-$ with branching ratios \[13\] $$\begin{aligned}
B(\Lambda^+_c \to \Lambda \pi^+ \pi^+
\pi^-)_{\exp} &=& (3.3\pm 1.0)\,\%,\nonumber\\ B(\Lambda^+_c \to
\Sigma^0 \pi^+ \pi^+ \pi^-)_{\exp} &=& (1.1\pm 0.4)\,\%,\nonumber\\
B(\Lambda^+_c \to p \bar{K}^0 \pi^+ \pi^-)_{\exp} &=& (2.6\pm 0.7)\,\%\end{aligned}$$ comeasurable with the branching ratio of the mode $\Lambda^+_c \to p +
K^- + \pi^+ + \pi^0$ is rather actual and would be carried out in our forthcoming publications.
Acknowledgement {#acknowledgement .unnumbered}
===============
The work is supported in part by the Scientific and Technical Programme of the Ministry of Education of the Russian Federation for Fundamental Research in Universities of Russia.
[9]{} J. D. Bjorken, Phys. Rev. D [**40**]{}, 1513 (1989) and references therein. Ya. A. Berdnikov, A. N. Ivanov, V. F. Kosmach, and N. I. Troitskaya, Phys. Rev. C [**60**]{}, 015201 (1999). E. Eichten and F. L. Feinberg, Phys. Rev. D [**23**]{}, 2724 (1981); E. Eichten , Nucl. Phys. B [**4**]{}, (Proc. Suppl.), 70 (1988); M. B. Voloshin, and M. A. Shifman, Sov. J. Nucl. Phys. [**45**]{}, 292 (1987); H. D. Politzer and M. Wise, Phys. Lett. B [**206**]{}, 681 (1988); Phys. Lett. B [**208**]{}, 504 (1988); H. Georgi, Phys. Lett. B [**240**]{}, 447 (1990). M. Neubert, Phys. Rep. [**245**]{}, 259 (1994); M. Neubert, [*Heavy Quark Effective Theory*]{} CERN–TH/96–292, hep–ph/9610385 17 October 1996, Invited talk presented at the 20th Johns Hopkins Workshop on Current Problems in Particle Theory, Heidelberg, Germany, 27–29 June 1996. A. N. Ivanov, M. Nagy, and N. I. Troitskaya, Intern. J. Mod. Phys. A [**7**]{}, 7305 (1992); A. N. Ivanov , Phys. Lett. B [**275**]{}, 450 (1992); Intern. J. Mod. Phys. A [**8**]{}, 853 (1993); A. N. Ivanov, N. I. Troitskaya, and M. Nagy, Intern. J. Mod. Phys. A [**8**]{}, 2027, 3425 (1993) ; Phys. Lett. B [**308**]{}, 111 (1993) ; Phys. Lett. B [**326**]{}, 312 (1994); Nuovo Cim. A [**107**]{}, 1375 (1994); A. N. Ivanov and N. I. Troitskaya, Nuovo Cimento A [**108**]{}, 555 (1995). A. N. Ivanov and N. I. Troitskaya, Phys. Lett. B [**342**]{}, 323 (1995); Phys. Lett. B [**345**]{}, 175 (1995); A. N. Ivanov and N. I. Troitskaya, Nuovo Cim. A [**110**]{}, 65 (1997); A. N. Ivanov, N. I. Troitskaya, and M. Nagy, Phys. Lett. B [**339**]{}, 167 (1994); F. Hussain, A. N. Ivanov and N. I. Troitskaya, Phys. Lett. B [**329**]{}, 98 (1994); Phys. Lett. B [**348**]{}, 609 (1995); Phys. Lett. B [**369**]{}, 351 (1996); A. N. Ivanov and N. I. Troitskaya, Phys. Lett. B [**390**]{}, 341 (1997); Phys. Lett. B [**394**]{}, 195 (1997); Phys. Lett. B [**387**]{}, 386 (1996); Phys. Lett. B [**388**]{}, 869 (1996) (Erratum). A. N. Ivanov, M. Nagy, and N. I. 
Troitskaya, Phys. Rev. C [**59**]{}, 541 (1999). T. Hakioglu and M. D. Scadron, Phys. Rev. D [**42**]{}, 941 (1990); Phys. Rev. D [**43**]{}, 2439 (1991); R. Karlsen and M. D. Scadron, Mod. Phys. Lett. A [**6**]{}, 543 (1991); M. D. Scadron, A [**7**]{}, 669 (1992); Phys. At. Nucl. [**56**]{}, 1595 (1993); R. Delbourgo and M. D. Scadron, Mod. Phys. Lett. A [**10**]{}, 251 (1995); L. R. Baboukhadia, V. Elias and M. D. Scadron, J. of Phys. G [**23**]{}, 1065 (1997); R. Delbourgo and M. D. Scadron, Int. J. Mod. Phys. A [**13**]{}, 657 (1998); A. Bramon, Riazuddin, and M. D. Scadron, J. of Phys. G [**24**]{}, 1 (1998); M. D. Scadron, Phys. Rev. D [**57**]{}, 5307 (1998); L. R. Baboukhadia and M. D. Scadron, Eur. Phys. J. C [**8**]{}, 527 (1999). A. N. Ivanov, N. I. Troitskaya, M. Faber, M. Schaler and M. Nagy, Nuovo Cim. A [**107**]{}, 1667 (1994); A. N. Ivanov, N. I. Troitskaya and M. Faber, Nuovo Cim. A [**108**]{}, 613 (1995). M. Gell–Mann, Phys. Rev. Lett. [**8**]{}, 214 (1964). B. L. Ioffe, Nucl. Phys. B [**188**]{}, 317 (1981); Nucl. Phys. B [**191**]{}, 591E (1981); P. Pascual and R. Tarrach, Barcelona preprint UBFT–FP–5–82, 1982; L. J. Reinders, H. R. Rubinstein and S. Yazaki, Phys. Lett. B [**120**]{}, 209 (1983). M. D. Scadron, in [*ADVANCED QUANTUM THEORY and its Applications Through Feynman Diagrams*]{}, Springer–Verlag, New York, 1st Edition 1979 and 2nd Edition 1991. D. E. Groom [*et al.*]{}, Eur. Phys. J. C [**15**]{}, 1 (2000). B. W. Lee and M. K. Gaillard, Phys. Rev. Lett. [**33**]{}, 108 (1974); G. Altarelli, G. Curci, G. Martinelli and S. Petrarca, Nucl. Phys. B [**187**]{}, 461 (1981); A. Buras, J.-M. G$\grave{{\rm e}}$rard and R. Ruckl, Nucl. Phys. B [**268**]{}, 16 (1986); M. Bauer, B. Stech and M. Wirbel, Z. Phys. C [**34**]{}, 103 (1987). M. D. Scadron and L. R. Thebaud, Phys. Rev. D [**8**]{}, 2190 (1973); R. E. Karlsen and M. D. Scadron, Phys. Rev. D [**43**]{}, 1739 (1991); M. D.
Tadi${\acute{c}}$, [*Hyperon Nonleptonic Weak Decays Revisited*]{}, hep–ph/0011328 November 2000, to appear in J. of Phys. G. V. Elias and M. D. Scadron, Phys. Rev. D [**30**]{}, 647 (1984); Phys. Rev. Lett. [**53**]{}, 1129 (1984).
[^1]: E–mail: [email protected]
[^2]: E–mail: [email protected], Physics Department, University of Arizona, Tucson, Arizona 85721, USA
[^3]: All results obtained below are valid for the Linear Sigma Model (L$\sigma$M) \[8\] supplemented by HQET as well.
[^4]: We would like to accentuate that our approach to non–leptonic decays of charmed baryons agrees in principle with the current–algebra analysis of non–leptonic decays of light and charmed baryons based on $(V-A)\times (V-A)$ effective coupling developed by Scadron [*et al.*]{} in Refs.\[15\].
[^5]: This agrees with the results obtained by Elias and Scadron \[16\].
---
abstract: 'Borcherds lift for an even lattice $L$ of signature $(p, q)$ is a lifting from weakly holomorphic modular forms of weight $(p-q)/2$ for the Weil representation of $L$. We introduce a product operation on the space of such modular forms, depending on the choice of a maximal isotropic sublattice of $L$, which makes this space a finitely generated filtered associative algebra, without unit element in general. This algebra structure is functorial with respect to embedding of lattices by the quasi-pullback map. We study the basic properties, prove for example that the algebra is commutative if and only if $L$ is unimodular. When $L$ is unimodular with $p=2$, the multiplicative group of Borcherds products of integral weight forms a subring.'
address: 'Department of Mathematics, Tokyo Institute of Technology, Tokyo 152-8551, Japan'
author:
- Shouhei Ma
title: Algebra of Borcherds products
---
[^1]
Introduction
============
Since ancient, mathematicians have introduced and studied product structures on various mathematical objects. In this paper we define a product structure on a space of certain vector-valued modular forms of fixed weight attached to an integral quadratic form, that is functorial and that reflects some properties of the quadratic form.
Let $L$ be an even lattice of signature $(p, q)$ with $p\leq q$ and $\rho_{L}$ be the Weil representation attached to the discriminant form of $L$. In [@Bo95], [@Bo98], Borcherds constructed a lifting from weakly holomorphic modular forms $f$ of weight $\sigma(L)/2=(p-q)/2$ and type $\rho_{L}$ to automorphic forms $\Phi(f)$ with remarkable singularity on the symmetric domain attached to $L$. When $p=2$ and the principal part of $f$ has integral coefficients, $\Phi(f)$ gives rise to a meromorphic modular form $\Psi(f)$ with infinite product expansion, known as Borcherds product.
The discovery of Borcherds has stimulated the study of weakly holomorphic modular forms of weight $\sigma(L)/2$ and type $\rho_{L}$. If we consider the space of such modular forms, say ${{M^{!}(L)}}$, it is a priori just an infinite dimensional ${{\mathbb{C}}}$-linear space. The purpose of this paper is to define a product operation on the space ${{M^{!}(L)}}$, depending on the choice of a maximal isotropic sublattice of $L$ up to the action of an arithmetic group, which makes ${{M^{!}(L)}}$ an associative ${{\mathbb{C}}}$-algebra, finitely generated and filtered but without unit element in general. Moreover, this product is functorial with respect to embedding of lattices by the so-called quasi-pullback operation. This gives a link between quadratic forms and noncommutative rings.
In order to state our result, we assume that the lattice $L$ has Witt index $p$ ($=$ maximal) and choose a maximal isotropic sublattice $I$ of $L$. Then $K=I^{\perp}/I$ is an even negative-definite lattice of rank $-\sigma(L)$. Let ${{\downarrow^{L}_{K}}}$ be the pushforward operation from $\rho_{L}$ to $\rho_{K}$ (§\[ssec: Weil representation\]), and ${{\Theta_{K^{+}}}}(\tau)$ be the $\rho_{K^{+}}$-valued theta series of the positive-definite lattice $K^{+}=K(-1)$. In §\[sec: product\], we define the $\Theta$-product of $f_{1}, f_{2} \in {{M^{!}(L)}}$ with respect to $I$ by $$f_{1} \ast_{I} f_{2} = \langle f_{1}{{\downarrow^{L}_{K}}}, {{\Theta_{K^{+}}}} \rangle \cdot f_{2}.$$ Then $f_{1} \ast_{I} f_{2}$ is again an element of ${{M^{!}(L)}}$. By considering Fourier expansion, this could be viewed as a sort of product of two $\rho_{L}$-valued Laurent series, one suitably contracted with the theta series ${{\Theta_{K^{+}}}}$.
In what follows, an *associative algebra* is not assumed to have a unit element. Our basic results can be summarized as follows.
\[thm: main\] The $\Theta$-product $\ast_{I}$ makes ${{M^{!}(L)}}$ a finitely generated filtered associative ${{\mathbb{C}}}$-algebra. The algebra ${{M^{!}(L)}}$ has a unit element if and only if $L\simeq U\oplus \cdots \oplus U$. The algebra ${{M^{!}(L)}}$ is commutative if and only if $L$ is unimodular.
If $L'$ is a sublattice of $L$ of signature $(p, q')$ with $I_{{{\mathbb{Q}}}}\subset L'_{{{\mathbb{Q}}}}$, the map $${{M^{!}(L)}} \to M^{!}(L'), \quad f\mapsto |I/I'|^{-1} \cdot f|_{L'},$$ is a homomorphism of ${{\mathbb{C}}}$-algebras, where $I'=I\cap L'$ and $f|_{L'}\in M^{!}(L')$ is the quasi-pullback of $f\in {{M^{!}(L)}}$ as defined in .
Here the filtration on ${{M^{!}(L)}}$ is defined by the degree of the principal part. $U$ stands for the integral hyperbolic plane, namely the even unimodular lattice of signature $(1, 1)$. The quasi-pullback map $|_{L'}\colon {{M^{!}(L)}}\to M^{!}(L')$ is an operation coming from the quasi-pullback of Borcherds products ([@Bo95], [@B-K-P-SB], [@Ma]), which is a sort of renormalized restriction. The kernels of ${{M^{!}(L)}}\to M^{!}(L')$ for various sublattices $L'\subset L$ provide natural examples of two-sided ideals of ${{M^{!}(L)}}$ contained in the left annihilator ideal. The statements in Theorem \[thm: main\] are proved in Propositions \[prop: associativity et al\], \[prop: unimodular commutative\], \[prop: RUE\], \[prop: f.g.\], and \[prop: functorial main\].
The algebra structure on ${{M^{!}(L)}}$ requires the choice of $I$, but actually it depends only on the equivalence class of $I$ under a natural subgroup of the orthogonal group of $L$. Geometrically, when $p=2$, such equivalence classes correspond to maximal boundary components of the Baily-Borel compactification of the associated modular variety.
In some special cases, $\Theta$-product is a quite simple operation. When $L$ is unimodular, so that $f_{1}, f_{2}$ and ${{\Theta_{K^{+}}}}=\theta_{K^{+}}$ are scalar-valued, $f_{1}\ast_{I}f_{2}$ is just the product $f_{1} \cdot \theta_{K^+} \cdot f_{2}$ (Example \[ex: unimodular\]). When $I$ comes from $pU=U\oplus \cdots \oplus U$ embedded in $L$, so that we have a splitting $L=pU\oplus K$, $f_{1}, f_{2}$ correspond to weakly holomorphic Jacobi forms $\phi_{1}(\tau, Z), \phi_{2}(\tau, Z)$ of weight $0$ and index $K^+$ (see [@Gr]). Then the Jacobi form corresponding to $f_{1}\ast_{I}f_{2}$ is just $\phi_{1}(\tau, 0)\cdot \phi_{2}(\tau, Z)$ (Example \[ex: Jacobi form\]). In general, one can say that $\Theta$-product $\ast_{I}$ is a functorial extension of this simple product to all pairs $(L, I)$.
We expect that the complexity of the lattice $L$ (within fixed $p$, or up to direct summands of $U$) would be reflected in the complexity of the algebra ${{M^{!}(L)}}$ in some way. The first examples are stated in Theorem \[thm: main\]: commutativity and the existence of a (two-sided) unit element. More generally, we show that ${{M^{!}(L)}}$ has a left unit element if it contains a modular form with very mild singularity that is not a left zero divisor (Proposition \[prop: LUE\]). Some reflective modular forms provide typical examples of such a modular form (Examples \[ex: reflective 1\] and \[ex: reflective 2\]). This might remind us of Borcherds’ philosophy [@Bo00b] that for $L$ Lorentzian, the existence of a reflective modular form should be related to interesting properties of the reflection group of $L$.
Another example of our general expectation is that the minimal number of generators of ${{M^{!}(L)}}$ would reflect the size of $L$, and generators of small degree would have some significance (§\[ssec: f.g.\]). We prove a finiteness result on lattices $L$ with a bounded number of generators of ${{M^{!}(L)}}$ (Proposition \[prop: finiteness\]). In the simple example $L=pU\oplus \langle -2 \rangle$, ${{M^{!}(L)}}$ is generated by two basic reflective modular forms (Example \[ex: generator\]). We hope to find further connections between the algebra ${{M^{!}(L)}}$ and the lattice $L$. It would also be a natural problem to find an interesting ${{M^{!}(L)}}$-module.
A natural but subtle problem is whether various subgroups of ${{M^{!}(L)}}$ defined by arithmetic conditions on the coefficients of the principal part are closed under the $\Theta$-product. We give a general criterion in the positive direction, and use it to deduce the following (§\[ssec: integral part\]):
- The real part $M^!(L)_{{{\mathbb{R}}}}\subset {{M^{!}(L)}}$ is closed under $\ast_{I}$.
- When $L$ is unimodular with $p=2$, the multiplicative group of Borcherds products of integral weight is closed under $\ast_{I}$.
In general, the possible obstruction can be expressed as a $2$-cocycle in group cohomology.
This paper is organized as follows. §\[sec: preliminary\] is a recollection of modular forms for the Weil representation. In §\[sec: product\] we define the $\Theta$-product. In §\[sec: first property\] we study some basic properties of the algebra ${{M^{!}(L)}}$. In §\[sec: functorial\] we prove that the $\Theta$-product is functorial. §\[sec: first property\] and §\[sec: functorial\] may be read independently.
**Convention.** *Unless stated otherwise, rings in this paper are not assumed to be commutative, nor to have a unit element.*
Weil representation and modular forms {#sec: preliminary}
=====================================
In this section we recall some basic facts about modular forms of Weil representation type following [@Bo98], [@Br].
Weil representation {#ssec: Weil representation}
-------------------
Let $L$ be an even lattice, namely a free abelian group of finite rank equipped with a nondegenerate symmetric bilinear form $(\cdot , \cdot) \colon L\times L \to {{\mathbb{Z}}}$ such that $(l ,l)\in 2{{\mathbb{Z}}}$ for all $l\in L$. When $L$ has signature $(p, q)$, we write $\sigma(L)=p-q$. The dual lattice of $L$ is denoted by $L^{\vee}$. The quotient $A_L=L^{\vee}/L$ is called the *discriminant group* of $L$, and is endowed with the canonical ${{\mathbb{Q}/\mathbb{Z}}}$-valued quadratic form $q_L \colon A_L\to {{\mathbb{Q}/\mathbb{Z}}}$, $q_L(x)=(x, x)/2+{{\mathbb{Z}}}$, called the *discriminant form* of $L$. In general, a finite abelian group $A$ endowed with a nondegenerate quadratic form $q \colon A\to{{\mathbb{Q}/\mathbb{Z}}}$ is called a *finite quadratic module*. We will frequently abbreviate $(A, q)$ as $A$. Every finite quadratic module $A$ is isometric to the discriminant form of some even lattice $L$. We then write $\sigma(A)=[\sigma(L)]\in {{\mathbb{Z}}}/8$. We denote by ${{\mathbb{C}}}A$ the group ring of $A$. The standard basis vector of ${{\mathbb{C}}}A$ corresponding to an element $\lambda\in A$ will be denoted by ${{{\mathbf e}_{\lambda}}}$.
Let ${{{\rm Mp}_2(\mathbb{Z})}}$ be the metaplectic double cover of ${{{\rm SL}_2(\mathbb{Z})}}$. Elements of ${{{\rm Mp}_2(\mathbb{Z})}}$ are pairs $(M, \phi)$ where $M=\begin{pmatrix}a & b \\ c & d \end{pmatrix}\in {{{\rm SL}_2(\mathbb{Z})}}$ and $\phi$ is a holomorphic function on the upper half plane such that $\phi(\tau)^2=c\tau+d$. The group ${{{\rm Mp}_2(\mathbb{Z})}}$ is generated by $T = \left( \begin{pmatrix}1&1\\ 0&1\end{pmatrix}, 1 \right)$ and $S = \left( \begin{pmatrix}0&-1\\ 1&0\end{pmatrix}, \sqrt{\tau} \right)$, and the center of ${{{\rm Mp}_2(\mathbb{Z})}}$ is generated by $Z = S^{2} = \left( \begin{pmatrix}-1& 0\\ 0 & -1 \end{pmatrix}, \sqrt{-1} \right)$.
The *Weil representation* $\rho_A$ of ${{{\rm Mp}_2(\mathbb{Z})}}$ attached to a finite quadratic module $A$ is a unitary representation on ${{\mathbb{C}}}A$ defined by $$\begin{aligned}
\rho_A(T)({{{\mathbf e}_{\lambda}}}) & = & e(q(\lambda)){{{\mathbf e}_{\lambda}}}, \\
\rho_A(S)({{{\mathbf e}_{\lambda}}}) & = &
\frac{e(-\sigma(A)/8)}{\sqrt{|A|}} \sum_{\mu\in A}e(-(\lambda, \mu)){{{\mathbf e}_{\mu}}}. \end{aligned}$$ Here $e(z)={\rm exp}(2\pi i z)$ for $z\in{{\mathbb{Q}}}/{{\mathbb{Z}}}$. We have $$\rho_A(Z)({{{\mathbf e}_{\lambda}}}) = e(-\sigma(A)/4)\mathbf{e}_{-\lambda}.$$ We will also write $\rho_A=\rho_L$ when $A=A_{L}$ for an even lattice $L$.
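As a numerical sanity check of these formulas (an illustration added here, not part of the argument), one can write out $\rho_{A}$ for the smallest nontrivial example $A=A_{L}$ with $L=\langle -2 \rangle$: here $A\simeq{{\mathbb{Z}}}/2$ with $q(1)=-1/4$, bilinear form $(1,1)=-1/2$, and $\sigma(A)\equiv -1 \bmod 8$, and one can verify unitarity together with the metaplectic relations $S^{2}=(ST)^{3}=Z$.

```python
import numpy as np

# Weil representation of the discriminant form of L = <-2>:
# A = Z/2 with q(0) = 0, q(1) = -1/4, bilinear form (1,1) = -1/2 (mod Z),
# and sigma(A) = -1 (mod 8).  Illustrative sanity check only.
e = lambda z: np.exp(2j * np.pi * z)          # e(z) = exp(2 pi i z)
q = {0: 0.0, 1: -0.25}
b = lambda x, y: (-0.5 * x * y) % 1.0         # bilinear form (x, y) mod Z
sigma = -1

rho_T = np.diag([e(q[0]), e(q[1])])
rho_S = (e(-sigma / 8) / np.sqrt(2)) * np.array(
    [[e(-b(lam, mu)) for mu in (0, 1)] for lam in (0, 1)]
)
# rho(Z) = e(-sigma/4) * Id here, since -lam = lam in Z/2.
rho_Z = e(-sigma / 4) * np.eye(2)

assert np.allclose(rho_S @ rho_S.conj().T, np.eye(2))                # unitarity
assert np.allclose(rho_S @ rho_S, rho_Z)                             # S^2 = Z
assert np.allclose(np.linalg.matrix_power(rho_S @ rho_T, 3), rho_Z)  # (ST)^3 = Z
```

The same few lines apply to any finite quadratic module once $q$, the bilinear form, and $\sigma(A) \bmod 8$ are supplied.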
Let $A(-1)$ be the $(-1)$-scaling of $A$, namely the same underlying abelian group with the quadratic form $q$ replaced by $-q$. Then $\rho_{A(-1)}$ is canonically isomorphic to the dual representation $\rho_A^{\vee}$ of $\rho_A$. The isomorphism is defined by sending the standard basis of ${{\mathbb{C}}}A(-1)$ to the dual basis $\{ \mathbf{e}_{\lambda}^{\vee} \}$ of the standard basis $\{ \mathbf{e}_{\lambda} \}$ of ${{\mathbb{C}}}A$ through the identification $A(-1)=A$ as abelian groups. We will tacitly identify $\rho_{A(-1)}=\rho_{A}^{\vee}$ in this way.
Let $I\subset A$ be an isotropic subgroup. Then $A'=I^{\perp}/I$ inherits the structure of a finite quadratic module. Let $p:I^{\perp}\to A'$ be the projection. We define linear maps $$\label{eqn: pull push}
\uparrow_{A'}^{A} : {{\mathbb{C}}}A' \to {{\mathbb{C}}}A, \qquad
\downarrow_{A'}^{A} : {{\mathbb{C}}}A \to {{\mathbb{C}}}A',$$ called *pullback* and *pushforward* respectively, by $${{{\mathbf e}_{\lambda}}}\uparrow_{A'}^{A}=\sum_{\mu\in p^{-1}(\lambda)}{{{\mathbf e}_{\mu}}}, \qquad
{{{\mathbf e}_{\mu}}}\downarrow_{A'}^{A} =
\begin{cases}
\mathbf{e}_{p(\mu)}, & \mu\in I^{\perp}, \\
0, & \mu\not\in I^{\perp},
\end{cases}$$ for $\lambda\in A'$ and $\mu\in A$. Then $\uparrow_{A'}^{A}$ and $\downarrow_{A'}^{A}$ are homomorphisms between the Weil representations (see, e.g., [@Bo98], [@Br], [@Ma]). Note that $\downarrow^{A}_{A'}\circ \uparrow^{A}_{A'}$ is the scalar multiplication by $|I|$. Note also that $\uparrow_{A'}^{A}$ and $\downarrow_{A'}^{A}$ are adjoint to each other with respect to the standard Hermitian metrics on ${{\mathbb{C}}}A$ and ${{\mathbb{C}}}A'$. When $A=A_{L}$ for an even lattice $L$, the isotropic subgroup $I$ corresponds to the even overlattice $L\subset L'\subset L^{\vee}$ of $L$ with $L'/L=I$. Then $A'=A_{L'}$. In this situation, we will also write $\uparrow_{A'}^{A} = \uparrow_{L'}^{L}$ and $\downarrow_{A'}^{A}=\downarrow_{L'}^{L}$.
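To make $\uparrow$ and $\downarrow$ concrete, here is a small computational sketch for a toy module chosen by hand (not tied to any lattice appearing later): $A=u\oplus{{\mathbb{Z}}}/3$ with $u=({{\mathbb{Z}}}/2)^{2}$, $q(x,y,z)=xy/2+z^{2}/3$, and $I$ generated by $(1,0,0)$, so that $A'=I^{\perp}/I\simeq{{\mathbb{Z}}}/3$. It checks that $\downarrow^{A}_{A'}\circ \uparrow^{A}_{A'}$ is multiplication by $|I|$ and that the two maps are adjoint.

```python
import numpy as np
from fractions import Fraction

# Toy finite quadratic module A = u + Z/3, u = (Z/2)^2 with q(x,y) = xy/2,
# and q(z) = z^2/3 on the Z/3 part (all mod Z).  Elements are triples (x,y,z).
A = [(x, y, z) for x in range(2) for y in range(2) for z in range(3)]
idx = {a: i for i, a in enumerate(A)}

def q(a):
    x, y, z = a
    return (Fraction(x * y, 2) + Fraction(z * z, 3)) % 1

def b(a1, a2):  # associated bilinear form: q(a1+a2) - q(a1) - q(a2) mod Z
    s = tuple((u1 + u2) % m for u1, u2, m in zip(a1, a2, (2, 2, 3)))
    return (q(s) - q(a1) - q(a2)) % 1

I = [(0, 0, 0), (1, 0, 0)]                      # isotropic subgroup, |I| = 2
Iperp = [a for a in A if all(b(a, i0) == 0 for i0 in I)]
Aprime = sorted({a[2] for a in Iperp})          # I^perp/I, identified with Z/3

def push(v):  # downarrow: C[A] -> C[A'], e_mu -> e_{p(mu)} if mu in I^perp, else 0
    w = np.zeros(len(Aprime), dtype=complex)
    for a in Iperp:
        w[Aprime.index(a[2])] += v[idx[a]]
    return w

def pull(w):  # uparrow: C[A'] -> C[A], e_lam -> sum of e_mu over the fiber p^{-1}(lam)
    v = np.zeros(len(A), dtype=complex)
    for a in Iperp:
        v[idx[a]] += w[Aprime.index(a[2])]
    return v

# down∘up is multiplication by |I|, and the two maps are mutually adjoint.
w = np.array([1.0, 2.0, 3.0], dtype=complex)
v = np.arange(len(A), dtype=complex)
assert np.allclose(push(pull(w)), len(I) * w)
assert np.isclose(np.vdot(pull(w), v), np.vdot(w, push(v)))
```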
Modular forms {#ssec: modular form}
-------------
Let $A$ be a finite quadratic module and let $k\in \frac{1}{2}{{\mathbb{Z}}}$ with $k\equiv \sigma(A)/2$ modulo $2{{\mathbb{Z}}}$. (We will be mainly interested in the case $k\leq 0$.) A ${{\mathbb{C}}}A$-valued holomorphic function $f$ on the upper half plane is called a *weakly holomorphic modular form* of weight $k$ and type $\rho_A$ if it satisfies $f(M\tau)=\phi(\tau)^{2k}\rho_A(M, \phi)f(\tau)$ for every $(M, \phi)\in {{{\rm Mp}_2(\mathbb{Z})}}$ and is meromorphic at the cusp. We write $$f(\tau) = \sum_{\lambda\in A} \sum_{n\in q(\lambda)+{{\mathbb{Z}}}} c_{\lambda}(n) q^{n}{{{\mathbf e}_{\lambda}}}$$ for the Fourier expansion of $f$, where $q^n=\exp (2\pi in\tau)$ for $n\in {{\mathbb{Q}}}$. By the invariance under $Z$, we have $c_{-\lambda}(n)=c_{\lambda}(n)$. The finite sum $\sum_{\lambda} \sum_{n<0} c_{\lambda}(n) q^{n}{{{\mathbf e}_{\lambda}}}$ is called the *principal part* of $f$. When $k<0$, $f$ is determined by its principal part; when $k=0$, $f$ is determined by its principal part and constant term. According to Borcherds’ duality theorem ([@Bo00a], [@Bo00b], [@Br]), which polynomials can be realized as principal parts is determined by certain cusp forms, as follows.
Let $P=\sum_{\lambda, n}c_{\lambda}(n)q^n{{{\mathbf e}_{\lambda}}}$ be a ${{\mathbb{C}}}A$-valued polynomial where $\lambda\in A$ and $n\in q(\lambda)+{{\mathbb{Z}}}$ with $n<0$, such that $c_{-\lambda}(n)=c_{\lambda}(n)$. Then $P$ is the principal part of a weakly holomorphic modular form of weight $k\equiv \sigma(A)/2$ mod $2{{\mathbb{Z}}}$ and type $\rho_{A}$ if and only if $\sum_{n<0}c_{\lambda}(n)a_{\lambda}(-n)=0$ for every cusp form $\sum_{\lambda, m} a_{\lambda}(m)q^{m}\mathbf{e}_{\lambda}^{\vee}$ of weight $2-k$ and type $\rho_{A}^{\vee}$.
This will be used in §\[sec: first property\]. The version in [@Bo00a] also takes the constant term into account and replaces cusp forms by holomorphic modular forms.
We write $M^{!}_{k}(\rho_A)$ for the space of weakly holomorphic modular forms of weight $k$ and type $\rho_A$. For a subring $R$ of ${{\mathbb{C}}}$ (typically ${{\mathbb{Z}}}$ or ${{\mathbb{Q}}}$ or ${{\mathbb{R}}}$), we write $M^{!}_{k}(\rho_A)_{R} \subset M^{!}_{k}(\rho_A)$ for the subgroup of those $f$ whose principal part has coefficients in $R$. It is clear that $M^{!}_{k}(\rho_A)_{{{\mathbb{Z}}}}\otimes_{{{\mathbb{Z}}}}{{\mathbb{Q}}} = M^{!}_{k}(\rho_A)_{{{\mathbb{Q}}}}$. Moreover, McGraw’s rationality theorem [@Mc] and Borcherds duality theorem tell us that $$M^{!}_{k}(\rho_A)_{{{\mathbb{Q}}}}\otimes_{{{\mathbb{Q}}}}{{\mathbb{C}}} = M^{!}_{k}(\rho_A).$$ If $f\in M^{!}_{k}(\rho_A)_{{{\mathbb{Q}}}}$, the coefficients $c_{\lambda}(0)$ of its constant term ($\lambda$ isotropic) are also rational numbers. This follows from the version of Borcherds duality theorem in [@Bo00a] and the rationality of the Fourier coefficients of Eisenstein series due to Bruinier-Kuss [@B-K] ($\lambda=0$) and Schwagenscheidt [@Sc] ($\lambda$ general).
Theta series are typical examples of holomorphic modular forms of Weil representation type. Let $N$ be an even positive-definite lattice. By Borcherds [@Bo98], the $\rho_N$-valued function $$\Theta_{N}(\tau)
= \sum_{l\in N^{\vee}} q^{(l, l)/2}\mathbf{e}_{[l]}
= \sum_{\lambda, n}c_{\lambda}^{N}(n)q^{n}{{{\mathbf e}_{\lambda}}},$$ where $c_{\lambda}^{N}(n)$ is the number of vectors $l$ in $\lambda+N\subset N^{\vee}$ such that $(l, l)=2n$, is a holomorphic modular form of weight ${\rm rk}(N)/2$ and type $\rho_{N}$. All Fourier coefficients of $\Theta_{N}(\tau)$ are nonnegative integers. If $N'$ is an even overlattice of $N$, we have $\Theta_{N'}=\Theta_{N}\!\downarrow^{N}_{N'}$.
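For a rank-one illustration (a brute-force check added here; the enumeration bound $B$ is arbitrary), take $N=\langle 2 \rangle$: then $N^{\vee}=\tfrac{1}{2}{{\mathbb{Z}}}v$ with $(v,v)=2$, $A_{N}\simeq{{\mathbb{Z}}}/2$, and $c_{\lambda}^{N}(n)$ counts the integers $k\equiv\lambda\bmod 2$ with $k^{2}/4=n$.

```python
from fractions import Fraction
from collections import Counter

# Theta series of the even positive-definite lattice N = <2> (rank 1,
# Gram matrix (2)).  Dual vectors are l = (k/2)v with (l,l)/2 = k^2/4,
# and A_N = N^vee/N is identified with Z/2 via k mod 2.
coeff = Counter()          # coeff[(lam, n)] = c_lam^N(n)
B = 10                     # enumeration bound (arbitrary)
for k in range(-B, B + 1):
    coeff[(k % 2, Fraction(k * k, 4))] += 1

# Low-order coefficients: the lam = 0 component is 1 + 2q + 2q^4 + ...,
# the lam = 1 component is 2q^{1/4} + 2q^{9/4} + ...
assert coeff[(0, Fraction(0))] == 1
assert coeff[(0, Fraction(1))] == 2
assert coeff[(0, Fraction(4))] == 2
assert coeff[(1, Fraction(1, 4))] == 2
assert coeff[(1, Fraction(9, 4))] == 2
```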
Let $L$ be an even lattice. For $A=A_{L}$ and $k=\sigma(L)/2$, we write $${{M^{!}(L)}} = M^{!}_{\sigma(L)/2}(\rho_{L}), \qquad
{{M^{!}(L)_{R}}} = M^{!}_{\sigma(L)/2}(\rho_{L})_{R}.$$ We especially write $$M^{!}=M^{!}(U\oplus \cdots \oplus U)=M^{!}(\{ 0 \}),$$ which is just the space of scalar-valued weakly holomorphic modular forms of weight $0$. Then $M^{!}$ is the polynomial ring in the $j$-function $j(\tau)=q^{-1}+744+ \cdots$. It is a fundamental remark that for every even lattice $L$, ${{M^{!}(L)}}$ is an $M^{!}$-module.
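As a sanity check of the identification $M^{!}={{\mathbb{C}}}[j]$, the expansion $j=E_{4}^{3}/\Delta=q^{-1}+744+196884q+\cdots$ can be recovered by elementary integer power-series arithmetic (a self-contained sketch added here; the truncation order $N$ is arbitrary):

```python
N = 6  # truncation order (arbitrary)

def mul(a, b):  # product of truncated power series (lists of coefficients)
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for k, bk in enumerate(b[: N - i]):
                c[i + k] += ai * bk
    return c

def inv(a):     # inverse of a power series with a[0] = 1, mod q^N
    c = [0] * N
    c[0] = 1
    for n in range(1, N):
        c[n] = -sum(a[k] * c[n - k] for k in range(1, n + 1))
    return c

sigma3 = lambda n: sum(d ** 3 for d in range(1, n + 1) if n % d == 0)
E4 = [1] + [240 * sigma3(n) for n in range(1, N)]   # Eisenstein series E_4

# Delta/q = prod_{n >= 1} (1 - q^n)^24
D = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        D = mul(D, factor)

jq = mul(mul(mul(E4, E4), E4), inv(D))   # q * j(tau), truncated
# j = q^{-1} + 744 + 196884 q + 21493760 q^2 + ...
assert jq[0] == 1
assert jq[1] == 744
assert jq[2] == 196884
assert jq[3] == 21493760
```

Exact integer arithmetic makes the asserted coefficients a genuine identity check rather than a floating-point approximation.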
When $p=2$ and for $f\in{{M^{!}(L)_{\mathbb{Z}}}}$, Borcherds [@Bo95], [@Bo98] constructed a meromorphic modular form $\Psi(f)$ on the Hermitian symmetric domain attached to $L$, which has weight $c_{0}(0)/2\in {{\mathbb{Q}}}$ and whose divisor is a linear combination of Heegner divisors determined by the principal part of $f$. The lifting $f\mapsto \Psi(f)$ is multiplicative (at least up to constant). Thus, at least when $R\supset {{\mathbb{Q}}}$, ${{M^{!}(L)_{R}}}={{M^{!}(L)_{\mathbb{Z}}}}\otimes_{{{\mathbb{Z}}}}R$ can be thought of as a scalar extension of the multiplicative group of Borcherds products.
$\Theta$-product {#sec: product}
================
Let $L$ be an even lattice of signature $(p, q)$ with $p\leq q$ and assume that $L$ has Witt index $p$. We choose and fix a maximal ($=$ rank $p$, primitive) isotropic sublattice $I$ of $L$. In this section we define the $\Theta$-product $\ast_{I}$ on the space ${{M^{!}(L)}}=M_{\sigma(L)/2}^{!}(\rho_{L})$ with respect to $I$, which makes ${{M^{!}(L)}}$ an associative algebra. §\[ssec: lattice lemma\] is a lattice-theoretic preliminary. The $\Theta$-product is defined in §\[ssec: theta product\]. In §\[ssec: example\] we look at some examples.
Preliminary {#ssec: lattice lemma}
-----------
We first prepare a lattice-theoretic lemma. We write $K=(I^{\perp}\cap L)/I$, which is an even negative-definite lattice of rank $-\sigma(L)$. We shall realize $K$ as an orthogonal direct summand of a canonical overlattice of $L$. Let $I^{\ast}=I_{{{\mathbb{Q}}}}\cap L^{\vee}$ be the primitive hull of $I$ in the dual lattice $L^{\vee}$. Then $L^{\ast}=\langle L, I^{\ast} \rangle$ is an even overlattice of $L$ with $L^{\ast}/L\simeq I^{\ast}/I$. For $rU=U\oplus \cdots \oplus U$ ($r$ times) we denote by $e_{1}, f_{1}, \cdots , e_{r}, f_{r}$ its standard basis, namely $(e_{i}, f_{j})=\delta_{ij}$ and $(e_i, e_j)=(f_i, f_j)=0$. We write $I_{r}=\langle e_{1}, \cdots, e_{r} \rangle$.
\[lem: overlattice split\] There exists an embedding $\varphi\colon pU \hookrightarrow L^{\ast}$ such that $\varphi(I_{p})=I^{\ast}$. In particular, we have $L^{\ast} = \varphi(pU)\oplus \varphi(pU)^{\perp} \simeq pU\oplus K$. The induced isometry $A_{L^{\ast}}\to A_{K}$ does not depend on the choice of $\varphi$.
By the primitivity of $I^{\ast}$ in $L^{\vee}$, we have $(l, L^{\ast})=(l, L)={{\mathbb{Z}}}$ for any primitive vector $l$ in $I^{\ast}$. We take one such vector $l_1\in I^{\ast}$ and a vector $m_1\in L^{\ast}$ with $(l_1, m_1)=1$. Then $\langle l_1, m_1 \rangle \simeq U$ and we have a splitting $L^{\ast}=\langle l_1, m_1 \rangle \oplus L_{1}$ where $L_{1}=\langle l_1, m_1 \rangle^{\perp}\cap L^{\ast}$. The intersection $I_{1}=I^{\ast}\cap L_{1}$ satisfies $I^{\ast}=I_{1}\oplus {{\mathbb{Z}}}l_{1}$ and we have $(l, L_{1})=(l, L^{\ast})={{\mathbb{Z}}}$ for any primitive vector $l\in I_{1}$. Then we can repeat the same process for $I_{1}\subset L_{1}$. This eventually defines an embedding $\varphi\colon pU\hookrightarrow L^{\ast}$ with $\varphi(I_{p})=I^{\ast}$. We have natural isomorphisms $$\varphi(pU)^{\perp}\cap L^{\ast} \stackrel{\simeq}{\to}
(I^{\ast})^{\perp}\cap L^{\ast}/I^{\ast} = I^{\perp}\cap L/I =K.$$
For the last assertion, we use the following construction. ($I^{\ast}\subset L^{\ast}$ will be $I\subset L$ below.)
\[claim: AL AK\] Let $L$ be an even lattice and $I\subset L$ be a primitive isotropic sublattice. Let $\varphi_{1}, \varphi_{2} \colon rU \hookrightarrow L$ be two embeddings with $\varphi_{1}(I_{r})=\varphi_{2}(I_{r})=I$. Then there exist
- an isometry $\gamma_{L}$ of $L$ which preserves $I$ and acts trivially on $K=I^{\perp}/I$ and $A_{L}$, and
- an isometry $\gamma_{rU}$ of $rU$ which preserves $I_{r}$,
such that $\varphi_{2}=\gamma_{L} \circ \varphi_{1} \circ \gamma_{rU}$.
If we write $K_{i}=\varphi_{i}(rU)^{\perp}\cap L$, then we have $\gamma_{L}(K_{1})=K_{2}$. The properties of $\gamma_{L}$ imply that the composition $A_{L}\to A_{K_{1}} \to A_{K}$ coincides with $A_{L}\to A_{K_{2}} \to A_{K}$, hence the last assertion of Lemma \[lem: overlattice split\] follows.
We prove Claim \[claim: AL AK\] by induction on $r$. We may assume that $\varphi_{1}|_{I_{r}}=\varphi_{2}|_{I_{r}}$ by composing an isometry of $rU$ preserving $I_{r}$ and $\langle f_1, \cdots, f_r \rangle \simeq I_{r}^{\vee}$. When $r=1$, we let $l=\varphi_{i}(e_{1})$ and $m_{i}=\varphi_{i}(f_{1})$. Then as $\gamma_{L}$ we take the Eichler transvection $E_{l,m_{2}-m_{1}}$ (see, e.g., [@G-H-S]) which fixes $l$, sends $m_{1}$ to $m_{2}$, and acts trivially on $K$ and on $A_{L}$.
For general $r$, let $rU=(r-1)U\oplus U$ be the apparent decomposition and let $\varphi_{i}'=\varphi_{i}|_{(r-1)U}$ and $I'=\varphi_{i}(I_{r-1})$. By induction, there exists an isometry $\gamma_{L}'$ of $L$ which preserves $I'$ and acts trivially on $(I')^{\perp}/I'$ and $A_{L}$, and an isometry $\gamma_{(r-1)U}$ of $(r-1)U$ preserving $I_{r-1}$, such that $\varphi_{2}'=\gamma_{L}' \circ \varphi_{1}' \circ \gamma_{(r-1)U}$. Note that $\gamma_{L}'$ also preserves $I$ and acts trivially on $K$. We put $L'=\varphi_{2}((r-1)U)$, $L''=(L')^{\perp}\cap L$ and $I''=I\cap L''$. Then we have $\gamma_{L}' \circ \varphi_{1}({{\mathbb{Z}}}e_r)=\varphi_{2}({{\mathbb{Z}}}e_{r})=I''$. Thus we can apply the result for $r=1$ to $\varphi_{1}''=\gamma_{L}' \circ \varphi_{1}|_{U}$, $\varphi_{2}''= \varphi_{2}|_{U}$, and $I''\subset L''$. This provides us with an isometry $\gamma_{L''}$ of $L''$ which preserves $I''$ and acts trivially on $(I'')^{\perp}/I''\simeq K$ and $A_{L''}\simeq A_{L}$, and an isometry $\gamma_{U}=\pm{\rm id}_{U}$ of $U$, such that $\varphi_{2}''=\gamma_{L''}\circ \varphi_{1}'' \circ \gamma_{U}$. Now $\gamma_{L}=({\rm id}_{L'}\oplus \gamma_{L''})\circ \gamma_{L}'$ and $\gamma_{rU}=\gamma_{(r-1)U} \oplus \gamma_{U}$ satisfy the desired properties.
The lattice $K$ can also be realized as a sublattice of $I^{\perp}\cap L$ as follows. We choose a basis $l_1, \cdots, l_p$ of $I$ and its dual basis $l_1^{\vee}, \cdots, l_p^{\vee}$ from $L^{\vee}$. We put $\tilde{K}=\langle l_1^{\vee}, \cdots, l_p^{\vee} \rangle^{\perp} \cap I^{\perp} \cap L$. By construction we have a splitting $I^{\perp}\cap L = I \oplus \tilde{K}$, so the projection gives an isometry $\tilde{K}\to K$.
$\Theta$-product {#ssec: theta product}
----------------
We now define the $\Theta$-product $\ast_{I}$ on ${{M^{!}(L)}}$. We put $K^{+}=K(-1)$, which is an even positive-definite lattice of rank $-\sigma(L)$. We identify $A_{L^{\ast}}=A_{K}$ as in Lemma \[lem: overlattice split\]. Let $${{\downarrow^{L}_{K}}} = \downarrow^{L}_{L^{\ast}} : {{\mathbb{C}}}A_{L} \to {{\mathbb{C}}}A_{L^{\ast}} = {{\mathbb{C}}}A_{K}$$ be the pushforward operation defined in . If $f\in {{M^{!}(L)}}$, then $f{{\downarrow^{L}_{K}}}$ is an element of $M^{!}(K)$. We take the tensor product $f{{\downarrow^{L}_{K}}}\otimes {{\Theta_{K^{+}}}}$ with the theta series ${{\Theta_{K^{+}}}}$. This is a weakly holomorphic modular form of weight $0$ and type $\rho_{K}\otimes \rho_{K^{+}} \simeq \rho_{K}\otimes \rho_{K}^{\vee}$. Taking the contraction $\rho_{K}\otimes \rho_{K}^{\vee} \to {{\mathbb{C}}}$ produces a scalar-valued weakly holomorphic modular form of weight $0$, namely an element of $M^{!}$. We denote this modular function by $$\xi(f) = \langle f{{\downarrow^{L}_{K}}}, {{\Theta_{K^{+}}}} \rangle \; \; \in M^{!}.$$ The map $\xi \colon {{M^{!}(L)}}\to M^{!}$ is $M^{!}$-linear.
Now if $f_1, f_2\in {{M^{!}(L)}}$, we define $$f_{1} \ast_{I} f_{2} = \xi(f_{1}) \cdot f_{2} = \langle f_{1}{{\downarrow^{L}_{K}}}, {{\Theta_{K^{+}}}} \rangle \cdot f_2.$$ This is again an element of ${{M^{!}(L)}}$. The map $$\ast_{I} : {{M^{!}(L)}} \times {{M^{!}(L)}} \to {{M^{!}(L)}}$$ is $M^!$-bilinear.
Explicitly, if $f_{i}(\tau)=\sum_{\lambda, n}c_{\lambda}^{i}(n)q^{n}{{{\mathbf e}_{\lambda}}}$ for $i=1, 2$ and ${{\Theta_{K^{+}}}}(\tau)=\sum_{\nu, m} c_{\nu}^{K}(m)q^{m}\mathbf{e}_{\nu}^{\vee}$, the Fourier coefficients of $f_{1}\ast_{I}f_{2}=\sum_{\lambda, n}c_{\lambda}(n)q^{n}{{{\mathbf e}_{\lambda}}}$ are given by $$c_{\lambda}(n) =
\sum_{m+l+k=n}\sum_{\mu\in J^{\perp}}
c^{1}_{\mu}(m) \cdot c^{K}_{p(\mu)}(l) \cdot c^{2}_{\lambda}(k).$$ Here $J=I^{\ast}/I\subset A_{L}$ and $p:J^{\perp}\to A_{K}$ is the projection. Note that even the coefficients of $f_1, f_2$ with $n>0$, which often receive little attention, may contribute to the principal part of $f_1\ast_{I} f_2$.
\[prop: associativity et al\] We have $$(f_1\ast_{I} f_2)\ast_{I} f_3 = f_1\ast_{I}(f_2\ast_{I} f_3)$$ for $f_1, f_2, f_3 \in {{M^{!}(L)}}$. Therefore $\Theta$-product $\ast_{I}$ makes ${{M^{!}(L)}}$ an associative ${{\mathbb{C}}}$-algebra. Moreover, the map $\xi \colon {{M^{!}(L)}}\to M^{!}$ is a ring homomorphism.
For the first assertion, we have $$\begin{aligned}
(f_1\ast_{I} f_2)\ast_{I} f_3
& = &
\xi(f_1\ast_{I}f_2) \cdot f_3
\; = \; \xi(\xi(f_1) \cdot f_2) \cdot f_3 \\
& = &
\xi(f_1) \cdot \xi(f_2) \cdot f_3
\; = \; \xi(f_1) \cdot (f_2 \ast_{I} f_3) \\
& = &
f_{1}\ast_{I} (f_2 \ast_{I} f_3). \end{aligned}$$ For the second assertion, we calculate $$\xi(f_{1} \ast_{I} f_{2}) =
\xi(\xi(f_{1})\cdot f_{2}) =
\xi(f_{1})\cdot \xi(f_{2}).$$ Thus $\xi$ preserves the products.
The algebra ${{M^{!}(L)}}$ has the following filtration. For a natural number $d$ we denote by $M^{!}(L)_{d}\subset {{M^{!}(L)}}$ the subspace of modular forms $f$ whose principal part has degree $\leq d$. Then we have $$M^{!}(L)_{d} \ast_{I} M^{!}(L)_{d'} \subset M^{!}(L)_{d+d'}.$$ Hence ${{M^{!}(L)}}$ is a filtered algebra with this filtration.
By construction, this algebra structure on ${{M^{!}(L)}}$ requires the choice of a maximal isotropic sublattice $I$, so we should write $\xi=\xi_{I}$ and ${{M^{!}(L)}}=M^{!}(L, I)$ when we want to specify this dependence. In fact, the freedom of choice is finite. If $\gamma\colon L\to L$ is an isometry of $L$, then $\gamma$ acts on $A_{L}$. Since the induced action on ${{\mathbb{C}}}A_{L}$ preserves the Weil representation $\rho_L$, $\gamma$ acts on ${{M^{!}(L)}}$. We have $\xi_{\gamma I}(\gamma f)=\xi_{I}(f)$ and so $$(\gamma f_{1})\ast_{\gamma I} (\gamma f_{2})
= \gamma (f_{1}\ast_{I} f_{2}).$$ In other words, the action of $\gamma$ on ${{M^{!}(L)}}$ gives an isomorphism $$\gamma : M^!(L, I) \to M^!(L, \gamma I)$$ of algebras. In particular, when $\gamma$ acts trivially on $A_{L}$, its action on ${{M^{!}(L)}}$ is also trivial, so we have $M^{!}(L, I)=M^{!}(L, \gamma I)$ as algebras.
To summarize, if ${\rm O}(L)$ is the orthogonal group of $L$ and $\Gamma_{L}<{\rm O}(L)$ is the kernel of the reduction map ${\rm O}(L)\to {\rm O}(A_{L})$, then $M^!(L, I)$ depends only on the $\Gamma_{L}$-equivalence class of $I$. Moreover, its isomorphism class depends only on the ${\rm O}(L)$-equivalence class of $I$. In particular, we have only finitely many algebra structures $M^!(L, I)$ on ${{M^{!}(L)}}$ for a fixed lattice $L$.
Geometrically, the $\Gamma_{L}$-equivalence class of $I$ corresponds more or less to a boundary component of some compactification of the locally symmetric space associated to $\Gamma_{L}$. (For example, when $p=2$, a boundary curve in the Baily-Borel compactification.) This geometric picture might lead one to wonder whether it is possible to interpolate $M^!(L, I)$ and $M^!(L, I')$ for $I\not\sim I'$ by some continuous family of algebraic objects.
Examples {#ssec: example}
--------
We look at $\Theta$-product in some examples.
\[ex: unimodular\] Assume that $L$ is unimodular. Then $8 | \sigma(L)$. Modular forms of type $\rho_{L}$ are just scalar-valued modular forms. For any maximal isotropic sublattice $I$ we can find a splitting $L\simeq pU \oplus K$ with $I\subset pU$, and $K$ is also unimodular. In particular, ${{\downarrow^{L}_{K}}}$ is the identity and ${{\Theta_{K^{+}}}}=\theta_{K^{+}}$ is also scalar-valued. In this case, the $\Theta$-product is just the product $$f_1 \ast_{I} f_{2} = f_1 \cdot \theta_{K^{+}} \cdot f_2$$ for $f_1, f_2\in {{M^{!}(L)}}$. This shows that ${{M^{!}(L)}}$ is commutative and has no zero divisors. Furthermore, ${{M^{!}(L)}}$ has no unit element unless $L=pU$. Indeed, if $f\in {{M^{!}(L)}}$ is a unit element, then $f\cdot \theta_{K^{+}}=1$, but this is impossible when $K\ne \{ 0 \}$ because then $f$ would be a holomorphic modular form of negative weight.
\[ex: Jacobi form\] More generally, assume that we have a splitting $L=pU\oplus K$ with $I\subset pU$ ($K$ not necessarily unimodular). This is equivalent to $I=I^{\ast}$. In this situation, modular forms of type $\rho_{L}=\rho_{K}$ correspond to Jacobi forms of index $K^{+}$ as follows (see [@Gr] for more detail). Let $\Theta_{K^{+}}(\tau, Z) = \sum_{\lambda\in A_{K}} \theta_{K^{+}+\lambda}(\tau, Z)\mathbf{e}_{\lambda}^{\vee}$ be the $\rho_{K^{+}}$-valued Jacobi theta series. If $f(\tau)=\sum_{\lambda\in A_K}f_{\lambda}(\tau){{{\mathbf e}_{\lambda}}}$ is a weakly holomorphic modular form of weight $\sigma(L)/2$ and type $\rho_{K}$, the function $$\phi(\tau, Z)
= \langle f(\tau), \: \Theta_{K^{+}}(\tau, Z) \rangle
= \sum_{\lambda\in A_{K}} f_{\lambda}(\tau)\theta_{K^{+}+\lambda}(\tau, Z)$$ given by the contraction $\rho_K\otimes \rho_K^{\vee} \to {{\mathbb{C}}}$ is a weakly holomorphic Jacobi form of weight $0$ and index $K^{+}$. This gives a one-to-one correspondence between these two kinds of forms. Note that the restriction $\phi(\tau, 0)$ of $\phi(\tau, Z)$ to $Z=0$ is just the modular function $\xi(f)$ because $\Theta_{K^{+}}(\tau, 0) = \Theta_{K^{+}}(\tau)$. Now let $f_1, f_2 \in {{M^{!}(L)}}$ and $\phi_1, \phi_2$ be the corresponding Jacobi forms. Then the Jacobi form corresponding to $f_1\ast_{I} f_2$ is $$\phi_1(\tau, 0) \cdot \phi_2(\tau, Z).$$ Indeed, we have $$\begin{aligned}
\langle f_1\ast_{I} f_2(\tau), \: \Theta_{K^{+}}(\tau, Z) \rangle
& = &
\langle \xi(f_{1})(\tau) \cdot f_2(\tau), \: \Theta_{K^{+}}(\tau, Z) \rangle \\
& = &
\xi(f_{1})(\tau) \cdot \langle f_2(\tau), \: \Theta_{K^{+}}(\tau, Z) \rangle \\
& = &
\phi_1(\tau, 0) \cdot \phi_2(\tau, Z). \end{aligned}$$ Thus the Jacobi form interpretation of the $\Theta$-product is simple: substitute $Z=0$ into $\phi_1$ to obtain a scalar-valued modular function, and multiply it with $\phi_2$. The $\Theta$-product for general $(L, I)$, not necessarily coming from $pU\hookrightarrow L$, can be thought of as a functorial extension of this simple operation via the pushforward operation $\downarrow^{L}_{K}$.
Basic properties {#sec: first property}
================
In this section we study some basic properties of the algebra ${{M^{!}(L)}}$. Except in Proposition \[prop: finiteness\], the reference maximal isotropic sublattice $I\subset L$ is fixed throughout. In §\[ssec: annihilator\] we study the left annihilator ideal of ${{M^{!}(L)}}$, which plays a basic role in the study of ${{M^{!}(L)}}$. In §\[ssec: unit\] we study the existence or nonexistence of a unit element. In §\[ssec: f.g.\] we prove that ${{M^{!}(L)}}$ is finitely generated. In §\[ssec: integral part\] we study the problem of whether the $R$-part ${{M^{!}(L)_{R}}}$ of ${{M^{!}(L)}}$ or its variants are closed under $\ast_{I}$. §\[ssec: unit\] should be read after §\[ssec: annihilator\], but §\[ssec: f.g.\] and §\[ssec: integral part\] may be read independently.
Left annihilator {#ssec: annihilator}
----------------
The left annihilator ideal of ${{M^{!}(L)}}$ is a two-sided ideal of ${{M^{!}(L)}}$. Since ${{M^{!}(L)}}$ is torsion-free as an $M^{!}$-module, this coincides with the kernel of $\xi \colon {{M^{!}(L)}}\to M^{!}$, which we denote by $$\Theta^{\perp} =
\{ \: f\in{{M^{!}(L)}} \: | \: \langle f{{\downarrow^{L}_{K}}}, {{\Theta_{K^{+}}}} \rangle = 0 \: \}.$$ This is also an $M^!$-submodule. Note that $\Theta^{\perp}$ also coincides with the left annihilator of any *fixed* $g\ne 0$ in ${{M^{!}(L)}}$. We have $(\Theta^{\perp})^{2}=0$ and $\Theta^{\perp}$ is the maximal nilpotent ideal of ${{M^{!}(L)}}$, consisting of all nilpotent elements of ${{M^{!}(L)}}$.
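These last statements follow from one-line computations, spelled out here for convenience, using that $\xi$ is a ring homomorphism (Proposition \[prop: associativity et al\]):

```latex
f \ast_{I} g \;=\; \xi(f)\cdot g \;=\; 0
  \qquad (f, g \in \Theta^{\perp}),
\qquad\qquad
\xi(f^{\ast n}) \;=\; \xi(f)^{n},
```

where $f^{\ast n}$ denotes the $n$-fold $\ast_{I}$-power. The first identity gives $(\Theta^{\perp})^{2}=0$; for the second, if $f^{\ast n}=0$ then $\xi(f)^{n}=0$ in the integral domain $M^{!}={{\mathbb{C}}}[j]$, so $\xi(f)=0$ and $f\in\Theta^{\perp}$.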
\[prop: theta kernel basic\] The quotient ring ${{M^{!}(L)}}/\Theta^{\perp}$ is canonically identified with a nonzero ideal of the polynomial ring $M^{!}={{\mathbb{C}}}[j]$. Every homomorphism from ${{M^{!}(L)}}$ to a ring without nonzero nilpotent element factors through ${{M^{!}(L)}}\to {{M^{!}(L)}}/\Theta^{\perp}$.
By the definition $\Theta^{\perp}={\rm Ker}(\xi)$, the quotient ${{M^{!}(L)}}/\Theta^{\perp}$ is identified with the image $\xi({{M^{!}(L)}})\subset M^!$ of $\xi$. Since $\xi$ is an $M^!$-linear map, $\xi({{M^{!}(L)}})$ is an ideal of $M^!$. We shall show that $\xi$ is a nonzero map. Since the map ${{\downarrow^{L}_{K}}}\colon {{M^{!}(L)}}\to M^{!}(K)$ is surjective, it suffices to check that the map $\langle \cdot , {{\Theta_{K^{+}}}} \rangle \colon M^!(K)\to M^{!}$ is nonzero. This can be seen, e.g., by taking a modular form $f\in M^!(K)$ with Fourier expansion of the form $f(\tau)=q^{n}\mathbf{e}_{0}+o(q^{n})$ for some negative integer $n$, which is possible as guaranteed by Lemma \[lem: leading term\]. The last assertion follows by a standard argument.
By the general theory of associative algebras, ${{M^{!}(L)}}$ has the structure of a Lie algebra via the commutator bracket $$[ f_1, f_2 ] = f_1\ast_{I} f_2 - f_2\ast_{I} f_1.$$ Since $\xi$ is a ring homomorphism and $M^!$ is commutative, these brackets are contained in $\Theta^{\perp}$. In other words, $$f_1\ast_{I} f_2\ast_{I} f_3 = f_2\ast_{I} f_1\ast_{I} f_3.$$
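A short verification, using the relation $f\ast_{I}g=\xi(f)\cdot g$ (which reappears in the proof of Proposition \[prop: finiteness\]): since $\xi$ is a ring homomorphism into the commutative ring $M^{!}$, $$\xi([f_1, f_2]) = \xi(f_1)\xi(f_2) - \xi(f_2)\xi(f_1) = 0,$$ so $[f_1, f_2]\in {\rm Ker}(\xi)=\Theta^{\perp}$, and hence $[f_1, f_2]\ast_{I}f_3 = \xi([f_1, f_2])\cdot f_3 = 0$ for every $f_3$.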
\[prop: unimodular commutative\] The following three conditions are equivalent.
\(1) $L$ is unimodular.
\(2) ${{M^{!}(L)}}$ is commutative.
\(3) $\Theta^{\perp} = \{ 0 \}$.
Moreover, if $\Theta^{\perp}\ne \{ 0 \}$, we have $\dim \Theta^{\perp} = \infty$.
\(1) $\Rightarrow$ (2) and (1) $\Rightarrow$ (3) are observed in Example \[ex: unimodular\]. (3) $\Rightarrow$ (2) holds because $[f_1, f_2]\in \Theta^{\perp}$. We check (2) $\Rightarrow$ (3) by contraposition. If $\Theta^{\perp}\ne \{ 0 \}$, we take $f_1\ne 0 \in \Theta^{\perp}$ and $f_2\not\in \Theta^{\perp}$. Then $f_1\ast_{I}f_2=0$ but $f_{2}\ast_{I}f_{1}=\xi(f_2)\cdot f_1\ne 0$, so ${{M^{!}(L)}}$ is not commutative.
Finally, we prove (3) $\Rightarrow$ (1). Suppose that $L$ is not unimodular. We shall show that $\dim \Theta^{\perp} = \infty$. We consider two cases according to whether $K$ is unimodular. When $K$ is unimodular, $\Theta^{\perp}$ coincides with the kernel of the pushforward ${{\downarrow^{L}_{K}}}\colon {{M^{!}(L)}} \to M^{!}(K)$. We show that $\dim {\rm Ker}({{\downarrow^{L}_{K}}})= \infty$. The map ${{\downarrow^{L}_{K}}}$ preserves the degree filtration, namely $M^!(L)_{d}{{\downarrow^{L}_{K}}} \subset M^!(K)_d$. By Borcherds duality theorem, we have $$\begin{aligned}
\dim M^!(L)_d & = & |A_L/\pm 1| \cdot d+O(1), \\
\dim M^!(K)_d & = & 1\cdot d+O(1), \end{aligned}$$ as $d$ grows. Therefore $$\dim ({\operatorname{Ker}}({{\downarrow^{L}_{K}}})\cap M^!(L)_d) \: \geq \: (|A_L/\pm1|-1)\cdot d + O(1) \to \infty$$ as $d\to \infty$. Here $|A_L/\pm1|>1$ because $A_{L}\ne \{ 0 \}$.
When $K$ is not unimodular, we can still argue similarly. The map ${{\downarrow^{L}_{K}}}\colon {{M^{!}(L)}}\to M^!(K)$ is surjective as the composition $\downarrow^{L}_{K} \circ \uparrow^{L}_{K}$ is a nonzero scalar multiplication. Therefore it is sufficient to show that the subspace ${\operatorname{Ker}}\langle \cdot, {{\Theta_{K^{+}}}} \rangle$ of $M^!(K)$ has dimension $\infty$. The map $\langle \cdot, {{\Theta_{K^{+}}}} \rangle \colon M^!(K)\to M^!$ preserves the degree filtration, so we have similarly $$\dim ({\operatorname{Ker}}\langle \cdot, {{\Theta_{K^{+}}}} \rangle \cap M^!(K)_d) \: \geq \: (|A_K/\pm 1|-1)\cdot d + O(1) \to \infty$$ as $d\to \infty$. This finishes the proof of (3) $\Rightarrow$ (1).
By Proposition \[prop: theta kernel basic\], ${{M^{!}(L)}}$ is decomposed into two parts: the ideal $\xi({{M^{!}(L)}})$ in the polynomial ring $M^{!}={{\mathbb{C}}}[j]$, and the left annihilator $\Theta^{\perp}$. By the proof of (2) $\Rightarrow$ (3) in Proposition \[prop: unimodular commutative\], the Lie brackets $[f, g]$ generate a large part of $\Theta^{\perp}$ containing at least $\xi({{M^{!}(L)}})\cdot \Theta^{\perp}$. In §\[sec: functorial\] we will see that the kernels of the quasi-pullback maps to sublattices of $L$ provide natural examples of two-sided ideals contained in $\Theta^{\perp}$.
So far we have studied only the left annihilator. The right annihilator of a fixed $f\in {{M^{!}(L)}}$ coincides with the whole of ${{M^{!}(L)}}$ if $f\in \Theta^{\perp}$, while it is $\{ 0 \}$ if $f\not\in \Theta^{\perp}$.
Unit element {#ssec: unit}
------------
Next we study the existence and nonexistence of a unit element. A right unit element exists only in the obvious case.
\[prop: RUE\] ${{M^{!}(L)}}$ has a right unit element if and only if $L=pU$. In this case it is in fact a two-sided unit element.
It suffices to verify the “only if” direction. Let $g\in {{M^{!}(L)}}$ be a right unit element. If $L$ is not unimodular, we can take $f\ne 0 \in \Theta^{\perp}$ by Proposition \[prop: unimodular commutative\]. Then $f\ast_{I}g=0\ne f$, which is absurd. So $L$ must be unimodular. Then the assertion follows from the last part of Example \[ex: unimodular\].
On the other hand, left unit elements, though still relatively rare, exist in more cases. They are exactly the modular forms $f\in {{M^{!}(L)}}$ with $\xi(f)=1$. In particular, if $f$ is a left unit element, every element of $f+\Theta^{\perp}$ is one as well, and vice versa.
\[prop: LUE\] (1) ${{M^{!}(L)}}$ has a left unit element if and only if the homomorphism $\xi\colon {{M^{!}(L)}}\to M^{!}$ is surjective. This always holds when $\sigma(L)=0$.
\(2) If there exists a modular form $f\in {{M^{!}(L)}}\backslash \Theta^{\perp}$ with $f(\tau)=o(q^{-1})$, then ${{M^{!}(L)}}$ has a left unit element. Such a modular form $f$ exists only when $|\sigma(L)|<24$.
The first assertion of (1) holds because $\xi$ is $M^!$-linear: its image is an ideal of $M^{!}$, and a left unit element is precisely an element $f$ with $\xi(f)=1$, which exists exactly when this ideal is the whole of $M^{!}$. When $\sigma(L)=0$, we have $M^!(K)=M^!$ and $\xi={{\downarrow^{L}_{K}}}\colon {{M^{!}(L)}}\to M^{!}(K)$ is surjective.
Next we prove (2). If $f=o(q^{-1})$, we have $\xi(f) = o(q^{-1})$. Since the Fourier expansions of elements of $M^!$ involve only integral powers of $q$, we have in fact $\xi(f) = O(1)$. Hence $\xi(f)$ is a holomorphic modular function, namely a constant, which is nonzero by our assumption $f\not\in \Theta^{\perp}$. As for the last assertion of (2), we consider the product $\Delta\cdot f$ with the $\Delta$-function. Since $\Delta(\tau)=q+O(q^{2})$, this product is $o(1)$, hence a cusp form, so its weight $12+\sigma(L)/2$ must be positive.
The condition $f\not\in \Theta^{\perp}$ in Proposition \[prop: LUE\] (2) is satisfied when the principal part of $f{{\downarrow^{L}_{K}}}$ has nonnegative coefficients, at least one of which is nonzero. Indeed, $\Theta_{K^{+}}(\tau)=\mathbf{e}_{0}^{\vee}+o(1)$ has nonnegative coefficients and the coefficient $c_{0}(0)$ of $f{{\downarrow^{L}_{K}}}$ is positive ([@Br], [@B-K]), so $\xi(f)$ has nonzero constant term.
Some reflective modular forms provide typical examples of modular forms as in Proposition \[prop: LUE\] (2).
\[ex: reflective 1\] Let $L= pU \oplus \langle -2 \rangle$. Then $K^{+}= \langle 2 \rangle$. Let $\phi_{0,1}$ be the weak Jacobi form of weight $0$ and index $1$ constructed by Eichler-Zagier in [@E-Z] Theorem 9.3. The corresponding modular form in ${{M^{!}(L)}}$ has Fourier expansion $f(\tau)=q^{-1/4}\mathbf{e}_{1}+10\mathbf{e}_{0}+o(1)$ where $\mathbf{e}_{i}$ is the basis vector of ${{\mathbb{C}}}A_{L}$ corresponding to $[i]\in {{\mathbb{Z}}}/2\simeq A_{L}$. This modular form satisfies the condition in Proposition \[prop: LUE\] (2). We will return to this example in Example \[ex: generator\].
\[ex: reflective 2\] More generally, let $L=pU\oplus \langle -2t \rangle$. Then $K=K_{t}=\langle -2t \rangle$. Eichler-Zagier’s Jacobi form $\phi_{0,1}$ was generalized by Gritsenko-Nikulin in [@G-N] §2.2 to Jacobi forms $\phi_{0,t}$ of weight $0$ and index $t$. For $t=2, 3, 4$, the $\rho_{K_{t}}$-valued modular form $f_{t}$ corresponding to $\phi_{0,t}$ has Fourier expansion $f_{t}(\tau) = q^{-1/(4t)}\mathbf{e}_{1}+a_{t}\mathbf{e}_{0}+ \cdots $ where $a_{t}=4, 2, 1$ for $t=2, 3, 4$ respectively. Thus $f_{t}$ for $t=2, 3, 4$ satisfy the condition in Proposition \[prop: LUE\] (2).
Finite generation {#ssec: f.g.}
-----------------
In this subsection we prove that ${{M^{!}(L)}}$ is finitely generated and give rough estimates, from above and below, on the minimal number of generators.
\[prop: f.g.\] The algebra ${{M^{!}(L)}}$ is finitely generated over ${{\mathbb{C}}}$.
For the proof we need the following construction.
\[lem: leading term\] There exists a natural number $d_{0}$ such that for any pair $(\lambda, n)$ with $\lambda\in A_{L}$ and $n\in q(\lambda)+{{\mathbb{Z}}}$, $n<-d_{0}$, there exists a modular form $f_{\lambda,n}\in {{M^{!}(L)}}$ with Fourier expansion $f_{\lambda,n}(\tau)=q^{n}({{{\mathbf e}_{\lambda}}}+\mathbf{e}_{-\lambda})+o(q^{n})$.
For simplicity we assume $\sigma(L)<0$; the case $\sigma(L)=0$ can be dealt with similarly. For each natural number $d$ we let $V_{d}$ be the space of ${{\mathbb{C}}}A_{L}$-valued polynomials of the form $$\label{eqn: principal part}
\sum_{\lambda\in A_{L}}
\sum_{\substack{-d\leq m <0 \\ m\in q(\lambda)+{{\mathbb{Z}}}}}
c_{\lambda}(m)q^{m}{{{\mathbf e}_{\lambda}}}, \qquad
c_{\lambda}(m) = c_{-\lambda}(m).$$ Then $\dim V_{d} = |A_{L}/\pm 1|\cdot d$. The subspace $M^{!}(L)_{d}$ of ${{M^{!}(L)}}$ is canonically embedded in $V_{d}$ by taking principal parts. Let $S=S_{2-\sigma(L)/2}(\rho_{L}^{\vee})$ be the space of cusp forms of weight $2-\sigma(L)/2$ and type $\rho_{L}^{\vee}$. By Borcherds duality theorem, the subspace $M^{!}(L)_{d}$ of $V_{d}$ is characterized as $$M^{!}(L)_{d} = {{\operatorname{Ker}}}(V_{d}\to S^{\vee}).$$ When $d\gg 0$, $V_{d}\to S^{\vee}$ is surjective ([@Bo00a]), and hence $$\dim M^{!}(L)_{d} = |A_L/\pm 1|\cdot d - \dim S.$$ In particular, we find that $$\dim M^{!}(L)_{d+1} - \dim M^{!}(L)_{d} = |A_{L}/\pm 1|.$$ On the other hand, $M^{!}(L)_{d}$ as a subspace of $M^{!}(L)_{d+1}$ is the kernel of the map $\rho_{d}\colon M^{!}(L)_{d+1} \to {{\mathbb{C}}}(A_{L}/\pm 1)$ that associates to a modular form the coefficients of its principal part in degree $\in [-d-1, -d)$. Therefore $\rho_{d}$ must be surjective when $d\gg 0$. The desired form $f_{\lambda,n}$ can then be obtained as a preimage $\rho_{d}^{-1}({{{\mathbf e}_{\lambda}}}+\mathbf{e}_{-\lambda})$ for a suitable $d$.
We now prove Proposition \[prop: f.g.\].
We first specify a set of generators. Take $f_{0}\in{{M^{!}(L)}}$ whose Fourier expansion is of the form $q^{-d_{1}}\mathbf{e}_{0}+o(q^{-d_{1}})$ for some natural number $d_{1}$. Next, letting $d_{0}$ be as in Lemma \[lem: leading term\], we put $$\Lambda_{1} =
\{ \: f_{\lambda,m} \: | \: \lambda\in A_{L}/\pm 1, \: m\in q(\lambda)+{{\mathbb{Z}}}, \: -d_{0}-d_{1} \leq m < -d_{0} \: \}.$$ Then we take a basis of $M^{!}(L)_{d_{0}}$ and denote it by $\Lambda_{2}$. We shall show that $f_{0}$, $\Lambda_{1}$ and $\Lambda_{2}$ generate ${{M^{!}(L)}}$ as a ${{\mathbb{C}}}$-algebra.
By definition $M^{!}(L)_{d_{0}+d_{1}}$ is generated by $\Lambda_{1}\cup \Lambda_{2}$ as a ${{\mathbb{C}}}$-linear space. The quotient $M^!(L)/M^!(L)_{d_0+d_1}$ is generated as a ${{\mathbb{C}}}$-linear space by any set of modular forms whose Fourier expansion is of the form $q^{n}({{{\mathbf e}_{\lambda}}}+\mathbf{e}_{-\lambda})+o(q^{n})$ where $\lambda$ varies over $A_{L}/\pm 1$ and $n$ varies over $q(\lambda)+{{\mathbb{Z}}}$ with $n<-d_{0}-d_{1}$. Therefore it suffices to show that we can construct such a modular form as a product of $f_{0}$ and elements of $\Lambda_{1}$. Since $f_{0}(\tau){{\downarrow^{L}_{K}}}=q^{-d_1}\mathbf{e}_{0}+o(q^{-d_1})$ and ${{\Theta_{K^{+}}}}(\tau)=\mathbf{e}_{0}^{\vee}+o(1)$, we have $\xi(f_0)=q^{-d_1}+o(q^{-d_1})$. We take $m$ with $m\equiv n$ modulo $d_{1}$ and $-d_0-d_1 \leq m < -d_0$, and put $r=(m-n)/d_1\in {{\mathbb{N}}}$. Then $$\begin{aligned}
& &
f_{0} \ast_{I} \cdots \ast_{I} f_{0} \ast_{I} f_{\lambda,m} \qquad (f_{0} \: \: r \: \textrm{times}) \\
& = &
(q^{-d_1}+o(q^{-d_1}))^{r} (q^{m}({{{\mathbf e}_{\lambda}}}+\mathbf{e}_{-\lambda})+o(q^{m})) \\
& = &
q^{n}({{{\mathbf e}_{\lambda}}}+\mathbf{e}_{-\lambda}) + o(q^{n}). \end{aligned}$$ This gives the desired modular form.
\[remark: f.g. as module\] By a similar (and easier) argument, using the $j$-function in place of $f_{0}$, we see that ${{M^{!}(L)}}$ is also finitely generated as an $M^!$-module. Indeed, multiplication by $j(\tau)=q^{-1}+O(1)$ defines an embedding $M^{!}(L)_{d}/M^{!}(L)_{d-1}\hookrightarrow M^!(L)_{d+1}/M^!(L)_{d}$ for every $d$, which becomes an isomorphism for $d \gg 0$.
By the proof of Proposition \[prop: f.g.\], the number of generators can be bounded above by $$\label{eqn: bound generator}
1 + d_{1}\cdot |A_{L}/\pm 1| + \dim M^!(L)_{d_{0}}
\: \leq \:
1+(d_0+d_1)\cdot |A_{L}/\pm 1|.$$ This upper bound reflects the size of $L$. Indeed, $|A_{L}/\pm 1|$ reflects $|A_{L}|$, and $d_{0}, d_{1}$ reflect $|\sigma(L)|$ by the following well-known property.
\[lem: sgn bound\] If $M^!(L)_{d}\ne \{ 0 \}$, then $|\sigma(L)|\leq 24d$.
If $f\ne 0 \in M^!(L)_{d}$, the product $\Delta^{d}\cdot f$ with the $\Delta$-function is holomorphic also at the cusp. Hence its weight $\sigma(L)/2+12d$ must be nonnegative.
On the other hand, a lower bound leads to the following.
\[prop: finiteness\] Let $p\leq q$ be fixed. Let $N$ be a fixed natural number. Then up to isometry there are only finitely many pairs $(L, I)$ of an even lattice $L$ of signature $(p, q)$ and Witt index $p$ and a maximal isotropic sublattice $I\subset L$ such that the algebra $M^!(L, I)$ can be generated by at most $N$ elements.
In §\[ssec: theta product\], we observed that the dependence on $I$ is finite for a fixed lattice $L$. Hence it is sufficient to prove finiteness of the lattices $L$. Since $f\ast_{I}g=\xi(f)\cdot g$, generators of ${{M^{!}(L)}}$ as an algebra also serve as generators as an $M^!$-module. Since $\dim M^{!}(L)_{d}$ grows like $|A_{L}/\pm 1|\cdot d$ (see the proof of Lemma \[lem: leading term\]), we need at least $|A_{L}/\pm 1|$ generators as an $M^!$-module, and we obtain the bound $$N \: \geq \: |A_{L}/\pm 1| \: > \: |A_{L}|/2.$$ Then our assertion follows from finiteness of even lattices of fixed signature and bounded discriminant.
It is a natural problem whether the finiteness still holds if we let $q$ vary with $p$ fixed. The same statement does not hold for generators *as an $M^!$-module*. Indeed, when $L$ is unimodular, ${{M^{!}(L)}}$ can be generated by one element as an $M^!$-module (cf. Remark \[remark: f.g. as module\]).
We close this subsection with some simple examples.
Assume that the obstruction space $S_{2-\sigma(L)/2}(\rho_{L}^{\vee})$ is trivial. (Such lattices $L$ with $p=2$ are classified in [@B-E-F].) Then every polynomial as in \[eqn: principal part\] is the principal part of some modular form in ${{M^{!}(L)}}$. In this case, using the notation in the proof of Proposition \[prop: f.g.\], we have $d_{0}=0$, $d_{1}=1$, $\Lambda_{2}=\emptyset$, and the modular form $f_{0}$ can be included in $\Lambda_{1}$. Therefore ${{M^{!}(L)}}$ can be generated by modular forms $f_{\lambda}=q^n(\mathbf{e}_{\lambda}+\mathbf{e}_{-\lambda})+O(1)$ with $\lambda\in A_L/\pm1$ and $n\in q(\lambda)+{{\mathbb{Z}}}$, $-1\leq n < 0$. The minimal number of generators is thus equal to $|A_{L}/\pm1|$. The generator $f_{\lambda}$ with $\lambda\ne 0$ is either a left unit element or a left zero divisor according to Proposition \[prop: LUE\] (2).
\[ex: generator\] We go back to Example \[ex: reflective 1\] where $L=pU \oplus \langle -2 \rangle$. The algebra ${{M^{!}(L)}}$ is generated by the two elements $f_{0}=q^{-1}\mathbf{e}_{0}+O(1)$ and $f_{1}=q^{-1/4}\mathbf{e}_{1}+O(1)$ with the relations $f_{1}\ast_{I} f_{1}= 12 f_{1}$ and $f_{1}\ast_{I} f_{0}= 12 f_{0}$. Thus the two basic reflective modular forms for $L$ give minimal generators of the algebra ${{M^{!}(L)}}$.
On the integral part {#ssec: integral part}
--------------------
One of the main interests in ${{M^{!}(L)}}$ lies in the integral part ${{M^{!}(L)_{\mathbb{Z}}}}$, because when $p=2$ Borcherds products can be constructed from modular forms in ${{M^{!}(L)_{\mathbb{Z}}}}$. It seems to be a subtle problem whether ${{M^{!}(L)_{\mathbb{Z}}}}$ is closed under $\Theta$-product. There are examples of $f_{1}, f_{2}\in {{M^{!}(L)_{\mathbb{Z}}}}$ with $f_{1}\ast_{I}f_{2}\in {{M^{!}(L)_{\mathbb{Z}}}}$, but in general there seems to be an obstruction coming from the possibility that the Fourier coefficients of $f\in{{M^{!}(L)_{\mathbb{Z}}}}$ in positive degree might no longer be integral. In this subsection we study some aspects of this problem.
We first give a sufficient condition that guarantees $f_{1}\ast_{I}f_{2}\in {{M^{!}(L)_{\mathbb{Z}}}}$. Since the argument is the same, we work with a general subring $R$ of ${{\mathbb{C}}}$.
\[prop: criterion\] Let $R$ be a subring of ${{\mathbb{C}}}$. Let $f_{1}, f_{2}\in M^!(L)_{R}$ and $f_{i}(\tau) = \sum_{\lambda, n}c_{\lambda}^{i}(n)q^{n}{{{\mathbf e}_{\lambda}}}$ be their Fourier expansion. Let $f_{1}\in M^!(L)_{d}$. Assume that
\(a) the coefficients $c_{\lambda}^{1}(0)$ of the constant term of $f_{1}$ are contained in $R$;
\(b) the coefficients $c_{\lambda}^{2}(n)$ of $f_{2}$ in $n< d$ are contained in $R$.
Then $f_{1}\ast_{I}f_{2}\in M^!(L)_{R}$.
We shall show that $\xi(f_{1})$ has Fourier coefficients in $R$. Since only coefficients of $f_{2}$ in degree $<d$ contribute to the principal part of $f_{1}\ast_{I}f_{2}=\xi(f_{1})f_{2}$, the assertion then follows from the condition (b).
In order to show that $\xi(f_{1})$ has coefficients in $R$, we first note that the principal part of $f_{1}{{\downarrow^{L}_{K}}}$ has coefficients in $R$. By the condition (a), the constant term of $f_{1}{{\downarrow^{L}_{K}}}$ also has coefficients in $R$. Since ${{\Theta_{K^{+}}}}$ is holomorphic and has integral coefficients, we find that the principal part and the constant term of $\xi(f_{1})$ have coefficients in $R$. We write $\xi(f_{1})$ as a polynomial $P(j)$ of the $j$-function $j(\tau)=q^{-1}+744+\cdots$. In view of the fact that $j(\tau)$ has integral coefficients, this implies that the polynomial $P$ has coefficients in $R$. This in turn concludes that $\xi(f_{1})=P(j)$ has Fourier coefficients in $R$.
We apply this criterion in two cases.
\[prop: real\] The real part $M^{!}(L)_{{{\mathbb{R}}}}$ is closed under $\ast_{I}$.
We prove that any modular form $f=\sum_{\lambda,n}c_{\lambda}(n)q^n{{{\mathbf e}_{\lambda}}}$ in $M^!(L)_{{{\mathbb{R}}}}$ has real Fourier coefficients: this enables us to apply Proposition \[prop: criterion\]. We use the results of Bruinier in [@Br] §1.3. Let $F_{\lambda,n}(\tau)$ be the Maass-Poincaré series constructed in [@Br] Proposition 1.10. By [@Br] Proposition 1.12, $f$ can be written as a linear combination of $F_{\lambda,n}(\tau)$ as $$f(\tau) = \frac{1}{2} \sum_{\lambda\in A_{L}} \sum_{\substack{n\in q(\lambda)+{{\mathbb{Z}}} \\ n<0}}
c_{\lambda}(n) F_{\lambda,n}(\tau).$$ Here the non-holomorphic parts $\tilde{F}_{\lambda,n}$ of $F_{\lambda,n}$ cancel out (cf. [@Br] Theorem 1.17). By [@Br] Remark 1.14, the Fourier coefficients of $F_{\lambda,n}$ are real. Since $c_{\lambda}(n)$ are real in $n<0$, this implies that $f$ has real Fourier coefficients.
In general, the arithmetic properties of the Fourier coefficients of Maass-Poincaré series seem to be a subtle problem. See [@B-F-O-R].
Next we let $L$ be unimodular. We show that a natural subgroup of ${{M^{!}(L)_{R}}}$ is closed under $\ast_{I}$. Let $M^{!}(L)_{R}'\subset {{M^{!}(L)_{R}}}$ be the subgroup of those $f\in {{M^{!}(L)_{R}}}$ whose constant term $c(0)$ is also contained in $R$.
\(1) When $R\supset{{\mathbb{Q}}}$, we have $M^!(L)_{R}'=M^{!}(L)_{R}$ by Borcherds duality theorem in [@Bo00b] and the rationality of Fourier coefficients of (scalar-valued) Eisenstein series.
\(2) When $R={{\mathbb{Z}}}$, we have in fact $c(0)\in 2{{\mathbb{Z}}}$ for $f\in M^!(L)_{{{\mathbb{Z}}}}'$ by [@W-W]. Thus, when $p=2$, $M^!(L)_{{{\mathbb{Z}}}}'$ can be thought of as the multiplicative group of Borcherds products of integral weight.
\[prop: Z-part unimodular\] Let $R$ be a subring of ${{\mathbb{C}}}$. When $L$ is unimodular, $M^{!}(L)_{R}'$ is closed under $\ast_{I}$.
By Proposition \[prop: criterion\], it suffices to show that if $f(\tau)=\sum_{n}c(n)q^{n}$ is an element of $M^{!}(L)_{R}'$, its coefficients $c(n)$ in $n>0$ are also contained in $R$. By the same argument as in the proof of Proposition \[prop: criterion\], we find that all Fourier coefficients of $f\cdot \theta_{K^{+}}\in M^!$ are contained in $R$. If we write $\theta_{K^{+}}(\tau)=\sum_{n\geq 0}c^{K}(n)q^{n}$ and notice that $c^{K}(0)=1$, this means that $$c(n)+\sum_{m<n}c(m)c^{K}(n-m)\in R$$ for every $n$. Since we already know that $c(m)\in R$ for $m\leq 0$ and $c^{K}(l)\in {{\mathbb{Z}}}$ for every $l$, induction on $n$ tells us that $c(n)\in R$ for every $n$. This proves our claim.
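The concluding induction is plain back-substitution in a unit-triangular system, since $c^{K}(0)=1$. A minimal numerical sketch (the coefficients of $f$ are hypothetical; for the theta series we borrow the first coefficients $1, 240, 2160, 6720$ of the $E_{8}$ theta series):

```python
# Theta series coefficients c^K(n) with c^K(0) = 1 (here: E8).
tK = [1, 240, 2160, 6720]

# Hypothetical f with principal part q^{-1} and constant term 5;
# the positive-degree coefficients 7, -11 are the ones to be recovered.
secret = {-1: 1, 0: 5, 1: 7, 2: -11}

def product(f, theta, prec):
    """Coefficients of f * theta up to q^prec (dict exponent -> coefficient)."""
    p = {}
    for m, fm in f.items():
        for n, tn in enumerate(theta):
            if m + n <= prec:
                p[m + n] = p.get(m + n, 0) + fm * tn
    return p

prod = product(secret, tK, 2)  # by assumption, has coefficients in R

# Recover c(n) for n > 0 from prod and the known c(m), m <= 0, via
#   c(n) = prod[n] - sum_{m < n} c(m) * c^K(n - m).
c = {-1: 1, 0: 5}
for n in (1, 2):
    c[n] = prod[n] - sum(c[m] * tK[n - m]
                         for m in range(-1, n) if n - m < len(tK))
recovered = (c[1], c[2])
```

Each step subtracts products of quantities already known to lie in $R$ from an element of $R$, which is exactly the induction in the proof.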
When $L$ is unimodular with $p=2$, the multiplicative group of Borcherds products of integral weight has the structure of a commutative ring under $\Theta$-product.
We close this subsection with the remark that the failure of ${{M^{!}(L)_{R}}}$ to be closed under $\ast_{I}$ can be expressed as a $2$-cocycle in group cohomology. We view ${{M^{!}(L)}}/{{M^{!}(L)_{R}}}$ as a ${{M^{!}(L)_{R}}}$-module with trivial action. We define a map $$\phi : {{M^{!}(L)_{R}}}\times {{M^{!}(L)_{R}}} \to {{M^{!}(L)}}/{{M^{!}(L)_{R}}}$$ by $\phi(f_{1}, f_{2})=[f_{1}\ast_{I}f_{2}]$, where $[ \: ]$ denotes the image in ${{M^{!}(L)}}/{{M^{!}(L)_{R}}}$.
\(1) ${{M^{!}(L)_{R}}}$ is closed under $\ast_{I}$ if and only if $\phi\equiv 0$.
\(2) $\phi$ is a $2$-cocycle of the abelian group ${{M^{!}(L)_{R}}}$ with value in ${{M^{!}(L)}}/{{M^{!}(L)_{R}}}$.
\(1) is obvious from the definition of $\phi$. For (2), the cocycle condition is $$\phi(f_{2}, f_{3}) + \phi(f_{1}, f_{2}+f_{3}) =
\phi(f_{1}+f_{2}, f_{3}) + \phi(f_{1}, f_{2}).$$ This holds true by the bilinearity of $\ast_{I}$.
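Explicitly, writing $f\ast_{I}g=\xi(f)\cdot g$ and using the linearity of $\xi$, both sides represent the same class: $$\phi(f_{2}, f_{3}) + \phi(f_{1}, f_{2}+f_{3}) \: = \: [\,\xi(f_{1})f_{2}+\xi(f_{1})f_{3}+\xi(f_{2})f_{3}\,] \: = \: \phi(f_{1}+f_{2}, f_{3}) + \phi(f_{1}, f_{2}).$$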
When $R\subset {{\mathbb{R}}}$, we may replace ${{M^{!}(L)}}/{{M^{!}(L)_{R}}}$ with $M^!(L)_{{{\mathbb{R}}}}/{{M^{!}(L)_{R}}}$ by Proposition \[prop: real\].
Functoriality {#sec: functorial}
=============
In this section we prove that $\Theta$-product is functorial with respect to embedding of lattices if we use quasi-pullback as morphism. The statement is Proposition \[prop: functorial main\], and the proof is given in §\[ssec: finite pullback\] and §\[ssec: general case\]. In §\[ssec: functorial push\] we also prove functoriality with respect to pushforward to a special type of overlattice. Except for Corollaries \[cor: functorial add consequence\] and \[cor: functorial add consequence II\], this section may be read independently of §\[sec: first property\].
Quasi-pullback {#ssec: quasi-pullback}
--------------
Let $L$ be an even lattice of signature $(p, q)$ and $L'$ be a sublattice of $L$ of signature $(p, q')$. We do not assume that $L'$ is primitive in $L$. Following [@Ma], we define a linear map $|_{L'} \colon {{M^{!}(L)}}\to M^{!}(L')$ as follows. Let $N=(L')^{\perp}\cap L$, which is a negative-definite lattice. We write $N^{+}=N(-1)$. The lattice $L'\oplus N$ is of finite index in $L$. Let $f\in {{M^{!}(L)}}$. We first take the pullback $f\!\uparrow_{L}^{L'\oplus N}$, which is an element of $M^!(L'\oplus N)$. Since $\rho_{L'\oplus N} = \rho_{L'}\otimes \rho_{N}$, we can take contraction of $f\!\uparrow_{L}^{L'\oplus N}$ with the $\rho_{N^{+}}$-valued theta series $\Theta_{N^{+}}$ of $N^{+}$. This produces a $\rho_{L'}$-valued weakly holomorphic modular form of weight $\sigma(L')/2$, which we denote by $$\label{eqn: define quasi-pullback}
f|_{L'} = \langle f\!\uparrow_{L}^{L'\oplus N}, \: \Theta_{N^{+}} \rangle \quad \in M^{!}(L').$$ We call $f|_{L'}$ the *quasi-pullback* of $f$ to $L'$. The map $|_{L'} \colon {{M^{!}(L)}}\to M^!(L')$ is $M^!$-linear.
The geometric significance of this operation comes from Borcherds products as follows. Assume that $p=2$ and $f$ has integral principal part, and let $\Psi(f)$ be the Borcherds product associated to $f$ on the Hermitian symmetric domain $\mathcal{D}_{L}$ for $L$. The Hermitian symmetric domain $\mathcal{D}_{L'}$ for $L'$ is naturally embedded in $\mathcal{D}_{L}$. The quasi-pullback of $\Psi(f)$ from $L$ to $L'$, discovered by Borcherds [@Bo95], [@B-K-P-SB], is defined by first dividing $\Psi(f)$ by suitable linear forms to get rid of zeros and poles containing $\mathcal{D}_{L'}$, and then restricting the resulting form to $\mathcal{D}_{L'}\subset \mathcal{D}_{L}$. It is proved in [@Ma] that this quasi-pullback of $\Psi(f)$ coincides with the Borcherds product for $f|_{L'}\in M^{!}(L')$ up to a constant. Thus the operation $|_{L'}$ defined in \[eqn: define quasi-pullback\] can be thought of as a formal ${{\mathbb{C}}}$-linear extension of the quasi-pullback operation on Borcherds products.
We can now state the main result of this §\[sec: functorial\]. We assume that $p\leq q' \leq q$ and both $L$ and $L'$ have Witt index $p$.
\[prop: functorial main\] Let $L'\subset L$ be as above. Let $I$ be a maximal isotropic sublattice of $L$ such that $I_{{{\mathbb{Q}}}}\subset L'_{{{\mathbb{Q}}}}$. We set $I'=I\cap L'$. Then we have $$\label{eqn: functorial}
(f|_{L'})\ast_{I'}(g|_{L'}) = |I/I'|\cdot (f\ast_{I}g)|_{L'}$$ for $f, g \in {{M^{!}(L)}}$. In particular, the map $$|I/I'|^{-1}\cdot |_{L'} : M^!(L, I) \to M^{!}(L', I')$$ is a ring homomorphism.
This means that the assignment $$(L, I) \mapsto M^{!}(L, I)$$ is a contravariant functor from the category of pairs $(L, I)$ to the category of associative ${{\mathbb{C}}}$-algebras, by assigning the morphism $|I/I'|^{-1}\cdot |_{L'}$ to an embedding $(L', I')\hookrightarrow (L, I)$.
The proof of Proposition \[prop: functorial main\] is reduced to the following assertion.
\[prop: functorial\] Let $L'\subset L$ and $I'\subset I$ be as in Proposition \[prop: functorial main\]. We put $K=I^{\perp}\cap L / I$ and $K'=(I')^{\perp}\cap L'/I'$. Let $\xi \colon {{M^{!}(L)}}\to M^{!}$ and $\xi' \colon M^{!}(L')\to M^{!}$ be the maps $\xi= \langle \cdot {{\downarrow^{L}_{K}}}, {{\Theta_{K^{+}}}} \rangle$ and $\xi'= \langle \cdot \! \downarrow^{L'}_{K'}, \Theta_{(K')^{+}} \rangle$ respectively. Then we have $$\xi' \circ |_{L'} = |I/I'| \cdot \xi.$$
Indeed, if we admit Proposition \[prop: functorial\], we can calculate $$\begin{aligned}
(f|_{L'})\ast_{I'}(g|_{L'})
& = &
\xi'(f|_{L'})\cdot (g|_{L'})
= |I/I'| \cdot \xi(f) \cdot (g|_{L'}) \\
& = &
|I/I'| \cdot (\xi(f)\cdot g)|_{L'}
= |I/I'| \cdot (f\ast_{I}g)|_{L'}. \end{aligned}$$ Thus Proposition \[prop: functorial\] implies Proposition \[prop: functorial main\].
Before going on, we note some consequences.
\[cor: functorial add consequence\] Let $\Theta^{\perp}(L)\subset {{M^{!}(L)}}$ and $\Theta^{\perp}(L')\subset M^!(L')$ be the respective left annihilators. Then we have $|_{L'}^{-1}(\Theta^{\perp}(L'))=\Theta^{\perp}(L)$. In particular, we have ${\rm Ker}(|_{L'})\subset \Theta^{\perp}(L)$. The map ${{M^{!}(L)}}/\Theta^{\perp}(L) \to M^!(L')/\Theta^{\perp}(L')$ induced by $|I/I'|^{-1}\cdot |_{L'}$ is an inclusion of ideals in the polynomial ring $M^!={{\mathbb{C}}}[j]$.
The equality $|_{L'}^{-1}(\Theta^{\perp}(L'))=\Theta^{\perp}(L)$ follows from Proposition \[prop: functorial\]. Since $\xi$ and $\xi'$ embed ${{M^{!}(L)}}/\Theta^{\perp}(L)$ and $M^!(L')/\Theta^{\perp}(L')$ as ideals in $M^!$ respectively, the last assertion follows.
\[cor: functorial add consequence II\] Assume that $L$ is unimodular and let $R$ be a subring of ${{\mathbb{C}}}$. Then the subgroup $|I/I'|^{-1}\cdot M^!(L)_{R}'|_{L'}$ of $M^!(L')$ is closed under $\ast_{I'}$. In particular, $M^{!}(L)_{R}'|_{L'}\subset M^{!}(L')_{R}$ is closed under $\ast_{I'}$.
This follows from Propositions \[prop: Z-part unimodular\] and \[prop: functorial main\].
The proof of Proposition \[prop: functorial\] occupies §\[ssec: finite pullback\] and §\[ssec: general case\]. It is divided into two parts, reflecting the fact that the quasi-pullback $|_{L'}$ is the composition of the two operators $\uparrow_{L}^{L'\oplus N}$ and $\langle \cdot, \Theta_{N^{+}} \rangle$. In §\[ssec: finite pullback\] we consider the case when $L'$ is of finite index in $L$. In §\[ssec: general case\] we consider the case when the splitting $L=L'\oplus N$ holds. The proof in the general case is a combination of these two special cases.
The case of finite pullback {#ssec: finite pullback}
---------------------------
In this subsection we prove Proposition \[prop: functorial\] in the case when $L'$ is of finite index in $L$. In this case, the quasi-pullback $|_{L'}$ is the operation $\uparrow^{L'}_{L}$, and Proposition \[prop: functorial\] takes the following form.
\[prop: finite pullback\] When $L'\subset L$ is of finite index, we have for $f \in {{M^{!}(L)}}$ $$\xi'(f\!\uparrow^{L'}_{L}) = |I/I'| \cdot \xi(f).$$
This is a consequence of the following calculation in finite quadratic modules.
\[lem: finite pullback FQM\] Let $A$ be a finite quadratic module and $I_1, I_2 \subset A$ be two isotropic subgroups. We set $A_1=I_1^{\perp}/I_1$, $A_2=I_2^{\perp}/I_2$ and $$A'= (I_{1}^{\perp}\cap I_{2}^{\perp}) / ((I_{1}\cap I_{2}^{\perp})+(I_{2}\cap I_{1}^{\perp})).$$ Let $I_{2}'=(I_{2}\cap I_{1}^{\perp})/(I_{1}\cap I_{2})$ be the image of $I_{2}\cap I_{1}^{\perp}$ in $A_{1}$, and $I_{1}'=(I_{1}\cap I_{2}^{\perp})/(I_{1}\cap I_{2})$ be the image of $I_{1}\cap I_{2}^{\perp}$ in $A_{2}$. Then, under the natural isomorphism $$\label{eqn: description of A'}
A' \simeq (I_{2}')^{\perp}\cap A_{1}/I_{2}' \simeq (I_{1}')^{\perp}\cap A_{2}/I_{1}',$$ we have $$\label{eqn: pull push commute FQM}
\downarrow^{A}_{A_{2}} \circ \uparrow^{A}_{A_{1}} =
| I_{1} \cap I_{2} | \cdot \uparrow^{A_{2}}_{A'} \circ \downarrow^{A_{1}}_{A'}$$ as linear maps ${{\mathbb{C}}}A_{1} \to {{\mathbb{C}}}A_{2}$.
We postpone the proof of Lemma \[lem: finite pullback FQM\] for a moment, and first explain how Proposition \[prop: finite pullback\] is deduced from Lemma \[lem: finite pullback FQM\].
Let $K=I^{\perp}\cap L /I$ and $K'=(I')^{\perp}\cap L' /I'$. We have a canonical embedding $K'\hookrightarrow K$ of finite index. Since ${{\Theta_{K^{+}}}}=\Theta_{(K')^{+}}\!\downarrow^{(K')^{+}}_{K^{+}}$, we find that $$\xi(f)
= \langle f{{\downarrow^{L}_{K}}}, {{\Theta_{K^{+}}}} \rangle
= \langle f{{\downarrow^{L}_{K}}}, \: \Theta_{(K')^{+}}\!\downarrow^{(K')^{+}}_{K^{+}} \rangle
= \langle f{{\downarrow^{L}_{K}}}\uparrow_{K}^{K'}, \: \Theta_{(K')^{+}} \rangle.$$ On the other hand, we have $$\xi'(f \! \uparrow^{L'}_{L}) =
\langle f {{\uparrow_{L}^{L'}}}\downarrow^{L'}_{K'}, \: \Theta_{(K')^{+}} \rangle.$$ Thus it is sufficient to show that $$\label{eqn: pull push commute lattice}
\downarrow^{L'}_{K'} \circ {{\uparrow_{L}^{L'}}} = |I/I'| \uparrow^{K'}_{K} \circ {{\downarrow^{L}_{K}}}$$ as linear maps ${{\mathbb{C}}}A_{L}\to {{\mathbb{C}}}A_{K'}$.
We apply Lemma \[lem: finite pullback FQM\] as follows. Let $I^{\ast}=I_{{{\mathbb{Q}}}}\cap L^{\vee}$ and $(I')^{\ast} = I_{{{\mathbb{Q}}}}\cap (L')^{\vee}$. We set $A=A_{L'}$, $I_{1}=L/L'$ and $I_{2}=(I')^{\ast}/I'$. Then $A_{1}\simeq A_{L}$ and $A_{2}\simeq A_{K'}$. We have $I_{2}\cap I_{1}^{\perp}=I^{\ast}/I'$ and $$I_{1} \cap I_{2}
= ( L\cap \langle L', (I')^{\ast} \rangle )/L'
= \langle L', I \rangle /L'
= I/I'.$$ This implies that $I_{2}'=I^{\ast}/I \subset A_{L}$ and $A'= A_{K}$. Thus we have $$\uparrow^{A}_{A_{1}} = \uparrow^{L'}_{L}, \quad
\downarrow^{A}_{A_{2}} = \downarrow^{L'}_{K'}, \quad
\downarrow^{A_{1}}_{A'} = \downarrow^{L}_{K}, \quad
\uparrow^{A_{2}}_{A'} = \uparrow^{K'}_{K},$$ hence \[eqn: pull push commute FQM\] implies \[eqn: pull push commute lattice\].
We now prove Lemma \[lem: finite pullback FQM\].
We first justify the isomorphism \[eqn: description of A'\], which also implies that $A'$ is nondegenerate. We write $\hat{I}_{1}'=I_{1}\cap I_{2}^{\perp}$ and $\hat{I}_{2}'=I_{2}\cap I_{1}^{\perp}$. We shall establish the following commutative diagram: $$\label{eqn: CD}
\xymatrix{
& (\hat{I}_2')^{\perp}\cap I_{1}^{\perp} \ar[rd]^{p_1} & \\
(\hat{I}_{1}')^{\perp}\cap I_{2}^{\perp} \ar[rd]_{p_2} &
I_{1}^{\perp}\cap I_{2}^{\perp} \ar[r]^{p_1'} \ar@{^{(}-_>}[u] \ar[d]^{p_2'} \ar@{_{(}-_>}[l]
& (I_{2}')^{\perp}\cap A_{1} \ar[d]_{q_{2}} \\
& (I_{1}')^{\perp}\cap A_{2} \ar[r]_{q_1} & A'
}$$ Here $p_i$ is the quotient map by $I_{i}$ and $p_{i}'$ is the restriction of $p_{i}$. Since we have $\hat{I}_{1}' = I_{1} \cap (I_{1}^{\perp}\cap I_{2}^{\perp})$ and $$I_{1} / \hat{I}_{1}' \simeq
((\hat{I}_2')^{\perp}\cap I_{1}^{\perp}) / (I_{1}^{\perp}\cap I_{2}^{\perp})
\simeq (\hat{I}_{2}')^{\perp}/I_{2}^{\perp},$$ we see that $p_{1}'$ is surjective and is the quotient map by $\hat{I}_{1}'$. This induces the map $q_{2}\colon (I_2')^{\perp}\cap A_{1} \to A'$ as the quotient map by $I_2'$. Similarly, we find that $p_2'$ is the quotient map by $\hat{I}_{2}'$ and $q_{1}$ is induced as the quotient map by $I_{1}'$.
We now prove \[eqn: pull push commute FQM\]. Let $\lambda\in A_{1}$. It suffices to show that $$\label{eqn: pullback FQM}
{{{\mathbf e}_{\lambda}}}\uparrow_{A_{1}}^{A}\downarrow^{A}_{A_{2}} =
|I_{1}\cap I_{2}| \cdot {{{\mathbf e}_{\lambda}}}\downarrow^{A_{1}}_{A'}\uparrow^{A_{2}}_{A'}.$$ When $\lambda\not\in (I_{2}')^{\perp}$, we have $\mathbf{e}_{\lambda}\! \downarrow^{A_{1}}_{A'}=0$. On the other hand, we have $(\tilde{\lambda}, \hat{I}_{2}')\not\equiv 0$ for every $\tilde{\lambda}\in I_{1}^{\perp}$ in the inverse image of $\lambda$. In particular, we have $(\tilde{\lambda}, I_2)\not\equiv 0$ and hence $\mathbf{e}_{\tilde{\lambda}}\downarrow^{A}_{A_{2}}=0$. This implies that ${{{\mathbf e}_{\lambda}}}\uparrow_{A_{1}}^{A}\downarrow^{A}_{A_{2}}=0$.
Next let $\lambda\in (I_{2}')^{\perp}$. By the above commutative diagram, we can choose $\tilde{\lambda}\in I_1^{\perp}\cap I_2^{\perp}$ such that $p_{1}'(\tilde{\lambda})=\lambda$. Then $$\label{eqn: push then pull}
\mathbf{e}_{\lambda}\downarrow^{A_{1}}_{A'} \uparrow_{A'}^{A_{2}} \: = \:
\mathbf{e}_{q_{2}(\lambda)}\uparrow^{A_{2}}_{A'} \: = \:
\sum_{\mu'\in I_{1}'} \mathbf{e}_{p_{2}'(\tilde{\lambda})+\mu'}.$$ On the other hand, we have $$\label{eqn: pull then push}
\mathbf{e}_{\lambda}\uparrow_{A_{1}}^{A}\downarrow^{A}_{A_{2}}
\: = \:
\sum_{\mu\in I_{1}} \mathbf{e}_{\tilde{\lambda}+\mu}\downarrow^{A}_{A_{2}}
\: = \:
\sum_{\mu\in \hat{I}_{1}'} \mathbf{e}_{p_{2}(\tilde{\lambda}+\mu)}
\: = \:
\sum_{\mu\in \hat{I}_{1}'} \mathbf{e}_{p_{2}'(\tilde{\lambda})+p_{2}'(\mu)}.$$ Here we used the equality $(\tilde{\lambda}+I_{1}) \cap I_{2}^{\perp} = \tilde{\lambda}+\hat{I}_{1}'$. Since the map $p_{2}'\colon \hat{I}_{1}'\to I_{1}'$ is the quotient map by $I_{1}\cap I_{2}$, its fibers consist of $|I_{1}\cap I_{2}|$ elements. Comparing \[eqn: push then pull\] with \[eqn: pull then push\], we obtain the desired equality \[eqn: pullback FQM\].
The split case {#ssec: general case}
--------------
Next we prove Proposition \[prop: functorial\] in the case when the splitting $L=L'\oplus N$ holds. In this case, $\uparrow^{L'\oplus N}_{L}$ is the identity and $I'$ coincides with $I$, so Proposition \[prop: functorial\] takes the following form.
\[prop: split case\] When the splitting $L=L'\oplus N$ holds, we have for $f \in {{M^{!}(L)}}$ $$\xi'(\langle f, \Theta_{N^{+}} \rangle) = \xi(f).$$
Since $K=K'\oplus N$, we have ${{\Theta_{K^{+}}}}=\Theta_{(K')^{+}} \otimes \Theta_{N^+}$ under the natural isomorphism $\rho_{K^+}\simeq \rho_{(K')^{+}}\otimes \rho_{N^+}$. Therefore $$\begin{aligned}
\xi'(\langle f, \Theta_{N^{+}} \rangle)
& = &
\langle \langle f, \Theta_{N^{+}} \rangle \! \downarrow^{L'}_{K'}, \: \Theta_{(K')^{+}} \rangle
=
\langle \langle f{{\downarrow^{L}_{K}}}, \Theta_{N^{+}} \rangle, \Theta_{(K')^{+}} \rangle \\
& = &
\langle f{{\downarrow^{L}_{K}}}, \Theta_{N^{+}}\otimes \Theta_{(K')^{+}} \rangle
= \xi(f). \end{aligned}$$ This proves the desired equality.
We can now prove Proposition \[prop: functorial\] in the general case.
Let $L'\oplus N\subset L$ and $I'\subset I$ be as in Proposition \[prop: functorial\]. We write $L''=L'\oplus N$, $K''=(I')^{\perp}\cap L''/I'$ and $\xi''= \langle \cdot\!\downarrow^{L''}_{K''}, \Theta_{(K'')^{+}} \rangle$. By using Proposition \[prop: split case\] for $L'\subset L''$ and Proposition \[prop: finite pullback\] for $L''\subset L$, we see that $$\xi'(f|_{L'})
= \xi'(\langle f\!\uparrow_{L}^{L''}, \Theta_{N^{+}} \rangle )
= \xi''(f\!\uparrow_{L}^{L''})
= |I/I'|\cdot \xi(f).$$ This proves Proposition \[prop: functorial\] in the general case.
Special finite pushforward {#ssec: functorial push}
--------------------------
We close this section by noticing that $\Theta$-product is also covariantly functorial with respect to pushforward to a special type of overlattices. Let $I\subset L$ be as before.
\[prop: functorial push\] Let $L'$ be a sublattice of $L$ of finite index. Assume that $L= \langle L', I \rangle$. We set $I'=I\cap L'$. Then we have $$(f\! \downarrow^{L'}_{L})\ast_{I}(g\! \downarrow^{L'}_{L}) =
(f\ast_{I'}g)\! \downarrow^{L'}_{L}$$ for $f, g\in M^!(L')$.
We use the notation in the proof of Proposition \[prop: finite pullback\]. Since $I_1=L/L'$ coincides with $I_1\cap I_2=I/I'$, we have $I_1 \subset I_2$. Hence $I_{1}'=\{ 0 \}$ and so the canonical embedding $K'\hookrightarrow K$ is an isomorphism. Moreover, since $I_{2}=(I')^{\ast}/I'$ coincides with $I_{2}\cap I_{1}^{\perp}$, we have $(I')^{\ast}=I^{\ast}$ and hence $\langle L, I^{\ast} \rangle = \langle L', (I')^{\ast} \rangle$. These equalities imply that $\downarrow^{L'}_{K'} \: = \: \downarrow^{L}_{K} \circ \downarrow^{L'}_{L}$. Therefore we have $$\xi'(f) =
\langle f\! \downarrow^{L'}_{K'}, \: \Theta_{(K')^{+}} \rangle =
\langle f\! \downarrow^{L'}_{L} \downarrow^{L}_{K}, \: \Theta_{K^{+}} \rangle =
\xi(f\! \downarrow^{L'}_{L}).$$ As in the case of quasi-pullback, this implies $$\begin{aligned}
(f\! \downarrow^{L'}_{L})\ast_{I}(g\! \downarrow^{L'}_{L})
& = &
\xi(f\! \downarrow^{L'}_{L})\cdot (g\! \downarrow^{L'}_{L})
\: = \: \xi'(f)\cdot (g\! \downarrow^{L'}_{L}) \\
& = &
(\xi'(f)\cdot g)\! \downarrow^{L'}_{L}
\: = \: (f\ast_{I'}g)\! \downarrow^{L'}_{L}. \end{aligned}$$ This proves Proposition \[prop: functorial push\].
[^1]: Supported by JSPS KAKENHI 17K14158 and 20H00112.
---
abstract: |
Let $\D$ be a masa in $\B(\H)$ where $\H$ is a separable Hilbert space. We find real numbers $\eta_0<\eta_1<\eta_2<\dots<\eta_6$ so that for every bounded, normal $\D$-bimodule map $\Phi$ on $\B(\H)$ either $\|\Phi\|>\eta_6$, or $\|\Phi\|=\eta_k$ for some $k\in
\{0,1,2,3,4,5,6\}$. When $\D$ is totally atomic, these maps are the idempotent Schur multipliers and we characterise those with norm $\eta_k$ for $0\leq k\leq 6$. We also show that the Schur idempotents which keep only the diagonal and superdiagonal of an $n\times n$ matrix, or of an $n\times (n+1)$ matrix, both have norm $\frac2{n+1}\cot(\frac\pi{2(n+1)})$, and we consider the average norm of a random idempotent Schur multiplier as a function of dimension. Many of our arguments are framed in the combinatorial language of bipartite graphs.\
[*Keywords:*]{} idempotent Schur multiplier, normal masa bimodule map, Hadamard product, norm, bipartite graph\
[*MSC (2010):*]{} [47A30, 15A60, 05C50]{}
address: |
School of Mathematical Sciences\
University College Dublin\
Dublin 4\
Ireland
author:
- 'Rupert H. Levene'
title: Norms of idempotent Schur multipliers
---
Introduction {#sec:intro}
============
Let $\bF$ be either $\bR$ or $\bC$, and let $m,n\in\bN\cup\{\aleph_0\}$. If $A=[a_{ij}]$ and $X=[x_{ij}]$ are $m\times n$ matrices with entries in $\bF$, then the Schur product of $A$ and $X$ is their entrywise product: $$A\schur X=[a_{ij}x_{ij}].$$ This is also known as the Hadamard product. Let $\B=\B(\ell^2_n,\ell^2_m)$ be the space of matrices defining bounded linear operators $\ell^2_n\to\ell^2_m$, where $\ell^2_k$ is the $k$-dimensional Hilbert space of square-summable $\bF$-valued sequences. An $m\times n$ matrix $A$ with entries in $\bF$ is called a Schur multiplier if $X\mapsto A\schur X$ leaves $\B$ invariant. It then follows that Schur multiplication by $A$ defines a bounded linear map $\B\to \B$, so the Schur norm of $A$ given by $$\|A\|_\schur=\sup\{ \|A\schur X\|_\B\colon {{X\in\B,\ \|X\|_\B\leq
1}}\}$$ is finite. Under matrix addition, the Schur product $\bullet$ and the norm $\|\cdot\|_\bullet$, the set of all $m\times n$ Schur multipliers forms a unital commutative semisimple Banach algebra. Several properties of Schur multipliers and the norm $\|\cdot\|_{\bullet}$ are known; see for example [@bennett; @mathias; @dav-don]. Here, we focus on the norms of the idempotent elements of this algebra: those Schur multipliers $A$ for which every entry of $A$ is either $0$ or $1$.
If $S\subseteq \bF$, then we write $M_{m,n}(S)$ for the set of all $m\times n$ matrices with entries in $S$. For $m,n\in\bN$, consider the finite sets of non-negative real numbers $$\N(m,n)=\{\|A\|_\schur \colon A\in M_{m,n}(\{0,1\})\}.$$ We will see in Remark \[rk:real-complex\] below that this set does not depend on whether $\bF=\bR$ or $\bF=\bC$. Adding rows or columns of zeros to a matrix does not change its Schur norm, so if $n\leq n'$ and $m\leq m'$, then $\N(m,n)\subseteq \N(m',n')$. We will be interested in the set $$\N=\N(\aleph_0,\aleph_0)$$ consisting of the norms of all idempotent Schur multipliers on $\B(\ell^2)$. Every element of $\N$ is the supremum of a sequence in $\bigcup_{m,n\in\bN}\N(m,n)$, obtained by considering the Schur norms of the upper-left hand corners of the corresponding infinite $0$–$1$ matrix.
It has been known for some time that $\N$ is closed under multiplication (consider $A_1\otimes A_2$) and under suprema (consider $\bigoplus_{i} A_i$), that $\N$ is not bounded above [@kwapien-pel] and that $\N$ contains accumulation points [@bcd]. On the other hand, many basic properties of $\N$ seem to be unknown. For example: is $\N$ closed? Does $\N$ have non-empty interior? Might we have $\N \supseteq [a,\infty)$ for some $a\geq 0$? Or, in the opposite direction, is $\N$ actually countable?
We say that a non-empty open interval $(a,b)$ is a gap in $\N$ if $a,b\in \N$ but $(a,b)\cap \N=\emptyset$. The idempotent elements $p$ of any Banach algebra satisfy $$\|p\|=\|p^2\|\leq \|p\|^2,$$ so if $\|p\|\leq 1$ then $\|p\|\in \{0,1\}$. In particular, this shows that $(0,1)$ is a gap in $\N$. However, $\N$ contains further gaps, a perhaps unexpected phenomenon. Indeed, Livschits [@livschits] proves that $$\{0,1,\sqrt{4/3}\}\subseteq \N\subseteq \{0,1\}\cup
[\sqrt{4/3},\infty),$$ so the open interval $(1,\sqrt{4/3})$ is also a gap in $\N$. Livschits’ theorem has since been generalised by Katavolos and Paulsen [@kat-paulsen], and has been recently used by Forrest and Runde to describe certain ideals of the Fourier algebra of a locally compact group [@forrest-runde].
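As a numerical aside (not part of any proof here), Livschits' value $\sqrt{4/3}$ is attained by the $2\times 2$ multiplier $\left[\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right]$, and already the supremum of $\|A\schur X\|$ over rotation matrices $X$ reproduces it. The following sketch, with a helper name of our own choosing, scans rotations to produce a matching lower bound.

```python
import numpy as np

def schur_norm_lower_bound(A, num_angles=20001):
    """Lower-bound ||A||_schur for a real 2x2 matrix A by maximising
    ||A o X|| over rotation matrices X; every rotation is a
    contraction, so each value ||A o X|| is a valid lower bound."""
    best = 0.0
    for t in np.linspace(0.0, np.pi, num_angles):
        c, s = np.cos(t), np.sin(t)
        X = np.array([[c, -s], [s, c]])
        best = max(best, np.linalg.norm(A * X, 2))  # A * X is the Schur product
    return best

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lb = schur_norm_lower_bound(A)
print(lb, np.sqrt(4.0 / 3.0))  # the two values agree to several decimal places
```

A short calculus exercise shows the maximum over rotations occurs at $\cos^2 t = 2/3$ and equals $\sqrt{4/3}$ exactly, so the scan is sharp in this case.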
We will show that there are at least four further gaps:
\[thm:gaps\] Consider the real numbers $\eta_0<\eta_1<\eta_2<\eta_3<\eta_4<\eta_5<\eta_6$ given by $$\begin{gathered}
\eta_0=0,\quad
\eta_1=1,\quad
\eta_2=\sqrt{\frac43},\quad
\eta_3=\frac{1+\sqrt2}2,\\
\eta_4=\frac1{15}\sqrt{169+38\sqrt{19}},\quad
\eta_5=\sqrt{\frac 32},\quad
\eta_6=\frac25\sqrt{5+2\sqrt5}.\end{gathered}$$ We have $$\{\eta_0,\eta_1,\eta_2,\eta_3,\eta_4,\eta_5,\eta_6\}\subseteq \N\subseteq\{\eta_0,\eta_1,\eta_2,\eta_3,\eta_4,\eta_5\}\cup [\eta_6,\infty),$$ so $(\eta_{j-1},\eta_{j})$ is a gap in $\N$ for $1\leq j\leq 6$.
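As a sanity check on the constants (our own numerical aside), one can confirm that the $\eta_k$ are strictly increasing and that $\eta_6$ sits below $4/\pi$, a value we will meet in Section \[sec:snakes\]:

```python
import math

eta = [
    0.0,                                                # eta_0
    1.0,                                                # eta_1
    math.sqrt(4.0 / 3.0),                               # eta_2, Livschits' value
    (1.0 + math.sqrt(2.0)) / 2.0,                       # eta_3
    math.sqrt(169.0 + 38.0 * math.sqrt(19.0)) / 15.0,   # eta_4
    math.sqrt(3.0 / 2.0),                               # eta_5
    (2.0 / 5.0) * math.sqrt(5.0 + 2.0 * math.sqrt(5.0)),  # eta_6
]

assert all(eta[k] < eta[k + 1] for k in range(6)), "eta_k strictly increasing"
assert eta[6] < 4.0 / math.pi
print([round(x, 6) for x in eta])
```

Numerically, $\eta_2\approx 1.1547$, $\eta_3\approx 1.2071$, $\eta_4\approx 1.2196$, $\eta_5\approx 1.2247$ and $\eta_6\approx 1.2311$, so the last four gaps are quite narrow.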
Since it is fundamental to many of the calculations that follow, we recall here the connection between the problem of finding $\|A\|_{\schur}$ and factorisations $A=S^*R$. If $m,n\in\bN$ and $A\in M_{m,n}(\bC)$, the well-known Haagerup estimate states $$\|A\|_{\schur} \leq \|W\|\,\|V\|\quad\text{where}\quad A\schur
X=\sum_{j=1}^k W_jXV_j\text{ for all $X\in M_{m,n}(\bC)$}.$$ Here $k$ is a natural number, $W$ is a block row of $m\times m$ matrices $W_1,W_2,\dots,W_k$ and $V$ is a block column of $n\times n$ matrices $V_1,V_2,\dots,V_k$; the norms of $V$ and $W$ are computed by allowing them to act as linear operators between Hilbert spaces of the appropriate finite dimensions. Moreover, the norm $\|A\|_{\schur}$ is the minimum of these estimates $\|W\|\,\|V\|$. Stated in this generality, the same is true for an arbitrary elementary operator on $M_{m,n}(\bC)$; for Schur multipliers, the minimum is attained by a row $W$ and a column $V$ with $k\leq \min\{m,n\}$ for which the entries of $W$ and $V$ are all *diagonal* matrices. We can then rewrite the Haagerup estimate in the compact form $$\|A\|_{\schur} \leq c(S) c(R) \quad \text{where}\quad A=S^*R$$ by taking $R$ to be the $k\times n$ matrix whose rows are the diagonals of the entries of $V$, and $S$ to be the $k\times m$ matrix whose rows are the complex conjugates of the diagonals of the entries of $W$, and defining $c(R)$ and $c(S)$ to be the maximum of the $\ell^2$-norms of the columns of the corresponding matrices $R$ and $S$. This notation comes from [@ang-cow-nar; @cowen-et-al].
The structure of this paper is as follows. We will use the combinatorial language of bipartite graphs to describe idempotent Schur multipliers, and this is explained in Section \[sec:bipartite\]. Section \[sec:basicresults\] briefly recalls some basic results about the norms of general Schur multipliers, and casts them in this language. Section \[sec:snakes\] is concerned with the calculation of the norms of the idempotent Schur multipliers corresponding to simple paths; these are the maps which keep only the main diagonal and superdiagonal elements of a matrix. Somewhat unexpectedly, we get the same answer in the $n\times
n$ and the $n\times (n+1)$ cases. In Section \[sec:calcs\] we compute or estimate the norms of some “small” idempotent Schur multipliers. Section \[sec:proofmain\] uses these results and simple combinatorial arguments to characterise the Schur idempotents with norm $\eta_k$ for $1\leq k\leq 6$, and hence to prove Theorem \[thm:gaps\]. Using work of Katavolos and Paulsen [@kat-paulsen], this allows us to show in Section \[sec:cbn\] that these gaps persist in the set of norms of all bounded, normal, idempotent masa bimodule maps on $\B(\H)$ where $\H$ is a separable Hilbert space. Finally, in Section \[sec:random\] we estimate the average Schur norm of a random Schur idempotent, in which each entry is chosen independently to be $1$ with probability $p$ and $0$ with probability $1-p$.
Bipartite graphs {#sec:bipartite}
================
Let $m,n\in\bN\cup \{\aleph_0\}$, and consider an $m\times n$ matrix $A=[a_{ij}]$ where each $a_{ij}\in
\{0,1\}$. To $A$ we associate an undirected countable bipartite graph $G=G(A)$, specified as follows. The vertex set $V(G)$ is the disjoint union of two sets, $R$ and $C$, where $|R|=m$ and $|C|=n$. Fixing enumerations $R=\{r_1,r_2,\dots\}$ and $C=\{c_1,c_2,\dots\}$, we define the edge set of $G$ to be $$E(G)=\big\{ (r_i,c_j) \colon a_{ij}=1\big\}.$$ For example, if $$A=
\begin{bmatrix}
1&1&0&0\\0&1&1&0\\0&0&0&1
\end{bmatrix},$$ then the corresponding graph is $$G(A)=\ger{\nn11\nn12\nn22\nn23\nn34}$$ where we have drawn the set of “row vertices” $R=\{r_1,r_2,r_3\}$ above the “column vertices” $\{c_1,c_2,c_3,c_4\}$. In general, $G$ will be a bipartite graph with bipartition $(R,C)$, which simply means that no edges join an element of $R$ to an element of $C$. We call such a graph an $(R,C)$-bipartite graph. Clearly the map $A\mapsto G(A)$ is a bijection from the set of all $m\times n$ matrices of $0$s and $1$s onto $\Gamma(R,C)$, the set of all $(R,C)$-bipartite graphs. We remark in passing that in the linear algebra and spectral graph theory literature, $A$ is called the biadjacency matrix of $G(A)$.
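The correspondence $A\mapsto G(A)$ is easy to realise in code; the following sketch (with naming conventions of our own) recovers the edge set of the example above.

```python
def bipartite_graph(A):
    """Edge set of the (R, C)-bipartite graph G(A) attached to a
    0-1 matrix A: row i gives vertex r_i, column j gives vertex c_j."""
    return {(f"r{i + 1}", f"c{j + 1}")
            for i, row in enumerate(A)
            for j, a in enumerate(row) if a == 1}

A = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 0, 1]]
print(sorted(bipartite_graph(A)))
# the five edges r1c1, r1c2, r2c2, r2c3, r3c4, matching the picture above
```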
We will write $A=M(G)$ to mean that $G=G(A)$, and we adopt the shorthand $$\|G\|:=\|M(G)\|_{\schur}.$$ In particular, if $R$ and $C$ are countably infinite sets, then $$\N=\{\|G\|\colon G\in \Gamma(R,C)\}.$$
More generally, if $X$ and $Y$ are any sets and $G\subseteq X\times
Y$, then we may think of $G$ as a bipartite graph whose vertex set $V(G)$ is the disjoint union of $X$ and $Y$, and whose edge set is $E(G)=G$. We write $\Gamma(X,Y)$ for the power set of $X\times Y$, viewed as the collection of all such bipartite graphs.
If $G\in \Gamma(X,Y)$ and $G'\in \Gamma(X',Y')$, then we say that the graphs $G$ and $G'$ are isomorphic if there is an isomorphism of bipartite graphs from $G$ to $G'$. This means that there is a bijection $\theta\colon V(G)\to V(G')$ which either maps $X$ onto $X'$ and $Y$ onto $Y'$ or maps $X$ onto $Y'$ and $Y$ onto $X'$, so that $\theta$ induces a bijection from $E(G)$ onto $E(G')$. We do not distinguish between isomorphic graphs, so for example we write $G=G'$ if $G$ and $G'$ are merely isomorphic.
If $G_0\in \Gamma(X_0,Y_0)$ and $G\in\Gamma(X,Y)$, then $G_0$ is an induced subgraph of $G$ if $X_0\subseteq X$, $Y_0\subseteq Y$ and for $x_0\in X_0$ and $y_0\in Y_0$ we have $$(x_0,y_0)\in E(G_0)\iff (x_0,y_0)\in E(G).$$ In other words, $G_0=G\cap (X_0\times Y_0)$; we will abbreviate this as $G_0=G[X_0,Y_0]$.
If we merely have $$(x_0,y_0)\in E(G_0)\implies (x_0,y_0)\in E(G),$$ so that $G_0$ may be obtained by removing some edges from an induced subgraph of $G$, then we say that $G_0$ is a subgraph of $G$. We will write $G_0\leq G$ or $G\geq G_0$ to mean that $G_0$ (or a graph isomorphic to $G_0$) is an induced subgraph of $G$; and we will write $G_0\subseteq G$ to mean that $G_0$ (or a graph isomorphic to $G_0$) is a subgraph of $G$. Similarly, we write $G_0< G$ to mean that $G_0\leq G$ but $G_0$ is not isomorphic to $G$.
Let $G$ be a graph and let $v$ be a vertex of $G$. The set $N(v)$ of neighbours of $v$ in $G$ consists of all vertices joined to $v$ by an edge of $G$. The degree $\deg(v)$ of $v$ in $G$ is the cardinality of $N(v)$. If the vertices of $G$ have bounded degree, then we write $$\deg(G)=\max_{v\in V(G)}\deg(v).$$
We say that vertices $v,w$ in $G$ are twins in $G$ if $N(v)=N(w)$. A graph $G$ is twin-free if no pair of distinct vertices are twins.
\[prop:dupe-free\] Any graph $G$ has a maximal twin-free induced subgraph $\operatorname{tf}(G)$, which is unique up to graph isomorphism. If $G$ is bipartite, then so is $\operatorname{tf}(G)$.
Being twins is an equivalence relation on the vertices of $G$. If we choose a complete set of equivalence class representatives, then the corresponding induced subgraph of $G$ is twin-free, and by construction it is maximal with respect to $\leq$ among the twin-free induced subgraphs of $G$. Passing from one choice of equivalence class representatives to another produces an isomorphism of graphs. On the other hand, if $v$ and $w$ are any two distinct vertices in a twin-free induced subgraph $S\leq
G$, then $v$ and $w$ are not twins in $S$, so they cannot be twins in $G$. So the vertices of $S$ all lie in different equivalence classes, so $S$ is an induced subgraph of one of the maximal induced subgraphs we have described. Since any subgraph of a bipartite graph is bipartite, the second assertion is trivial.
Note that $M(\tf(G))$ is obtained from $M(G)$ by repeatedly deleting duplicate rows and columns.
Let $G$ be any graph. If $v,v'$ are distinct vertices of $G$, then a path in $G$ from $v$ to $v'$ of length $k$ is a finite sequence $(v_0,v_1,v_2,\dots,v_k)$ of vertices of $G$, where $v=v_0$ and $v'=v_k$, so that $v_j$ is joined by an edge in $G$ to $v_{j+1}$ for $0\leq j<k$. This is a simple path if no vertex appears twice. The distance between $v$ and $v'$ is the smallest possible length of such a path in $G$. Being joined by some path in $G$ is an equivalence relation on the vertices of $G$; by a connected component of $G$ we mean an equivalence class for this relation, and we say that $G$ is connected if it is a connected component of itself.
It is easy to see that:
\[lem:df-conn\] A graph $G$ is connected if and only if $\operatorname{tf}(G)$ is connected.
The size $|G|$ of a graph $G$ is the cardinality of its vertex set. We say that $G$ is finite if $|G|<\infty$. Let $\F(G)$ be the set $$\F(G) = \{F\leq G\colon \text{$F$ is finite, connected and
twin-free}\}.$$We will use the following observation in Section \[sec:cbn\].
\[lem:finite-connected-subgraphs\] Let $X,Y$ be sets and let $G\in \Gamma(X,Y)$ be a connected bipartite graph. If $\F(G)$ contains finitely many non-isomorphic bipartite graphs, then $\operatorname{tf}(G)$ is finite.
Suppose instead that $\operatorname{tf}(G)=G[S,T]$ where $S\subseteq X$ and $T\subseteq Y$ and $S$ is infinite. Let $A$ be a finite subset of $S$ with $|A|>|F|$ for every $F\in \F(G)$. Since $G[S,T]$ is twin-free, for any pair $a_1,a_2$ of distinct vertices in $A$ there is a vertex $t=t(a_1,a_2)\in T$ so that one of $(a_1,t)$ and $(a_2,t)$ is an edge of $G$, and the other is not. Consider $$B=\{t(a_1,a_2)\colon a_1,a_2\in A,\ a_1\ne a_2\}.$$ Since $\operatorname{tf}(G)$ is connected, we can find finite sets $A',B'$ with $A\subseteq A'\subseteq S$ and $B\subseteq B'\subseteq T$ so that $G[A',B']$ is connected. Consider $F=\operatorname{tf}(G[A',B'])$. By construction, $F\in \F(G)$. However, $|F|\geq |A|$ since no two vertices in $A$ are twins in $G[A',B']$, so $F$ cannot be (isomorphic to) an element of $\F(G)$, a contradiction.
Basic results {#sec:basicresults}
=============
If $A$ and $B$ are matrices, then we will write $$A\simeq B$$ to mean that $B=UAV$ for some permutation matrices $U,V$; in other words, permuting the rows and columns of $A$ yields $B$.
The following facts about the norms of Schur multipliers are well-known.
\[prop:basic-matrices\] Let $A$ and $B$ be matrices with countably many rows and columns.
1. $\|A\|_\schur=\|A^t\|_{\schur}$
2. If $A\simeq B$, then $\|A\|_\schur=\|B\|_\schur$.
3. If $B$ can be obtained from $A$ by deleting some rows or columns, then $\|B\|_{\schur}\leq \|A\|_{\schur}$.
4. $\|A_1\oplus A_2\oplus A_3\oplus\dots\|_\schur=\sup_{j}\|A_j\|_{\schur}$
5. $\|A\|_{\schur}=\left\|\left[\begin{smallmatrix}
A&A\\A&A
\end{smallmatrix}\right]\right\|_{\schur}$
6. If $B$ can be obtained from $A$ by duplicating rows or columns, then $\|B\|_{\schur}=\| A\|_{\schur}$.
Statements (1)–(4) all follow easily from properties of the operator norm $\|\cdot\|_\B$. For (5), let us write $S_A\colon \B\to\B$, $X\mapsto A\schur X$ for the mapping of Schur multiplication by $A$. The two-fold ampliation $S_A^{(2)}\colon M_2(\B)\to M_2(\B)$ of $S_A$ (in the sense of operator space theory) may be naturally identified with $S_B$ where $B=\left[\begin{smallmatrix}
A&A\\A&A
\end{smallmatrix}\right]$. Now $$\|A\|_{\schur}=\|S_A\|\leq
\|S_A^{(2)}\|=\|B\|_{\schur}\leq
\|S_A\|_{cb}=\|S_A\|=\|A\|_\schur$$ where the equality $\|S_A\|_{cb}=\|S_A\|$ is a theorem commonly attributed to an unpublished manuscript of Haagerup (see [@paulsen-book p. 115], for example) and is also established in [@smith91]. We therefore have equality, hence (5). Replacing the number $2$ in this argument with some other countable cardinal and using statement (3) then yields a proof of statement (6).
Specialising to idempotent Schur multipliers and restating in terms of bipartite graphs, we have:
\[prop:basic-graphs\] Let $R$ and $C$ be countable sets, and let $G\in \Gamma(R,C)$.
1. If $G'\in\Gamma(R',C')$ and $G'$ is isomorphic to $G$, then $\|G'\|=\|G\|$.
2. If $G_0\leq G$ then $\|G_0\|\leq \|G\|$.
3. The norm of $G$ is the supremum of the norms of the connected components of $G$.
4. $\|G\|=\|\operatorname{tf}(G)\|$.
\(1) follows from assertions (1) and (2) of Proposition \[prop:basic-matrices\]. For $j=2,3$, assertion ($j$) here is a rewording of assertion ($j+1$) of Proposition \[prop:basic-matrices\]. (4) follows easily using the proof of Proposition \[prop:dupe-free\] and Proposition \[prop:basic-matrices\](6).
It is natural to ask whether Proposition \[prop:basic-graphs\](2) generalises to to all subgraphs, and not merely induced subgraphs. In other words, is following implication valid? $$G_0 \subseteq G\stackrel{\text{?}}{\implies} \|G_0\|\leq \|G\|$$ The answer is no. The complete graph $K$ in $\Gamma(\aleph_0,\aleph_0)$ corresponds to the matrix of all $1$s, which has Schur multiplier norm $1$ since it gives the identity mapping, but as is well-known [@kwapien-pel], the upper-triangular subgraph $T\subseteq K$ whose matrix is $$M(T)=
\begin{bmatrix}
1&1&1&\dots\\
0&1&1&\dots\\
0&0&1&\dots\\
\vdots &\vdots &\vdots &\ddots
\end{bmatrix}$$ has $\|T\|=\infty$. Note that $T$ is twin-free, but $K$ is certainly not. In view of Proposition \[prop:basic-graphs\](4), we might then ask whether this implication holds provided either $G$ alone, or both $G$ and $G_0$, are required to be twin-free. Again, the answer is no; a counterexample is given by (7) and (8) of Proposition \[prop:smallnorms\] below.
\[rk:real-complex\] We now explain why our results are identical regardless of whether we choose $\bF=\bR$ or $\bF=\bC$. Let $\B_\bF$ be the space of bounded linear maps from $\ell^2_{n,\bF}$ to $\ell^2_{m,\bF}$, the corresponding $\ell^2$ spaces with entries in $\bF$. For a Schur multiplier $A\in M_{m,n}(\bF)$, we temporarily write $\|A\|_{\schur,\bF}$ for the norm of the map $$S_{A,\bF}\colon
\B_{\bF}\to \B_{\bF},\quad B\mapsto A\schur B$$ and let $\|A\|_{\schur,\bF,cb}$ be the completely bounded norm of $S_{A,\bF}$. For the theory of operator spaces over $\bR$, we refer to [@ruan-real1; @ruan-real2]. Note that $S_{A,\bF}$ is a bimodule map over the algebra $\D_\bF$ of diagonal operators in $\B_\bF$, and $\D_\bF$ has a cyclic vector. Since the proof of [@smith91 Theorem 2.1] works equally well in the real and complex cases, we have $$\|A\|_{\schur,\bF}=\|A\|_{\schur,\bF,cb}.$$
If $A$ is a Schur multiplier in $M_{m,n}(\bR)$, then $S_{A,\bC}$ is the complexification of $S_{A,\bR}$. Hence by [@ruan-real2 Theorem 2.1], $$\|A\|_{\schur,\bR}=\|A\|_{\schur,\bR,cb}= \|A\|_{\schur,\bC,cb}
=\|A\|_{\schur,\bC}.$$ We conclude that a Schur multiplier with real entries has the same norm whether we consider it as a mapping on $\B_\bR$ or on $\B_\bC$.
For $\bF=\bC$, this allows a slight simplification of the formula defining the norm of a Schur multiplier with real entries. Indeed, any Schur multiplier $A\in M_{m,n}(\bR)$ has $$\|A\|_{\schur}=\sup_{X\in O(m,n)} \|A\schur X\|_{\B_{\bR}}$$ where $O(m,n)$ is the set of extreme points of the unit ball of $\B_{\bR}$: the set of isometries in $\B_\bR$ if $n\leq m$, or the set of coisometries if $n\geq m$.
Norms of simple paths {#sec:snakes}
=====================
For $n\in\bN$, the $n$-cycle $\Lambda(n)$ is the maximal cycle in $\Gamma(n,n)$; equivalently, it is connected and every vertex has degree $2$. For example, $\Lambda(3)=\loopee$.
The $(n,n)$ path $\Sigma(n,n)$ and the $(n,n+1)$ path $\Sigma(n,n+1)$ are the maximal simple paths in $\Gamma(n,n)$ and $\Gamma(n,n+1)$, respectively; for example, $\Sigma(3,3)=\snakeee$ and $\Sigma(3,4)=\snakeer$.
For $n\in\bN$, we write $\theta_n=\frac{\pi}{2n}$. By [@dav-don Example 4.6], we have $$\tag{$*$}\label{eq:davdon-loops} \|\Lambda(n)\|=
\begin{cases}
\frac{2}{n}\cot\theta_{n}&\text{if~$n$ is even,}\\
\frac{2}{n}\csc\theta_{n}&\text{if~$n$ is odd.}
\end{cases}$$ The main result of this section is:
\[thm:snakes\] For every $n\in\bN$, $$\|\Sigma(n,n)\|=\|\Sigma(n,n+1)\|=\frac
2{n+1}\cot \theta_{n+1}.$$
Before the proof, we make some remarks.
Observe that while $\Sigma(n,n)< \Sigma(n,n+1)<\Lambda(n+1)$, the norms of the first two graphs are equal for every $n$ and all three have equal norm for odd $n$. However, these assertions do not follow from any of the easy observations of Proposition \[prop:basic-graphs\] since these graphs are all connected and twin-free.
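For small $n$ the formula in Theorem \[thm:snakes\] reproduces known values, which we record as a quick numerical check (our own aside): $n=1$ gives a single edge, of norm $1$, and $n=2$ gives $M(\Sigma(2,2))\simeq\left[\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right]$, whose norm is Livschits' value $\sqrt{4/3}$.

```python
import math

def snake_norm(n):
    """The common value of ||Sigma(n,n)|| and ||Sigma(n,n+1)||
    from Theorem [thm:snakes]."""
    theta = math.pi / (2 * (n + 1))
    return (2.0 / (n + 1)) / math.tan(theta)

# n = 1: a single edge, whose multiplier acts as the identity
assert abs(snake_norm(1) - 1.0) < 1e-12
# n = 2: M(Sigma(2,2)) ~ [[1,1],[0,1]], of norm sqrt(4/3)
assert abs(snake_norm(2) - math.sqrt(4.0 / 3.0)) < 1e-12
print(snake_norm(1), snake_norm(2))
```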
Is there a combinatorial characterisation of the connected twin-free bipartite graphs $G_0< G$ with $\|G_0\|=\|G\|$?
Theorem \[thm:snakes\] improves the following bounds of Popa [@popa-thesis]: $$\frac1n\left(\csc\left(\frac{\pi}{4n+2}\right)-1\right)\leq
\|\Sigma(n,n)\|\leq \frac2{n+1}\cot \theta_{n+1}.$$ She establishes the upper bound using results of Mathias [@mathias], and the lower bound using some eigenvalue formulae due to Yueh [@yueh].
The following corollary is also noted in [@popa-thesis]. Another proof can be found by applying a theorem of Bennett [@bennett Theorem 8.1] asserting that the norm of a Toeplitz Schur multiplier $A$ is the total variation of the Borel measure $\mu$ on $\bT$ with $a_{i-j}=\hat
\mu(i-j)$.
\[cor:4/pi\] The infinite matrix $A$ with $a_{ij}=1$ if $j\in \{i,i+1\}$ and $a_{ij}=0$ otherwise has $\|A\|_{\schur}=4/\pi$.
The Schur multiplier norm of $A$ is the supremum of the Schur multiplier norms of its $n\times n$ upper left-hand corners $A_n$, and $G(A_n)=\Sigma(n,n)$. Hence $$\|A\|_{\schur}= \sup_{n\ge1}\|A_n\|_{\schur}=\sup_{n\ge1}
\frac{2}{n+1}\cot\theta_{n+1}=\frac
4\pi.\qedhere$$
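The supremum in this proof can also be watched numerically (a sanity check of our own): the values $\frac{2}{n+1}\cot\theta_{n+1}$ increase strictly with $n$ and approach $4/\pi$ from below.

```python
import math

def snake_norm(n):
    """||Sigma(n,n)|| = (2/(n+1)) cot(pi/(2(n+1)))."""
    theta = math.pi / (2 * (n + 1))
    return (2.0 / (n + 1)) / math.tan(theta)

vals = [snake_norm(n) for n in range(1, 2001)]
assert all(u < w for u, w in zip(vals, vals[1:]))   # strictly increasing in n
assert vals[-1] < 4.0 / math.pi                     # approached from below...
assert abs(vals[-1] - 4.0 / math.pi) < 1e-3         # ...with supremum 4/pi
print(vals[-1], 4.0 / math.pi)
```

Indeed, writing $x=\theta_{n+1}$, the value is $\frac4\pi\, x\cot x$, and $x\cot x\to 1$ as $x\to 0$.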
Recall that $\N$ denotes the set of norms of all (bounded) Schur idempotents.
$\N$ is not discrete: its accumulation points include $2$ by [@bcd], $\sqrt 2$ by Remark \[rk:sqrt2\] below, and $4/\pi$ by Corollary \[cor:4/pi\]. By Theorem \[thm:gaps\], the infimum of the set of accumulation points of $\N$ is in the interval $[\eta_6,4/\pi]$.
Is this infimum equal to $4/\pi$?
Is $\N$ closed? Does it have non-empty interior? Are there any limit points from above which are not limit points from below?
We turn now to the proof of Theorem \[thm:snakes\], which will occupy us for the rest of this section. Fix $n\in\bN$. For $j\in\bZ$, write $$\kappa(j)=\cos(j\theta_{n+1})\quad\text{and}\quad \lambda(j)=\sin(j\theta_{n+1})$$ where as above, $\theta_{n+1}=\tfrac\pi{2(n+1)}$. Clearly, $\lambda(j)=0\iff j\in 2(n+1)\bZ$. The following useful identity, valid for $N\in\bN$, $f\in
\{\kappa,\lambda\}$ and $a,d\in\bZ$ with $\lambda(d)\ne0$, is an immediate consequence of the formulae in [@knapp]. $$\tag{$\bigstar$}\label{eq:bigstar}
\sum_{j=0}^{N} f(a+2dj) = \frac{\lambda((N+1)d)}{\lambda(d)} f(a+Nd)$$
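The identity ($\bigstar$) is the usual summation of sines and cosines in arithmetic progression, and it is easy to confirm numerically on a grid of parameters (a check of our own, here with $n=5$):

```python
import math

n = 5
theta = math.pi / (2 * (n + 1))
kappa = lambda j: math.cos(j * theta)
lam = lambda j: math.sin(j * theta)

def lhs(f, a, d, N):
    return sum(f(a + 2 * d * j) for j in range(N + 1))

def rhs(f, a, d, N):
    return lam((N + 1) * d) / lam(d) * f(a + N * d)

for f in (kappa, lam):
    for a in range(-3, 4):
        for d in (1, 2, 3, 5):        # lambda(d) != 0: d not in 2(n+1)Z = 12Z
            for N in range(8):
                assert abs(lhs(f, a, d, N) - rhs(f, a, d, N)) < 1e-10
print("identity verified for n = 5 on a grid of (f, a, d, N)")
```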
\[lem:altzero\] Let $a\in\bZ$ and let $f,g,h\in \{\pm \kappa,\pm\lambda\}$.
1. If $m\in2\bZ$ and $|m|\leq 2n$, then $$\displaystyle\sum_{j=0}^{2n+1} (-1)^j f(a+mj)=0.$$
2. If $s,t\in\bZ$ with $\max\{|s|,|t|\}\leq
n-1$ and $s\equiv t\pmod 2$, then $$\displaystyle \sum_{j=0}^{2n+1}(-1)^j f(a+2j)g(sj)h(tj)=0=\sum_{j=0}^{2n+1}(-1)^j g(sj)h(tj).$$
<!-- -->
1. We have $$\begin{aligned}
\sum_{j=0}^{2n+1} (-1)^j f(a+mj)
=
\sum_{j=0}^n f(a+2mj) - \sum_{j=0}^n f(a+m + 2mj).
\end{aligned}$$ If $m=0$ then this difference is clearly $0$. If $m\ne 0$, then $|m|<2(n+1)$ gives $\lambda(m)\ne0$ and $\lambda((n+1)m)=0$ since $m$ is even, so by ($\bigstar$) we have $$\sum_{j=0}^n
f(a+2mj)=\sum_{j=0}^n f(a+m + 2mj)=0.$$
2. Using the product-to-sum trigonometric identities, we can write $$f(a+2j)g(sj)h(tj) = \frac14\sum_{k=0}^3f_k(a+m_kj)$$ where $f_k\in \{\pm \kappa,\pm \lambda\}$ and $$m_k = 2+(-1)^ks +
(-1)^{\lfloor k/2\rfloor} t.$$ Since $m_k$ is even and $|m_k|\leq
2+ |s|+|t|\leq 2n$, the first equality follows from (1). The second equality is proven using a simplification of the same argument.
Let $\rho$ be the $2\times 2$ rotation matrix$$\rho=
\begin{bmatrix}
\kappa(1)&-\lambda(1)\\\lambda(1)&\kappa(1)
\end{bmatrix}.$$Note that for $s\in\bZ$, we have $$\rho^s =
\begin{bmatrix}
\kappa(s)&-\lambda(s)\\\lambda(s)&\kappa(s)
\end{bmatrix}$$ so that, in particular, each entry of $\rho^s$ is of the form $g(s)$ for some $g\in \{\kappa,\pm\lambda\}$. Define an $n\times n$ orthogonal matrix $W$ by $$W=
\begin{cases}
\rho\oplus\rho ^{3}\oplus \rho^5\oplus\dots\oplus\rho^{n-1 } &\text{if~$n$ is even,}\\
[1]\oplus \rho^2\oplus\rho ^{4}\oplus\dots\oplus\rho^{n-1} &\text{if~$n$ is odd.}
\end{cases}$$ Here, $[1]$ is the $1\times 1$ matrix whose entry is $1$ and $\oplus$ is the block-diagonal direct sum of matrices. Let $v$ be the $n\times 1$ vector $$v=
\begin{cases}
[1\ 0\ 1\ 0\ \dots\ 1\ 0]^*&\text{if~$n$ is even,}\\
[1\ 1\ 0\ 1\ 0\ \dots\ 1\ 0]^*&\text{if~$n$ is odd.}
\end{cases}$$ For $j\in\bZ$, let $q_j=W^j v$, and consider the rank one operators $$Q_j = q_jq_j^*.$$ We write $\operatorname{conv}(S)$ for the convex hull of a subset $S$ of a vector space.
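As a sanity check on the construction, the two properties of $W$ used below, namely that $W$ is orthogonal and that $W^{2(n+1)}=(-1)^{n+1}I$, can be verified numerically. The sketch assumes $\kappa(x)=\cos(x\theta_{n+1})$ and $\lambda(x)=\sin(x\theta_{n+1})$ with $\theta_{n+1}=\pi/(2(n+1))$.

```python
import math

# Build the block-rotation matrix W and check that W is orthogonal and
# W^(2(n+1)) = (-1)^(n+1) I, for one even and one odd n.
def rot(s, theta):
    return [[math.cos(s * theta), -math.sin(s * theta)],
            [math.sin(s * theta), math.cos(s * theta)]]

def build_W(n):
    theta = math.pi / (2 * (n + 1))
    W = [[0.0] * n for _ in range(n)]
    if n % 2 == 0:
        exps, pos = [2 * k + 1 for k in range(n // 2)], 0      # rho, rho^3, ...
    else:
        W[0][0] = 1.0                                          # the block [1]
        exps, pos = [2 * k + 2 for k in range((n - 1) // 2)], 1  # rho^2, rho^4, ...
    for s in exps:
        blk = rot(s, theta)
        for i in range(2):
            for j in range(2):
                W[pos + i][pos + j] = blk[i][j]
        pos += 2
    return W

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

for n in (4, 5):
    W = build_W(n)
    WtW = matmul(list(map(list, zip(*W))), W)
    assert all(abs(WtW[i][j] - (i == j)) < 1e-12
               for i in range(n) for j in range(n))
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(2 * (n + 1)):
        P = matmul(P, W)
    sign = (-1) ** (n + 1)
    assert all(abs(P[i][j] - sign * (i == j)) < 1e-9
               for i in range(n) for j in range(n))
```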
\[prop:conv\] Consider the real numbers $t_j=\kappa(1)-\kappa(3+4j)$ for $0\leq j\leq n$.
1. $t_j>0$ for $0\leq j< n$ and $t_n=0$.\
2. $\displaystyle\sum_{j=0}^{2n+1}(-1)^j \kappa(a+2j)Q_j=0=\displaystyle\sum_{j=0}^{2n+1}(-1)^j Q_j$ for any $a\in\bR$.\
3. $\displaystyle\sum_{j=0}^{n-1}t_jQ_{2j}=\sum_{j=0}^{n-1}t_jQ_{2(n-j)-1}$.\
4. $\operatorname{conv}(\{Q_0,Q_2,\dots,Q_{2(n-1)}\})\cap
\operatorname{conv}(\{Q_1,Q_3,\dots,Q_{2n-1}\})\ne\emptyset$.
<!-- -->
1. This is clear.
2. The $k$th entry of $q_j$ has the form $g_k(s_kj)$ where $g_k\in \{\kappa,\pm\lambda\}$ and $s_k\in\bZ$ with $|s_k|\leq n-1$ and $s_k\equiv n-1\pmod2$. Hence the $(k,\ell)$ entry of $Q_j$ is $g_k(s_kj)g_\ell(s_\ell j)$, so the claim follows from Lemma \[lem:altzero\](2).
3. For $\ell\in\bZ$, we have $W^\ell Q_j W^{-\ell} =
Q_{j+\ell}$. By (2), $$W^{-1} \left(\sum_{j=0}^{2n+1}(-1)^j \kappa(1+2j)Q_j\right)W =
\sum_{j=-1}^{2n} (-1)^{j+1}\kappa(3+2j) Q_j=0.$$ Rearranging, reindexing and using the identity $\kappa(4(n+1)-x)=\kappa(x)$ gives $$\begin{aligned}
\sum_{j=0}^n \kappa(3+4j)Q_{2j} = \sum_{j=0}^n \kappa(3+4j)Q_{2(n-j)-1}.
\end{aligned}$$ We have $W^{2(n+1)}=(-1)^{n+1}I$, so $$Q_{-1}=W^{2(n+1)}Q_{-1}W^{-2(n+1)}=Q_{2n+1}.$$ By the second equality in (2), $$\sum_{j=0}^n \kappa(1)Q_{2j} = \sum_{j=0}^n \kappa(1)Q_{2(n-j)-1}.$$ Taking differences gives $$\sum_{j=0}^n t_j Q_{2j} = \sum_{j=0}^n t_j Q_{2(n-j)-1}.$$ Since $t_n=0$, this is the desired identity.
4. This is immediate from (1) and (3).
For $1\leq j\leq n-1$, let $$r_j =
\begin{cases}
\sqrt{\tfrac2{n+1}}&\text{if~$j=1$ and~$n$ is odd},\\
\sqrt{\tfrac{4\kappa(j)}{n+1}}&\text{otherwise}.
\end{cases}$$ Let $r$ be the $n\times 1$ vector $$r=
\begin{cases}
[r_1\ 0\ r_3\ 0\ \dots\ r_{n-1}\ 0]^*&\text{if~$n$ is even,}\\
[r_1\ r_2\ 0\ r_4\ 0\ \dots\ r_{n-1}\ 0]^*&\text{if~$n$ is odd.}
\end{cases}$$ A calculation using ($\bigstar$) gives $$\|r\|^2= \frac2{n+1}\cot \theta_{n+1}.$$ Consider the rank one operators $$P_j=W^jr(W^jr)^*$$ for $j\in\bZ$.
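The value of $\|r\|^2$ asserted above is easy to confirm numerically; the sketch below rebuilds the vector $r$ entrywise, assuming $\kappa(x)=\cos(x\theta_{n+1})$ with $\theta_{n+1}=\pi/(2(n+1))$.

```python
import math

# Check that ||r||^2 = (2/(n+1)) * cot(theta_{n+1}) for several n of
# both parities, assuming kappa(x) = cos(x*theta_{n+1}).
def r_norm_sq(n):
    theta = math.pi / (2 * (n + 1))

    def rj(j):
        if j == 1 and n % 2 == 1:
            return math.sqrt(2 / (n + 1))
        return math.sqrt(4 * math.cos(j * theta) / (n + 1))

    if n % 2 == 0:
        entries = [rj(j) for j in range(1, n, 2)]                # r_1, r_3, ..., r_{n-1}
    else:
        entries = [rj(1)] + [rj(j) for j in range(2, n, 2)]      # r_1, r_2, r_4, ..., r_{n-1}
    return sum(x * x for x in entries)

for n in (4, 5, 8, 9):
    theta = math.pi / (2 * (n + 1))
    assert abs(r_norm_sq(n) - 2 / (n + 1) / math.tan(theta)) < 1e-12
```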
\[rk:PQ\] The diagonal matrix $$D=
\begin{cases}
\operatorname{diag}(r_1,r_1,r_3,r_3,\dots,r_{n-1},r_{n-1})&\text{if $n$ is even}\\
\operatorname{diag}(r_1,r_2,r_2,r_4,r_4,\dots,r_{n-1},r_{n-1})&\text{if $n$ is odd}
\end{cases}$$ commutes with $W$ and $Dv=r$; hence $DQ_jD=P_j$. Since $D$ is invertible, it follows that for any finite collection of scalars $t_j$ we have $$\sum_j t_jP_j=0\iff \sum_j t_j Q_j=0.$$
Let $R=[r\ W^2r\ W^4r\ \dots\ W^{2(n-1)}r]$ and let $S=WR$. Also let $\tilde R=[R\ W^{2n}r]$ and let $\tilde S=W\tilde R$. Let us write $X_i$ for the $i$th column of a matrix $X$. Since $W$ is an isometry, for $1\leq i,j\leq n+1$ we have $$\|\tilde R_j\|^2=\|\tilde S_i\|^2=\|r\|^2
= \frac2{n+1}\cot\theta_{n+1}.$$ Let $\tilde B$ be the $(n+1)\times (n+1)$ matrix whose $(i,j)$ entry is $$b_{ij}=
\begin{cases}
1&\text{if~$j-i\in \{0,1\}$,}\\
(-1)^{n+1}&\text{if~$(i,j)=(n+1,1)$,}\\
0&\text{otherwise}.
\end{cases}$$ Let $B$ be the upper-left $n\times n$ corner of $\tilde B$ and let $B'$ consist of the first $n$ rows of $\tilde B$. Observe that $G(B)=\Sigma(n,n)$, $G(B')=\Sigma(n,n+1)$ and if $n$ is odd, then $G(\tilde B)=\Lambda(n+1)$.
\[prop:upperbound\] We have $S^*R=B$ and $\tilde S^*\tilde R=\tilde B$.
Since $S^*R$ is the upper-left $n\times n$ corner of $\tilde
S^*\tilde R$, it suffices to show that $\tilde S^* \tilde R=\tilde
B$. Let $k=|2(i-j)+1|$, a positive odd integer. We have $$(\tilde S^* \tilde R)_{i,j} = \langle \tilde R_j,\tilde S_i\rangle
= \langle W^{2(j-1)}r,W^{2(i-1)+1}r\rangle=\langle
W^{k}r,r\rangle.$$ Since $W^{2(n+1)}=(-1)^{n+1}I$, the $(n+1,1)$ entry of $\tilde S^*\tilde R$ is $$\langle W^{2n+1}r,r\rangle = (-1)^{n+1}\langle W^*r,r\rangle =
(-1)^{n+1}\langle Wr,r\rangle.$$ It therefore only remains to show that $$\langle W^k r,r\rangle =
\begin{cases}
1&\text{if }k=1\\
0&\text{if $k$ is odd with $3\leq k\leq 2n-1$}.
\end{cases}$$ We prove this by direct calculation, giving the details for even $n$; the calculation for odd $n$ is very similar. We have $$\begin{aligned}
\langle Wr,r\rangle &= \sum_{j=0}^{n/2-1} r_{1+2j}^2 \kappa(1+2j)
\\
&= \frac4{n+1} \sum_{j=0}^{n/2-1} \kappa(1+2j)^2\\
&= \frac2{n+1} \sum_{j=0}^{n/2-1} \bigl(1+\kappa(2+4j)\bigr)\\
&=\frac2{n+1}\left(\frac n2 + \frac12\right) = 1
\end{aligned}$$ Here we have used ($\bigstar$) to perform the summation in the penultimate line. If $3\leq k\leq 2n-1$ and $k$ is odd, then $$\begin{aligned}
\langle W^kr,r\rangle &=
\sum_{j=0}^{n/2-1} r_{1+2j}^2\, \kappa(k(1+2j))\\
&=
\frac 4{n+1} \sum_{j=0}^{n/2-1} \kappa(1+2j)\,\kappa(k(1+2j))\\
&=
\frac 2{n+1} \sum_{j=0}^{n/2-1} \bigl(\kappa((k-1)(1+2j))+\kappa((k+1)(1+2j))\bigr)\\
&=\frac1{n+1}\left(
\frac{ \lambda(n(k-1))}{\lambda(k-1)} + \frac{\lambda(n(k+1))}{\lambda(k+1)}\right).
\end{aligned}$$ Since $k$ is odd, $(k\pm1)(n+1)\theta_{n+1}=\frac{k\pm1}2 \pi$ is an integer multiple of $\pi$ and $$\lambda(n(k\pm1))=\lambda((k\pm1)(n+1)-(k\pm1))=(-1)^{(k\mp1)/2}\lambda(k\pm1),$$ so $$\langle W^kr,r\rangle=\frac{1}{n+1}\left( (-1)^{(k+1)/2} +
(-1)^{(k-1)/2}\right)=0.\qedhere$$
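The inner products $\langle W^kr,r\rangle$ computed in this proof can be confirmed numerically for small $n$ of both parities. The sketch below assumes $\kappa(x)=\cos(x\theta_{n+1})$ with $\theta_{n+1}=\pi/(2(n+1))$ and rebuilds $W$ and $r$ from their definitions.

```python
import math

# Check <W r, r> = 1 and <W^k r, r> = 0 for odd k with 3 <= k <= 2n-1.
def build(n):
    theta = math.pi / (2 * (n + 1))

    def rot(s):
        return [[math.cos(s * theta), -math.sin(s * theta)],
                [math.sin(s * theta), math.cos(s * theta)]]

    W = [[0.0] * n for _ in range(n)]
    r = [0.0] * n
    if n % 2 == 0:
        exps, pos, start = [2 * k + 1 for k in range(n // 2)], 0, 0
    else:
        W[0][0] = 1.0
        r[0] = math.sqrt(2 / (n + 1))
        exps, pos, start = [2 * k + 2 for k in range((n - 1) // 2)], 1, 1
    for s in exps:
        blk = rot(s)
        for i in range(2):
            for j in range(2):
                W[pos + i][pos + j] = blk[i][j]
        pos += 2
    for k in range(start, n, 2):   # positions of r_1, r_3, ... resp. r_2, r_4, ...
        r[k] = math.sqrt(4 * math.cos((k + 1) * theta) / (n + 1))
    return W, r

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

for n in (4, 5, 6):
    W, r = build(n)
    x = matvec(W, r)
    assert abs(sum(a * b for a, b in zip(x, r)) - 1) < 1e-12   # <W r, r> = 1
    for k in range(3, 2 * n, 2):
        y = r[:]
        for _ in range(k):
            y = matvec(W, y)
        assert abs(sum(a * b for a, b in zip(y, r))) < 1e-12   # <W^k r, r> = 0
```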
By Proposition \[prop:conv\] and Remark \[rk:PQ\], there are two sets of positive scalars $\{a_j\}_{j=1}^n$ and $\{b_j\}_{j=1}^n$, each summing to $1$, so that $$\sum_{j=1}^n a_j R_jR_j^* = \sum_{j=1}^n b_j S_jS_j^*.$$ The $n\times n$ diagonal matrices $X=\operatorname{diag}(\sqrt a_j)$ and $Y=\operatorname{diag}(\sqrt b_j)$ have $$RX(RX)^* = SY(SY)^*,$$ so there is a unitary matrix $U$ with $SY=RXU$. (Indeed, $B$, $X$ and $Y$ are all invertible, so $RX$ is invertible and $U=(RX)^{-1}SY$ has real entries and is an orthogonal matrix.) As shown in [@ang-cow-nar], this implies that the factorisation $B=S^*R$ attains the Haagerup bound. Indeed, the unit vectors $x=[\sqrt a_j]_{1\leq j\leq n}$ and $y=[\sqrt{b_j}]_{1\leq
j\leq n}$ satisfy $$\begin{aligned}
\langle (B\schur U^t)x,y\rangle &= \operatorname{trace}((SY)^*RXU) =
\operatorname{trace}(S^*SYY^*) \\&= \sum_{j=1}^n \|S_j\|^2|y_j|^2 =
c(S)^2=c(S)c(R),\end{aligned}$$ so by Proposition \[prop:upperbound\], $$\begin{aligned}
\tfrac2{n+1}\cot\theta_{n+1} = c(S)c(R) &\leq \|B\schur U^t\|\\&\leq
\|B\|_{\schur} \leq \|\tilde B\|_\schur \leq c(\tilde S)c(\tilde
R)=\tfrac 2{n+1}\cot\theta_{n+1}.\end{aligned}$$ Hence $\|\Sigma(n,n)\|=\|B\|_\schur=\|\Sigma(n,n+1)\|=\|\tilde
B\|_\schur=\frac 2{n+1}\cot \theta_{n+1}$.
Calculations and estimates of small norms {#sec:calcs}
=========================================
In this section, we calculate or estimate the norms of some particular idempotent Schur multipliers. Our first result is Proposition \[prop:smallnorms\], in which we find the exact norms of some idempotent Schur multipliers in low dimensions. We then find lower bounds for the norms of some other Schur idempotents which we will use to establish Theorem \[thm:gaps\] in the following section.
\[prop:smallnorms\]
1. $\|\gqqq\|=\eta_1=1$.
2. $\|\snakeww\|=\|\snakewe\|=\eta_2=\sqrt{4/3}\approx 1.15470$.
3. $\|\geeq\|=\|\snakeer\|=\|\looprr\|=\eta_3=\frac{1+\sqrt2}2\approx 1.20711$.
4. $\|\geew\|=\|\getq\|=\eta_4=\frac1{15}\sqrt{169+38\sqrt{19}}\approx 1.21954$.
5. \[prop:smallnorms:sqrt32\] $\|\grew\|=\eta_5=\sqrt{3/2}\approx 1.22474$.
6. $\|\snakerr\|=\|\snakert\|=\eta_6=\tfrac25\sqrt{5+2\sqrt5}\approx 1.23107$.
7. \[prop:smallnorms:trie\] $\|\geee\|=\frac1{15}(9+4\sqrt6)\approx 1.25320$.
8. $\|\geer\|=9/7\approx 1.28571$.
9. $\|\geet\|=4/3\approx 1.33333$.
\(1) is trivial, and assertions (2), (3) and (6) are consequences of Theorem \[thm:snakes\] and equation .
\(4) Let $B=\left[\begin{smallmatrix}
1 & 0 & 0 & 1 & 0 \\
1 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 1
\end{smallmatrix}\right]\simeq M(\getq)$. Consider the matrices $$ P=\begin{bmatrix}
\eta & \alpha & \beta\\ \alpha & \eta & \alpha\\ \beta & \alpha & \eta \\ \end{bmatrix}\quad\text{and}\quad
Q=
\begin{bmatrix}
\eta & \gamma & \delta & \sigma & \tau \\
\gamma & \eta & \gamma & -\sigma & -\sigma \\
\delta & \gamma & \eta & \tau & \sigma \\
\sigma & -\sigma & \tau & 2 \sigma & \alpha \\
\tau & -\sigma & \sigma & \alpha & 2 \sigma
\end{bmatrix}$$ where $\eta=\eta_4$ and $$\begin{aligned}
\alpha&= \frac{1}{15} \sqrt{139-22 \sqrt{19}},&
\qquad
\beta&=-\frac{1}{15} \sqrt{24-2 \sqrt{19}},\\
\gamma&=\frac{2}{15} \sqrt{16+2\sqrt{19}},&
\delta&=\frac{1}{15} \sqrt{424-82 \sqrt{19}},\\
\sigma&=\frac1{15}\sqrt{61+2 \sqrt{19}},
&
\tau&=-\frac{1}{15} \sqrt{256-58 \sqrt{19}}.\end{aligned}$$ One can check with a computer algebra system that $C=\left[\begin{smallmatrix} P & B \\ B^* & Q
\end{smallmatrix}\right]$ has rank $3$ and its non-zero eigenvalues are positive, so $C$ is positive semidefinite. The maximum diagonal entry of $C$ is $\max\{\eta,2\sigma\}=\eta$, so $\|\getq\|\leq \eta$ by [@pps] (see also [@paulsen-book Exercise 8.8(v)]).
On the other hand, $$U=\frac1{15}
\begin{bmatrix}
8+\sqrt{19} & -\sqrt{74-2 \sqrt{19}} & -7+\sqrt{19} \\
\sqrt{74-2 \sqrt{19}} & 1+2 \sqrt{19} & \sqrt{74-2 \sqrt{19}} \\
-7+\sqrt{19} & -\sqrt{74-2 \sqrt{19}} & 8+\sqrt{19}
\end{bmatrix}$$ is orthogonal, and if $B=
\left[\begin{smallmatrix}
1 & 0 & 0\\
1 & 1 & 1 \\
0 & 0 & 1
\end{smallmatrix}\right]\simeq M(\geew)$, then $\|B\schur U\|=\eta\leq \|\geew\|$. Since $\geew\leq \getq$, this shows that $ \eta\leq \|\geew\|\leq \|\getq\|\leq \eta$ and we have equality.
\(5) Let $$S=\frac1{2\cdot 54^{1/4}}
\left[\begin{smallmatrix}
2\sqrt6&2\sqrt6&2\sqrt6\\-2\sqrt3&\sqrt3&\sqrt3\\
0&3&-3
\end{smallmatrix}\right]
\quad\text{and}\quad
R=\frac1{54^{1/4}}
\left[\begin{smallmatrix}
3&1&1&1\\
0&-2\sqrt2&\sqrt2&\sqrt2\\
0&0&\sqrt6&-\sqrt6
\end{smallmatrix}\right].$$ Then $S^*R=\left[\begin{smallmatrix}
1&1&0&0\\1&0&1&0\\1&0&0&1
\end{smallmatrix}\right]\simeq M(\grew)$, so $\|\grew\|\leq
c(S)c(R)=\sqrt{3/2}$. On the other hand, consider $$V=
\frac14\begin{bmatrix}
\sqrt 5 & 3 & -1 & -1 \\
\sqrt 5 & -1 & 3 & -1 \\
\sqrt 5 & -1 & -1 & 3
\end{bmatrix}.$$ It is easy to see that $V$ is a coisometry with $
\|\left[\begin{smallmatrix}
1&1&0&0\\1&0&1&0\\1&0&0&1
\end{smallmatrix}\right]\schur V\|=\sqrt{3/2}$. Hence $\|\grew\|\geq \sqrt{3/2}$.
\(7) Consider $$S=
\begin{bmatrix} 1&1&1/2\\a&-a&b\\-a&a&c
\end{bmatrix} \quad\text{and}\quad R=
\begin{bmatrix} 1&1&1/2\\a&-a&b\\a&-a&-c
\end{bmatrix}$$ where $$a=\sqrt{\tfrac1{15}(-3+2\sqrt6)},\ b=\tfrac{1}{2}
\sqrt{\tfrac{1}{15} (3+8 \sqrt{6})}\text{ and }
c=\sqrt{\tfrac{1}{30} (9+4 \sqrt{6})}.$$ Since $a(b+c)=\frac12$ and $b^2-c^2=-\tfrac14$, we have $S^*R=
\left[\begin{smallmatrix} 1&1&1\\1&1&0\\1&0&0
\end{smallmatrix}\right]\simeq M(\trie)$, so $\|\trie\|\leq
c(S)c(R)=\frac1{15}(9+4\sqrt6)$. On the other hand, one can check that the matrix $$U=\frac1{15} \begin{bmatrix}
9-\sqrt{6} & \sqrt{54-6 \sqrt{6}} & 2 \sqrt{21
+ 6 \sqrt6} \\ \sqrt{54-6 \sqrt{6}} & 3 (1+\sqrt{6}) & -2
\sqrt{27-3 \sqrt{6}} \\ 2 \sqrt{21 + 6 \sqrt6} & -2 \sqrt{27-3
\sqrt{6}} & 3-2 \sqrt{6}
\end{bmatrix}$$ is orthogonal, and $\|
\left[\begin{smallmatrix} 1&1&1\\1&1&0\\1&0&0
\end{smallmatrix}\right]\schur U\|=\frac1{15}(9+4\sqrt6)$.
\(8) Let $$S=\frac1{\sqrt{14}}
\begin{bmatrix}
3 & 4 & 3 \\
-\sqrt{2} & \sqrt{2} & -\sqrt{2} \\
-\sqrt{7} & 0 & \sqrt{7}
\end{bmatrix}
\quad\text{and}\quad
R=\frac1{\sqrt{14}}
\begin{bmatrix}
3 & 4 & 3 \\
\sqrt{2} & -\sqrt{2} & \sqrt{2} \\
-\sqrt{7} & 0 & \sqrt{7}
\end{bmatrix}
.$$ Then $S^*R=
\left[\begin{smallmatrix}
1&1&0\\1&1&1\\0&1&1
\end{smallmatrix}\right]\simeq M(\geer)$, so $\|\geer\|\leq c(S)c(R)=9/7$. The matrix $$U=\frac17
\begin{bmatrix}
3 & 2 \sqrt{6} & -4 \\
2 \sqrt{6} & 1 & 2 \sqrt{6} \\
-4 & 2 \sqrt{6} & 3
\end{bmatrix}$$ is orthogonal, and $\|\left[\begin{smallmatrix}
1&1&0\\1&1&1\\0&1&1
\end{smallmatrix}\right]\schur U\|=9/7\leq \|\geer\|$.
\(9) This is proven in [@bcd Theorem 2.1], and is also a consequence of .
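Two of the orthogonal matrices appearing in this proof can be checked numerically. The sketch below verifies the lower-bound matrices for parts (4) and (8) of the proposition: each $U$ is orthogonal and $\|B\schur U\|$ attains the claimed norm, where the spectral norm is computed by power iteration on the Gram matrix $(B\schur U)^t(B\schur U)$ (assumed, and easily seen, to converge for these matrices).

```python
import math

def matmul(A, C):
    return [[sum(a * c for a, c in zip(row, col)) for col in zip(*C)]
            for row in A]

def spec_norm(A):
    # largest singular value via power iteration on A^T A
    M = matmul(list(map(list, zip(*A))), A)
    n = len(M)
    v = [1.0 + 0.1 * i for i in range(n)]
    for _ in range(20000):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return math.sqrt(lam)

def check_orthogonal(U):
    n = len(U)
    UUt = matmul(U, list(map(list, zip(*U))))
    assert all(abs(UUt[i][j] - (i == j)) < 1e-12
               for i in range(n) for j in range(n))

r19, r6 = math.sqrt(19), math.sqrt(6)
t = math.sqrt(74 - 2 * r19)

# part (4): B is the matrix of geew, claimed norm sqrt(169 + 38*sqrt(19))/15
U4 = [[(8 + r19) / 15, -t / 15, (-7 + r19) / 15],
      [t / 15, (1 + 2 * r19) / 15, t / 15],
      [(-7 + r19) / 15, -t / 15, (8 + r19) / 15]]
B4 = [[1, 0, 0], [1, 1, 1], [0, 0, 1]]
check_orthogonal(U4)
H4 = [[B4[i][j] * U4[i][j] for j in range(3)] for i in range(3)]
assert abs(spec_norm(H4) - math.sqrt(169 + 38 * r19) / 15) < 1e-9

# part (8): B is the matrix of geer, claimed norm 9/7
U8 = [[3 / 7, 2 * r6 / 7, -4 / 7],
      [2 * r6 / 7, 1 / 7, 2 * r6 / 7],
      [-4 / 7, 2 * r6 / 7, 3 / 7]]
B8 = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]
check_orthogonal(U8)
H8 = [[B8[i][j] * U8[i][j] for j in range(3)] for i in range(3)]
assert abs(spec_norm(H8) - 9 / 7) < 1e-9
```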
Since $M(\geee)\simeq
\left[\begin{smallmatrix}
1&1&1\\0&1&1\\0&0&1
\end{smallmatrix}\right]$, part (\[prop:smallnorms:trie\]) of the preceding result gives the norm of the upper-triangular truncation map on the $3\times 3$ matrices. This result has previously been stated in [@ang-cow-nar p. 131], but a detailed calculation does not appear in that reference.
\[rk:sqrt2\] Proposition \[prop:smallnorms\](\[prop:smallnorms:sqrt32\]) may be generalised to show that $$\|[\mathbf 1\ I_n]\|_{\schur}=\sqrt{\frac{2n}{n+1}}$$ where $\mathbf 1$ is the $n\times 1$ vector of all ones and $I_n$ is the $n\times n$ identity matrix. We omit the details.
\[prop:grrc\] $\|\grrc\|>\|\trie\|$.
The matrix $U=\displaystyle\frac16
\left[\begin{smallmatrix}
3&3&3&3\\
3&-5&1&1\\
3&1&-5&1\\
3&1&1&-5
\end{smallmatrix}\right]$ has $U=2P-I$ where $P$ is the rank one projection onto the linear span of $\left[\begin{smallmatrix}
3\\1\\1\\1
\end{smallmatrix}\right]$, so $U$ is orthogonal. Clearly $B=
\left[\begin{smallmatrix}
1&1&1&1\\
0&1&0&0\\
0&0&1&0\\
0&0&0&1
\end{smallmatrix}\right]$ has $M(\grrc)\simeq B$, and $Y=6B\schur U$ has $$\begin{aligned}
\|Y\|^2&= \left\|\left[\begin{smallmatrix}
3&3&3&3\\
0&-5&0&0\\
0&0&-5&0\\
0&0&0&-5
\end{smallmatrix}\right]
\left[\begin{smallmatrix}
3&0&0&0\\
3&-5&0&0\\
3&0&-5&0\\
3&0&0&-5
\end{smallmatrix}\right]\right\|=\left\|
25I +
\left[\begin{smallmatrix}
11&-15&-15&-15\\-15&0&0&0\\-15&0&0&0\\-15&0&0&0
\end{smallmatrix}\right]\right\|.
\end{aligned}$$ Since the norm of $YY^*$ is its spectral radius, a calculation gives $$\|Y\|^2=\tfrac12(61+\sqrt{2821}).$$ Hence $$\|\grrc\|=\|B\|_{\schur}\geq \tfrac16\|Y\| = \tfrac16\sqrt{\tfrac12(61+\sqrt{2821})}>\tfrac1{15}(9+4\sqrt6) = \|\trie\|.\qedhere$$
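The value of $\|Y\|^2$ and the resulting strict inequality can be confirmed numerically; the sketch below computes $\|Y\|^2$ by power iteration on $YY^*$, which is assumed to converge here.

```python
import math

# Check ||Y||^2 = (61 + sqrt(2821))/2 for Y = 6 B * U, and that the lower
# bound ||Y||/6 exceeds ||trie|| = (9 + 4*sqrt(6))/15.
Y = [[3, 3, 3, 3],
     [0, -5, 0, 0],
     [0, 0, -5, 0],
     [0, 0, 0, -5]]
M = [[sum(Y[i][k] * Y[j][k] for k in range(4)) for j in range(4)]
     for i in range(4)]                       # M = Y Y^T
v = [1.0, -0.5, -0.5, -0.5]
for _ in range(5000):
    w = [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
    lam = max(abs(x) for x in w)
    v = [x / lam for x in w]
assert abs(lam - (61 + math.sqrt(2821)) / 2) < 1e-8
assert math.sqrt(lam) / 6 > (9 + 4 * math.sqrt(6)) / 15
```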
\[prop:grrx\] $\|\grrx\|>\|\snakerr\|$.
Consider the unit vectors $x$ and $y$ appearing in the proof of Theorem \[thm:snakes\] in the case $n=4$. It turns out that $$x=\frac1{\sqrt5}
\begin{bmatrix}
\sqrt{\tfrac12(3-\sqrt5)}\\
\sqrt{\frac12(1+\sqrt5)}\\
\sqrt2\\
1
\end{bmatrix}\quad\text{and}\quad
y=\frac1{\sqrt5}
\begin{bmatrix}
1\\\sqrt{2}\\
\sqrt{\frac12(1+\sqrt5)}\\
\sqrt{\tfrac12(3-\sqrt5)},
\end{bmatrix}$$ and that the matrix $B=
\left[\begin{smallmatrix}
1&1&0&0\\
0&1&1&0\\
0&0&1&1\\
0&0&1&0
\end{smallmatrix}\right]\simeq M(\grrx)$ has $\|B^t\schur (xy^*)\|_1>1.235>\|\snakerr\|$, where $\|\cdot\|_1$ is the trace-class norm. It is well-known and easy to see that $S_B\colon \B\to \B$, $A\mapsto B\schur A$ is the dual of $T_{B^t}\colon \C_1\to \C_1$, $C\mapsto B^t\schur C$, the mapping of Schur multiplication by $B^t$ on the trace-class operators $\C_1$ (viewed as the predual of $\B$). Since $x$ and $y$ are unit vectors, $\|xy^*\|_1=1$ and so $\|B\|_{\schur}=\|S_B\|=\|T_{B^t}\|>\|\snakerr\|$.
\[prop:grrr\] $\|\grrr\|>\|\snakerr\|$.
Consider $$B=
\begin{bmatrix}
1&1&0&0\\1&0&1&0\\1&0&0&1\\0&0&0&1
\end{bmatrix}\quad\text{and}\quad
U=\frac14\begin{bmatrix}
\sqrt5&3&-1&-1\\\sqrt5&-1&3&-1\\\sqrt5&-1&-1&3\\-1&\sqrt5&\sqrt5&\sqrt5
\end{bmatrix}$$ The matrix $U$ is orthogonal, and $M(\grrr)\simeq
B$. Now $$\begin{aligned}
16\|B\schur U\|^2&=\left\|
\begin{bmatrix}
15&3\sqrt5&3\sqrt5&3\sqrt5\\3\sqrt5&9&0&0\\3\sqrt5&0&9&0\\3\sqrt5&0&0&14
\end{bmatrix}\right\| = \left\|9I + Z\right\|
\end{aligned}$$ where $Z=\left[\begin{smallmatrix}
6&3\sqrt5&3\sqrt5&3\sqrt5\\3\sqrt5&0&0&0\\3\sqrt5&0&0&0\\3\sqrt5&0&0&5
\end{smallmatrix}\right]$, which has characteristic polynomial $p(x)=x(x^3-11x^2-105x+450)$. By estimating the roots of $p(x)$, one can show that $\|B\schur U\|=\frac14\sqrt{9+\lambda}$ where $\lambda$ is the largest root of $p(x)$, and that $\|\grrr\|\geq \|B\schur U\|>\|\snakerr\|$.
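The final inequality can also be confirmed numerically without estimating any roots, by computing $\|B\schur U\|$ directly via power iteration on $(B\schur U)^t(B\schur U)$:

```python
import math

# Check ||B * U|| > ||snakerr|| = eta_6 = (2/5) * sqrt(5 + 2*sqrt(5)).
r5 = math.sqrt(5)
U = [[r5 / 4, 3 / 4, -1 / 4, -1 / 4],
     [r5 / 4, -1 / 4, 3 / 4, -1 / 4],
     [r5 / 4, -1 / 4, -1 / 4, 3 / 4],
     [-1 / 4, r5 / 4, r5 / 4, r5 / 4]]
B = [[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 0, 0, 1]]

H = [[B[i][j] * U[i][j] for j in range(4)] for i in range(4)]
M = [[sum(H[k][i] * H[k][j] for k in range(4)) for j in range(4)]
     for i in range(4)]                       # M = H^T H
v = [1.0, 0.5, 0.5, 0.6]
for _ in range(5000):
    w = [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
    lam = max(abs(x) for x in w)
    v = [x / lam for x in w]
eta6 = 0.4 * math.sqrt(5 + 2 * r5)
assert math.sqrt(lam) > eta6
```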
\[prop:grrz\] $\|\grrz\|> \|\snakerr\|$.
Consider the symmetric matrices $$B= \begin{bmatrix}
0&0&0&1\\0&1&1&1\\0&1&0&0\\1&1&0&0
\end{bmatrix} \quad\text{and}\quad U=
\begin{bmatrix}
0&0&-1/\sqrt2&1/\sqrt2\\
0&1/3&2/3&2/3\\
-1/\sqrt2 &2/3&-1/6&-1/6\\
1/\sqrt2 &2/3&-1/6&-1/6
\end{bmatrix}.$$ By direct calculation, $U$ is orthogonal, and $M(\grrz)\simeq B$. The characteristic polynomial of $B\schur U$ is $p(x)=\frac1{18}(x+1)(18 x^3-24x^2-x+4)$. It is easy to see that $p(x)$ has two negative roots and two positive roots, and the smallest root is $-1$ while the largest root is larger than $1$. Since $B\schur U$ is symmetric, $\|B\schur U\|$ is the spectral radius of $B\schur U$, which is the largest root of $p(x)$. But $p(\|\snakerr\|)<0$ and $p'(x)>0$ for $x>1$, so $\|\grrz\|\geq \|B\schur U\|>\|\snakerr\|$.
Numerical methods produce the following estimates for these norms, each correct to $5$ decimal places: $\|\grrx\|\approx 1.24131$, $\|\grrr\|\approx 1.25048$, $\|\grrz\|\approx 1.25655$ and $\|\grrc\| \approx 1.25906$. To see this, we apply the numerical algorithm described in [@cowen-et-al] to $M(G)$ for each of these graphs $G$. The algorithm requires a unitary matrix without zero entries as a seed. Using the $4\times 4$ Hadamard unitary $H_4=H_2\otimes H_2$ where $H_2=\frac1{\sqrt2}\left[\begin{smallmatrix}1&1\\1&-1
\end{smallmatrix}\right]$, after 20 or fewer iterations, in each case the algorithm produces real matrices $R$ and $S$ for which the Haagerup estimate gives an upper bound $\beta=c(S)c(R)$, and an orthogonal matrix $U$ giving a lower bound $\alpha=\|M(G)\schur U\|$, so that $\beta-\alpha<10^{-6}$.
A characterisation of the Schur idempotents with small norm {#sec:proofmain}
===========================================================
We now use the results of the previous section to characterise the Schur idempotents with norm $\eta_k$ for $1\leq k\leq 6$. This will yield a proof of Theorem \[thm:gaps\].
We will write $$\Gamma=\bigcup_{1\leq m,n\leq \aleph_0}\Gamma(m,n).$$ Note that $\N=\{\|G\|\colon G\in \Gamma\}\setminus\{\infty\}$.
In the arguments below, we frequently encounter the following situation: $G$ is a twin-free bipartite graph with an induced subgraph $H$, and $H$ contains two vertices $v_1$ and $v_2$ which are twins (in $H$). Since $G$ is twin-free, we can conclude that there is a vertex $w$ in $G$ which is joined to one of $v_1$ and $v_2$ but not the other. We will say that the vertex $w$ *distinguishes* the vertices $v_1$ and $v_2$.
\[lem:deg3b\] Let $G\in\Gamma$ be twin-free.
1. If $\deg(G)\geq 3$, then $G$ contains either $\geew$, $\geee$ or $\geer$ as an induced subgraph.
2. If $1<\|G\|<\eta_4$, then $\deg(G)=2$.
\(1) Let $v$ be a vertex in $G$ of degree at least $3$ and consider an induced subgraph $\kqe$ with $v$ at the top. Since $G$ is twin-free, it is not hard to see that there are at least two other row vertices in $G$ which distinguish the neighbours of $v$, and that this necessarily yields one of the induced subgraphs in the statement.
\(2) follows from (1), since $\geew$, $\geee$ and $\geer$ all have norm at least $\eta_4$.
\[lem:deg2\] If $G\in \Gamma$ is connected with $\deg(G)=2$ and $\|G\|<4/\pi$, then $\|G\|=\|\Sigma(n,n)\|$ for some unique $n\ge2$. Moreover, $$E\leq
G\leq F$$ where $E=\Sigma(n,n)$ and $$F=
\begin{cases}
\Sigma(n,n+1)&\text{if~$n$ is even,} \\
\Lambda(n+1)&\text{if~$n$ is odd.}
\end{cases}$$
The graph $G$ is connected and $\deg(G)=2$, so $G$ is either a path or a cycle. Since the sequence $\frac2{n}\cot\theta_n$ is strictly increasing with limit $4/\pi$ and $\frac2n\csc\theta_n>4/\pi$ for every $n$, the claim follows from (\[eq:davdon-loops\]) and Theorem \[thm:snakes\].
\[lem:deg3\] If $G\in\Gamma$ is twin-free with $\|G\|< \|\trie\|$, then
1. $G\not \geq\gwem$ and
2. $\deg(G)\leq 3$.
\(1) Otherwise, since $G$ is twin-free, there is a row vertex $r$ in $G$ which distinguishes the twin column vertices in $\gwem$. Hence either $G\geq\trie$ or $G\geq \geer$, and so $\|G\|\geq \|\trie\|$ by Proposition \[prop:smallnorms\].
\(2) Suppose that $\deg(G)>3$, so that $G\geq \gqrq$. In order to distinguish between the four twin column vertices, there must be another row vertex in $G$ attached to one but not all of these, so $\gwr{\nn11\nn21\nn22\nn23\nn24}\subseteq G$. In fact, to avoid the induced subgraph forbidden by (1), we must have $G\geq\gwr{\nn11\nn21\nn22\nn23\nn24}$. Distinguishing between the remaining columns using the same argument shows that $G\geq\grrc$, so $\|G\|\geq \|\grrc\|>\|\trie\|$ by Proposition \[prop:grrc\], contrary to hypothesis.
We define graphs $E_j\leq F_j$ for $1\leq j\leq 6$ by: $$E_1=F_1=\gqqq,\qquad E_2=\gwwq,\ F_2=\gweq,\qquad E_3=\snakeee,\ F_3=\looprr$$ $$E_4=\geew,\ \!F_4=\getq,\quad\! E_5=F_5=\grew,\quad\! E_6=\snakerr,\ \!F_6=\snakert$$ Note that $\|E_j\|=\|F_j\|=\eta_j$ for $1\leq j\leq 6$.
\[thm:characterisations\] Let $G\in \Gamma$ be a twin-free, connected bipartite graph. For each $k\in \{1,2,3,4,5,6\}$, the following are equivalent:
1. $E_k\leq G\leq F_k$;
2. $\|G\|=\eta_k$;
3. $\eta_{k-1}<\|G\|\leq \eta_{k}$.
For each $k$, the implication $(1)\implies(2)$ follows from Propositions \[prop:basic-graphs\] and \[prop:smallnorms\], and $(2)\implies(3)$ is trivial.
Suppose that $G$ satisfies (3).
If $k=1$, then $0<\|G\|\leq 1$, so $\|G\|=1$ and $G$ is a disjoint union of complete bipartite graphs by [@kat-paulsen Theorem 4]. Since $G$ is connected and twin-free, $G=\gqqq$.
If $k\in \{2,3\}$, then $\deg(G)=2$ by Lemma \[lem:deg3b\], so $E_k\leq G\leq F_k$ by Lemma \[lem:deg2\].
If $k\in \{4,5,6\}$ and $G\notin\{E_6,F_6\}$, then $\deg(G)\ne2$ by Lemma \[lem:deg2\] and $\deg(G)\leq 3$ by Lemma \[lem:deg3\], so $\deg(G)=3$. Since $\|G\|<\|\trie\|<\|\geer\|$, we have $E_4=\geew\leq G$ by Lemma \[lem:deg3b\].
If $G$ has the same row vertices as $E_4$, then any column vertex $c$ in $G$ which is not in $E_4$ must be joined to $E_4$ so as to avoid the induced subgraph $\loopee$, and $c$ cannot be joined to the degree $3$ vertex of $E_4$ since $\deg(G)=3$. Hence $c$ must be joined to precisely one of the degree one row vertices of $E_4$. Since $G$ is twin-free, this gives $G\leq \getq=F_4$, so $E_4\leq G\leq F_4$.
If on the other hand $G$ has at least four row vertices, choose a row vertex of $G$ of smallest possible distance $\delta\in \{1,2\}$ to the induced subgraph $E_4\leq G$. If $\delta=2$, then $\grrx\subseteq G$, and the rightmost row vertex $r_4$ of $\grrx$ is not connected to any of $c_1,c_2,c_3$ in $G$. Since $\grrx$ is not an induced subgraph of $G$ by Proposition \[prop:grrx\] and $\deg(G)=3$, we have $G\geq
\grr{\vrrx\nn14}$; but removing the two degree $1$ vertices then shows that $G$ contains the forbidden induced subgraph $\loopee$, a contradiction.
So $\delta=1$. We claim that $G=E_5$. Indeed, since $\delta=1$ we know that one of the following is an induced subgraph of $G$: $$G_1=\gre{\veew\nn43}
\quad\! G_2=E_5=\grew
\quad G_3=\gre{\nn11\nn21\nn22\nn23\nn33\nn43\nn42}
\quad\! G_4=\gre{\veew\nn43\nn41}
\quad G_5=\gre{\veew\nn43\nn42\nn41}$$ Observe that $\trie$ is an induced subgraph of both $G_3$ and $G_4$, so the norms of these are too large. We can also rule out $G_5$ since it has a pair of twin row vertices of degree $3$, so these cannot be distinguished in $G$. If $G_1\leq G$, then since the vertices $r_3$ and $r_4$ are twins in $G_1$ but not in $G$, there there is a column vertex $c_4$ attached to $r_4$ (say) but not $r_3$. We cannot join $c_4$ to the maximal degree vertex $r_2$, so we find that either $$\grr{\veew\nn43\nn44}
\quad \text{or}\quad
\grr{\veew\nn43\nn44\nn14}$$ is an induced subgraph of $G$ containing $G_1$. However, the first is ruled out by Propostion \[prop:grrz\] and the second contains an induced subgraph $\loopee$, so cannot occur either. So $E_5\leq
G$. If $E_5$ is a proper induced subgraph of $G$, then since we must avoid $\loopee$ and also the induced subgraph $\grrr$ by Proposition \[prop:grrr\], it follows that no column vertex of $G$ has distance $1$ to $E_5$. So there is a row vertex of $G$ with distance $1$ to $E_5$. Avoiding $\trie$ and twin vertices of degree $3$, we find an induced subgraph $\gte{\nn12\nn22\nn32\nn33\nn43\nn34\nn54}\leq G$. To distinguish between the first two row vertices, we add a column vertex while avoiding $\loopee$, and conclude that $\gtr{\nn11\nn12\nn22\nn32\nn33\nn43\nn34\nn54}\leq
G$. Removing one row vertex gives $\grrz\leq G$, contradicting Proposition \[prop:grrz\].
In summary: if $k=4$, then $E_4\leq G\leq F_4$; if $k=5$, then $G=E_5$; and if $k=6$ then $E_6\leq G\leq F_6$.
Theorem \[thm:gaps\] is an immediate consequence of Theorem \[thm:characterisations\] and Proposition \[prop:basic-graphs\]. We also obtain:
\[cor:characterisations\] Let $k\in \{1,2,3,4,5,6\}$.
1. If $G\in \Gamma$ is twin-free and connected, then $$\|G\|\leq \eta_k\iff G\leq F_j\text{ for some~$j\leq k$}.$$
2. If $G\in\Gamma$, then $\|G\|=\eta_k$ if and only if:
1. each component $H$ of $G$ satisfies $\operatorname{tf}(H)\leq F_j$ for some $j\leq k$; and
2. there is a component $H$ of $G$ with $E_k\leq \operatorname{tf}(H)$.
Normal masa bimodule projections {#sec:cbn}
================================
Let $\H$ be a separable Hilbert space. Given a masa (maximal abelian selfadjoint subalgebra) $\D\subseteq \B(\H)$, we write ${\mathit{NCB}_\D(\B(\H))}$ for the set of normal completely bounded linear maps $\B(\H)\to\B(\H)$ which are bimodular over $\D$. Smith’s theorem [@smith91] ensures that $\|\Phi\|=\|\Phi\|_{cb}$ for any $\Phi\in {\mathit{NCB}_\D(\B(\H))}$. Moreover, by [@SS-masas Theorem 2.3.7], there is a standard finite measure space $(X,\mu)$ so that $\D$ is unitarily equivalent to $L^\infty(X,\mu)$ acting by multiplication on $L^2(X,\mu)$. Hence we will take $\D=L^\infty(X,\mu)$ and $\H=L^2(X,\mu)$ without loss of generality.
Recall that a set $R\subseteq X\times X$ is marginally null if $R\subseteq (N\times X)\cup (X\times N)$ for some null set $N\subseteq X$. Two Borel functions $\phi,\psi\colon X\times X\to
\bC$ are equal marginally almost everywhere (m.a.e.) if $\{(x,y)\in X\times X\colon \phi(x,y)\ne \psi(x,y)\}$ is marginally null. We write $[\phi]$ for the equivalence class of all Borel functions which are equal m.a.e. to $\phi$. Let $L^\infty(X,\ell^2)$ denote the Banach space of essentially bounded measurable functions $X\to
\ell^2$, identified modulo equality almost everywhere. For $f,g\in
L^\infty(X,\ell^2)$, we write $\langle f,g\rangle\colon X\times X\to \bC$ for the function given m.a.e. by $\langle f,g\rangle(s,t)=\langle
f(s),g(t)\rangle$. As shown in [@kat-paulsen], there is a bijection $$\Gamma\colon {\mathit{NCB}_\D(\B(\H))}\to \{ [\langle f,g\rangle]\colon f,g\in
L^\infty(X,\ell^2)\}$$ so that for every $\phi\in \Gamma(\Phi)$, the map $\Phi$ is the normal extension to $\B(\H)$ of pointwise multiplication by $\phi$ acting on the (integral kernels of) Hilbert-Schmidt operators in $\B(\H)$. Moreover, $\Gamma$ is a homomorphism with respect to composition of maps and pointwise multiplication, and $$\|\Phi\|=\inf\{ \|f\|\,\|g\|\colon f,g\in L^\infty(X,\ell^2),\
\Gamma(\Phi)=[\langle f,g\rangle]\}$$ and this infimum is attained. In the discrete case, this reduces to [@paulsen-book Corollary 8.8].
\[lem:direct-sums-cts\] Let $\Phi\in {\mathit{NCB}_\D(\B(\H))}$. If $\Gamma(\Phi)=[\phi]$ and $\{R_j\}_{j\ge1}$, $\{C_j\}_{j\ge1}$ are two countable Borel partitions of $X$ with $\phi^{-1}(\bC\setminus\{0\})\subseteq \bigcup_{j\ge1} R_j\times
C_j$, then $\|\Phi\|=\sup_j\|\Phi_j\|$ where $\Gamma(\Phi_j)=[\chi_{R_j\times C_j}\cdot\phi]$.
Let $P_j=\chi_{R_j}$ and $Q_j=\chi_{C_j}$. Note that $\{P_j\}$ and $\{Q_j\}$ are then partitions of the identity in $\D$. By [@kat-paulsen Theorem 10], the map $\Psi$ given by $\Psi(T)=\sum_{j\ge1} P_j TQ_j$ is in ${\mathit{NCB}_\D(\B(\H))}$, and $$\Gamma(\Psi)=[\chi_{K}]\quad\text{where}\quad K=\bigcup_{j\ge1}R_j\times
C_j.$$ Since $\Gamma$ is a homomorphism and $\phi=\chi_K\cdot\phi$, we have $$\Gamma(\Phi)=\Gamma(\Psi)\cdot \Gamma(\Phi)=\Gamma(\Psi\circ \Phi),$$ hence $\Phi=\Psi\circ \Phi$. Let $\Psi_j\in{\mathit{NCB}_\D(\B(\H))}$ be given by $\Psi_j(T)=P_j TQ_j$, and let $\Phi_j=\Psi_j\circ
\Phi$. Since $\Gamma$ is a homomorphism, $\Gamma(\Phi_j)=\Gamma(\Psi_j\circ \Phi)=[\chi_{R_j\times C_j}\cdot
\phi]$, and for any $T\in \B(\H)$, $$\|\Phi(T)\|=\|\Psi\circ \Phi(T)\|=\sup_{j\ge1} \|P_j\Phi(T)Q_j\|=\sup_{j\ge1} \|\Phi_j(T)\|.
\qedhere$$
\[prop:biggraph\] Let $\Phi\in{\mathit{NCB}_\D(\B(\H))}$ be idempotent and let $\eta>\|\Phi\|$.
1. There exist a Borel set $G\subseteq X\times X$ and weakly Borel measurable functions $f,g\colon X\to \ell^2$ so that
1. $\Gamma(\Phi)=[\chi_G]$;
2. $\chi_G(x,y)=\langle f(x),g(y)\rangle$ for all $x,y\in
X$; and
3. $\sup_{x,y\in X} \|f(x)\|\,\|g(y)\|<\eta$.
2. For such a set $G$, there are two countable families of disjoint Borel subsets of $X$, say $\{R_j\}$ and $\{C_j\}$, so that the components of $G$ are the Borel sets $G_j=G[R_j,C_j]$, and there are maps $\Phi_j\in {\mathit{NCB}_\D(\B(\H))}$ with $\Gamma(\Phi_j)=[\chi_{G_j}]$ and $\|\Phi\|=\sup_j \|\Phi_j\|$.
3. If $F$ is a countable induced subgraph of $G$, then $\|F\|<\eta$.
4. If $\operatorname{tf}(G)$ is countable, then $\|\Phi\|\leq \|\operatorname{tf}(G)\|$.
<!-- -->
1. We have $\Phi=\Phi\circ \Phi$, and $\Gamma$ is a homomorphism. Hence if $\phi\colon X\times X\to \bC$ is Borel with $\Gamma(\Phi)=[\phi]$, then $[\phi]=\Gamma(\Phi)=\Gamma(\Phi)^2=[\phi^2]$, from which it follows that $[\phi]=[\chi_{G}]$ where $G$ is the Borel set $G=\phi^{-1}(1)$. Hence there are $f,g\in L^\infty(X,\ell^2)$ with $[\chi_G]=[\langle f,g\rangle]$ and $\|f\|\,\|g\|=\|\Phi\|<\eta$; by multiplying $f$ and $g$ by $\chi_{X\setminus N}$ for some null set $N$ and removing the marginally null set $(N\times X)\cup (X\times N)$ from $G$, we can achieve both pointwise equality $\chi_G=\langle f,g\rangle$ on $X\times X$ and $\sup_{x,y\in X}\|f(x)\|\,\|g(y)\|<\eta$.
2. As in [@kat-paulsen], we can use the following argument of Arveson to show that $G$ is a countable union of Borel rectangles. Since $\ell^2$ is separable, the open set $\{(\xi,\eta)\in\ell^2\times \ell^2\colon \langle
\xi,\eta\rangle\ne0 \}$ is a countable union $\bigcup_{n\ge1}
U_n\times V_n$ where $U_n,V_n$ are open subsets of $\ell^2$. Let $A_n=f^{-1}(U_n)$ and $B_n=g^{-1}(V_n)$. These are Borel sets, and $G=\bigcup_{n\ge1} A_n\times B_n$. Discard empty sets, so that $A_n,B_n\ne\emptyset$ for all $n\ge1$.
For each $j\in\bN$, the component of $G$ containing $A_j$ and $B_j$ may be found as follows. Let $W_j^1=\{j\}$, and for $k\ge1$, let $$W_j^{k+1}=\{n\in\bN\colon \exists\,m\in W_j^k \text{
s.t.~either $A_m\cap A_n\ne\emptyset$ or $B_m\cap
B_n\ne\emptyset$}\}.$$ Let $W_j=\bigcup_{k\ge1} W_j^{k}$, and consider the Borel sets $R_j=\bigcup_{n\in W_j} A_n$ and $C_j=\bigcup_{n\in W_j}B_n$. By construction, $G_j=G[R_j,C_j]$ is Borel. It is easy to see that $G_j$ is the component of $G$ containing $A_j$ and $B_j$, and that every component of $G$ is of this form for some $j$. Discard duplicates and relabel so that $G_j\ne G_k$ for $j\ne k$; the families $\{R_j\}$ and $\{C_j\}$ are then disjoint. Extending each family to a countable Borel partition of $X$ and applying Lemma \[lem:direct-sums-cts\], we see that $\|\Phi\|=\sup_j\|\Phi_j\|$ where $\Phi_j=\Gamma^{-1}([\chi_{G_j}])$.
3. Let $F$ be a countable induced subgraph of $G$, so that $F=G[A,B]$ for countable sets $A,B\subseteq X$. Considering the functions $f|_A\in \ell^\infty(A,\ell^2)$ and $g|_B\in
\ell^\infty(B,\ell^2)$, we see that $\|F\|<\eta$ by [@paulsen-book Corollary 8.8].
4. Now suppose that $F=\operatorname{tf}(G)=G[A,B]$. By [@paulsen-book Corollary 8.8], there are functions $f_A\in \ell^\infty(A,\ell^2)$ and $g_B\in
\ell^\infty( B, \ell^2)$ so that $$\langle f_A,g_B\rangle = \chi_{\operatorname{tf}(G)}\colon A\times B\to \{0,1\}
\text{ and }\|f_A\|\,\|g_B\|=\|\operatorname{tf}(G)\|.$$ For $x,y\in X$, write $$G_x=\{y\in X\colon (x,y)\in G\} \quad\text{and}\quad G^y=\{x\in X\colon
(x,y)\in G\}.$$ For each $a\in A$ and $b\in B$, the equivalence classes $S(a)=\{ x\in X\colon G_a=G_x\}$ and $T(b)=\{y\in X\colon
G^b=G^y\}$ are all Borel; indeed, $$S(a)=f^{-1}\left( f(a)+\{g(y)\colon y\in
X\}^\perp\right)$$ and $$T(b)=g^{-1}\left(g(b)+\{f(x)\colon x\in X\}^\perp\right).$$ Hence $\tilde f=\sum_{a\in A} f_A(a)\chi_{S(a)}$ and $\tilde g=\sum_{b\in
B} g_B(b)\chi_{T(b)}$ are Borel functions $X\to\ell^2$, and $\chi_{G}(x,y)=\langle \tilde f(x),\tilde g(y)\rangle$ for every $x,y\in X$. So $$\|\Phi\|\leq \|\tilde f\|\,\|\tilde g\| \leq
\|f_A\|\,\|g_B\|=\|\operatorname{tf}(G)\|.\qedhere$$
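The component construction in step 2 has a straightforward finite analogue: rectangles $A_n\times B_n$ lie in the same component of $G$ exactly when they are joined by a chain of overlaps in either coordinate. As an illustration only (not part of the proof), here is a minimal Python sketch that merges finitely many rectangles with union–find; all names are ours:

```python
def components(rects):
    """Group rectangles (A, B) of a bipartite graph G = union of A_n x B_n
    into connected components: merge two rectangles whenever their A-sides
    or their B-sides intersect (a finite analogue of the W_j construction)."""
    parent = list(range(len(rects)))

    def find(i):
        # find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, (Ai, Bi) in enumerate(rects):
        for j, (Aj, Bj) in enumerate(rects[:i]):
            if Ai & Aj or Bi & Bj:
                union(i, j)

    # collect R_j = union of A-sides, C_j = union of B-sides per component
    comps = {}
    for i, (A, B) in enumerate(rects):
        R, C = comps.setdefault(find(i), (set(), set()))
        R |= A
        C |= B
    return list(comps.values())
```

For instance, two rectangles sharing a row-set element and a third sharing a column-set element with the second all merge into one component.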
Let $\H$ be a separable Hilbert space, and let $\D$ be a masa in $\B(\H)$. The set $\N(\D)=\{\|\Phi\|\colon \Phi\in {\mathit{NCB}_\D(\B(\H))},\
\text{$\Phi$ idempotent}\}$ satisfies $$\N(\D)\subseteq \{
\eta_0,\eta_1,\eta_2,\eta_3,\eta_4,\eta_5\}\cup [\eta_6,\infty).$$
Let $k\in \{1,2,3,4,5,6\}$ and suppose that $\Phi\in {\mathit{NCB}_\D(\B(\H))}$ is idempotent with $\eta_k>\|\Phi\|$. Taking $\eta=\eta_k$, let $G,f,g,\Phi_j$ be as in Proposition \[prop:biggraph\]. Since $\|\Phi\|=\sup_j\|\Phi_j\|$, every $\Phi_j$ has $\|\Phi_j\|<\eta_k$. Hence we may assume that $\Phi=\Phi_1$, so that $G$ is connected. Recall from \[sec:bipartite\] that $\F(G)$ is the set of (isomorphism classes of) finite, connected, twin-free subgraphs of $G$. If $F\in
\F(G)$, then $\|F\|<\eta_k$ by Proposition \[prop:biggraph\](3), so $\|F\|\leq \eta_{k-1}$ by Theorem \[thm:gaps\]. By Corollary \[cor:characterisations\], $\F(G)$ consists entirely of induced subgraphs of some finite bipartite graph, so $\F(G)$ is finite. By Lemma \[lem:finite-connected-subgraphs\], $\operatorname{tf}(G)\in
\F(G)$, so by Proposition \[prop:biggraph\](4), $\|\Phi\|\leq
\|\operatorname{tf}(G)\|\leq \eta_{k-1}$.
Let $\H$ be an infinite-dimensional separable Hilbert space. Do we have $\N(\D)=\N$ for every masa $\D$ in $\B(\H)$?
Random Schur idempotents {#sec:random}
========================
For $p\in (0,1)$ and $m,n\in\bN$, let $\G(m,n,p)$ be the probability space of bipartite graphs in $\Gamma(m,n)$ where each of the possible $mn$ edges appears independently with probability $p$.
How does $\bE_{m,n,p}(\|G\|)$, the expected value of the norm of the Schur idempotent arising from $G\in \G(m,n,p)$, behave as a function of $m$ and $n$?
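Although the Schur multiplier norm of a general 0–1 matrix is hard to compute exactly, the elementary bound $\|G\|\ge \|A_G\|/\sqrt{mn}$ (where $A_G$ is the biadjacency matrix, $\|\cdot\|$ the operator norm, and the bound comes from applying the multiplier to the all-ones matrix, whose norm is $\sqrt{mn}$) already allows a crude numerical probe of this question. A rough Monte Carlo sketch, with all function names ours:

```python
import numpy as np

def schur_lower_bound(adj):
    """Lower bound ||A||_Schur >= ||A||_op / sqrt(m*n), obtained by
    applying the Schur multiplier to the all-ones matrix."""
    m, n = adj.shape
    return np.linalg.norm(adj, 2) / np.sqrt(m * n)

def expected_lower_bound(m, n, p, trials=200, seed=0):
    """Monte Carlo estimate over G(m, n, p) of the lower bound above."""
    rng = np.random.default_rng(seed)
    vals = [
        schur_lower_bound((rng.random((m, n)) < p).astype(float))
        for _ in range(trials)
    ]
    return float(np.mean(vals))
```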
Here is a crude result in this general direction.
$\bE_{m,n,p}(\|G\|)\to \infty$ as $\min\{m,n\}\to \infty$.
Let $s,t\in \bN$, fix $H\in \Gamma(s,t)$ and let us write $\bP_{m,n,p}( H\leq G )$ for the probability that a random graph $G\in \G(m,n,p)$ contains an induced subgraph isomorphic to $H$. We claim that $$\bP_{m,n,p}( H\leq G ) \to 1\quad\text{as } \min\{m,n\}\to
\infty.$$ Indeed, as in [@diestel Proposition 11.3.1], one can see that the complementary event $H\not\leq G$ satisfies $$\bP_{m,n,p}(H\not\leq G) \leq
(1-r)^{\min\{\lfloor{m/s}\rfloor,\lfloor{n/t}\rfloor\}}\to
0\quad\text{as } \min\{m,n\}\to \infty$$ where $r>0$ is the probability that a random graph in $\G(s,t,p)$ is isomorphic to $H$. Hence $$\begin{aligned}
\bE_{m,n,p}(\|G\|)&=\sum_{G\in \Gamma(m,n)} \|G\|\,\bP_{m,n,p}(\{G\})\\
&\geq \sum_{H\leq G\in \Gamma(m,n)} \|G\| \,\bP_{m,n,p}(\{G\})\\
&\geq \|H\|\sum_{H\leq G\in \Gamma(m,n)}\,\bP_{m,n,p}(\{G\})=
\|H\|\, \bP_{m,n,p}(H \leq G),
\end{aligned}$$ so $$\lim_{\min\{m,n\}\to \infty} \bE_{m,n,p}(\|G\|) \geq
\sup\{ \|H\|\colon H\in \Gamma(s,t),\ s,t\in\bN\}=\infty.\qedhere$$
For $p=1/2$, we can say more about the growth rate of $\bE_{m,n,p}(\|G\|)$.
\[lem:probs\] Let $m,n\in\bN$, fix an $m\times n$ matrix $A$ with complex entries and let $\mu$ be the uniform probability measure on $M_{m,n}(\{-1,1\})$. If $$\int_{\epsilon\in M_{m,n}(\{-1,1\})}
\|\epsilon\schur A\|_\schur\,d\mu(\epsilon)=M,$$ then $$\|\epsilon\schur A\|_\schur\leq 4M$$ for every $\epsilon\in M_{m,n}(\{-1,1\})$.
Let $\nu$ be the probability measure on $M_{m,n}(\bT)=\bT^{m\times
n}$ which is the product of $m\times n$ copies of normalised Haar measure on $\bT$. The arguments in [@kahane 2.6] show that $$\int_{z\in M_{m,n}(\bT)} \|\operatorname{Re}(z)\schur A\|_\schur \,d\nu(z) \leq
M$$and$$\int_{z\in M_{m,n}(\bT)} \|\operatorname{Im}(z)\schur
A\|_\schur \,d\nu(z) \leq M,$$ where $\operatorname{Re}(z)=[\operatorname{Re}(z_{ij})]$ and $\operatorname{Im}(z)=[\operatorname{Im}(z_{ij})]$. Hence $$\int_{z\in M_{m,n}(\bT)}
\|z\schur A\|_\schur \,d\nu(z)\leq 2M.$$ By [@pisier Theorem 2.2(i) and Remark 2.3], $A=B+C$ where $c(B)\leq 2M$ and $c(C^t)\leq 2M$. For any $\epsilon\in M_{m,n}(\{-1,1\})$, we have $$\|\epsilon\schur B\|_{\schur} \leq c(\epsilon\schur B)=c(B)\leq 2M$$ and similarly $\|\epsilon\schur C\|_{\schur}\leq 2M$, so $$\|\epsilon\schur A\|_\schur \leq
\|\epsilon\schur B\|_\schur+\|\epsilon\schur C\|_\schur \leq 4M.\qedhere$$
$\bE_{m,n,1/2}(\|G\|)\geq \frac1{8}\sqrt{\frac k2}-1$ where $k=\min\{m,n\}$.
Let $\mu$ be the probability measure of the lemma, and write $$M=\int
\|\epsilon\|_\schur\, d\mu(\epsilon).$$ Note that $${\bE_{m,n,1/2}(\|G\|)}= \int \|2\epsilon -\mathbf 1\|_{\schur}\,d\mu(\epsilon)
\geq 2 M - 1,$$ where $\mathbf 1$ is the all ones matrix. On the other hand, [@dav-don Theorem 2.4] implies that there is a matrix $\epsilon\in M_{m,n}(\{\pm1\})$ with $\|\epsilon\|_{\schur}\geq \frac14 \sqrt{\frac{mn}{m+n}}\geq
\frac14\sqrt{\frac k2}$. By Lemma \[lem:probs\], $M\geq \frac14
\|\epsilon\|_{\schur}$. Combining these three inequalities gives the desired lower bound on ${\bE_{m,n,1/2}(\|G\|)}$.
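Explicitly, the combination of the three inequalities reads $$\bE_{m,n,1/2}(\|G\|)\;\geq\; 2M-1\;\geq\; \tfrac12\,\|\epsilon\|_{\schur}-1\;\geq\; \tfrac12\cdot\tfrac14\sqrt{\tfrac k2}-1\;=\;\tfrac18\sqrt{\tfrac k2}-1.$$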
Acknowledgements {#acknowledgements .unnumbered}
================
The author is grateful to Chris Boyd, Ken Davidson, Charles Johnson and Ivan Todorov for stimulating discussion and correspondence on topics related to this paper.
[99]{}
J.R. Angelos, C.C. Cowen and S.K. Narayan, *Triangular truncation and finding the norm of a Hadamard multiplier*, Lin. Alg. Appl. [**170**]{} (1992), 117–135.
G. Bennett, *Schur multipliers*, Duke Math. J. [**44**]{} (1977), 603–639.
R. Bhatia, M.D. Choi and C. Davis, *Comparing a matrix to its off-diagonal part*, Operator Theory: Advances and Applications [**40**]{} (1989), 151–164.
C.C. Cowen, P.A. Ferguson, D.K. Jackman, E.A. Sexauer, C. Vogt and H.J. Woolf, *Finding norms of Hadamard multipliers*, Lin. Alg. Appl. [**247**]{} (1996), 217–235.
K.R. Davidson, *Nest algebras*, Pitman (1988).
K.R. Davidson and A.P. Donsig, *Norms of Schur multipliers*, Illinois J. Math. [**51**]{} (2007), 743–766.
R. Diestel, *Graph theory*, Graduate Texts in Mathematics 173, Springer-Verlag (2000).
B.E. Forrest and V. Runde, *Norm one idempotent cb-multipliers with applications to the Fourier algebra in the cb-multiplier norm*, Canad. Math. Bull. [**54**]{} (2011), 654–662.
J.-P. Kahane, *Some random series of functions*, second edition, CUP (1985).
A. Katavolos and V. Paulsen, *On the ranges of bimodule projections*, Canad. Math. Bull. [**48**]{} (2005), no. 1, 91–111.
M. Knapp, *Sines and cosines of angles in arithmetic progression*, Math. Magazine [**82**]{} (2009), no. 5, 371–372.
S. Kwapien and A. Pelczynski, *The main triangle projection in matrix spaces and its application*, Studia Math. [**34**]{} (1970), 43–68.
R. Mathias, *The Hadamard operator norm of a circulant and applications*, SIAM J. Matrix Anal. Appl. [**14**]{} (1993), 1152–1167.
L. Livschits, *A note on 0–1 Schur multipliers*, Lin. Alg. Appl. [**22**]{} (1995), 15–22.
V.I. Paulsen, *Completely bounded maps and operator algebras*, Cambridge Studies in Advanced Mathematics 78, CUP (2002).
V. Paulsen, S. Power and R. Smith, *Schur products and matrix completions*, J. Funct. Anal. [**85**]{} (1989), 151–178.
G. Pisier, *Multipliers and lacunary sets in non-amenable groups*, Amer. J. Math. [**117**]{} (1995), 337–376.
A.-M. Popa, *On completely bounded multipliers of the Fourier algebras $A(G)$*, PhD thesis, University of Illinois at Urbana-Champaign, 2008.
Z.-J. Ruan, *On real operator spaces*, Acta Math. Sin., English Series [**19**]{} (2003), 485–496.
Z.-J. Ruan, *Complexifications of real operator spaces*, Illinois J. Math. [**47**]{} (2003), 1047–1062.
A.M. Sinclair and R.R. Smith, *Finite von Neumann algebras and masas*, LMS Lect. Notes 351, CUP (2008).
R.R. Smith, *Completely bounded module maps and the Haagerup tensor product*, J. Funct. Anal. [**102**]{} (1991), 156–175.
W.C. Yueh, *Eigenvalues of several tridiagonal matrices*, Applied Math. E-Notes [**5**]{} (2005), 66–74.
---
author:
- Alexandre Emsenhuber
- Christoph Mordasini
- Remo Burn
- Yann Alibert
- Willy Benz
- Erik Asphaug
bibliography:
- 'manu.bib'
- 'add.bib'
date: 'Received DD MMM YYYY / Accepted DD MMM YYYY'
subtitle: 'II. Planetary population of solar-like stars and overview of statistical results[^1]'
title: 'The New Generation Planetary Population Synthesis (NGPPS)'
---
Introduction
============
Exoplanets are common. Results from the *Kepler* survey show that, on average, there are more exoplanets than stars, at least in the galactic environment probed by *Kepler* [e.g., @2018AJMulders]. The number of exoplanets discovered, principally through large surveys, either by radial velocity (RV), such as HARPS [@2011MayorArxiv] and Keck & Lick [@2010ScienceHoward], or by transit, such as *CoRoT* [@2013IcarusMoutou] or *Kepler* [@2010ScienceBorucki; @2018ApJSThompson], makes it possible to constrain the properties of exoplanetary systems: their masses, radii, distances, eccentricities, spacing, and mutual inclinations. In addition, various correlations with stellar properties have been determined [@2003AASantos; @2011MayorArxiv; @2018AJPetigura].
Yet, understanding the formation and evolution of these planets remains a challenge. Observations of the progenitors (circumstellar discs) are plentiful, but only a few forming planets are known, such as PDS 70b [@2018AAKeppler; @2018AAMueller]. Reliance on theoretical modelling of the formation stage is therefore necessary. A model that reproduces the final systems starting from the initial state can provide valuable information about how the formation and evolution of planetary systems work.
Constraints of the final properties
-----------------------------------
The constraints on planets can be divided into three main categories: 1) the characteristics of the planets themselves, for example their masses, radii, distances, and eccentricities; 2) the properties of planetary systems and their diversity in terms of architecture, such as their multiplicity, mutual spacing, and correlations between the occurrences of different planet types; and 3) the correlations between the previous items and stellar properties, such as metallicity.
Giant planets within 5– around FGK stars have a frequency of 10- [@2008PASPCumming; @2010PASPJohnson; @2011MayorArxiv]. Earlier works found that giants had a constant probability of occurrence in $\log(P)$ between 2 and 2000 days [@2008PASPCumming], with an excess of hot Jupiters, which occur around 0.5- of Sun-like stars [@2010ScienceHoward; @2011MayorArxiv; @2012ApJWright]. More recent analyses found a decrease in the occurrence rate with distance, where the onset of the reduction could be already at - [@2016ApJBryan], and an occurrence rate for detectable distant (tens to hundreds of au), massive planets [@2016PASPBowler; @2016AAGalicher]. This means that the frequency of giant planets must peak at intermediate distances, possibly near the snow line [@2019ApJFernandes].
System-level statistics provide information beyond what is available at the whole-population level. The diversity within each system compared to that of the whole population is a good example: planets in small-mass systems have similar masses [@2017ApJMillholland], sizes and spacing [@2018AJWeiss]. Planet multiplicity tends to decrease for systems that host more massive planets [@2011ApJLatham]. For giant planets, hot Jupiters do not usually have nearby companions [@2012PNASSteffen], but roughly half of them have more distant ones [@2014ApJKnutson]. Conversely, distant giants also have a multiplicity rate of roughly [@2016ApJBryan; @2019ApJWagner]. There are also correlations between super-Earths and Jupiter analogues [@2019AJBryan] and between super-Earths and cold giants [@2018AJZhuWu]. This will be the subject of a companion work [@NGPPS3]. In addition, @2016ApJBryan observed that planets in multiple systems have, on average, a higher eccentricity than single giant planets; a different result from previous studies, which found that planets in multiple systems had on average lower eccentricities [@2013ScienceHoward; @2015PNASLimbach].
Correlations between stellar and planetary properties provide important information on the formation mechanism. The properties of protoplanetary discs, especially their heavy-element content, are linked to the host star's metallicity [@2016ApJGaspar], as both form from the same molecular cloud. Giant planets are preferentially found around metal-rich stars. For low-mass planets, such a correlation still exists, although it is weaker.
Further, we now have correlations between architecture and metallicity, with compact multi-planetary systems being more common around metal-poor stars [@2018ApJBrewer], while systems around metal-rich stars are more diverse [@2018AJPetigura]. Also, the eccentricities of giant planets around metal-rich stars tend to be higher than those around metal-poor stars [@2018ApJBuchhave].
Constraints on protoplanetary discs
-----------------------------------
From surveys of star-forming regions, we can determine the distribution of some characteristics of protoplanetary discs. The percentage of stars with a disc decreases with age in an exponential fashion, with a characteristic time of a few [@2009AIPCMamajek; @2010AAFedele]. Correlations have also been found between disc masses and sizes [@2010ApJAndrews; @2018ApJAndrewsA; @2017ApJTripathi; @2020ApJHendler], stellar masses [@2013ApJAndrews; @2016ApJAnsdell; @2016ApJPascucci], and the accretion rate onto the star [@2016AAManaraB; @2019AAManara; @2017ApJMulders].
With these observations, it is possible to retrieve the characteristics of discs at the early stages of their evolution [@2018ApJSTychoniec], which are relevant for the initial conditions, as well as constraints on the transport mechanism at work [@2017ApJMulders].
Linking both
------------
To link protoplanetary discs to final systems, we need a formation model. Yet, modelling planetary formation is not an easy task, as many physical effects occur concurrently: the growth of micron-sized dust to planetary-sized bodies, the accretion of gas, orbital migration, and dynamical interactions in multi-planetary systems. In @NGPPS1 [hereafter ], we present an update of the *Bern* model of planetary formation and evolution. This is a global model, i.e. it includes the relevant processes that occur from the accretion of the protoplanets up to their long-term evolution.
Nevertheless, the constraints derived from the observation of a single exoplanetary system, compared with the number of model parameters, do not permit an understanding of planetary formation at the level of individual systems. Working at the population level, by means of planetary population synthesis, is a much more powerful way to understand planetary formation in general. It allows one to determine how the different mechanisms that occur during the formation of planetary systems interact.
Theoretical models that are able to reproduce the characteristics of the observed exoplanets can be used to make predictions about the real population, which is helpful when designing future observations and instruments. For discovered planets, they can be used to propose a pathway for their formation [@2020NatureArmstrong], or to point to other formation mechanisms if the planets cannot be reproduced at all [@2019ScienceMorales].
This work
---------
In this work, we apply the Generation III *Bern* model of planetary formation and evolution described in to obtain synthetic populations of planetary systems. We provide the methods that we use to perform population synthesis, which are an update from .
We then present five synthetic populations of planetary systems around solar-like stars in which we vary the initial number of embryos per system. The goal is to test the convergence of our model with respect to this parameter. The populations with a larger number of embryos are able to follow the formation of terrestrial planets (), but they are expensive to compute. On the other hand, the populations with a lower number of embryos are much cheaper to compute (with the extreme case of a single embryo per system), but fail to properly follow terrestrial planets. This test will be useful for future works in this series on the effects of the parameters of the model or of physical processes, which require the computation of multiple populations.
Formation and evolution model {#sec:model}
=============================
The model is described in ; here we give only a brief summary. In our coupled formation and evolution model, we first model the planets' main formation phase for a fixed time interval (set to ), during which planets accrete solids and gas, migrate, and interact via the N-body. Afterwards, in the evolutionary phase, we follow the thermodynamical evolution of each planet individually to .
The formation model derives from the work of . It follows the evolution of a viscous accretion disc [@1952ZNatALust; @1974NMRASLyndenBellPringle]. The turbulent viscosity is provided by the standard $\alpha$ parameter. Solids are represented by planetesimals, whose dynamical state is set by the drag from the gas and the stirring by the other planetesimals and the growing protoplanets. This disc provides the gas and solids from which the protoplanets can accrete, while also affecting the bodies inside it through gas-driven planetary migration.
The formation of the protoplanets is based on the core accretion paradigm [@1974IcarusPerriCameron; @1980PThPhMizuno], assuming planetesimal accretion in the oligarchic regime . Gas accretion is initially governed by the ability of the planet to radiate away the potential energy [@1996IcarusPollack; @2015ApJLeeChiang], and so the envelope mass is determined by solving the internal structure equations [@1986IcarusBodenheimerPollack]. Once the planet is massive enough (on the order of ), cooling becomes efficient, and runaway gas accretion can occur. In that situation, the envelope is no longer in equilibrium with the surrounding gas disc and contracts [@2000IcarusBodenheimer] while gas accretion is limited by the supply of the gas disc.
Multiple embryos can form concurrently in each system, and the gravitational interactions are modelled using the `mercury` *N*-body package [@1999MNRASChambers].
Once the formation stage is finished, the model transitions to the evolutionary phase, where planets are followed individually to . The planetary evolution model is based on and includes atmospheric escape [@2014ApJJin] and migration due to tides raised on the star [@2011AABenitezLlambay].
Population synthesis
====================
Quantity Value
--------------------------------------------------- ------------
Stellar mass $\mstar$
Disc viscosity parameter $\alpha$
Power law index of the gas disc $\betag$ 0.9
Power law index of the solids disc $\betas$ 1.5
Characteristic radius of the solids disc $\rcuts$ $\rcutg/2$
Planetesimal radius
Planetesimal density (rocky)
Planetesimal density (icy)
Embryo mass $\mstart$
Opacity reduction factor $\fopa$
: Fixed parameters for the formation and evolution model.[]{data-label="tab:params"}
To perform a population synthesis of planetary systems, we use a Monte Carlo approach for the initial conditions of the discs, in a similar fashion to what has been done in and . The Monte Carlo variables are selected as:
- The initial mass of the gas disc $\mgas$
- The external photo-evaporation rate $\mwind$
- The dust-to-gas ratio $\fpg=M_\mathrm{s}/\mgas$
- The inner edge of the gas disc $\rin$
- The initial location of the embryos
The other fixed parameters used in this study are provided in Table \[tab:params\]. They are taken to be the same in all systems.
In the rest of this section, we discuss each Monte Carlo variable and their distributions, as well as the related fixed parameters.
Gas disc mass
-------------
Source $\mu$ $\sigma$
---------------------------------------- ------- ----------
Fit to Taurus^*a*^ -1.66 0.74
Fit to Ophiuchus^*a*^ -1.38 0.49
@2010ApJAndrews -1.66 0.56
Fit to class I from @2018ApJSTychoniec -1.49 0.35
Class I from @2019ApJWilliams -2.94 0.86
: Mean and standard deviation of the normal distribution of the disc mass for different observational sample.[]{data-label="tab:mdisc"}
\
^*a*^ Fit to the values obtained by @1996NatureBeckwithSargent performed by .
![Probability density functions for the different distributions given in Table \[tab:mdisc\]. In addition, we show the histogram of Class I discs from Fig. 12 of @2018ApJSTychoniec in black. All the curves are normalised so that the surface below them is unity.[]{data-label="fig:mdsic"}](mdiscs.pdf)
It is very difficult to directly observe H~2~ in protoplanetary discs, so the most reliable method to determine disc masses remains the measurement of the continuum emission of the dust. To recover the gas mass, a dust-to-gas ratio similar to that of the interstellar medium is applied [@1996NatureBeckwithSargent; @2005ApJAndrewsWilliams; @2010ApJAndrews].
Several observational determinations of protoplanetary disc masses are reported in Table \[tab:mdisc\] and plotted in Fig. \[fig:mdsic\]. The first two values, the fits to the distributions of the Taurus and Ophiuchus star-forming regions, were obtained by fitting log-normal distributions to the results of @1996NatureBeckwithSargent. The third value is directly given in @2010ApJAndrews, while for the fourth one we applied the same procedure as for the first two, but using the histogram of Class I disc masses reported in Fig. 12 of @2018ApJSTychoniec. Finally, we provide ALMA data of Class I discs in the Ophiuchus star-forming region from @2019ApJWilliams. The latter were converted to gas masses using a gas-to-dust mass ratio of 100:1 (as in @2018ApJSTychoniec).
There is more than one order of magnitude difference between the results from ALMA for the Ophiuchus star-forming region [@2019ApJWilliams] and others, such as those obtained with the VLA for Perseus [@2018ApJSTychoniec]. These differences are discussed in @2020arXivTychoniec, where the authors argue that 1) their median masses from the VLA are more complete and 2) Class 0/I objects are more likely to be representative of discs at the early stages of planetary formation. The second point is relevant to our modelling, as the model used in this work begins once the protoplanetary disc has formed and dust has grown into planetesimals. Class I discs are hence the most relevant for our study, and the work of @2018ApJSTychoniec is thus the best suited for our initial conditions; this is the one we select. To avoid extreme values, we only allow disc masses between and . With this upper mass limit, the discs are always self-gravitationally stable.
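As an illustration, the adopted distribution (log-normal in $\log_{10}(\mgas/\si{\msun})$ with mean $-1.49$ and deviation $0.35$, the fit to the Class I discs of @2018ApJSTychoniec from Table \[tab:mdisc\]) can be sampled with rejection of out-of-range values. A minimal Python sketch; the truncation bounds are placeholders for the limits quoted above, and all function names are ours:

```python
import numpy as np

def sample_disc_masses(n, mu=-1.49, sigma=0.35, lo=None, hi=None, seed=0):
    """Draw gas disc masses (Msun) with log10(Mgas/Msun) ~ Normal(mu, sigma).
    Values outside [lo, hi] are rejected; the bounds are placeholders for
    the limits quoted in the text."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        m = 10.0 ** rng.normal(mu, sigma)
        if (lo is None or m >= lo) and (hi is None or m <= hi):
            out.append(m)
    return np.array(out)
```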
Compared to the populations obtained with earlier versions of the model, our disc masses are smaller than those of , which used the parameters derived from fitting the values in the Ophiuchus star-forming region from @1996NatureBeckwithSargent. It should be noted that, unlike , we model the entire disc and not only the innermost , so we do not need to scale the disc masses to obtain only the innermost region. However, the distribution we adopted has a higher mean than the one obtained by @2010ApJAndrews, so we have overall larger disc masses than in the works of . , , and also used the results from @2010ApJAndrews, albeit in a different fashion, where initial masses were bootstrapped from the specific values of the observed discs.
Initial gas surface density: Spatial distribution
-------------------------------------------------
With spatially resolved discs, it is possible to estimate the distribution of the material as a function of the distance from the star. The surface density typically goes as $r^{-1}$ out to a characteristic radius, beyond which it follows an exponential decrease [@2008ApJHughes; @2009ApJAndrews; @2010ApJAndrews]. While in principle both the index of the power law and the characteristic radius would require their own distributions, we decided against adding more parameters to the initial conditions of our populations.
The power-law index is fixed to $\betag=0.9$, which is consistent with the results of @2010ApJAndrews. For the characteristic radius $\rcutg$ as a function of disc mass, we use the following relationship, taken from Fig. 10 of @2010ApJAndrews, $$\frac{\mgas}{\SI{2e-3}{\msun}}=\left(\frac{\rcutg}{\SI{10}{\au}}\right)^{1.6}.$$ This relationship is somewhat different from the $\mgas\propto\rcutg^2$ found in more recent works [@2017ApJTripathi; @2018ApJAndrewsA]. The latter is, however, not universal across different star-forming regions [@2020ApJHendler]. For this reason, we kept the shallower relationship between disc masses and sizes.
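The mass–size relation above can be inverted to set the characteristic radius from the sampled disc mass; a short sketch (function names are ours, masses in $\si{\msun}$, radii in au):

```python
def rcut_gas(mgas_msun):
    """Characteristic radius of the gas disc in au, inverting
    Mgas / (2e-3 Msun) = (rcut / 10 au)^1.6."""
    return 10.0 * (mgas_msun / 2e-3) ** (1.0 / 1.6)

def rcut_solids(mgas_msun):
    """Characteristic radius of the solids disc, rcut_gas / 2
    (Table [tab:params])."""
    return 0.5 * rcut_gas(mgas_msun)
```

For instance, a disc ten times more massive than the $\SI{2e-3}{\msun}$ reference is a factor $10^{1/1.6}\approx4.2$ larger.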
External photo-evaporation rate {#sec:mwind}
-------------------------------
![Fractions of stars with a protoplanetary disc as function of their age. The black line shows our results, while the blue line follow the exponential decay with a time scale of from @2009AIPCMamajek. The purple points are from @2017PhDAnsdell.[]{data-label="fig:lifetimes"}](lifetimes.pdf)
The photo-evaporation rate $\mwind$ and the viscosity parameter $\alpha$ are the main parameters that determine the lifetime of the gas discs. This is a degenerate problem, as increasing either $\alpha$ or $\mwind$ leads to shorter disc lifetimes. However, $\alpha$ also sets the mass that is accreted onto the star, which we can use to lift the degeneracy. Our aim is then to find combinations of $\alpha$ and $\mwind$ that provide accretion rates onto the star and disc lifetimes that are in agreement with observations.
@2017ApJMulders combined the *ALMA* observations of the disc mass $M_\mathrm{disc}$ from @2016ApJPascucci and the *X-Shooter* accretion rates onto the star $\dot{M}_\mathrm{acc}$ from @2016AAManaraA [@2017AAManara] for the Chamaeleon I star-forming region, and *ALMA* data from @2016ApJAnsdell with *X-Shooter* data from @2014AAAlcala [@2017AAAlcala] for the Lupus region. The $M_\mathrm{disc}$–$\dot{M}_\mathrm{acc}$ relation obtained by combining the two regions is shallower than linear, indicating that an effect other than viscous dissipation is potentially at play. Nevertheless, they found that for $\alpha$ values between and , it is possible to find relations that are compatible with the observations.
@2019AAManara compared the $M_\mathrm{disc}$–$\dot{M}_\mathrm{acc}$ relation predicted by a population synthesis obtained with an earlier version of the formation model used in this work to a sample extended with respect to @2017ApJMulders. The synthetic disc population for a constant $\alpha$ fails to reproduce the whole scatter observed in the actual $M_\mathrm{disc}$–$\dot{M}_\mathrm{acc}$ relationship. Nevertheless, the synthetic population of discs is able to retrieve the observed correlation between $M_\mathrm{disc}$ and $\dot{M}_\mathrm{acc}$. Thus, to avoid introducing one more Monte Carlo variable in our population synthesis scheme, we stick to a single $\alpha$ value for all discs. We selected a value of $\alpha=\num{2e-3}$, the same as in the comparison shown in @2019AAManara. This leaves only the value of the external photo-evaporation rate to determine the lifetimes of the discs.
Protoplanetary discs have lifetimes in the 3– range [@2001ApJHaisch; @2010AAFedele; @2018MNRASRichert]. Fitting the results with an exponential law gives time constants of [@2009AIPCMamajek] or [@2017PhDAnsdell].
Given the fixed $\alpha=\num{2e-3}$ and the fixed distribution of initial disc masses described above, we determine an empirical distribution of external photoevaporation rates that leads to a distribution of the lifetimes of the synthetic discs in agreement with the observed distribution of disc lifetimes.
In this way, we find a log-normal distribution with parameters $\log_{10}(\mu/(\si{\msun\per\year}))=-6$ and $\sigma=\SI{0.5}{dex}$. Note that these rates would be the actual photoevaporation rates if the modelled discs had a size of (). In reality, their outer radii are smaller ($\sim\SI{100}{\au}$) and are set dynamically by the equilibrium between viscous spreading, which acts to increase the outer radii, and external photoevaporation, which reduces them.
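Drawing from this empirical distribution is straightforward; a minimal sketch (the function name is ours):

```python
import numpy as np

def sample_mwind(n, mu_log10=-6.0, sigma_dex=0.5, seed=0):
    """External photo-evaporation rates (Msun/yr) drawn from the adopted
    log-normal: log10(Mwind / (Msun/yr)) ~ Normal(-6, 0.5)."""
    rng = np.random.default_rng(seed)
    return 10.0 ** rng.normal(mu_log10, sigma_dex, size=n)
```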
Those values were selected so that we obtain a cluster of disc lifetimes at about . We show in Fig. \[fig:lifetimes\] the corresponding lifetimes obtained using our model for the disc masses, $\alpha$, and $\mwind$ that we selected. While we miss the short-lived discs (less than ), our distribution is better able to reproduce some longer-lived clusters in the range of .
Dust-to-gas ratio
-----------------
Source $\mu$ $\sigma$
----------------- ------- ----------
@2005AASantos -0.02 0.22
@2017AJPetigura 0.03 0.18
: Mean and standard deviation of the normal distribution of $\feh$ for different observational sample.[]{data-label="tab:feh"}
The initial mass of the solids disc is linked to that of the gas disc by a factor $\fpg$. To determine the distribution of this parameter, we assume that stellar and disc metallicities are identical. Hence we have the relation [@2001ApJMurray] $$\frac{\fpg}{\fpgsun}=10^{\feh}.$$
Furthermore, we assume the dust-to-gas ratio of the Sun to be $\fpgsun=0.0149$ [@2003ApJLodders]. It should be noted that this value is considerably lower than in the first generation of our planetary population syntheses , where it was taken to be roughly three times greater.
There are multiple possibilities for the distribution of this parameter, as stellar metallicities vary among different regions of the galaxy. The choice depends on the kind of observational survey we aim to compare with. RV surveys favour stars in the neighbourhood of the Sun, while transit and, in particular, microlensing surveys can reach greater distances. For instance, the *Kepler* survey targets stars only in one specific direction, towards Cygnus and Lyra. We provide the parameters of a normal distribution from different sources in Table \[tab:feh\]. For the population syntheses presented below, we use the distribution from @2005AASantos for the Coralie RV search sample.
Note that the normal distribution is unbounded on both sides. Hence, to avoid modelling systems with metallicities that do not occur in the solar neighbourhood given galactic chemical evolution, we restrict the parameter to the range $-0.6<\feh<0.5$.
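The truncated draw and the conversion to a dust-to-gas ratio can be sketched as follows (function name ours, parameters from @2005AASantos and @2003ApJLodders):

```python
import numpy as np

F_PG_SUN = 0.0149  # solar dust-to-gas ratio (Lodders 2003)

def sample_fpg(n, mu=-0.02, sigma=0.22, seed=0):
    """Dust-to-gas ratios fpg = fpg_sun * 10**[Fe/H], with [Fe/H] drawn
    from Normal(-0.02, 0.22) (Santos et al. 2005) truncated to
    -0.6 < [Fe/H] < 0.5."""
    rng = np.random.default_rng(seed)
    feh = np.empty(0)
    while feh.size < n:
        draw = rng.normal(mu, sigma, size=2 * n)
        feh = np.concatenate([feh, draw[(draw > -0.6) & (draw < 0.5)]])
    return F_PG_SUN * 10.0 ** feh[:n]
```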
Inner edge
----------
![Probability density functions for the different distributions of inner radius as given in the text. All the curves are normalised so that the surface below them is unity.[]{data-label="fig:inner-rad"}](inner_rad.pdf)
The position of the inner edge of the disc plays an important role for the final location of the close-in planets. For planets that form and then migrate inward, migration stalls when the planet reaches a location where gas is no longer present. If planets instead form in situ, the inner edge is also linked to where planets are able to accrete.
If we assume that the inner disc is truncated by magnetospheric accretion at the corotation radius, then the location of the inner edge can be derived from rotation rates of young stellar objects (YSOs). We show several distributions of those values in Fig. \[fig:inner-rad\]: a uniform distribution in the period between and that is compatible with the results of @1999AJStassun, a normal distribution with parameters $\mu=\SI{8.3}{\day}$ and $\sigma=\SI{5}{\day}$ derived by @2019AAHeller based on the work of @2008MNRASIrwin, and a log-normal distribution with a mean $\log_{10}(\mu/\mathrm{d})=0.67617866$ and deviation $\sigma=\SI{0.3056733}{dex}$ that is derived from the work of @2017AAVenuti.
In the present work, we adopt the last one, based on @2017AAVenuti. Its mean corresponds to a rotation period of 4.7 days, or a distance of . To avoid discs having inner edges smaller than the initial stellar radius predicted by the stellar evolution model (), we truncate the distribution so that no inner edge lies within , which corresponds to a period of 0.77 days. We use the period as the main variable from which to obtain the inner radius, as it is largely independent of the stellar mass at young ages [e.g. @2012ApJHendersonStassun].
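Under the magnetospheric-truncation assumption, the inner edge follows from the corotation radius via Kepler's third law, $r^3=G\mstar P^2/(4\pi^2)$; a sketch (function name ours, period in days, stellar mass in $\si{\msun}$):

```python
def corotation_radius_au(period_days, mstar_msun=1.0):
    """Corotation radius in au via Kepler's third law,
    r^3 = G * Mstar * P^2 / (4 pi^2), using units in which
    G * Msun * yr^2 / (4 pi^2) = 1 au^3."""
    period_yr = period_days / 365.25
    return (mstar_msun * period_yr ** 2) ** (1.0 / 3.0)
```

For a solar-mass star, the mean rotation period of 4.7 days corresponds to about 0.055 au.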
It should be noted that the means of all the distributions presented here are lower than the values adopted in other works, such as the 10 days used by @2017ApJLeeChiang. The value of 10 days also corresponds to the peak of the location of the innermost planet as found by *Kepler* [@2018AJMulders].
Planetesimal disc masses
------------------------
![Distribution of initial planetesimal disc masses. The blue curve is a histogram of the actual values while the yellow curve shows a log-normal fit to the data, whose mean (in log-space) is and a standard deviation of . The gray area denotes the possible range of values for the minimum-mass solar nebula (MMSN).[]{data-label="fig:pla-disc-masses"}](pla_disc_masses.pdf)
The total mass in solids is not itself a Monte Carlo variable, but the product of the gas disc mass $\mgas$ and the dust-to-gas ratio $\fpg$. However, it is one of the most important quantities in determining the types of planets that form, and is thus still worth discussing. The distribution of the total mass in solids is shown in Fig. \[fig:pla-disc-masses\]. The disc masses were computed using the disc model, in a similar fashion as for the disc lifetimes (Sect. \[sec:mwind\]). As the distribution of the solid mass is the product of two log-normal distributions (the gas disc mass and the dust-to-gas ratio), it is also close to a log-normal distribution; it is not exactly log-normal because the two underlying distributions are truncated, and because the solid mass is reduced by the volatiles that remain in the gas phase inside the corresponding ice lines. We therefore fitted a log-normal distribution, whose parameters are a mean (in log-space) of and a standard deviation of .
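The statement that the product of two log-normal variables is itself log-normal (means and variances adding in log-space) can be checked numerically. The parameter values below are illustrative placeholders, not the fitted values of our populations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters (dex) for the two underlying distributions:
mu_gas, sig_gas = np.log10(0.02), 0.3    # hypothetical gas disc mass [Msun]
mu_dg, sig_dg = np.log10(0.0149), 0.2    # hypothetical dust-to-gas ratio

log_mgas = rng.normal(mu_gas, sig_gas, 100000)
log_fpg = rng.normal(mu_dg, sig_dg, 100000)
log_msolid = log_mgas + log_fpg          # product of variables -> sum of logs

# The resulting log-mass is normal, with parameters given analytically:
# mean = mu_gas + mu_dg, std = sqrt(sig_gas^2 + sig_dg^2)
```

Truncating either factor, as done in the model, is what breaks this exact relation and leaves the result only approximately log-normal.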
To compare the obtained masses with the solar system, we overlay the distribution with a possible range of values for the minimum-mass solar nebula. The lower boundary was chosen according to the lowest estimates of the core masses of the giant planets, at , while the upper boundary was calculated as from the higher estimates, plus needed for the outward planetesimal-driven migration of Neptune.
Embryos
-------
The embryos are initialised in the following way: we place a predetermined number of bodies of initial mass $\mstart=\SI{e-2}{\mearth}$ with a uniform probability in the logarithm of the distance between $\rin$ and $\SI{40}{\au}$. This spacing was selected to reproduce the outcomes of *N*-body studies of runaway and oligarchic growth where embryos have a constant spacing in terms of Hill radius [@1998IcarusKokubo]. We further enforce that no pair of embryos can lie within 10 Hill radii of each other, which is the usual spacing at the end of runaway growth [@1998IcarusKokubo; @2006IcarusChambers; @2010IcarusKobayashi; @2019IcarusWalshLevison].
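The initialisation described above can be sketched as follows. This is a simplified illustration, not the actual model code: it uses the Hill radius of a single embryo evaluated at the pair midpoint rather than the mutual Hill radius, and the inner radius is a placeholder value.

```python
import numpy as np

rng = np.random.default_rng(7)

M_SUN_IN_MEARTH = 3.33e5  # Earth masses per solar mass

def hill_radius(a, m_planet, m_star=M_SUN_IN_MEARTH):
    """Hill radius in au for a planet of mass m_planet (Mearth) at distance a (au)."""
    return a * (m_planet / (3.0 * m_star)) ** (1.0 / 3.0)

def place_embryos(n_emb, r_in=0.03, r_out=40.0, m_start=1e-2, min_sep_hill=10.0):
    """Draw embryo semi-major axes uniform in log(a) between r_in and r_out,
    rejecting any candidate within min_sep_hill Hill radii of an already
    placed embryo. Returns the sorted positions in au."""
    placed = []
    while len(placed) < n_emb:
        a = 10.0 ** rng.uniform(np.log10(r_in), np.log10(r_out))
        # simplified spacing check against all previously placed embryos
        if all(abs(a - b) > min_sep_hill * hill_radius(0.5 * (a + b), m_start)
               for b in placed):
            placed.append(a)
    return np.sort(np.array(placed))
```

For $\num{e-2}$ Earth-mass embryos, ten Hill radii is only a few percent of the orbital distance, so even 100 embryos fit comfortably in the roughly three decades of available log-distance.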
The embryos start right at the beginning of the simulation. This means we assume that they form in a negligible time compared to the evolution of the gas disc. This is obviously a strong assumption that will be revised in future generations of the model by addressing the evolution of the solids at early times (drift, planetesimal formation, embryo formation; see [@2020AAVoelkel]).
Results {#sec:pop}
=======
In this work, we perform five population syntheses that differ only in the initial number of planetary embryos per system: 100 (NG76), 50 (NG75), 20 (NG74), 10 (NG84), and 1 (NG73). Here, per system also means per star and per disc; we will use the terms interchangeably in the following discussion. The names in parentheses refer to the population identifiers on the online archive DACE[^2].
For the populations with multiple embryos per system, we model $\nsystot=\num{1000}$ systems, whereas the single-embryo population includes $\nsystot=\num{30000}$ systems to compensate for the overall lower number of embryos. In the remainder of this section, we will discuss results at the population level without taking into account how planets are distributed in the systems. System-level statistics will be discussed in Sect. \[sec:types\].
Mass-distance diagrams {#sec:res-am}
----------------------



A key result of synthetic populations is the mass-versus-distance diagram of the final planets. It shows what kinds of planets form and where they end up. This diagram and the corresponding 2D histogram for the single-embryo population are provided in Fig. \[fig:1emb\]. For the four populations with multiple embryos per system, the diagrams are shown in Fig. \[fig:ame\], and the corresponding histograms in Fig. \[fig:amh\]. To generate these snapshots, we used the state at . For the mass-distance diagrams, the time at which the results are plotted has a limited effect, as long as it is during the evolution stage (after ). Only the close-in planets may be affected, either by tidal migration or photo-evaporation.
All the populations show a certain reduction in the number of planets with masses between those of Neptune and Jupiter. This range is where planets reach the critical mass to undergo runaway gas accretion. Planets accrete mass rather quickly here, and it is therefore unlikely that the gas disc vanishes during the short period of time planets spend in this mass range. @2004ApJIda1 called this deficit of planets the “planetary desert”. Another common feature is the gradual inward migration of icy planets (shown as blue symbols on the diagrams) at intermediate masses, causing planets with masses higher than to to reach the inner edge of the disc. The formation of this morphological feature is similar to the “horizontal branch” of planets first found in , as we will see in Sect. \[sec:res-form\]. As the Type I migration rate is proportional to the planet’s mass [e.g. @2002ApJTanaka; @2014PPVIBaruteau], more massive planets will tend to end up at locations further inward from their original position than lower-mass planets, as long as they are not so massive that they migrate in the slower Type II regime. An important consequence is that, starting from the bottom-left corner of the diagram, the ice content of planets increases not only when going outwards to larger orbital separations, as trivially expected, but also when moving upwards to higher masses.
Coming to the differences between the populations, we see that the single-embryo population stands out compared to the others. Among the major differences, we can cite: 1) the presence of a pile-up of planets between and at the inner edge of the disc (about to ), 2) a different mass for the transition to envelope-dominated planets, as visible in the transition from the blue to the red points ( to in the single-embryo population compared to to in the multi-embryos case, as shown by the horizontal dashed lines in Figs. \[fig:1emb\] and \[fig:ame\]), 3) the effect of the convergence zones for Type I migration (see for instance ; ), which are most visible in the single-embryo population and fade as the number of embryos per system increases, and 4) a total lack of distant giant planets in the single-embryo population (the upper right region of the left panel of Fig. \[fig:1emb\]).
The first two effects are due to the intricate link between accretion and migration that we discuss in Sect. \[sec:am-mig-acc\]. The following two effects are extensions of the changes we see in the multi-embryos populations. Looking across all of them, we see gradual changes in the imprint of migration and in the masses and locations of the giant planets. These will be discussed in Sect. \[sec:am-nemb\]. The last effect is due to close encounters resulting in planet-planet scattering, which cannot happen in the presence of only a single protoplanet. In addition, only the 100-embryos population shows one important feature regarding the inner low-mass planets, namely that inside of , there are fewer planets of very low mass () than planets of . In the populations with fewer than 100 embryos, there are in contrast many embryos inside that have not grown significantly. We will discuss this in Sect. \[sec:am-small\].
### The interplay between migration and accretion: single versus multiple embryos per disc {#sec:am-mig-acc}
The single-embryo population completely lacks dynamical interactions. The only possibility for a planet to change its location is through orbital migration. The low- and intermediate-mass planets (up to a few tens of Earth masses, see Fig. 7 of ) undergo Type I orbital migration, whose rate is proportional to the planet’s mass [e.g., @2002ApJTanaka; @2014PPVIBaruteau]. Thus, the least massive planets (below ) remain close to their original location. On the other hand, planets in the range between and migrate inward the fastest. In the single-embryo population, this fast migration only stops under two conditions: 1) when the planet reaches the inner edge of the disc, or 2) when the planet grows sufficiently to open a gap in the disc and switches to Type II migration, which is significantly slower.
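The stalling behaviour can be illustrated with a deliberately simplified toy model, not our actual migration prescription: a mass-proportional Type I rate integrated until the disc disperses and floored at the inner edge. The reference timescale and inner-edge radius below are illustrative assumptions.

```python
import numpy as np

def type1_final_distance(a0, m_planet, t_disc, tau_ref=1e6, m_ref=1.0, r_in=0.03):
    """Toy Type I migration: da/dt = -a / tau, with tau inversely proportional
    to the planet mass (tau_ref years for an m_ref Earth-mass planet).
    The exponential solution is evaluated at the disc lifetime t_disc (years)
    and floored at the inner edge r_in (au), where migration stalls."""
    tau = tau_ref * (m_ref / m_planet)
    return max(a0 * np.exp(-t_disc / tau), r_in)
```

Even this crude sketch reproduces the qualitative behaviour discussed above: a more massive planet released at the same location ends up further in, and sufficiently massive planets pile up at the inner edge unless something (in the full model, gap opening or a drop in the solid accretion luminosity) intervenes.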
Thus, to avoid being carried to the inner edge, planets must grow rapidly while they are in the to range. The planets are still in the planet-limited gas accretion regime at this epoch (that is, the attached phase): gas accretion is limited by the ability of the planet to radiate away the gravitational energy gained by the accretion of both solids and gas. Thus, if the planet is still accreting solids, its ability to bind a large amount of gas is severely limited. To be able to undergo runaway gas accretion, the planet must either strongly decrease its solid accretion rate or attain a mass large enough that cooling (and therefore contraction, which allows gas accretion) becomes efficient, making it able to accrete gas even though the solid accretion rate remains large.
Here lies a key difference between the single-embryo population and the ones with multiple embryos: the former effect, that is, a reduction of the accretion of solids, cannot happen when only a single embryo is present. This is because once the planet begins to migrate, it always finds new material to accrete, as it migrates into regions that contain the full, untouched planetesimal surface density, and it will most likely end up close to the inner edge of the disc. The thermal support of the envelope due to strong, continuous planetesimal accretion is sufficient to prevent runaway gas accretion except for the most massive cores. Hence, giant planets in that population always have a massive core, because that is the only way for them to undergo runaway gas accretion quickly. This also requires that a large amount of solids be present where the planet forms, so that it can accrete a very massive core without migrating too much.
On the other hand, when multiple embryos are present, the competition for solids provides a different pathway to form giant planets. In this scenario, the initial part of the accretion of the core, until planets start to migrate, remains similar to the single-embryo case. However, once the core experiences inward Type I migration, it will at some point enter a region where another embryo has grown and depleted the planetesimals. This deprives the first core of material to accrete and causes a sudden decrease in the accretion rate of solids. As a consequence, there is a drop in the luminosity released by the accretion of solids, which opens the pathway to trigger runaway gas accretion at lower masses.
This difference is able to explain the first two items mentioned above, namely the pile-up of massive close-in planets at the inner edge in the single-embryo population and the difference in the transition mass to envelope-dominated planets ( versus to ). Also, the more embryos there are, the more tightly packed they are, and so the less migration is needed to enter a region where another embryo has already accreted. This results in a smaller extent of Type I migration in the many-embryos populations, as the planets undergo gas runaway more rapidly and switch to the slower Type II migration.
This effect also means that the multi-embryos populations have a way to limit the accretion of planetesimals, which would otherwise occur as the embryos “shepherd” the planetesimals while they migrate [e.g., @1999IcarusTanakaIda]. Thus, the single-embryo population does not represent the true situation, as it features unrealistically efficient accretion of planetesimals during planetary migration.
To capture such effects, it is necessary to calculate the interior structure, so that the gas accretion rate depends on the core accretion rate and the corresponding luminosity. With a model where the envelope mass depends only on the core mass, such an effect cannot be reproduced. It should also be noted that collisions with other embryos are included in our model; their additional contribution to the luminosity is included in the internal structure calculation (), but it does not provide a continuous luminosity source. Collisions therefore do not hinder gas accretion in the long term the way a relatively continuous accretion of planetesimals does .
### The effect of the number of embryos {#sec:am-nemb}
For the other differences (the imprint of the convergence zones and the distant giant planets), we see gradual changes as the number of embryos increases. These effects are mainly due to gravitational interactions between the protoplanets.
In the multi-embryos populations, we also notice the presence of planets in locations not populated in the single-embryo case. This includes the distant giants as well as planets with masses below and distances below (i.e., inside the inner edge of the disc). In the single-embryo population, planets follow the migration prescription exactly. This is not the case when there are multiple bodies in the same system. Planets trapped in mean-motion resonances (MMRs) migrate together, and there is the potential for dynamical interactions affecting the orbits, the extreme case being close encounters resulting in sudden changes of the planetary orbits.
MMRs can push planets inside the inner edge of the disc through the inward migration of another planet that is still located within the disc; hence we find planets closer to the star than the inner edge of the disc in the corresponding populations.
Both MMRs and dynamical interactions can push planets out of the convergence zones of orbital migration. In the single-embryo population, we observe three distinct zones with a lack of planets in between. The first two contain rocky planets, while the last contains mostly icy planets. In the 10, 20, and 50-embryos populations we still see some imprint of the convergence zones, each time with decreased intensity. In the 100-embryos population, the effects of the convergence zones have nearly vanished.
We set the limit for the transition between ice-poor (rocky) and ice-rich planets at of volatiles by mass in the core. This is to avoid planets with an extremely low amount of volatiles appearing as icy in the diagram. The limit was set according to the amount of water (the main component of the volatiles) required to obtain high-pressure ice at the bottom of the oceans of a planet [@2014AAAlibert]. In the 50 and 100-embryos populations, ice-rich planets are found in regions populated only by ice-free planets in the 10-embryos population. This can be seen at the position of the Earth in Fig. \[fig:ame\]. In the 10-embryos population, the Earth lies in a region harbouring only ice-free planets. In the 20 and 50-embryos populations, the Earth lies at the transition between the two, while in the 100-embryos population, it is in the ice-rich region. Further, dynamical interactions are able to send icy low-mass planets into the inner region of the disc (inside ).
As the number of embryos increases, we observe a greater mixing of the rocky and icy planets. In the single-embryo population, the two are well separated, while as the number of embryos grows, we note more and more icy planets of a few Earth masses in the inner part of the disc. This affects only planets of more than a few Earth masses, or regions directly inside of the ice line; for instance, we do not obtain icy planets of less than an Earth mass within fractions of an au. As the single-embryo population shows, bringing icy low-mass bodies to the inner part of systems is not possible by migration alone; there must be multi-body effects, such as close encounters and capture in resonance, that send some of the icy planets forming outside the ice line into the inner part of the disc.
At large orbital distances, the populations with multiple embryos per system contain planets located beyond the outer limit of the embryo starting locations (), while the population with a single embryo does not. The only possibility for planets to end up at those positions is scattering events due to close encounters with other planets, as outward migration does not happen at these locations. The black horizontal bars through these planets in Fig. \[fig:ame\] show their eccentricities. We see that all of these planets have a periapsis inside , indicating that at some point during their orbit they come to a location where other planets are present. We might then find planets formed by core accretion at large separations, but in our model these planets remain on eccentric orbits, as circularisation does not happen on a sufficiently short time scale before the dispersal of the gas disc. We could thus explain directly-imaged planets at large separations, such as HIP 65426b, only if they have an eccentricity significant enough to place their periapsis at a distance where core accretion is efficient, i.e., inside of . This formation scenario was studied extensively in @2019AAMarleau.
### Low- and intermediate-mass planets inside {#sec:am-small}
The 100-embryos population shows a specific feature absent from the other multi-embryos populations and from the single-embryo population, namely the inner low- and intermediate-mass planets, that is, within and for masses up to about . We observe a decrease of the occurrence rate with decreasing mass below , with a near total absence of bodies of the mass of Mars ().
This effect relates to the formation of the terrestrial planets that we discussed in . It applies to systems with a low metallicity, where migration is unimportant because growth is slow. Only the 100-embryos population has a mean spacing between the initial bodies small enough that they stir each other before reaching their isolation mass. Hence, in that population, the inner region of the disc is fully depleted in planetesimals and the embryos end their growth with a “giant impact” stage, similar to the terrestrial planets in the solar system [@1985ScienceWetherill; @2002ApJKokubo]. In the other populations, the spacing between the embryos is too large and they end up growing as if they were isolated. This means that they only grow to masses much lower than in the 100-embryos case, where all the solids in the inner disc end up in planets instead of remaining in planetesimals.
The 100-embryos population should hence be representative of the formation of planets spanning the entire mass range, at least with . This will enable us to compare the architectures of low-mass (i.e. terrestrial) systems with observations, to determine whether planet pairs have similar masses [@2017ApJMillholland], radii, and spacing [@2018AJWeiss]; see @NGPPS5.
There are two caveats to our model for the formation of terrestrial planets. First, we set the limit of the formation stage, during which planets can accrete and dynamically interact via *N*-body integration, to . As the accretion time scale increases with distance, this limitation affects the outer part of the system more. In the case of a system with an initial mass comparable to the minimum-mass solar nebula [MMSN; @1977ApSSWeidenschilling; @1981PThPSHayashi], we found in that by about the instability phase had mostly finished inside of . This is comparable to the time @2001IcarusChambers found is required for Earth-like planets to accrete half of their mass in a similar scenario. Growth is more rapid in systems with a higher solid content, though. It follows that for a disc with an MMSN content of solids, the low-mass planets obtained in our population are mostly at the end of their formation in the inner region, roughly a factor of two too small around , and that much longer times are required for more distant planets. The limited time for the *N*-body interactions (which in our model occur only during the formation stage) can also mean that we miss dynamical instabilities at late times. For instance, @2019arXivIzidoro found that it can take up to for systems to go unstable.
The second caveat is that terrestrial planets accrete predominantly from other embryos, rather than from planetesimals as is the case for the giant planets. Giant impacts, owing to the similar sizes of the involved bodies, have a variety of possible outcomes [@2010ChEGAsphaug], and accretion (or merging) is not the most common result of such a collision [@2012ApJStewart; @2013IcarusChambers; @2016ApJQuintana]. Despite this, our collision model is unconditional merging, including for terrestrial planets (). As a result, we are unable to form equivalents of the smaller terrestrial planets of the solar system, Mercury and Mars, as the majority of terrestrial-planet forming systems yield planet masses similar to $\SI{1}{\mearth}$. Mercury might be the result of a series of erosive collisions [@2007SSRvBenz; @2014NatGeoAsphaug; @2018ApJChau] that are not part of our model. Using a more realistic collision model, @2013IcarusChambers found that the resulting planets have slightly lower masses and eccentricities, and that the overall time required to form these planets increases, as collisional accretion is not as efficient.
Formation tracks {#sec:res-form}
----------------

To better understand the differences between the populations and how the interactions between embryos affect planetary formation, we analyse how different types of planets form in two populations. We therefore show in Fig. \[fig:am\_tracks\] formation tracks in the mass-distance diagram for selected groups of planets in two populations: the one with a single embryo per system and the one with 100. In that figure, there are nine groups, divided into three series. To select the planets belonging to each group, we use the following procedure: first, we select a reference mass and distance. Then we search for the ten planets closest to that reference point, the metric being the difference in the logarithm of both quantities (possibly with a second criterion on the minimum mass). These ten planets are highlighted and their formation tracks are superimposed on the overall mass-distance diagram. The top panels show Earth-mass planets; the reference points are as follows: and (A; green), and (B; light blue), and and (C; dark blue). The middle panels show intermediate-mass planets, with the reference points being and (D; red), and (E; orange), and and (F; yellow, with a minimum mass of ). The bottom panels show giant planets; the reference points are and (G; maroon), and (H; purple), and and (I; pink, with a minimum of ).
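The selection procedure above can be sketched as follows. The function and variable names are ours; the reference masses, distances, and minimum-mass cuts would be supplied as in the list of groups.

```python
import numpy as np

def select_group(masses, distances, m_ref, a_ref, n=10, m_min=None):
    """Return the indices of the n planets closest to the reference point
    (a_ref, m_ref) in the log-log mass-distance plane, optionally applying
    a minimum-mass cut first (as for groups F and I)."""
    masses = np.asarray(masses, dtype=float)
    distances = np.asarray(distances, dtype=float)
    idx = np.arange(len(masses))
    if m_min is not None:
        keep = masses >= m_min
        idx, masses, distances = idx[keep], masses[keep], distances[keep]
    # Euclidean distance in (log10 mass, log10 distance) space
    metric = np.hypot(np.log10(masses / m_ref), np.log10(distances / a_ref))
    return idx[np.argsort(metric)[:n]]
```

Working in log-space gives equal weight to relative (rather than absolute) differences in mass and distance, which is appropriate given the decades spanned by both axes.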
### Low-mass planets {#sec:res-form-low}
The formation tracks of the low-mass planets in the single-embryo case (top left panel of Fig. \[fig:am\_tracks\]) are straightforward. As we already mentioned in and in Sect. \[sec:res-am\], gas-driven migration is weak for these planets, so they end close to where they started, with minimal inward migration. We can still note that in the close-in group (A, in green), there is either outward migration all the way through, or inward migration followed by an episode of outward migration without accretion. This effect is caused by the presence of the innermost outward-migration zone for low-mass planets (see Fig. 7 of ). There are two scenarios, depending on the disc characteristics: either planets are inside the outward-migration zone from the beginning and move out while they accrete, or they start in the inward-migration zone and pass into the outward zone later on during the disc’s evolution. In the latter case, there is no accretion during the second pass through a region, because all the planetesimals were previously accreted. It is hard to see in the plot, but most of these planets experience a last inward-migration phase just before the dispersal of the gas disc, when the outward-migration zone has shifted to lower-mass bodies or closer in.
In the 100-embryos population, the formation of the same resulting planets is more varied. For the two innermost groups (A and B), we divide the planets into two sets. First, for 16 planets, there is growth by giant impacts, which we had anticipated and discussed in and in Sect. \[sec:am-small\]. These planets have starting distances of to . The second pathway (4 planets) is growth by accretion of planetesimals at a much larger distance (starting distances of to ), followed by strong inward migration combined with limited accretion. This pathway is unseen in the single-embryo population, because the inward migration is caused by trapping in resonance chains with other, more massive planets (around ) that experience stronger migration. Clearly, these different formation pathways could result in diversity in terms of planetary composition, as we discuss below. For the outermost group, there are also two formation pathways, but they are not the same as for the inner groups. The first is the same as in the single-embryo population, where the only effect is limited inward migration. The second is growth stirred by more massive planets, which causes jitter in the planets’ locations and occasional scattering events. However, these planets undergo far fewer giant impacts than their close-in counterparts. This implies that they grow mostly by the accretion of planetesimals, in a similar way to the planets in the single-embryo case.
A consequence of these formation pathways is that the terrestrial planets in the 100-embryos population are made from material originating from a broader region of the disc than those in the single-embryo case, which have only accreted from neighbouring regions. This explains why we obtain close-in planets with some content of icy material. Accretion through giant impacts is stochastic in nature, and planets may well have collided with bodies originating from beyond the ice lines, or with bodies that have themselves had giant impacts with such distant embryos. For the second pathway (forced migration), those planets accreted most of their mass before experiencing the strong migration. We can thus expect them to harbour a large amount of icy material.
### Intermediate-mass planets {#sec:res-form-mid}
The formation of the intermediate-mass bodies in the single-embryo population (left middle panel of Fig. \[fig:am\_tracks\]) has already been largely discussed in Sect. \[sec:am-mig-acc\] for the innermost group (D) and in Sect. 8 of for the more distant groups (E and F). These planets are in the range where migration is most efficient: massive enough to undergo strong migration, but not massive enough to open a gap and migrate in the slow Type II regime. The inner group (D, in red) is similar to the “horizontal branch” of .
In the 100-embryos population, some of the planets in the innermost group (D) form in a similar fashion to the single-embryo case, with the exception of some giant impacts near the end of their migration. However, a few of these planets had their envelope removed in the aftermath of giant impacts that occurred at about . Were it not for the impacts, these planets would most certainly have ended up as giants at larger separations.
The more distant groups (E and F), however, have a different formation history. Half of the planets in the mid-distance group had at some point a mass larger than , while their final mass is close to . The mass loss is due to the removal of the envelope following a giant impact, caused by the burst of luminosity from the sudden accretion of the impactor. The mass loss is delayed by about from the time of impact due to the timescale of the release of the supplementary luminosity following . The giant impact may therefore not be marked exactly at the location of the mass loss in the formation tracks. We also see that some of these planets spent time around after beginning runaway gas accretion closer to . The formation tracks of these planets also show sudden changes in their position, both outward and inward. Thus, migration is not alone responsible for their final locations; close encounters contribute as well. In fact, migration plays a lesser role in the 100-embryos population, as most of these planets began inside , while in the single-embryo population most of the embryos come from outside .
It is obvious that the effects induced by the concurrent formation of several planets introduce strong additional diversity into the formation pathways. In the single-embryo case, planets may undergo gas runaway only once, and this must happen late in the evolution of the gas disc so that they do not accrete too much gas. With multiple embryos, the possibility of giant impacts means that planets can undergo gas runaway multiple times, provided the envelope is removed in between.
### Giant planets {#sec:res-form-giant}
The formation of giant planets in the single-embryo population (left bottom panel of Fig. \[fig:am\_tracks\]) has also been discussed in Sect. 8 of . They follow a similar pattern to the intermediate-mass planets at the beginning, but accretion dominates over migration, as indicated by the different slopes of the tracks in the mass-distance plane. Starting at roughly , the formation tracks of the different groups begin to be well separated from each other. The planets that finish closer in undergo larger migration, up to about , before undergoing runaway gas accretion. Migration takes over again later and, at late times, the planets migrate with only little accretion. For the more distant planets (groups H and I), a similar structure of the formation tracks is observed, but migration remains overall less efficient. This is because accretion is very fast: these planets must undergo runaway gas accretion early, so that they can accrete gas during most of the lifetime of the protoplanetary disc, and such rapid accretion leaves only little time for migration to act.
For the 100-embryos population, we first note that gas-driven migration is overall less efficient than in the single-embryo population. For instance, there are no giants inside , so the planets in the innermost group (G) are on average further away than in the single-embryo case. These planets undergo gas runaway a little bit further out than in the single-embryo case: the former are slightly outside , while the latter are inside of that mark. The difference arises later, with the planets in the 100-embryos population not migrating as much afterwards. The initial location of the seeds forming these planets is also slightly different: in the 100-embryos population they are concentrated between 3 and (with one case at about ), while in the single-embryo population the close-in giants form from seeds between 3 and , only half of them being inside of . For the intermediate-distance planets (group H) in the 100-embryos population, we see many giant impacts occur. However, since these are giant planets, most of them did not lose their envelope. The few that did lose it accreted a secondary envelope and, since they had already migrated inward, ended up closer in than they would have had they not lost the envelope. The final part of their formation track is otherwise very similar to the single-embryo population. The initial part is different, with the same observation as for the close-in giants: the seeds come from closer in. In fact, the seeds of the close-in and intermediate-distance giants come from the same region of the disc in the 100-embryos population (between 3 and ). In the single-embryo population, however, they are centred at about .
For the giant planets ending up at and , we see by the numerous stars along the brown and violet tracks that once the giant planets have triggered runaway gas accretion and reached masses , they are hit by numerous planets. The masses of the impactors are mostly and lower. These impactors also mostly come from near the giant planet they collide with. The collisions are due to the destabilisation of other, less massive planets by the forming giant because of its strong mass growth coupled with migration. The effect of the impacts on the luminosity during both the formation and evolution phases will be investigated in a future publication of this series.
Distant giant planets in the 100-embryos population (group I) grow even less smoothly than the other groups. Interestingly enough, they come again from the same region of the disc as the two other groups in the 100-embryos population. They are then scattered outward by other massive planets in the same system. The fact that there are other massive planets in the same system is recognisable by the jitter in their distance. The large diversity of formation tracks of planets with very similar final mass and orbital distance in the 100-embryos population illustrates how difficult it is to infer the formation tracks of a specific planet just from the final position of an individual planet in the mass-distance diagram.
Formation time {#sec:res-time}
--------------

To further characterise the differences between the populations, we seek to determine the time it takes for the planets to form. For this, we plot in Fig. \[fig:am\_hcm\] the time at which the core’s mass reaches half of its final value, expressed in units of the lifetime of the protoplanetary disc. We show only the two end members of our populations: the single-embryo one on the left and the 100-embryos population on the right.
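The half-mass time plotted in Fig. \[fig:am\_hcm\] can be extracted from a planet's growth track; a minimal sketch in Python (function and array names are ours, not part of the Bern model), assuming a monotonically non-decreasing core-mass history sampled at discrete snapshots:

```python
import numpy as np

def half_mass_time(times, core_mass):
    """Return the time at which the core first reaches half of its
    final mass, linearly interpolating between snapshots.

    times     : array of simulation times
    core_mass : core mass at each time (assumed non-decreasing here;
                giant impacts can add discrete jumps)
    """
    core_mass = np.asarray(core_mass)
    target = 0.5 * core_mass[-1]
    i = np.searchsorted(core_mass, target)  # first index with M >= target
    if i == 0:
        return times[0]
    # linear interpolation between the bracketing snapshots
    f = (target - core_mass[i - 1]) / (core_mass[i] - core_mass[i - 1])
    return times[i - 1] + f * (times[i] - times[i - 1])
```

Dividing the returned time by the disc lifetime gives the normalised quantity shown in the figure.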
In both populations, we note that the cores of the giant planets formed early. In the single-embryo population this is not particularly different from other types of planets (principally the close-in ones), but in the 100-embryos population it does stand out, as most of the other planets form much later. Further, the single-embryo population shows a consistent trend in the formation time: the most massive giant planets form their cores earliest, while the less massive ones and the planets in the desert form late. This trend is much less perceptible in the 100-embryos population. Some of the planets in the desert formed quite late and show a formation history similar to what is obtained in the single-embryo population, that is, smooth growth and slight inward migration. Others, however, had their cores formed early, in the same way as the most massive giant planets. The planet-planet interactions enable other pathways for the formation of these planets. For instance, some lost their envelope partly or entirely following a giant impact, leaving them only limited time to accrete gas again. Others have been trapped in mean-motion resonances with massive planets still in the disc. These resonances prevent fast inward migration, thus enabling a formation process that is somewhat similar to that of giant planets without migration [e.g., @1996IcarusPollack], that is, with a significant delay between the accretion of the bulk of the core and the onset of runaway gas accretion.
For the intermediate-mass planets, we observe that the close-in ones formed their cores quite early. In the single-embryo population, these are the planets that are close to the inner boundary of the disc, while in the 100-embryos population, this concerns some close-in planets in the to range. This is related to the discussion in Sect. \[sec:am-mig-acc\] about the planets that formed close to or beyond the ice line and rapidly migrated inward without undergoing runaway gas accretion, similar to the “horizontal branch” of .
For the inner low-mass planets (up to and ), we find that the formation time increases with the number of embryos. This can be seen in the right panel of Fig. \[fig:am\_hcm\], where many of these planets only attain half of their final mass after the dispersal of the gas disc. Thus, a large part of the accretion process occurs in a gas-free environment. This helps explain, for instance, the disappearance of features related to the gas disc, such as the migration traps.
We hence come back to the discussion of Sect. \[sec:res-form-low\] about the integration time required to model the formation of these systems. Our limit of $\SI{20}{\mega\year}$ is still too short for planets beyond $\sim\SI{2}{\au}$, as here we have an increasing fraction of planets that still accreted most of their mass while the gas disc was present. This indicates that there have not yet been interactions after the dispersal of the latter, hence that a longer integration time could lead to fewer, more massive planets. This process, however, takes a long time, on the order of or more, as we have seen with the terrestrial planets. Direct modelling would thus be computationally very expensive in the context of a population synthesis, i.e. for around planetary systems.
One consequence of the late formation of the low-mass planets, which turns out to be more similar to the formation of the Solar System’s terrestrial planets, is the (in)ability of these planets to retain an envelope. Giant impacts that occur after the dispersal of the gas disc may lead to the ejection of the planet’s envelope, which can then no longer be re-accreted. Atmospheric escape is then not the only means to lose the envelope, and the evaporation valley [@2009AALammer; @2012ApJLopez; @2012MNRASOwenJackson; @2014ApJJin; @2018ApJJin] is not as clear as in the populations with a lower initial number of embryos per disc.
In future work, we will improve how impact stripping of gaseous envelopes is dealt with [@2018SSRvSchlichting; @2020MNRASDenman]. As described in , at the moment the impact energy is added into the internal structure calculation. This, however, neglects the mechanical removal of some gas during the impact via momentum exchange, and also assumes that the entire impact energy is deposited evenly deep in the planet’s envelope, at the core-envelope boundary (CEB). In reality, only the part of the envelope close to the impact location might be strongly heated. Both effects affect the efficiency of impact stripping. Interestingly, in a population synthesis, the emptiness of the valley can be used to observationally constrain the efficiency of impact stripping. The fact that the valley seems to be too populated compared to observations in the 100-embryos model is an indication that the current model for impact stripping in the Bern model is too efficient.
Mass-radius relationship {#sec:mass-dist}
------------------------

The mass-radius relation in the context of formation and evolution of planetary systems with 1 embryo per disc was extensively discussed in and @2014AAMordasiniA. To compare that scenario with the case of many embryos per disc, we show in Fig. \[fig:mrad\] the cases of the populations with initially one and 100 embryos per disc. The populations with an intermediate number of embryos per disc exhibit behaviours that are in between these two, so we are only showing the end members.
As discussed in , the global structure of the mass-radius relationship is caused by the combined effects of the properties of the equation of state of the main planet-forming materials (iron, silicates, ices, H/He) and the increase of the H/He mass fraction with planet mass. The overall lower core masses in the populations with multiple embryos per disc result in comparatively larger radii and lower metallicities for a given planet mass as the number of embryos increases. The spread of radii for a given mass is due to different planet metallicities, $\mcore/M$, and distances to the central star. The last effect is more important in our work than in due to the prescription for the bloating of close-in planets following @2018AJThorngrenFortney, with a criterion for a minimum stellar flux of from @2011ApJSDemorySeager. Planets that satisfy this criterion have their symbol set to a star in Fig. \[fig:mrad\]. We note that in the single-embryo population there are two branches in the mass-radius diagram, with the bloated planets having a radius a few larger than their more distant counterparts. This branch does not continue for masses above because there are no such massive planets at close-in locations. The 100-embryos population, however, does not show the second branch for the bloated planets, because there are no giant planets at close-in locations and only few less massive bodies.
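The selection of bloated planets reduces to a cut on the incident stellar flux; a minimal sketch (function and variable names are ours; the threshold enters only as a parameter, whose value in the model is taken from @2011ApJSDemorySeager):

```python
import math

def is_bloated(L_star, a_orbit, flux_min):
    """Flag a planet as subject to the empirical bloating model if the
    incident stellar flux exceeds a minimum threshold.

    L_star   : stellar luminosity       [erg / s]
    a_orbit  : orbital distance         [cm]
    flux_min : threshold incident flux  [erg / s / cm^2]
    """
    flux = L_star / (4.0 * math.pi * a_orbit**2)
    return flux >= flux_min
```

For example, with a solar-luminosity star and a threshold of a few $10^8\,\mathrm{erg\,s^{-1}\,cm^{-2}}$, only planets well inside $\SI{0.1}{\au}$ pass the cut, consistent with bloating being confined to close-in orbits.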
The most bloated planets have a mass of about . Observationally, in contrast, the most bloated planets have masses larger than . This reflects that using the empirical bloating model of @2018AJThorngrenFortney for planets of any mass leads, in our model, to an M-R relation that differs from the observed one. There could be several reasons for this: the actual physical bloating mechanism may have a mass dependency (or a dependency on a parameter linked to the mass, like the magnetic field strength or the metallicity) not accounted for in the empirical model, which was derived mostly from giant planets. The discrepancy could instead be due to the evaporation model, in the sense that atmospheric loss for bloated $\sim\SI{60}{\mearth}$ planets is more efficient than predicted by our evaporation model. This would reduce the radii of these planets. The morphology of the close-in population will be studied further in a dedicated NGPPS publication.
The presence of multiple embryos in a disc leads to more diverse formation tracks, as we have seen in Sect. \[sec:res-form\]. This is reflected in the larger spread of radii and envelope mass fractions for a given mass. The spread works in both directions. Planets in the multi-embryos populations can have higher envelope contents: for instance, at $M=\SI{10}{\mearth}$, the largest radius is around $\SI{5}{\rearth}$ for the single-embryo population, whereas in the 100-embryos case, planets can have radii up to $\SI{8}{\rearth}$. Planets in the single-embryo population have a smooth formation and similar tracks for similar final positions and masses. It follows that these planets have similar core masses, which limits the core-mass effect. The 100-embryos population, however, has two additional effects that can change the core mass fraction in opposite ways, which we now discuss in more detail.
The first effect altering the core mass fraction in the 100-embryos population is the competition for solids. We discussed in Sect. \[sec:am-mig-acc\] that giant planets in the multiple-embryos populations have lower core masses. This is reflected in the mass-radius diagram by the difference in the core mass fractions. For instance, there are no planets with a core mass fraction of less than in the single-embryo population, while we frequently obtain such values in the 100-embryos population for planets above . Even though the radius is only weakly dependent on the metallicity at these masses, we see that the maximum radius of the non-bloated planets is slightly larger in the 100-embryos population.
The second effect is giant impacts. Due to their random nature, they add some spread to the core mass fractions at a given mass. Some planets suffered collisions with other bodies relatively late during their formation. These collisions can lead to the loss of a significant part of the planet’s envelope. As a consequence, these planets have a lower envelope mass fraction for a given mass, because there is no loss of solids during such an event. Thus, a collision does not simply reset the planet back to an earlier time; rather, it can induce an increase of the bulk metallicity. We can see examples of such planets at intermediate masses. In the single-embryo population, the minimum envelope mass fraction increases significantly starting with about , and no planet more massive than that value remains without an envelope. In the 100-embryos population, however, we have several examples of planets with higher masses that exhibit small radii, including one roughly core without any surrounding envelope. One can also note a few giant planets in the 100-embryos population that have smaller radii than in the single-embryo case. These are also caused by giant impacts.
The timing of the collision is important. Early events, when the gas disc is still substantial, may even lead to a more massive envelope than there would have been had no collision occurred, because the collision enables the core to cool more efficiently, thus increasing the gas accretion rate . Otherwise, when the collision occurs during the late stage of the gas disc’s presence, the lack of a gas reservoir prevents the re-accretion of an envelope, and the envelope will not grow back to its previous mass.
Collisions are also the reason why there is a more extended range of planet masses without any envelope in the populations with multiple embryos per system. In the single-embryo population, where only atmospheric escape acts, there are no planets without an envelope past $\SI{40}{\mearth}$, whereas we do have such cases in the other populations.
Distance-radius plot
--------------------
{width="\textwidth"}
In Fig. \[fig:araH\] we show the population NG73 (single embryo) and NG76 (100 embryos initially) as they would appear to transit and direct imaging surveys, that is, by showing the planes of orbital distance versus radius and apoastron distance versus absolute H magnitude.
A first major goal of the New Generation Planetary Population Synthesis was to predict directly and self-consistently all important observable characteristics of planets in multi-planetary systems, and not only masses and orbital elements as in previous generations of the Bern model. To achieve this, we have included (see ) in the Generation 3 Bern model the calculation of the internal structure of all planets in all phases, in particular also in the detached phase, which was not done in Generation 2. We have also coupled the formation phase to the long-term thermodynamic evolution phase (cooling and contraction) over Gigayear timescales. With this we can also predict the radius, the luminosity, and the magnitudes of each planet, from its origins as a seed to potentially a massive deuterium-burning super-Jupiter. In this way it becomes possible to compare one population to all major observational techniques (radial velocity, transits, direct imaging, but also microlensing). These techniques probe distinct parameter spaces of the planetary population, and thus constrain different aspects of the theory of planet formation and evolution. Taken together, they lead to compelling combined constraints, and help to eventually derive a standard model of planet formation that is able to explain all major observational findings for the entire population, as opposed to a theory that is tailored to explain a certain sub-type of planets but fails for others.
A second major goal of the new generation population synthesis was to be able to simulate planets ranging from Mars mass to super-Jupiters, and from star-grazing to very distant, or even rogue, planets. For close-in planets, for which the stellar proximity strongly influences the evolution, this meant that we had to include the effects of atmospheric escape, bloating, and stellar tides. Only then does it become possible to meaningfully link formation and observations at an age of typically several Gigayears.
The top panels of the figure show the $a-R$ diagram of the two populations, at an age of . The quantitative description of the radius distribution, the formation tracks leading to the radii, and the statistical comparison to transit surveys will be the subject of a dedicated NGPPS paper ([@NGPPS5], see also [@2019ApJMulders]), therefore we here only give a short qualitative overview.
In the right plot, the roman numerals mark important morphological features of the close-in population.
\(I) are the bloated hot Jupiters. The bloating model is the empirical model of @2018AJThorngrenFortney, leading to an increase of the radii with decreasing orbital distance inside of about [@2011ApJSDemorySeager; @2020AASestovicDemory].
\(II) is the (sub-)Neptunian desert, an absence of very close-in intermediate-mass planets that was observationally characterised for example by @2016NatCommLundkvist, @2016AAMazeh, or @2018AABourrier. It is likely a consequence of atmospheric escape [e.g., @2007AALecavelierDesEtangs; @2014ApJKurokawaNakamoto; @2019ApJMcDonald]. In the plot it is not clearly visible, but the hot Jupiters “above” are indeed found down to smaller orbital distances than the intermediate-mass planets.
\(III) corresponds to the hot and ultra-hot solid planets like Corot-7b [@2009AALeger] or Kepler-10b [@2011ApJBatalha].
\(IV) is the evaporation valley [@2017AJFulton] which was predicted theoretically by several planet evolution models including atmospheric escape [@2013ApJOwenWu; @2013ApJLopezFortney; @2014ApJJin]. Super-Earth planets below the valley have lost their H/He as the temporal integral over the stellar XUV irradiation absorbed by these planets exceeded the gravitational binding energy of their envelope in the potential of the core [@2020AAMordasini].
\(V) are the Neptunian and sub-Neptunian planets above the valley. They appear numerous in the single-embryo population, but less so in the 100-embryos population. In the analysis of @2019ApJMulders of an earlier generation of the Bern model, it was found that this class is the only one occurring at a significantly different (lower) rate than inferred observationally from the Kepler survey.
\(VI) are the giant planets. In the synthetic population, outside of about (that is, where no bloating is acting), the giant planets lead to an almost horizontal, thin pile-up of radii (but note the logarithmic y-axis). This concentration is the consequence of the following: the mass-radius relationship in the giant planet mass range has a maximum at around 3 Jovian masses and is relatively flat. This causes many planets from a quite wide mass range to fall in a similar radius range, close to . In the synthetic population, this concentration effect is artificially accentuated: during both the formation and evolutionary phases, the molecular and atomic opacities [from @2014ApJSFreedman] correspond to a solar-composition gas, identically for all planets. In reality, the atmospheric compositions and thus opacities differ, inducing, via different contraction timescales [@2007ApJBurrows], a certain spread in the mass-radius relation that cannot occur in the synthesis. Similarly, in reality planets do not all have exactly the same age.
Several of these features are also visible in the top left panel showing the 100-embryos population, albeit often in a less clear way. This is a consequence of the stochastic nature of the *N*-body interactions. Giant impacts that strip the H/He envelope are an additional effect important for the radii that cannot occur in the single-embryo population. Two consequences of giant impacts are obvious: first, they populate the evaporation valley with cores that would otherwise be too massive for the envelope to be lost via atmospheric escape alone. The fact that the valley appears rather too blurred in the 100-embryos population compared to observations [@2017AJFulton; @2018AJPetigura] could be an observational hint that impact stripping is overestimated in the model and should be improved in future model generations, for example along the lines of @2020MNRASDenman. Second, in the 100-embryos population, in the group of planets at around $a=\SI{1}{au}$ with radii between about and (which is above the evaporation valley), there is a region of mixed planets, some possessing H/He and others without it. In the single-embryo population, in contrast, all planets above the evaporation valley possess H/He envelopes. The black points in the 100-embryos population are thus the result of giant-impact envelope stripping.
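The escape criterion underlying the evaporation valley (feature IV above) can be written as a simple energy comparison: the envelope is lost when the time-integrated absorbed XUV energy exceeds its gravitational binding energy in the core's potential. A sketch with an illustrative heating efficiency (the function, names, and default value are our assumptions, not the actual model implementation):

```python
G = 6.674e-8  # gravitational constant, cgs units

def envelope_lost(E_xuv_absorbed, M_core, M_env, R_planet, efficiency=0.1):
    """Energy-limited escape criterion: the H/He envelope is lost if the
    heating efficiency times the time-integrated absorbed XUV energy
    exceeds the binding energy of the envelope in the core's potential.

    E_xuv_absorbed : time integral of the absorbed XUV luminosity [erg]
    M_core, M_env  : core and envelope masses [g]
    R_planet       : planetary radius [cm]
    efficiency     : illustrative heating efficiency (dimensionless)
    """
    E_bind = G * M_core * M_env / R_planet
    return efficiency * E_xuv_absorbed > E_bind
```

This also makes clear why giant-impact stripping broadens the valley: impacts can remove envelopes from cores for which the left-hand side of this comparison would never exceed the right.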
The quantitative comparison of the populations with transit surveys is, as mentioned, beyond the scope of this overview paper, but we note that many similar features are also found in the observed population. This reflects that the Generation III Bern model, in contrast to older model generations, is able to simulate the formation and evolution also of close-in planets, which are observationally particularly important.
Distance-magnitude plot, mass-magnitude relation and giant planets at large orbital distances
---------------------------------------------------------------------------------------------
While transit surveys probe the planetary population at close-in orbital distance, direct imaging surveys like GPIS [@2019AJNielsenA], NACO LP [@2017AAVigan], or SPHERE SHINE [@RevAAVigan] probe young giant planets at large orbital distances.
In the bottom left panel of Figure \[fig:araH\], the population with 100 embryos (NG76) is shown in the plane of apoastron distance versus absolute magnitude in the H band. The magnitude was calculated using the AMES-COND atmospheric grid [@2012PTRSAAllard] assuming a solar-composition atmosphere. Magnitudes are a strong function of planet mass, therefore they accentuate massive giant planets.
Note that, similarly to the radii, the magnitudes were not obtained from some pre-computed mass-time-luminosity (or magnitude) relation or fit, but by solving the planetary internal structure of all planets during their entire “life”, i.e., from a planet’s birth as a lunar-mass embryo to the present day, possibly as a massive deuterium-burning planet. To the best of our knowledge, the Generation III Bern model is currently the only global model that self-consistently predicts, besides the orbital elements and masses, also the radii, luminosities, and magnitudes.
The plot shows that the synthesis predicts only few giant planets outside of about 5 AU. The main cause for this absence is rapid inward migration, explaining why in the single-embryo population there are no giant planets at all outside of about (Fig. \[fig:1emb\]). As mentioned above (see the formation tracks of group I in Fig. \[fig:am\_tracks\]), in the multiple-embryos populations, giant planets at larger orbital distances are the result of violent scattering events among several concurrently forming giant planets [@2019AAMarleau]. Such events preferentially take place in very massive and metal-rich discs, explaining why the distant giant planets are massive (about 3 to several tens of Jovian masses), in particular more massive than the “normal” giants in the pile-up at about . This is reflected in the apoastron-magnitude plot by an absence of distant planets with higher magnitudes (i.e., fainter planets). Compared to the mass-distance plot, the clustering is amplified by another effect: we see that there is a pile-up of planets that have similar magnitudes of 11 to 9. To understand this pile-up, we need to consider the bottom right panel showing the mass - H magnitude relation at , , and . This plot is equivalent to the mass-radius plot shown above in connecting fundamental observable characteristics (here mass and luminosity).
Besides the expected general decrease of the magnitude with mass, we also see a bump in the relation. It is caused by deuterium burning, which is modelled as described in . At , the bump is centred at around , and at around 13- at later times. Deuterium burning delays the cooling and causes planets of a relatively large mass range (at about to ) to fall in the same aforementioned magnitude range. This leads to the pile-up seen on the left.
In terms of the statistical properties and frequency of distant giants in the 100-embryos population, we find (see Table \[tab:props\]) that giant planets () are found outside of 5, 10, and for only 3.5, 1.6, and of all stars (compared to for all orbital distances). For comparison, in the SPHERE SHINE survey, the observed fraction of stars with at least one planet with $\num{1}-\num{75}{\mj}$ and $a=\num{5}-\SI{300}{\au}$ is $5.9^{+4.5}_{-2.7}$ % [@RevAAVigan]. A statistical analysis of the NG76 population in the context of the SPHERE SHINE survey can be found in that paper. The distant synthetic planets are also eccentric (mean eccentricity of about 0.4-0.6), found around high-\[Fe/H\] stars (mean: 0.2-0.3), and their multiplicity is unity, i.e., there is only one distant giant planet per system. They do, however, often have another massive companion closer in. For example, of the 8 giant planets with $a>\SI{20}{\au}$ in the population, 5 have a giant companion inside of . These properties are all signposts of the violent formation pathway of these planets.
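Occurrence fractions such as the ones quoted above follow from counting systems with at least one qualifying planet; a minimal sketch (function and array names are hypothetical):

```python
import numpy as np

def frac_stars_with_distant_giant(system_ids, masses, dists,
                                  m_min, a_min, n_systems):
    """Fraction of stars hosting at least one planet above m_min
    outside of a_min.

    system_ids : system index of each planet in the population
    masses     : planet masses (same units as m_min)
    dists      : orbital distances (same units as a_min)
    n_systems  : total number of systems in the population
    """
    sel = (masses >= m_min) & (dists >= a_min)
    # count each system at most once, regardless of how many
    # of its planets pass the cut
    return len(np.unique(system_ids[sel])) / n_systems
```

Evaluating the same function at several `a_min` values directly yields a cumulative occurrence profile comparable to direct imaging constraints.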
In the future, comparisons with direct imaging surveys should include, besides the planet frequency, such architectural aspects, and also account for the fact that, due to different formation histories, there is at a given moment in time no unique mass - magnitude conversion, as is traditionally often assumed. This is again visible in the bottom right plot, where the magnitudes as a function of mass obtained in the synthesis are compared to the well-known @2015AABaraffe models, which start from arbitrarily hot initial conditions. The general trend is, as expected, similar in the two cases, and the magnitudes are very similar at lower masses at and , but above there are differences of almost 1 mag. The peak caused by D-burning is clearly sharper in our simulations. This is partially, but not only, due to the coarse sampling in the @2015AABaraffe tables. This could affect the analyses of direct imaging surveys. We also see the intrinsic spread in the self-consistent population model, especially at young ages, which comes from the different formation histories. The spread now in particular also includes the effects of giant impacts. The spread means that there is no one-to-one conversion from magnitude to mass, even if all other complexities (like cold versus hot start, atmospheric composition, clouds, etc.) were solved. At , the spread induces a fundamental uncertainty in the mass-magnitude relation of maximum at lower masses without D-burning. In the mass range where D-burning occurs, the impact is much larger, inducing an uncertainty of up to .
The planetary mass function (PMF) {#sec:mass}
---------------------------------
![Histogram (*top*) and reverse cumulative distribution (*bottom*) of the planet masses for the four populations presented in this study. The values are normalised by the number of systems in each population. Only planets that reached the end of the formation stage are counted; the maximum number of planets per system (the top left ending of the cumulative curves) can then be lower than the initial number of embryos.[]{data-label="fig:mass"}](mass_hist.pdf "fig:") ![Histogram (*top*) and reverse cumulative distribution (*bottom*) of the planet masses for the four populations presented in this study. The values are normalised by the number of systems in each population. Only planets that reached the end of the formation stage are counted; the maximum number of planets per system (the top left ending of the cumulative curves) can then be lower than the initial number of embryos.[]{data-label="fig:mass"}](mass_cumul.pdf "fig:")
The prediction of the planetary mass function is a fundamental outcome of any population synthesis. The PMF is a key quantity because of its observability and because it bears the imprint of the formation mechanism. We show the PMF and its reverse cumulative distribution for the different populations in Fig. \[fig:mass\]. Both give the average number of planets per system, i.e. the total number of planets divided by the number of systems in each population. The intersection of the curves with the left axis gives the average number of planets per system in each population. In the single-embryo population, this number is close to one, as all planets reach the end of the formation stage and can only be lost by tidal migration during the evolution phase. In the multi-embryos populations, giant impacts lead to the loss of embryos, which is especially important in the populations with the largest initial number of embryos. For instance, in the 100-embryos population, on average only 32 embryos per system reach . For clarity, we will stick to the same colour code throughout the remainder of this article when comparing the populations: the black curve denotes the 100-embryos population, red the 50-embryos population, orange the 20-embryos population, green the 10-embryos population, and blue the single-embryo one.
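The reverse cumulative distribution shown in Fig. \[fig:mass\] is the mean number of planets per system above a given mass; a minimal sketch of its computation (function and array names are hypothetical):

```python
import numpy as np

def reverse_cumulative_pmf(masses, n_systems, mass_grid):
    """Average number of planets per system with mass >= each value
    of mass_grid.

    masses    : flat array of all planet masses in the population
    n_systems : number of systems in the population
    mass_grid : masses at which to evaluate the distribution
    """
    masses = np.sort(np.asarray(masses))
    # number of planets with mass >= m, for each m in mass_grid
    counts = len(masses) - np.searchsorted(masses, mass_grid, side="left")
    return counts / n_systems
```

Evaluated at the smallest plotted mass, this recovers the intersection with the left axis, i.e. the average number of surviving planets per system.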
When comparing the overall results, we may divide the relative behaviour into three different regions, as shown with the grey dotted lines in Fig. \[fig:mass\]. Region 1 is a relatively flat region in the histogram, with its upper boundary depending on the population: about to for the multi-embryos populations and for the single-embryo population. Region 2 shows a drop in the occurrence rates, up to the mass where the cumulative distribution indicates that we have an increased percentage of planets in the population with 50 embryos compared to the one with 20. Finally, region 3 is that of the giant planets, where there is first a minimum of the occurrence rate at about , followed by a local maximum in the – range.
In the first region, the increase of the number of embryos results in a corresponding increase in the number of planets; virtually no other effect occurs. The only difference is the upper end of this region, which gradually tends toward lower masses as the number of embryos increases. For the population with a single embryo, we observe a steeper drop of the cumulative curve in the 20– range. The planets contributing to this feature are located at the inner disc edge; these are planets that migrated inward without accreting substantial material during their migration. Furthermore, we note that the first bin in the histogram has a greater value than the other ones; this is due to the far-out embryos that do not grow, or only very little, during the formation process.
### Independence on the number of embryos for the giant planets
For the planets above , the mass function shows limited variations across all the populations with multiple embryos. The highest mass achieved in each population shows a trend with the number of embryos. Except for that, the results we obtain are robust. This includes the common slope in the histogram for masses below and the “planetary desert” [@2004ApJIda1] for planets around .
Thus, to obtain a mass function for planets above $\sim\SI{50}{\mearth}$, the number of embryos is unimportant. The single-embryo population shows an overall lower number of planets, but this is due to missed opportunities to form giant planets, because it is unlikely that the embryo starts at a location suitable for forming these planets. Applying a correction factor to the outcome of that population is thus a possibility to retrieve the mass function obtained from multi-embryos populations while limiting the computational needs (the *N*-body integration being the most resource-intensive part of the model). We will use this to study the effects of the model parameters in the subsequent papers of this series.
### Location of the giant planets {#sec:res-giant-loc}
![Cumulative distribution of the distance of the giant planets (mass greater than ) for the five populations presented in this study. The higher the number of embryos, the more distant the giant planets.[]{data-label="fig:dist-giant"}](dist_giant.pdf)
![Cumulative distribution of the location of the planets between and with respect to the inner edge of the disc for the five populations presented in this study. If only one embryo per disk is present, more than of all planets in this mass range end up at the inner edge of the gas disc.[]{data-label="fig:dist-subgi"}](dist_subgi.pdf)
However, while the mass function of the giant planets is similar between the populations, the location of the giant planets is not. We find a steady increase in the distance as the number of embryos grows. To illustrate this effect, we provide cumulative distributions of the giant planets’ distances for the different populations in Fig. \[fig:dist-giant\]. Also, both the 50- and 100-embryos populations have of the giant planets beyond .
Nevertheless, all the populations show a similar pattern in the distribution of these planets. We observe a pileup of planets around , which is consistent with results suggesting a maximum occurrence rate close to the ice line [@2019ApJFernandes]. In our populations, the median location of the ice line is at , while the median location of the giants is , , , and in the 1, 10, 20, 50 and 100-embryos populations respectively. The giant planets lie further in than the ice line, which is caused by gas-driven migration.
We note that there are two causes for this change. The first is the reduced importance of migration. We have already discussed in Sects. \[sec:res-form-mid\] and \[sec:res-form-giant\] that in the 100-embryos population the final location of the planets is closer to the starting location of the embryos than in the single-embryo case. The second cause is the increase in the number of close encounters that put planets on wide orbits. This effect is responsible for the increase of the distant planets. The underlying cause will be discussed in more detail in Sect. \[sec:types-div-giant\].
All the populations have a similar percentage of planets in the region between and . The differences are confined to the inner and outer locations: the populations with a higher number of embryos have more planets beyond , while the populations with fewer embryos have more planets inside . Thus, the number of planets in the middle region, between and , is independent of the initial number of embryos; this feature will prove useful in the subsequent papers of this series.
It was discussed in Sect. \[sec:am-mig-acc\] that the single-embryo population exhibits a different accretion pattern than the multi-embryos populations. In the former case, only very massive cores ($\gtrsim\SI{50}{\mearth}$) can undergo runaway gas accretion, because the luminosity due to the accretion of solids does not drop during the inward migration. This means many planets will end up at the inner edge of the disc. To illustrate the effect, we show in Fig. \[fig:dist-subgi\] the location of the planets in the to range, normalised to the inner edge of the gas disc. It can be observed that for the single-embryo population more than of the planets are located within or at the inner edge of the gas disc, while we see no special pileup of planets at the inner edge for the other populations. For the multi-embryos populations, unlike for the giant planets, we do not obtain any systematic shift between the populations. They are also closer in, with the median distance being to .
### A common slope for medium-mass planets
For planets between and , all populations show the same behaviour in the histogram. To highlight this point, an additional dashed grey line with a slope $\partial\log{N}/\partial\log{M}=-2$ has been superimposed. Here $N \textrm{d}\log{M}$ is the number of planets whose masses are between $\log{M}$ and $\log{M}+\textrm{d}\log{M}$ (the bin sizes being constant in the logarithm of the planet mass). This corresponds to $N\propto M^{-2}$, as well as $P\propto M^{-2}+C$, where $P$ is the total number of planets whose masses are larger than $M$ (the cumulative distribution) and $C$ a constant of integration. In the case of the single-embryo population, the mass range where the number of planets is similar to that of the other populations is limited to masses above ; on the other hand, the distribution follows the line to larger masses, up to .
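As a quick numerical sanity check (not part of the formation model; the mass range below is arbitrary), one can draw a synthetic population with $\mathrm{d}N/\mathrm{d}\log{M}\propto M^{-2}$ and verify that a histogram with log-uniform bins recovers the slope of $-2$:

```python
import numpy as np

rng = np.random.default_rng(42)
a, b = 5.0, 50.0  # mass range (arbitrary units)
u = rng.uniform(size=200_000)
# inverse-CDF sampling of dN/dM ∝ M^-3, i.e. dN/dlogM ∝ M^-2
m = (a**-2 - u * (a**-2 - b**-2)) ** -0.5

# histogram with bins uniform in log(M)
bins = np.logspace(np.log10(a), np.log10(b), 21)
counts, _ = np.histogram(m, bins=bins)
centres = np.sqrt(bins[:-1] * bins[1:])  # geometric bin centres

# fit the slope d(log N) / d(log M); should be close to -2
slope = np.polyfit(np.log10(centres), np.log10(counts), 1)[0]
print(f"slope = {slope:.2f}")
```

The same samples can also be used to confirm the cumulative form $P\propto M^{-2}+C$.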
It is still not entirely clear to us what causes the common slope. Nevertheless, we can point to two effects acting on planets in this mass range. First, these planets have not undergone runaway gas accretion. They hold a significant amount of H/He, but in the majority of them the dominant component is the core. The slope cannot be achieved with solid accretion alone, because without gas the most massive planets in this range cannot be reproduced. Nevertheless, this is the range where the most massive planets would be found, were it not for gas accretion.
The second effect is planetary migration. Without it, the decrease in the occurrence rate begins before . Without migration, planets can accrete only up to their isolation mass; with migration, planets can access a larger mass reservoir. However, it is not clear how migration results in a planet mass function with this slope.
In the four multi-embryos populations, the mass at which the common slope ends is similar, but not the mass at which it begins. The single-embryo population is different, first because it starts to follow the slope at higher masses (about ) and second because the slope also extends to larger masses, up to about . As we mentioned before, the slope only occurs where planets are core-dominated. What distinguishes the single-embryo population from the others is the different behaviour of accretion and migration, as we saw in Sect. \[sec:am-mig-acc\]. The resulting planets are mostly located at the inner edge of the disc (Fig. \[fig:dist-subgi\]). This being the case, the maximum gas accretion of those planets remains low, as the gas surface density is low (and as we use the Bondi rate to compute the maximum gas accretion rate, ). This explains the shift to larger masses for the change in behaviour of the single-embryo population compared to the multi-embryos ones. This interaction, which can also be seen as a competition for solids, can shift the location where this common slope is found, but will not change it fundamentally.
### Convergence for the lower masses
For small masses, the histograms flatten, which is the expected behaviour with our setup. To see this, recall that the initial surface density of solids follows $\sigmasol\propto r^{-\betas}$ and define $b$ as the half-width of the feeding zone given in terms of the Hill radius (). Then, let us assume that all bodies grow to their isolation mass, $$\miso = \frac{\left(4\pi b r^2 \sigmasol\right)^\frac{3}{2}}{(3\mstar)^\frac{1}{2}} \propto r^{\frac{3}{2}\left(2-\betas\right)}$$ [@1987IcarusLissauer]. As we place the embryos with a uniform probability in the logarithm of the distance $r$, we have $\mathrm{d} P\propto \mathrm{d}\log{r}$. Taking the logarithm of the isolation mass gives $\mathrm{d}\log{\miso} = \frac{3}{2}\left(2-\betas\right)\mathrm{d}\log{r}$. So as long as $\betas\neq 2$, we have $\mathrm{d} P\propto \mathrm{d}\log{\miso}$, i.e. $\mathrm{d} P\propto \left(1/\miso\right) \mathrm{d}\miso$. This relationship results in a flat histogram when the bin sizes are uniform in the logarithm of the mass, as is the case in Fig. \[fig:mass\].
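This derivation can be checked with a short Monte Carlo sketch (the value $\betas=1.5$ and the distance range used here are illustrative assumptions, not the actual population parameters): drawing embryo locations log-uniformly in distance and mapping them through the isolation-mass scaling gives a flat histogram in log-uniform mass bins.

```python
import numpy as np

rng = np.random.default_rng(0)
beta_s = 1.5  # assumed slope of the solids surface density (illustrative)
# embryo locations, log-uniform in distance [au]
r = 10 ** rng.uniform(np.log10(0.1), np.log10(30.0), 100_000)

# isolation mass up to a constant prefactor: M_iso ∝ r^{(3/2)(2 - beta_s)}
m_iso = r ** (1.5 * (2.0 - beta_s))

# histogram with bins uniform in log(M_iso): all bins should be ~equally filled
bins = np.logspace(np.log10(m_iso.min()), np.log10(m_iso.max()), 11)
counts, _ = np.histogram(m_iso, bins=bins)
print(counts)  # roughly 10 000 planets per bin
```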
There are other mechanisms affecting the mass distribution. For instance, not all planets will grow to their isolation mass, especially the ones at large separation. This results in the number of planets decreasing with mass, as the distant planets, which have the largest isolation masses, need more time to grow. Close-in planets, whose isolation mass is low, have accretion times short compared to the lifetime of the protoplanetary disc and do not suffer from time constraints. However, this effect alone is not able to explain the shape of the distribution for the small-mass planets.
Another mechanism that affects the distribution is planetary migration, through which embryos gain access to a larger reservoir of solids that they can accrete. Since migration efficiency increases with mass in the range under consideration here [e.g. @2002ApJTanaka; @2014PPVIBaruteau], planets that attain a mass of $\sim\SI{1}{\mearth}$ will migrate and have access to new planetesimals. This pushes the mass distribution toward larger values, which should tend to flatten the curve. After some testing, planetary migration seems to be responsible for the reduced occurrence rate of planets between and .
Planet types and system-level analysis {#sec:types}
======================================
So far, we have only performed population-level analysis, disregarding the properties of the planetary systems the planets belong to. Here, we define different planet types (or categories). This allows us to separate the diverse planets of our population and analyse them separately. In addition, this will help to quantitatively compare certain regimes of our populations with the known exoplanets.
Definitions of planet categories {#sec:types-def}
--------------------------------
-------------------------- ------- ------- ------- -------
Min. Max. Min. Max.
mass mass dist. dist.
Type \[\] \[\] \[\] \[\]
Mass 1 … … …
Earth-like 0.5 2 … …
super Earth 2 10 … …
Neptunian 10 30 … …
Sub-giant 30 300 … …
Giant 300 … … …
D-burning 4322 … … …
Earth-like 0.5 2 … 1
super Earth 2 10 … 1
Neptunian 10 30 … 1
Sub-giant 30 300 … 1
Giant 300 … … 1
Habitable zone 0.3 5 0.95 1.37
Kepler [@2018AJPetigura] … 0.88
Kepler [@2018ApJZhu] … 1.06
Hot Jupiter 100 … … 0.15
Jupiter analogues 105.3 953.4 3 7
Giant 300 … 5 …
Giant 300 … 10 …
Giant 300 … 20 …
-------------------------- ------- ------- ------- -------
: Constraints for the different planet categories[]{data-label="tab:type-defs"}
The planet categories were selected as follows: we first have a series constrained only by the planet masses: Earth-like planets are between and , then super Earths up to , Neptunian to , sub-giant to and giant above. In addition, we also provide a Deuterium-burning category for masses larger than , which overlaps with the giant planets category. The mass range of the sub-giants was chosen so that the category is located where the planetary desert discussed in Sect. \[sec:mass\] is found in the multi-embryos populations. We also set categories for the same masses, but for planets inside . This second series of categories exists to avoid counting embryos that did not finish growing during the formation stage of our model (see the discussion in Sect. \[sec:am-small\]).
We defined planets in the habitable zone as planets between 0.3 and in mass and located between 0.95 and [@2014PNASKasting]. We also include two categories that account for the detection biases of the Kepler observatory. The first follows @2018AJPetigura, which contains planets with a period $P<\SI{300}{\day}$ () that also satisfy $$\frac{\rtot}{\si{\rearth}}>1.37\left(\frac{P}{\SI{100}{\day}}\right)^{0.19},
\label{eq:def-kepler-petigura}$$ with $\rtot$ the planet’s radius. The second follows @2018ApJZhu, with planets that have a period $P<\SI{400}{\day}$ () and satisfy $$\frac{\rtot}{\si{\rearth}}>2\left(\frac{\aplanet}{\SI{0.7}{\au}}\right)^{0.31}.
\label{eq:def-kepler-zhu}$$
Finally, we have several categories related to giant planets: hot Jupiters have more than and are located within , Jupiter analogues have masses between $1/3$ and and semi-major axes between 3 and , and three categories cover giant planets (mass above ) further out than 5, 10 and . These last categories were chosen to identify planets that lie outside the bulk of the giants; such planets are prime targets for direct imaging surveys. For instance, there are no giant planets outside in the single-embryo population (see Fig. \[fig:1emb\]), so giants found at such separations in the multi-embryos populations ended up there because of planet-planet interactions. All these definitions are summarised in Table \[tab:type-defs\].
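The two Kepler bias cuts of Eqs. \[eq:def-kepler-petigura\] and \[eq:def-kepler-zhu\] can be sketched as simple predicates. This is a minimal illustration, not the paper's implementation; in particular, the conversion from semi-major axis to period assumes a $1\,M_\odot$ host, which is an assumption of this sketch.

```python
import numpy as np

def kepler_petigura(radius_re, period_day):
    """Detectability cut following Petigura et al. (2018):
    P < 300 d and R > 1.37 (P / 100 d)^0.19 Earth radii."""
    radius_re = np.asarray(radius_re)
    period_day = np.asarray(period_day)
    return (period_day < 300.0) & (
        radius_re > 1.37 * (period_day / 100.0) ** 0.19)

def kepler_zhu(radius_re, a_au):
    """Detectability cut following Zhu et al. (2018):
    P < 400 d and R > 2 (a / 0.7 au)^0.31 Earth radii."""
    radius_re = np.asarray(radius_re)
    a_au = np.asarray(a_au)
    period_day = 365.25 * a_au ** 1.5  # Kepler's third law, 1 M_sun host assumed
    return (period_day < 400.0) & (radius_re > 2.0 * (a_au / 0.7) ** 0.31)

# a 2 R_earth planet on a 100-day orbit passes the Petigura cut
print(bool(kepler_petigura(2.0, 100.0)))
```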
Occurrence rates and multiplicity as function of the number of embryos {#sec:types-conv}
----------------------------------------------------------------------
-------------------------- ---------------- ---------------- ------------------ ---------------- ------------------ ---------------- ------------------ ---------------- ------------------
Type $f_\mathrm{s}$ $f_\mathrm{s}$ $\mu_\mathrm{p}$ $f_\mathrm{s}$ $\mu_\mathrm{p}$ $f_\mathrm{s}$ $\mu_\mathrm{p}$ $f_\mathrm{s}$ $\mu_\mathrm{p}$
Mass 3.3 4.3 7.0 8.4
Earth-like 2.1 3.3 4.9 5.2
Super Earth 2.1 2.8 4.8 5.6
Neptunian 1.2 1.3 1.3 1.4
Sub-giant 1.1 1.2 1.2 1.2
Giant 1.5 1.5 1.5 1.6
D-burning 1.0 1.0 1.0 1.0
Earth-like 1.8 2.9 3.7 2.8
Super Earth 1.9 2.5 3.7 3.7
Neptunian 1.2 1.2 1.2 1.4
Sub-giant 1.0 1.1 1.1 1.1
Giant 1.2 1.2 1.1 1.1
Habitable zone 1.2 1.3 1.5 1.3
Kepler [@2018AJPetigura] 3.0 3.5 4.6 4.4
Kepler [@2018ApJZhu] 2.8 3.3 4.3 4.5
Hot Jupiter 1.0 1.1 1.1 1.0
Jupiter analogue 1.0 1.0 1.0 1.0
Giant 1.0 1.0 1.0 1.0
Giant 1.0 1.0 1.0 1.0
Giant 1.0 1.0 1.0 1.0
-------------------------- ---------------- ---------------- ------------------ ---------------- ------------------ ---------------- ------------------ ---------------- ------------------


One of the goals of this work is to determine the convergence of our formation model with respect to the initial number of embryos. For this, we provide the occurrence rates and the multiplicity for these categories of planets in Table \[tab:types\]. These quantities are computed as follows. The total number of systems in each population is $\nsystot$, whose value is in the multi-embryos populations and in the single-embryo population. The number of planets in each category is $\npla$ and the number of systems where at least one such planet is present is $\nsys$. From these, we define the occurrence rate $o_\mathrm{p}=\npla/\nsystot$, the fraction of systems harbouring such planets $f_\mathrm{s}=\nsys/\nsystot$ and the mean multiplicity of such planets $\mu_\mathrm{p}=\npla/\nsys$. It follows that $o_\mathrm{p}=f_\mathrm{s}\mu_\mathrm{p}$.
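These three quantities can be computed directly from the host-system index of each planet in a category; a minimal sketch (the toy numbers below are illustrative, not population values):

```python
import numpy as np

def category_stats(system_ids, n_sys_tot):
    """Occurrence statistics for one planet category.

    system_ids : host-system index of every planet in the category
    n_sys_tot  : total number of systems in the population
    """
    n_pla = len(system_ids)                  # planets in the category
    n_sys = len(np.unique(system_ids))       # systems with at least one such planet
    o_p = n_pla / n_sys_tot                  # occurrence rate
    f_s = n_sys / n_sys_tot                  # fraction of systems
    mu_p = n_pla / n_sys if n_sys else 0.0   # mean multiplicity
    return o_p, f_s, mu_p                    # note o_p == f_s * mu_p

# toy example: 5 planets of a given type spread over systems 0, 0, 1, 2, 2
o_p, f_s, mu_p = category_stats(np.array([0, 0, 1, 2, 2]), n_sys_tot=10)
print(o_p, f_s, mu_p)  # o_p = 0.5, f_s = 0.3, mu_p ≈ 1.67
```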
A graphical representation of the values for the categories of planets as a function of their masses for any location is provided in Fig. \[fig:convergence-mass\], while the same information for planets inside is provided in Fig. \[fig:convergence-close\]. In the latter case, the category of Deuterium-burning planets has been left out as it is always empty. In addition to the overall occurrence of these kinds of planets (as we discussed in Sect. \[sec:mass\] for the mass function), this gives additional insights into the distribution of planet types in the different systems.
Overall, the results confirm what we discussed in the previous section: convergence is achieved with a smaller number of embryos for the most massive planets than for the lower-mass ones. In the low-mass range (habitable zone, Earth-like and super Earth planets) the trend is an increasing number of planets along with the number of embryos. As we already discussed in and Sect. \[sec:am-small\], the growth of the planetary bodies is not finished at larger separations by the time our model switches from the formation stage to evolution. Thus, the bodies that are further out may not reflect the end state of planetary systems. For this reason, we also provide categories accounting only for bodies inside , where growth should be mostly finished at the end of the formation stage of our model (), and whose results we show in Fig. \[fig:convergence-close\]. In that plot, we may note that the multiplicity of the Earth-like planets drops in the 100-embryos population compared to the 50-embryos one. This effect is related to the fact that the growth of small-mass planets is followed up to a giant-impact phase only in the 100-embryos population. With fewer embryos, the planets do not disturb one another’s orbits to the same extent, and the final phase of planetary growth via giant impacts is missing. This is corroborated by the median mass of the planets in this category: in the 20 and 50-embryos populations, these values are and , while in the 100-embryos population it increases to .
For the most massive planets (Neptunian, sub-giants, giants and Deuterium-burning), however, we obtain similar numbers in the populations that have at least 10 embryos. Nevertheless, we still see some trends. The first three categories (Neptunian, sub-giants and giants) show slight reductions in their fraction of systems as the initial number of embryos increases, while the multiplicity slightly increases, so that the overall number of such planets remains quite constant. On the other hand, for the last category (Deuterium-burning) we first observe an increase of the occurrence rate along with the number of embryos; it then becomes constant at for both the 50 and 100-embryos populations.
For the location of the giant planets, the different categories based on the separation show results that are consistent with Fig. \[fig:dist-giant\]. The fraction of systems with hot Jupiters peaks for the 10-embryos population at of systems, down to for the 100-embryos population. For comparison, the observed occurrence rate of these planets is 0.5- [e.g. @2012ApJSHoward]. Thus, only the 50-embryos population, with , shows a value that is consistent with the observations. As we have discussed previously, the overall separation of the giant planets increases along with the number of embryos (see Sect. \[sec:am-nemb\]), so that the 100-embryos population has very few inner giant planets. The decrease of the number of hot Jupiters is consistent with the decrease of the efficiency of migration with increasing embryo number that we observed in Sect. \[sec:res-form-giant\], as the embryos forming hot Jupiters come mostly from beyond the ice line.
Conversely, the number of the most distant giants increases along with the initial number of embryos. The fraction of Jupiter analogues increases, with an occurrence rate of up to in the 100-embryos population. Observational estimates for this class of planets are [@2011ApJWittenmyer], [@2008PASPCumming] and [@2016ApJRowan]. We thus obtain values lower than the observational results for this class, even for the -embryos population. The same increase with the initial number of embryos applies for the distant giant planets (beyond ). It should be noted that one value is the same for all categories and populations: there is never more than a single distant giant planet in any system.
Multiplicity of the different types of planets {#sec:multi-types}
----------------------------------------------

To investigate the distributions of multiplicities in a more detailed fashion than the mean values shown in Figs. \[fig:convergence-mass\] and \[fig:convergence-close\] and Table \[tab:types\], we provide in Fig. \[fig:hist\] histograms of the multiplicities for five types of planets. These are the five categories defined in Sect. \[sec:types-def\] that have a mass criterion. For the first three (Earth-like, super Earth and Neptunian) we use the categories that are limited to planets inside , while for the last two (sub-giant and giant) we selected the categories without restriction on the planets’ distance. This choice is consistent with our discussion about the lack of convergence for the smaller-mass planets at larger distances.
The results here are very similar to our previous discussion. For the giant and sub-giant categories, all the multi-embryos populations show a similar distribution. Although we do not show it, this is valid for both sets of categories (all distances or only within ). Thus, for the most massive planets, the number of embryos does not play a role in the final multiplicity, as long as that number is at least around 10. This result is in line with . It can be noted that there are roughly equal numbers of systems with giant planets that have a multiplicity of 1 and of 2. This is consistent with the result of @2016ApJBryan that half of the systems with a giant planet inside have a companion planet.
For Neptunian and super Earth planets inside , we also see that the distributions of multiplicity converge. The Neptunian category does not show much variation between the populations, as for the sub-giant and giant planets. The convergence of the super Earths, however, is only achieved between the two populations with the most embryos per system. In the 10-embryos population there is a steady decrease of the number of systems with higher multiplicities, while in the populations with more embryos systems with several such planets are more likely than lower counts. The Earth-like category shows a similar behaviour, except for the 100-embryos population, which shows fewer systems with high multiplicity than the 50-embryos population. This is most likely related to the formation of the terrestrial planets that we discussed in and Sect. \[sec:am-small\]. Thus, for planets above , increasing the number of embryos further would not increase the planet count.
It should also be noted that, unlike for the other categories or other populations, the Earth-like and super Earth categories in the 50- and 100-embryos populations show a plateau at the low-multiplicity counts. Here, the multiplicities between 1 and 3 have similar probabilities, and they account for of the systems with Earth-like planets and of the systems with super Earths in the 100-embryos population.
In summary, we find that convergence of the overall multiplicity (that is, the total number of planets of a given type divided by the number of systems having such planets) is a good indicator of the convergence of the underlying distribution of multiplicities. The multiplicities of the sub-giant and giant planets at all locations are similar in all multi-embryos populations (though not their locations, see Sect. \[sec:res-giant-loc\]); the same applies for the Neptunian planets inside . For the inner super Earths, only the 50 and 100-embryos populations show similar results, while for the inner Earth-like planets, the 100-embryos population shows a decrease of the multiplicity. Hence, only the 100-embryos population should be used to analyse Earth-like planets.
Correlations between the different types of planets {#sec:types-corr}
---------------------------------------------------
----------- ----- ----- ----- ----- ----- ----- ----- -----
Diversity 10 20 50 100 10 20 50 100
0 48 37 29 19 99 99 101 67
1 217 195 152 110 273 325 337 355
2 372 451 496 499 342 360 380 416
3 188 158 175 225 168 125 127 121
4 93 66 48 33 75 54 43 31
5 82 93 100 114 43 37 12 10
----------- ----- ----- ----- ----- ----- ----- ----- -----
To determine the diversity of planets inside a system compared to the overall population, we need a method to determine how different (or similar) the planets inside a given system are. In this work, we use the five mass categories defined above (Earth-like, super Earth, Neptunian, sub-giant and giant) to compute a diversity index. The Deuterium-burning type has been left out as it is a subset of the giant planets. The diversity index is computed as follows. For each system, we look for which of the above kinds are present. The index is then the span of the different kinds present. For instance, if all planets are of a single kind, then the diversity is 1. If a system has only Earth-like and super Earth planets, or sub-giant and giant planets, then the number is 2. If a system has Earth-like to Neptunian planets, the diversity number is 3, no matter whether super Earths are present in the system or not. The highest value is 5, for systems that have Earth-like to giant planets. For systems that have no planets in any of the categories, we set the diversity number to 0.
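The procedure can be sketched as follows. The mass boundaries (in Earth masses) are the ones from Table \[tab:type-defs\]; treating the intervals as half-open is an assumption of this sketch.

```python
# ordered mass categories: index 0 = Earth-like ... 4 = giant
BOUNDS = [(0.5, 2), (2, 10), (10, 30), (30, 300), (300, float("inf"))]

def diversity_index(masses):
    """Span of the mass categories covered by the planets of one system."""
    present = [i for i, (lo, hi) in enumerate(BOUNDS)
               for m in masses if lo <= m < hi]
    if not present:
        return 0                       # no planet falls in any category
    return max(present) - min(present) + 1

print(diversity_index([1.0, 5.0]))    # Earth-like + super Earth -> 2
print(diversity_index([1.0, 20.0]))   # spans Earth-like..Neptunian -> 3
print(diversity_index([0.1]))         # below all categories -> 0
print(diversity_index([1.0, 400.0]))  # Earth-like..giant -> 5
```

Note that the index only depends on the extreme categories present, so a system with Earth-like and Neptunian planets scores 3 whether or not it also hosts super Earths.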
We provide the number of systems with each diversity number for the four populations with multiple embryos per system in Table \[tab:div\], both for the categories that span all distances (under “All”) and for the ones that consider only planets within (under “”). We see that overall the diversity remains quite limited: in all cases, more than of the systems have a diversity number of 2 or lower. Naturally, many of the underlying systems have only low-mass planets in the Earth-like and super Earth categories. The same is true for the diversity number of 3, which is dominated by systems spanning Earth-like to Neptunian planets. Whether the masses of the planets inside a given system are similar, as suggested by @2017ApJMillholland, will be the subject of a more dedicated investigation [@NGPPS5].
Nevertheless, if we compare the number of systems with giant planets ( for the 50-embryos population and for the 10-embryos population; see Table \[tab:types\]) to the number of high-diversity systems ( to for a value of 5; to for a value of 4), we see that not all systems with giant planets also have low-mass companions. It should nevertheless be noted that in the case of the categories encompassing all distances, we still count systems as having a high diversity. However, it is possible that the low-mass planets that contribute to the diversity index are embryos that did not finish growing at large separations (several aus). As we discussed in Sect. \[sec:types-conv\], distant low-mass planets have not finished growing by the time the model reaches the end of the formation stage. The diversity index computed from categories encompassing all distances might then not reflect the real diversity of planetary systems. To avoid this problem, we also show the diversity index when accounting only for planets within . In that case, about half of the systems with giant planets (from for the 100-embryos population to for the 10-embryos one) have a diversity number of 4 or 5 ( combined for the 100-embryos population, for the 10-embryos one). Thus, at least in the 100-embryos population, most of the systems that have an inner giant planet have no other inner planets smaller than Neptunian.
### Inner companions of giant planets {#sec:types-div-giant}
![Histograms of the lowest-mass planets inside of systems with giant planets. For each multi-embryos population, we show the fraction of systems whose lowest-mass planet within is of the given kind. Systems with no planet larger than (the lower boundary of the Earth-like category) are shown as *None*.[]{data-label="fig:div-giant"}](div_giant.pdf)
For the systems with giant planets, we provide an additional representation of their inner companions (within ) in Fig. \[fig:div-giant\]. What can be observed is that the diversity of these systems is anti-correlated with the initial number of embryos. In the 10-embryos population, of the systems with a giant planet also harbour an Earth-like, super Earth, or Neptunian planet, while this is the case for only of the systems in the 100-embryos population. On the other hand, the fraction of systems with only sub-giant or giant planets increases, and the same goes for the fraction of systems without any planet inside . As the overall distance of the giant planets increases with the initial number of embryos (see Sect. \[sec:res-giant-loc\] and Fig. \[fig:dist-giant\]), the number of systems analysed here whose giant planet is beyond also increases. Now, if these giant planets are the only planets in the system, the system will be reported under “giant” if there is at least one planet inside and under “none” otherwise. Because of this effect alone, the number of systems without any planet inside is expected to increase slightly with the number of embryos. This, however, is not the main reason for the differences seen here.
The main reason for the decrease of the diversity with the number of embryos is a shift in the spacing between the embryos: the initial spacing is smaller with an increasing number of embryos, so the embryos in the 100-embryos population are the most tightly packed. In the 10-embryos population, the number of embryos that are in a favourable location to become giant planets, i.e. just beyond the ice line, is limited. This means that a single embryo (or sometimes two) will attain a mass large enough to undergo runaway gas accretion. These planets will migrate inward, but as we discussed in Sect. \[sec:am-mig-acc\], they might enter a zone where another embryo has grown previously and trigger the gas runaway by halting the accretion of solids. Depending on where the other embryos are, they can be lost because of the perturbation by the forming giant, but closer-in planets can still survive. This is not the case when there are multiple embryos in the region where the cores of giant planets can form. In that case, several massive planets will be on similar orbits. A likely outcome of such a situation is for the system to become dynamically unstable and clear the inner region, so that only a few distant planets remain.
A more in-depth analysis of the correlation between the occurrences of inner super Earths and cold giant planets is done in @NGPPS3. It should be noted that there are several differences between this analysis and the one of the other paper: 1) we do not include any observational bias, and 2) the giant planets whose companions we search for include all distances, not only distant ones (in @NGPPS3, only the ones with a period larger than are accounted for).
Statistical results on the 100 embryos population {#sec:res-100emb}
-------------------------------------------------
-------------------------- ----------- ----------- ------------ ------------ --------- ----------------- -----------------
Number of Number of Fraction Occurrence Multi- Mean \[Fe/H\] Mean ecc.
Type planets systems of systems rate plicity $\pm$ std. dev. $\pm$ std. dev.
All 32030 1000
Mass 8065 960
Earth-like 4660 901
Super Earth 4603 821
Neptunian 438 303
Sub-giant 106 85
Giant 284 181
D-burning 45 45
Earth-like 1618 572
Super Earth 2421 661
Neptunian 359 262
Sub-giant 71 65
Giant 105 92
Habitable zone 560 437
Kepler [@2018AJPetigura] 3344 767
Kepler [@2018ApJZhu] 2934 657
Hot Jupiter 3 3
Jupiter analogues 8 8
Giant 35 35
Giant 16 16
Giant 8 8
-------------------------- ----------- ----------- ------------ ------------ --------- ----------------- -----------------

For the 100-embryos population, we provide key statistical characteristics of the different kinds of planets in Table \[tab:props\], which constitutes the overall predictions of our formation model. The fraction-of-systems column is the same as in Table \[tab:types\]. The mean \[Fe/H\] column denotes the mean host-star metallicity of systems where the relevant kinds of planets are found. We provide an annotated graphical view in Fig. \[fig:am\_legend\]. This figure shows the same data as the bottom-right panel of Fig. \[fig:ame\], but the colouring has been removed and the dot sizes scale with the logarithm of the planets’ physical radii. Following the discussion in Sects. \[sec:am-small\] and \[sec:res-time\], the Earth-like, super Earth, and Neptunian categories at all distances should be interpreted cautiously.
The values of the occurrence rate column for the “All”, “Mass ” and “Giant” categories give the values of the cumulative distribution shown in the bottom panel of Fig. \[fig:mass\] at , and respectively. Out of the initial embryos ( systems with 100 embryos each), only remain at . Most of the embryos () were lost due to giant impacts; were ejected; ended in the central star following close encounters during the formation stage; ended in the central star due to tidal migration in the evolution stage; and were fully evaporated during the evolution stage. Thus, on average, 32 embryos per disc remain, but these are mainly embryos that did not grow, in the outer parts of the disc where accretion is very slow. Of the average of 32 embryos per disc that remain, only 8.4 have a mass larger than , as indicated by the “Mass ” category. For comparison, the solar system has five planets matching the same criterion (plus Venus, whose mass is ). The values are hence similar. The multiplicity is larger for systems with only terrestrial planets, as giants will usually lead to the removal of terrestrial planets .
Most of the sub-giants are also found in the inner part of the disc, with of them being within . These planets either formed late or had their envelope ejected just before the dispersal of the gas disc, so they spent only a limited time in runaway gas accretion. They then spent more time with masses in the range, which means they experienced more migration than giant planets, which had to form more quickly, or terrestrial planets, which are largely unconstrained by the lifetime of the protoplanetary gas disc. It can also be seen that the multiplicity of the sub-giant planets is the lowest, as it is unlikely for two planets to be in the same situation in the same system.
The multiplicity of the distant giant planets is always unity; that is, in no system do we find two (or more) giant planets beyond . As we discussed in Sect. \[sec:res-form-giant\], these planets mostly originate from seeds that were initially positioned within . They are then moved to their final location by one or more close encounters with other massive planets. Out of those systems, nearly half have only one giant planet remaining (the one beyond ), while the others have one (or, in one case, two) other giant planets further in. Nevertheless, all these systems had two giant planets at some point, some of which were subsequently lost, mostly by ejections. The study of systems with giant planets will be the subject of further work [@NGPPS11].
The comparison between the inner categories and the others allows us to recover some information about the location of these planets. Only a few systems have multiple sub-giant and giant planets inside , as we can see in Table \[tab:props\]. What we can learn in addition here is that these do not occur in the systems with the highest metallicity, but rather at moderate values.
Effect of metallicity {#sec:feh}
---------------------


The occurrence rate of giant planets is known to be correlated with the host star’s metallicity . Lower-mass planets on orbits of less than are also preferentially found around metal-rich stars, but the correlation is weaker for other planets [@2016AJMulders; @2018AJPetigura]. This finding, particularly in the case of the giant planets, has been an argument in favour of the core accretion paradigm, as the formation of a sufficiently massive core takes less time when more solids are present, leaving more time for gas accretion [@2004ApJIda2; @2018ApJWang].
The mean stellar metallicity of the systems harbouring the different kinds of planets is provided in Table \[tab:props\]. For both sets of categories that depend on the planet masses (all distances and inside only), the mean metallicity increases with planet mass. The means for Earth-like () and super-Earth () planets are close to that of the overall population (), as is the mean for the inner super Earths. This means that there is almost no metallicity effect for these planet kinds. However, systems with Earth-like planets inside and in the habitable zone are more metal-poor ( and ); these are the only two categories whose mean is lower than that of the overall population. The means for the systems with Neptunian and sub-giant planets increase, but for each category they are similar for all-distance and inner planets. This suggests that there is no dependency on the stellar metallicity for the location of these planets. Giant planets behave similarly, although the mean of the systems with giants inside is slightly lower than the one for all distances. On the other hand, the three hot Jupiters again have hosts with a higher mean metallicity than the distant giant planets.
The trend of increasing stellar metallicity with planet mass continues up to the brown dwarfs (deuterium-burning objects). This is compatible with the results of @2019GeoSciAdibekyan, who found that the brown dwarfs can be explained by the core accretion paradigm, as we do in this work. They also found that it is possible for massive brown dwarfs to form around stars with solar-like metallicity, but only for more massive stars, which we do not model in this work.
We also note a metallicity trend for giant planets at intermediate and large orbital distances. The ones at larger separations are still found around more metal-rich stars than the general population: and for the ones beyond and versus otherwise. We recall that all these systems formed more than one giant planet, some of which were subsequently lost (see Sect. \[sec:res-100emb\]). Most of the distant giants were brought to their distant orbits after one or more close encounters with other massive planets. These encounters happen after the planets have undergone runaway gas accretion, though the planets may continue to accrete after being sent on wide orbits. Hence, it is necessary for multiple giant planets to form in a single system for close encounters to be strong enough to alter the orbits from to more than .
Correlation between multiplicity and metallicity {#sec:res-feh-mult}
------------------------------------------------
Another way to check for a metallicity effect is to look at the correlation between the numbers of certain types of planets and the stellar metallicity. The results of this analysis of the 100-embryos population are provided in Fig. \[fig:feh-all\] for the categories encompassing all distances and Fig. \[fig:feh-close\] for the ones restricted to planets inside . The systems are divided into six equally spaced metallicity bins spanning metallicities from to .
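As an illustration, the per-bin statistics described above can be computed with a short routine such as the following. The bin range, bin count default, and the input arrays are illustrative assumptions, not the actual population data.

```python
import numpy as np

def giant_fraction_by_feh(feh, n_giants, n_bins=6, lo=-0.6, hi=0.5):
    """Per-[Fe/H]-bin fraction of systems hosting >= 1 giant, and the
    mean multiplicity among those hosts.

    feh      : array of stellar metallicities, one entry per system
    n_giants : array of giant-planet counts, one entry per system
    The bin range (lo, hi) is an illustrative assumption.
    """
    edges = np.linspace(lo, hi, n_bins + 1)
    # assign each system to a bin (values outside the range are clipped)
    idx = np.clip(np.digitize(feh, edges) - 1, 0, n_bins - 1)
    frac = np.zeros(n_bins)
    mult = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if not sel.any():
            continue
        hosts = n_giants[sel] >= 1
        frac[b] = hosts.mean()
        if hosts.any():
            mult[b] = n_giants[sel][hosts].mean()
    return edges, frac, mult
```

The same routine applies to any planet category by swapping in the corresponding per-system counts.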
The results for the most massive planets exhibit the expected behaviour: the fraction of systems with massive planets (Neptunian, sub-giants, and giants) increases monotonically with stellar metallicity. The lowest-metallicity bin does not have any system with Neptunian planets or above. The second bin has some systems with Neptunian planets, very few with sub-giants, and none with giants. The next bins show a gradual increase of the fraction of systems with these kinds of planets, reaching roughly half of the systems in the highest-metallicity bin. Additionally, we can see the dependency of the multiplicity on the metallicity. For the sub-giants, as the metallicity increases, systems with only one such planet appear first; further on, systems with two, and in a few cases even three, appear, starting roughly at solar metallicity. For the giant planets, however, the story is interestingly different. In this case, at metallicities high enough to form giant planets, the percentage of systems with a single giant planet, relative to all systems with any number of giant planets, increases with metallicity. That is to say, the mean multiplicity is anticorrelated with the metallicity. This is visible in the fact that systems with two giant planets are less frequent in the highest-metallicity bin than in the one below. Similarly, the five systems with three giants are not in the highest-metallicity bin.
Giant planet formation is thus a self-limiting process. The more giant planets are formed, the more likely it is that these systems become unstable. When an instability occurs, it leads to the loss of planets, by collisions between planets, ejections, or, in a small fraction of the cases, accretion by the central star.
In , dedicated to giant planets, we will quantify the number of giants lost in collisions with other planets and the star, and by ejection out of the system, whereby they become rogue planets.
A further effect occurs for the systems with the highest metallicity: the number of inner planets decreases. All the low-metallicity systems have some inner planets, although these can be of very low mass (there are considerably fewer planets that are Earth-like or more massive). However, this does not mean that the highest-metallicity systems do not form planets: it can be seen that all these systems have at least one Earth-mass planet (top left panel of Fig. \[fig:feh-all\]). What happens is that these systems form several massive planets; owing to their number, the systems become dynamically unstable and the inner planets are lost. Most of these planets collide or are ejected, and some fall into the star. In all but one of the resulting systems, a giant planet remains beyond . In the remaining case, a smaller planet remains, but its low mass is due to envelope ejection.
Summary and conclusions
=======================
In this work, we use the new Generation III Bern model of planetary formation and evolution presented in to compute synthetic planetary populations of solar-like stars. The model assumes that planets form according to the core accretion paradigm. During the formation stage (0 to ), the model self-consistently evolves a 1D radial constant-$\alpha$ gas disc with internal and external photoevaporation, as well as the dynamical state of planetesimals under viscous stirring and damping by gas drag. Accretion of solids by the protoplanets includes both planetesimal accretion and giant impacts, while gas accretion is obtained by solving the 1D spherically symmetric internal structure equations. The model also includes gas-driven planetary migration and gravitational interactions between the protoplanets by means of the `mercury` *N*-body integrator. During the evolutionary phase ( to ), we follow the thermodynamic evolution (cooling and contraction) of the individual planets, including the effects of atmospheric escape, bloating, and stellar tides.
To synthesise populations, we vary four initial conditions of the model according to observed distributions. These Monte Carlo variables are: the initial mass of the gas disc [@2018ApJSTychoniec], the dust-to-gas ratio, which is tied to the stellar \[Fe/H\] [@2005AASantos], the external photoevaporation rate, which is distributed such that the synthetic discs have a lifetime distribution compatible with the observed one (see Sect. \[sec:mwind\]), and the inner edge of the protoplanetary disc [@2017AAVenuti]. Lunar-mass () planetary seeds are placed into the disc with a uniform probability in the logarithm of the distance. We compute five populations, each with a different initial number of seeds per system (or disc).
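As an illustration of this Monte Carlo setup, the sketch below draws the varied quantities for a synthetic population. The specific distributions used here (log-normal disc mass, normal \[Fe/H\] tied to the dust-to-gas ratio via an assumed 0.0149 reference value, log-uniform photoevaporation rate, uniform inner edge, and the 0.05–40 AU seed range) are simplifying stand-ins, not the observationally derived distributions of the actual model; only the log-uniform seed placement follows the text directly.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_initial_conditions(n_systems, n_embryos=100, a_min=0.05, a_max=40.0):
    """Draw illustrative Monte Carlo initial conditions for a population."""
    systems = []
    for _ in range(n_systems):
        m_disc = 10 ** rng.normal(-1.5, 0.3)        # gas disc mass [M_sun]
        feh = rng.normal(0.0, 0.2)                  # stellar [Fe/H]
        dust_to_gas = 0.0149 * 10 ** feh            # tied to metallicity
        m_wind = 10 ** rng.uniform(-7.0, -5.0)      # ext. photoevap. [M_sun/yr]
        r_inner = rng.uniform(0.03, 0.1)            # inner disc edge [AU]
        # lunar-mass seeds, uniform in the logarithm of the distance
        a_seeds = np.sort(10 ** rng.uniform(np.log10(a_min), np.log10(a_max),
                                            n_embryos))
        systems.append(dict(m_disc=m_disc, feh=feh, dust_to_gas=dust_to_gas,
                            m_wind=m_wind, r_inner=r_inner, a_seeds=a_seeds))
    return systems
```

Varying `n_embryos` (1, 10, 20, 50, 100) mirrors the five populations computed in the text.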
One aim of this study is to determine the convergence of the model with respect to this free parameter. Our results for this part are:
- There is a strong difference between the single- and multi-embryos populations. We find that migration in the single-embryo population is more effective than in the multi-embryos populations.
- The properties of the giant planets are only weakly affected by the number of embryos, as long as the latter is at least about 10, consistent with previous work . For example, the fraction of stars with giant planets and their multiplicity are and 1.5 in the 10-embryos case, and and 1.6 in the 100-embryos case.
- For the lower-mass planets, a higher number of embryos is necessary. Only the 100-embryos population is able to track the formation of the lower-mass planets up to the giant-impact stage (large embryo-embryo collisions).
There are two main reasons for these changes. The first is the dynamical interactions between the embryos, as we discussed in . A tighter spacing between the embryos increases their mutual gravitational interactions, which gives them access to more planetesimals to accrete. This helps small-mass planets to accrete a large percentage of the planetesimals at small separations during the time span of our formation models (). For the larger-mass planets, however, the increased number of embryos results in more competition for solids. When the embryos grow to several Earth masses, they undergo gas-driven migration, which gives them access to a larger mass reservoir. However, other embryos will have accreted planetesimals at different places in the disc, so that migrating embryos experience a sudden drop in their growth rate. The more embryos there are, the less migration embryos must have performed before experiencing this effect. This in turn can trigger runaway gas accretion (see discussion in Sect. \[sec:am-mig-acc\]). The second reason is the presence of multiple large embryos. With many embryos, it is more likely to form multiple giant planets, which means that the protoplanets can experience giant impacts. These can lead to envelope stripping of some giant planets. Thus, we find a small proportion of massive cores with a tiny envelope, in contrast to the usual outcome of the core accretion paradigm. Systems with many embryos offer a greater diversity of envelope mass fractions. The increase of dynamical interactions with the number of embryos also has repercussions on the formation tracks, with planets being scattered to wide and eccentric orbits.
One of the reasons for this study is to determine whether the results of the population with many embryos per system can be recovered by populations with a lower initial embryo count. The more embryos are put in each system, the larger the computational requirements (mainly due to the *N*-body integration). For future work in which we want to study the effects of model parameters, it is then more efficient to run the simulations with a lower number of embryos. From this study, we find that planets whose masses are roughly or more are insensitive to this parameter, provided there are at least 10 embryos per system. There are some effects of including more embryos, such as an overall increased distance for the giant planets (see Sect. \[sec:res-giant-loc\] and Fig. \[fig:dist-giant\]), but these are small enough not to create major problems. The single-embryo population is different from the others, and most of its properties are not recovered in the multi-embryos populations. Nevertheless, some outcomes, such as the mass function for planets above roughly , can be retrieved. This means that the study of gas accretion in the detached phase or the overall fraction of giant planets (provided a correction factor is taken into account) can be done with these simple populations that require very limited computational resources.
Based on our population with the highest number of embryos per system (100), we computed properties of different planet kinds that are provided in Table \[tab:props\] and graphically in Fig. \[fig:am\_legend\]. These values represent the predictions of our formation model. The main points are:
- Overall, planetary systems contain on average 8 planets larger than . The fraction of systems with giant planets at all orbital distances is , but only have one further out than . Systems with giants contain on average 1.6 giants. This value is consistent with observations [@2016ApJBryan].
- Inside of , the planet types with the highest occurrence rates and multiplicities are the super Earths (2.4 and 3.7), followed by Earth-like planets (1.6 and 2.8). Next come the Neptunian planets, but with a clearly reduced occurrence rate and multiplicity (0.4 and 1.4).
- The planet mass function varies as $M^{-2}$ between and . At both low and high masses, it follows approximately $M^{-1}$.
- The frequency of terrestrial and super-Earth planets peaks at a stellar metallicity of -0.2 and 0.0, respectively. At lower metallicities, they are limited by a lack of building blocks; at higher metallicities, by the detrimental growth of more massive, potentially dynamically active planets, which results in the accretion or ejection of terrestrial planets and super Earths. The frequency of more massive planet types (Neptunian, giants), in contrast, increases monotonically with \[Fe/H\].
These results are consistent with the observed metallicity effect for giant planets (see Figs. \[fig:feh-all\] and \[fig:feh-close\]).
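The distinction between occurrence rate and multiplicity used throughout these results can be made concrete with a few lines of code; this is an illustrative helper, not part of the model.

```python
import numpy as np

def occurrence_and_multiplicity(counts):
    """Occurrence rate vs. multiplicity from per-system planet counts.

    counts : number of planets of a given category in each system.
    occurrence   = mean count over ALL systems;
    multiplicity = mean count over systems hosting at least one;
    fraction     = fraction of systems hosting at least one.
    Note the identity: occurrence = fraction * multiplicity.
    """
    counts = np.asarray(counts)
    occurrence = counts.mean()
    hosts = counts[counts >= 1]
    multiplicity = hosts.mean() if hosts.size else 0.0
    fraction = hosts.size / counts.size
    return occurrence, multiplicity, fraction
```

For example, four systems with giant counts `[0, 0, 2, 4]` give an occurrence rate of 1.5, a multiplicity of 3.0, and a hosting fraction of 0.5.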
In future work, we will compare these populations with observational data, in a similar fashion to what was already done for radial-velocity and transit surveys [@2019ApJMulders]. This will determine how our populations compare to the known exoplanet population, and allows us to make important steps towards the development of a standard model of planetary system formation and evolution. Observationally, the syntheses represent a large data set that can be searched for synthetic planetary systems comparable to observed ones, showing how the observed systems may have come into existence. The systems, including their full formation and evolution tracks, are available online. Knowing the underlying population will also help to understand the pathways certain categories of systems follow to reach their final stage and the initial conditions they require. It would also permit predictions on the yet-unobserved regions of the parameter space, which is important for the development of future exoplanet discovery missions.
The authors thank Ilaria Pascucci and Rachel B. Fernandes for fruitful discussions. A.E. and E.A. acknowledge the support from The University of Arizona. A.E. and C.M. acknowledge the support from the Swiss National Science Foundation under grant BSSGI0\_155816 “PlanetsInTime”. R.B. and Y.A. acknowledge the financial support from the SNSF under grant 200020\_172746. Parts of this work have been carried out within the frame of the National Center for Competence in Research PlanetS supported by the SNSF. Calculations were performed on the Horus cluster at the University of Bern. The plots shown in this work were generated using *matplotlib* [@2007CSEHunter].
[^1]: The data supporting these findings is available online at <http://dace.unige.ch> under section “Formation & evolution”.
[^2]: <https://dace.unige.ch/populationSearch/>
---
abstract: |
A classical conjecture by Graham Higman states that the number of conjugacy classes of $U_n(q)$, the group of upper triangular $n\times n$ matrices over $\fq$, is polynomial in $q$, for all $n$. In this paper we present both positive and negative evidence, verifying the conjecture for $n\le 16$, and suggesting that it probably fails for $n\ge 59$.
The tools are both theoretical and computational. We introduce a new framework for testing Higman’s conjecture, which involves recurrence relations for the number of conjugacy classes of *pattern groups*. These relations are proved by the *orbit method* for finite nilpotent groups. Other applications are also discussed.
author:
- ' Igor Pak$^\star$'
- ' Andrew Soffer$^\star$'
title: 'On Higman’s $k(U_n(q))$ Conjecture'
---
Introduction
============
Let $k(G)$ denote the number of conjugacy classes of a finite group $G$, and let $U_n(q)$ be the group of upper triangular $n \times n$ matrices over a finite field $\fq$ with ones on the diagonal. In [@H1], Higman made the following celebrated conjecture:
\[conj:higman\] For every positive integer $n$, the number of conjugacy classes in $U_n(q)$ is a polynomial function of $q$.
Higman was motivated by the problem of enumerating $p$-groups. Since then, much effort has been made to verify and establish the result. Notably, Arregi and Vera-López verified Higman’s conjecture for $n\le 13$ in [@VA3] (see below and §\[fin\_rems\_va\]). More recently, John Thompson [@Tho] laid some ground towards a positive resolution of the conjecture, but the proof remains elusive.
In this paper we make a new push towards resolving the conjecture, presenting both positive and negative evidence. Perhaps surprisingly, results of both types are united by the same underlying idea of embedding smaller pattern groups into larger ones (see below).
\[thm:n16\] Higman’s conjecture holds for all $n\le 16$. Moreover, for all $n \le 16$, we have $k(U_n) \in \nn[q-1]$.
This extends the results of Arregi and Vera-López and earlier computational results in favor of Higman’s conjecture. Our approach is based on computing the polynomials indirectly via a recursion over certain co-adjoint orbits arising in the finite field analogue of Kirillov’s *orbit method* (see [@K2; @K3]). This approach is substantially different and turns out to be significantly more efficient than the previous work which is based on direct enumeration of the conjugacy classes. We present the algorithm proving Theorem \[thm:n16\] in Section \[sec:experimental\] and describe the earlier work in Section \[sec:fin-rems\].
Our approach is based on a recursion over a large class of pattern groups. A *pattern group* is a subgroup of $U_n(q)$ where some matrix entries are fixed to be zeroes. In a recent paper [@HP], Halasi and Pálfy showed that the analogue of Higman’s conjecture fails for certain pattern groups. In fact, they show that $k({\ensuremath{U_{\hspace{-0.6mm}P}}}(q))$ can be as bad as one desires, e.g. non-polynomial even when the characteristic of $\fq$ is fixed. This work was the starting point of our investigation. Our next two results are also computational.
\[thm:hp-small\] For every pattern subgroup ${\ensuremath{U_{\hspace{-0.6mm}P}}}(q) \leqslant U_{9}(q)$, we have $k({\ensuremath{U_{\hspace{-0.6mm}P}}}(q))\in\nn[q-1]$.
While this shows that small pattern groups do exhibit polynomial behavior, this is false for larger $n$.
\[thm:hp-new\] There is a pattern subgroup ${\ensuremath{U_{\hspace{-0.6mm}P}}}(q) \leqslant U_{13}(q)$ such that $k({\ensuremath{U_{\hspace{-0.6mm}P}}}(q))$ is *not* a polynomial function of $q$.
Note that while Halasi and Pálfy’s approach is constructive, they do not give an explicit bound on the size of such a pattern group (c.f. §\[ssec:fin-rems-posets\]). We believe that the constant 13 in Theorem \[thm:hp-new\] is optimal, but this computation remains out of reach in part due to the excessively large number of pattern groups to consider (see Section \[sec:experimental\]).
Our final result offers evidence against Higman’s conjecture:
\[thm:u59\] The pattern subgroup ${\ensuremath{U_{\hspace{-0.6mm}P}}}(q)$ from Theorem \[thm:hp-new\] *embeds* into $U_{59}(q)$.
Here the notion of *embedding* is somewhat technical and iterative. In Section \[sec:embedding\], we prove that $$k\left(U_{n}(q)\right) = \sum_{P} F_{P,n}(q) \cdot k\left({\ensuremath{U_{\hspace{-0.6mm}P}}}(q)\right),$$ where $F_{P,n}(q) \in \zz[q]$ are polynomials and the sum is over pattern subgroups ${\ensuremath{U_{\hspace{-0.6mm}P}}}(q)$ which embed into $U_{n}(q)$, and are irreducible in a certain formal sense. Taking Theorem \[thm:hp-new\] into account, this strongly suggests that $k\left(U_{n}(q)\right)$ is not polynomial for sufficiently large $n$.
\[conj:false-59\] The number of conjugacy classes $k\left(U_{n}(q)\right)$ is *not* polynomial for $n \ge 59$.
Of course, this conjecture is hopelessly beyond the means of a computer experiment. We believe that Theorem \[thm:n16\] can in principle be extended to $n\le 18$ by building upon our approach, and parallelizing the computation (see §\[ssec:fin-rems-parallel\]). It is unlikely however, that this would lead to a disproof of Higman’s Conjecture \[conj:higman\] without a new approach.
Curiously, this brings the status of Higman’s conjecture in line with that of Higman’s related but more famous *PORC conjecture* (see [@BNV]). It was stated in 1960, also by Graham Higman, in a followup paper [@H2]. Higman conjectured that the number $f(p^n)$ of $p$-groups of order $p^n$ is a polynomial on each fixed residue class modulo some $m$. Higman showed that the number $f(p^n)$ can also be expressed as a large sum over certain *descendants*. Recently, Vaughan-Lee and du Sautoy showed that some of the terms counting the numbers of descendants are non-polynomial [@DVL]. Here is how Vaughan-Lee eloquently explains this in [@VL]:
> [ “The grand total might still be PORC, even though we know that one of the individual summands is not PORC. My own view is that this is extremely unlikely. But in any case I believe that Marcus’s group provides a counterexample to what I hazard to call the *philosophy* behind Higman’s conjecture.”]{}
We are hoping the reader views our results in a similar vein (c.f. §\[ssec:fin-rems-alperin\]).
The rest of this paper is structured as follows. We begin with definitions and notation in Section \[sec:defs\]. In Section \[sec:pattern\], we prove some preliminary results on co-adjoint orbits of the pattern groups. We then proceed to develop combinatorial tools giving recursions for the number of co-adjoint orbits (Section \[sec:combo\_tools\]). Section \[sec:embedding\] is essentially poset-theoretic, and allows us to combine the results to prove theorems \[thm:hp-new\] and \[thm:u59\]. The experimental work which proves theorems \[thm:n16\] and \[thm:hp-small\] is given in Section \[sec:experimental\]. We conclude with final remarks and open problems in Section \[sec:fin-rems\].
Definitions and notation {#sec:defs}
========================
For any finite group $G$, we write $k(G)$ for the number of conjugacy classes in $G$. Throughout, $q=p^r$ is a prime power, and we denote by $\fq$ the finite field with $q$ elements. In the matrix ring $M_{n\times n}(\fq)$, the element $e_{i,j}$ denotes the matrix which is one in cell $(i,j)$ and zero everywhere else. For $\alpha\in\fq^\times$, we let ${E}_{i,j}(\alpha)$ denote the elementary transvection ${E}_{i,j}(\alpha):=1+\alpha e_{i,j}$.
Throughout the paper, all posets are finite, and typically denoted by the letters $P$ and $Q$. We adopt the following notation regarding posets (c.f. [@Sta]). By a slight abuse of notation, we identify a poset $P$ with its ground set, on which the partial order “$\prec$" is defined. We use ${\cal C^{n}}$ and ${\cal I^{n}}$ to denote the $n$-chain and $n$-antichain, respectively. We use $\max(P)$ and $\min(P)$ to denote the set of maximal and minimal elements, respectively. The set of anti-chains in $P$ is denoted by $\operatorname{ac}(P)$. The set of pairs of distinct related elements of a poset $P$ is denoted $$\operatorname{rel}(P):=\{(x,y): x\prec_P y\}.$$ When the poset $P$ is clear from context, we omit the subscript on the relation. The [*upper*]{} and [*lower bounds*]{} of an element $x\in P$ are defined as $$\operatorname{ub}_P(x):=\{y\in P: x\prec y\} \quad \text{and} \quad \operatorname{lb}_P(x):=\{y\in P: y\prec x\}.$$
For a subset $S \subseteq P$, let $P|_S$ denote the subposet of $P$ induced on the set $S$. As a special case, for $x\in P$, we write $P-x$ for the subposet of $P$ induced on $P\setminus\{x\}$. For an element $x\in P$, let $P^{(x)}$ be the poset consisting exclusively of the relations where the larger element is $x$. That is, we have: $$\operatorname{rel}\left( P^{(x)} \right)=\{(w,x): w\prec_P x\}.$$ We say that a poset $P$ is $Q$-free if no induced subposet of $P$ is isomorphic to $Q$. For example, a poset is ${\cal I^{2}}$-free if and only if it is a chain. Similarly, a poset is ${\cal C^{2}}$-free if and only if it is ${\cal I^{n}}$.
For a poset $P$, the [*dual*]{} poset $P^*$ will be the one whose relations are reversed. That is, if $x\prec_P y$, then $y\prec_{P^*}x$. We also define two constructions of posets from smaller ones. First, for posets $P$ and $Q$, their [*disjoint union*]{} $P\amalg Q$ is a poset whose elements are the elements of $P$ and $Q$, and for which $x\prec y$ if either
1. $x,y\in P$, and $x\prec_P y$, or
2. $x,y\in Q$, and $x\prec_Q y$.
Clearly, up to isomorphism, the operation $\amalg$ is both commutative and associative. Second, the [*lexicographic sum*]{} $P+Q$ is the poset whose elements are the elements of $P$ and $Q$, and for which $x\prec y$ if any of the following hold:
1. $x,y\in P$, and $x\prec_P y$,
2. $x,y\in Q$, and $x\prec_Q y$, or
3. $x\in P$ and $y\in Q$.
In terms of the Hasse diagrams (the usual graphical representation of a poset), the lexicographic sum is obtained by placing $Q$ above $P$. Hence, it is clear that the lexicographic sum is not commutative, but is associative (up to isomorphism).
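The two poset constructions above can be sketched directly in code. The representation below (a ground set plus a transitively closed set of strict relations, with elements of the two summands tagged `'p'` and `'q'`) is our own illustrative choice, not notation from the paper.

```python
class Poset:
    """A finite poset: a ground set plus strict relations (x, y) meaning x < y."""

    def __init__(self, elements, relations):
        self.elements = list(elements)
        rel = set(relations)
        # take the transitive closure so rel(P) lists all related pairs
        changed = True
        while changed:
            changed = False
            for (a, b) in list(rel):
                for (c, d) in list(rel):
                    if b == c and (a, d) not in rel:
                        rel.add((a, d))
                        changed = True
        self.rel = rel

def chain(n):
    """The n-chain C^n on {0, ..., n-1}."""
    return Poset(range(n), {(i, j) for i in range(n) for j in range(i + 1, n)})

def antichain(n):
    """The n-antichain I^n: no relations at all."""
    return Poset(range(n), set())

def disjoint_union(P, Q):
    """P ⨿ Q: side-by-side copies, no cross relations."""
    rel = {(('p', a), ('p', b)) for (a, b) in P.rel} | \
          {(('q', a), ('q', b)) for (a, b) in Q.rel}
    elems = [('p', x) for x in P.elements] + [('q', x) for x in Q.elements]
    return Poset(elems, rel)

def lex_sum(P, Q):
    """P + Q: Q placed above P (every element of P below every element of Q)."""
    R = disjoint_union(P, Q)
    R.rel |= {(('p', a), ('q', b)) for a in P.elements for b in Q.elements}
    return R
```

For instance, `lex_sum(chain(2), chain(3))` has the same number of relations as `chain(5)`, reflecting that ${\cal C^{2}}+{\cal C^{3}}\cong{\cal C^{5}}$.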
Pattern groups and the co-adjoint action {#sec:pattern}
========================================
Definitions and basic results
-----------------------------
We recall the definition of pattern algebras and pattern groups given by Isaacs in [@Isa], but with slightly different notation. Let $P$ be a poset on $\{1,2,\dots,n\}$ which has the standard ordering as a linear extension. That is, whenever $i{\preccurlyeq}_P j$, then we also have $i\le j$. Define the [*pattern algebra*]{} ${\cal U}_P(q)$ to be $${\cal U}_P(q):=\{X\in M_{n\times n}(\fq) : X_{i,j}=0\text{ if }i\not\prec j\}.$$ Every pattern algebra ${\cal U}_P(q)$ is a nilpotent $\fq$-algebra. In fact, the pattern algebra ${\cal U}_P(q)$ is a subalgebra of the strictly upper-triangular matrices ${\cal U}_n(q)$.[^1]
Define the [*pattern group*]{} ${\ensuremath{U_{\hspace{-0.6mm}P}}}(q):=\{1+X : X\in{\cal U}_P(q)\}$. For general posets $P$, the group ${\ensuremath{U_{\hspace{-0.6mm}P}}}(q)$ is a subgroup of the unitriangular group $U_n(q)$. To simplify notation, we often omit the field and write ${\cal U}_P$ instead of ${\cal U}_P(q)$. Similarly, we abbreviate ${\ensuremath{U_{\hspace{-0.6mm}P}}}(q)$ by ${\ensuremath{U_{\hspace{-0.6mm}P}}}$.
We exhibit several examples of posets and their associated pattern algebras and pattern groups.
1. If $P={\cal C^{n}}$, then ${\cal U}_P={\cal U}_n$, and ${\ensuremath{U_{\hspace{-0.6mm}P}}}=U_n$.
2. If $P={\cal I^{n}}$, then ${\cal U}_P$ is the trivial algebra, and ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ consists only of the identity matrix.
3. If $P$ is the poset in Figure \[fig:pattern\_alg\_exmp\], then ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ consists of matrices of the form shown in the figure. We can see that as a vector space, ${\cal U}_P$ is generated by $$\{e_{1,2},\ e_{1,3},\ e_{1,4},\ e_{1,5},\ e_{2,3},\ e_{2,4},\ e_{2,5},\ e_{3,4}\}.$$ These are precisely the elements $e_{i,j}$ where $i\prec_P j$. As an algebra, ${\cal U}_P$ can be generated by fewer elements. In particular, the pattern algebra ${\cal U}_P$ can be generated (as an algebra) by $\{e_{1,2},e_{2,3},e_{3,4},e_{2,5}\}$. Note that $e_{i,j}$ is in this set precisely when $i$ and $j$ are connected by a line segment in the Hasse diagram (see Figure \[fig:pattern\_alg\_exmp\]). The generators are the minimal relations (in the language of posets, the [*cover relations*]{}).
*(Hasse diagram of $P$ for Figure \[fig:pattern\_alg\_exmp\]: cover relations $1\prec 2$, $2\prec 3$, $3\prec 4$, and $2\prec 5$.)*
$$\left( \begin{array}{ccccc}
1 & \ast & \ast & \ast & \ast\\
0 & 1 & \ast & \ast & \ast\\
0 & 0 & 1 & \ast & 0\\
0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 1
\end{array} \right)$$
Pattern groups have a particularly nice presentation which we will need in Section \[sec:combo\_tools\].
\[prop:generators\] For every poset $P$, we have $${\ensuremath{U_{\hspace{-0.6mm}P}}}(q)\. =\. \angs{{E}_{i,j}(\alpha)\middle|i\prec_P j,\alpha\in\fq^\times}.$$ Moreover, for every $\alpha,\beta\in\fq^\times$, we have $$[{E}_{i,j}(\alpha),{E}_{k,\ell}(\beta)]\. = \.
\begin{cases}
{E}_{i,\ell}(\alpha\beta) & \text{if }j=k\ts,\\
1 & \text{if }j\ne k\ts.
\end{cases}$$
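The commutator relation of the proposition can be checked numerically over a prime field. The sketch below is illustrative: it assumes $q=p$ prime so that field arithmetic is simply modulo $p$, and uses the convention $[x,y]=xyx^{-1}y^{-1}$.

```python
import numpy as np

def E(n, i, j, a, p):
    """Elementary transvection E_{i,j}(a) = 1 + a*e_{i,j} over F_p (1-indexed)."""
    m = np.eye(n, dtype=np.int64)
    m[i - 1, j - 1] = a % p
    return m

def inv_unitriangular(m, p):
    """Inverse of a unitriangular matrix over F_p via (1+N)^{-1} = sum_k (-N)^k;
    the sum is finite since N is strictly upper triangular, hence nilpotent."""
    n = m.shape[0]
    N = (m - np.eye(n, dtype=np.int64)) % p
    inv = np.eye(n, dtype=np.int64)
    term = np.eye(n, dtype=np.int64)
    for _ in range(n - 1):
        term = (-term @ N) % p
        inv = (inv + term) % p
    return inv

def commutator(x, y, p):
    """Group commutator [x, y] = x y x^{-1} y^{-1}, reduced mod p."""
    return (x @ y @ inv_unitriangular(x, p) @ inv_unitriangular(y, p)) % p
```

With $p=7$, one finds $[{E}_{1,2}(3),{E}_{2,4}(5)]={E}_{1,4}(15)$ (i.e. ${E}_{1,4}(1)$ since $15\equiv 1 \bmod 7$), while $[{E}_{1,2}(3),{E}_{3,4}(5)]$ is the identity, matching the two cases of the proposition.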
The adjoint and co-adjoint actions for pattern groups
-----------------------------------------------------
The [*adjoint action*]{} of ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ on ${\cal U}_P$ is defined by $$\operatorname{Ad}_g:X\mapsto gXg^{-1}$$ for $g\in {\ensuremath{U_{\hspace{-0.6mm}P}}}$ and $X\in{\cal U}_P$. Enumerating conjugacy classes of a pattern group is equivalent to enumerating orbits of the adjoint action. Indeed, the action of ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ on itself by conjugation is equivariant with the adjoint action, as $$1+\operatorname{Ad}_g(X)=1+gXg^{-1}=g(1+X)g^{-1}.$$ We consider the [*co-adjoint action*]{} of ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ on the dual of ${\cal U}_P$. For $f\in{\cal U}_P^*$ and $g\in {\ensuremath{U_{\hspace{-0.6mm}P}}}$, define $K_g(f)\in{\cal U}_P^*$ by $$K_g(f):X\mapsto f(g^{-1}Xg),$$ for all $X\in{\cal U}_P$. In other words, the co-adjoint action is given by $K_g(f)=f\circ\operatorname{Ad}_{g^{-1}}$.
\[lem:adcoad\] The number of co-adjoint orbits for a pattern group ${\ensuremath{U_{\hspace{-0.6mm}P}}}(q)$ is equal to the number of adjoint orbits, and hence $k({\ensuremath{U_{\hspace{-0.6mm}P}}}(q))$.
Note that several versions of the lemma are known (c.f [@K1; @K2]). In particular, Kirillov proves the special case of $U_n$ in [@K3]. We present a full proof here for completeness.
Extend both the adjoint and the co-adjoint actions by linearity from ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ to the entire group algebra $\zz[{\ensuremath{U_{\hspace{-0.6mm}P}}}]$. Then for $f\in{\cal U}_P^*$, the co-adjoint action $K_{g-1}$ annihilates $f$ if and only if $f$ vanishes on the image of $\operatorname{Ad}_{g^{-1}-1}$. Indeed, $$K_{g-1}(f)=K_g(f)-f=f\circ\operatorname{Ad}_{g^{-1}}-f=f\circ\operatorname{Ad}_{g^{-1}-1}.$$ Let $I_g=\im(\operatorname{Ad}_{g^{-1}-1})$. We apply Burnside’s lemma to count the orbits of the co-adjoint action: $$\begin{aligned}
\bigl|{\cal U}_P^*/{\ensuremath{U_{\hspace{-0.6mm}P}}}\bigr| \, &= \, \frac1{\abs{{\ensuremath{U_{\hspace{-0.6mm}P}}}}}\. \sum_{g\in {\ensuremath{U_{\hspace{-0.6mm}P}}}}\. \bigl|\ker(K_{g-1})\bigr| \, = \,
\frac1{\abs{{\ensuremath{U_{\hspace{-0.6mm}P}}}}}\. \sum_{g\in {\ensuremath{U_{\hspace{-0.6mm}P}}}} \. \#\left\{f\in{\cal U}_P^*\. | \. I_g\subseteq\ker f\right\}\\
&= \, \frac1{\abs{{\ensuremath{U_{\hspace{-0.6mm}P}}}}}\. \sum_{g\in {\ensuremath{U_{\hspace{-0.6mm}P}}}}\. q^{\dim{\cal U}_P-\dim I_g} \, = \,
\frac1{\abs{{\ensuremath{U_{\hspace{-0.6mm}P}}}}}\. \sum_{g\in {\ensuremath{U_{\hspace{-0.6mm}P}}}}\. \bigl|\ker(\operatorname{Ad}_{g^{-1}-1})\bigr| \, = \, \bigl|{\cal U}_P/{\ensuremath{U_{\hspace{-0.6mm}P}}}\bigr|\ts.
\end{aligned}$$ This completes the proof.
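The orbit counts in the lemma can be verified by brute force in the smallest interesting case. The sketch below (an illustration only, not part of the proof) enumerates $U_3(\mathbb{F}_2)$ together with its algebra of strictly upper-triangular matrices, and checks that the number of conjugacy classes equals the number of adjoint orbits; both equal $q^2+q-1=5$ at $q=2$.

```python
from itertools import product

n, q = 3, 2  # the chain poset 1 < 2 < 3, so U_P = U_3 over F_2

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % q
                       for j in range(n)) for i in range(n))

I = tuple(tuple(int(i == j) for j in range(n)) for i in range(n))

def upper(vals, diag):
    a, b, c = vals
    return ((diag, a, b), (0, diag, c), (0, 0, diag))

group   = [upper(v, 1) for v in product(range(q), repeat=3)]  # U_3(F_2), order 8
algebra = [upper(v, 0) for v in product(range(q), repeat=3)]  # the algebra U_3

def inverse(g):
    # brute-force inverse; fine for a group of order 8
    return next(h for h in group if mat_mul(g, h) == I)

def num_orbits(points):
    # orbits of x -> g x g^{-1}: conjugacy classes on the group,
    # adjoint orbits on the algebra
    seen, count = set(), 0
    for x in points:
        if x not in seen:
            count += 1
            seen |= {mat_mul(mat_mul(g, x), inverse(g)) for g in group}
    return count

conj_classes   = num_orbits(group)
adjoint_orbits = num_orbits(algebra)
print(conj_classes, adjoint_orbits)  # 5 5
```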
An explicit realization of the co-adjoint action {#subsec:lower}
------------------------------------------------
In place of functionals on pattern algebras, we identify ${\cal U}^*_P$ with a quotient space of matrices. Define $${\cal L}_P(q):=\left.M_{n\times n}(\fq)\middle/\bigoplus_{i\not\succ j}\fq e_{i,j}\right.\..$$
When $P={\cal C^{n}}$ (the total order $\{1<\dots<n\}$), then ${\cal L}_P$ is the space of strictly lower triangular matrices, thought of as a quotient of the space of all matrices by the upper-triangular matrices (hence the notation “${\cal L}$”). For a general poset $P$, the space ${\cal L}_P$ is a quotient space of the strictly lower triangular matrices.
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & & \*(gray) & \*(gray) & \*(gray)\
& & & & \*(gray) & \*(gray)\
& & & & & \*(gray)\
& & & & &
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & & \*(gray) & \*(gray) & \*(gray)\
& & & & \*(gray) & \*(gray)\
& & & \*(gray) & \*(gray) & \*(gray)\
& & & & &
The space ${\cal L}_P$ is isomorphic to ${\cal U}_P^*(q)$. Specifically, for each $X\in{\cal L}_P(q)$, define the functional $f_X\in{\cal U}_P^*$ by $$f_X(A)\. := \. \tr(X\cdot A)\ts,$$ for $A\in{\cal U}_P$. This identification is well-defined, as the quotiented cells in ${\cal L}_P$ (those $e_{i,j}$ with $i\not\succ j$) precisely align with the cells that are forced to be zero by the definition of ${\cal U}_P$. That is, if $i\not\succ j$, then for $A\in{\cal U}_P$, we have $A_{j,i}=0$. Thus, their contribution to the trace is zero.
Pushing the co-adjoint action through this identification yields an action of ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ on ${\cal L}_P$, which we also call the co-adjoint action (and also write $K_g$). For $g\in {\ensuremath{U_{\hspace{-0.6mm}P}}}$ and $L\in{\cal L}_P$, the action becomes $$K_g(L)\.=\. gLg^{-1}.$$ To be precise, let $\rho:M_{n\times n}\to{\cal L}_P$ denote the canonical projection map. For ${X\in{\cal L}_P}$, pick a representative $X'\in M_{n\times n}$ so that $\rho(X')=X$. Then ${K_g(X)=\rho(gX'g^{-1})}$. It is evident that the choice of such an $X'$ is irrelevant.
Let $P$ denote the poset shown in Figure \[fig:pattern\_alg\_exmp\], and let $X\in{\cal L}_P(q)$ denote the element shown below on the left. We consider the co-adjoint action of the elementary matrix $E={E}_{2,3}(1)$ on $X$.
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& 0 & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& 1 & 0 & \*(gray) & \*(gray) & \*(gray)\
& 1 & 1 & 0 & \*(gray) & \*(gray)\
& 0 & 1 & \*(gray) & \*(gray) & \*(gray)\
& & & & &
$\xrightarrow{\hspace{7mm}K_E\hspace{7mm}}$
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& 1 & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& 1 & 0 & \*(gray) & \*(gray) & \*(gray)\
& 1 & 1 & -1 & \*(gray) & \*(gray)\
& 0 & 1 & \*(gray) & \*(gray) & \*(gray)\
& & & & &
Consider the left multiplication $X\mapsto EX$. This action adds the contents of row 3 to row 2. Thus, for $Y=K_E(X)$, we have $Y_{2,1}=X_{2,1}+X_{3,1}$. All other cells in row 2 are trivial in ${\cal L}_P$. For the right multiplication $EX\mapsto EXE^{-1}$, we take the contents of column 2 and subtract them from the contents of column 3. We see that $Y_{4,3}=X_{4,3}-X_{4,2}$. All other cells in column 3 are trivial in ${\cal L}_P$, so this is the only relevant data.
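The row and column operations in this example are easy to check mechanically. The NumPy sketch below (an illustration over $\mathbb{Z}$ rather than $\mathbb{F}_q$, with a random matrix standing in for a representative $X'$) conjugates by $E_{2,3}(1)$ and confirms which entries of the quotient can change.

```python
import numpy as np

n = 6
E = np.eye(n, dtype=int)
E[1, 2] = 1                # E_{2,3}(1) in 0-indexed coordinates
E_inv = np.eye(n, dtype=int)
E_inv[1, 2] = -1           # E_{2,3}(1)^{-1} = E_{2,3}(-1)

rng = np.random.default_rng(0)
X = rng.integers(0, 7, size=(n, n))  # an arbitrary representative X'

Y = E @ X @ E_inv                    # K_E(X), before projecting back to L_P

# Left multiplication adds row 3 to row 2:  Y_{2,1} = X_{2,1} + X_{3,1}.
assert Y[1, 0] == X[1, 0] + X[2, 0]
# Right multiplication subtracts column 2 from column 3:  Y_{4,3} = X_{4,3} - X_{4,2}.
assert Y[3, 2] == X[3, 2] - X[3, 1]
# A cell away from row 2 and column 3 is untouched, e.g.  Y_{3,1} = X_{3,1}.
assert Y[2, 0] == X[2, 0]
```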
Observe that each of these actions (conjugation, the adjoint action, and the co-adjoint actions on ${\cal U}^*_P$ and ${\cal L}_P$) has the same number of orbits. Therefore, $$k({\ensuremath{U_{\hspace{-0.6mm}P}}}(q)) \. = \. \bigl|{\cal U}_P(q)/{\ensuremath{U_{\hspace{-0.6mm}P}}}(q)\bigr|\. = \.
\bigl|{\cal U}_P^*(q)/{\ensuremath{U_{\hspace{-0.6mm}P}}}(q)\bigr| \. =\. \bigl|{\cal L}_P(q)/{\ensuremath{U_{\hspace{-0.6mm}P}}}(q)\bigr|\ts.$$ We use the notation $k(P)$ for this quantity.
Combinatorial tools {#sec:combo_tools}
===================
In this section we construct several tools to compute $k(P)$ using the structure of the poset $P$. We begin with several simple observations which lead to useful tools.
Elementary operations
---------------------
We begin with the following result, which can be seen easily in the language of the co-adjoint action on ${\cal L}_P$ (see §\[subsec:lower\]). We prove it via elementary group theory.
\[prop:obs\] For posets $P$ and $Q$, we have
1. $k(P)=k(P^*)$
2. $k(P)=k(P_1)\cdot k(P_2)$ where $P_i=P|_{S_i}$ for $i=1,2$, and $S_1,S_2\subseteq P$ are such that $S_1\cup S_2=P$, every relation of $P$ is a relation of $P_1$ or of $P_2$, and $P|_{S_1\cap S_2}$ contains no relations.
3. $k(P\amalg Q)=k(P)\cdot k(Q)$
For (1), we must label the elements $P^*$ appropriately so that $i\le j$ whenever $i{\preccurlyeq}_{P^*} j$ (as required by the definition of pattern groups). Let $n=\abs{P}$, and for each $i\in P$, relabel the element $i$ with the label $n+1-i$. This will reverse the total ordering on $P$ so that it agrees with the partial ordering on $P^*$. In terms of matrices, we have expressed $U_{\hspace{-0.6mm}P^*}$ as the elements of ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ “transposed” about the anti-diagonal. Let $\phi$ denote this anti-diagonal transposition. Then the map $g\mapsto \phi(g^{-1})$ is an isomorphism between the groups ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ and $U_{\hspace{-0.6mm}P^*}$, proving $k(P)=k(P^*)$.
For (2), let $P_i=P|_{S_i}$ for $i=1,2$. We claim that the following map $\psi:U_{\hspace{-0.6mm}P_1}\times U_{\hspace{-0.6mm}P_2}\to {\ensuremath{U_{\hspace{-0.6mm}P}}}$ defined by $\psi(g_1,g_2)=g_1g_2$ is the isomorphism. First, note that for $g_i\in U_{\hspace{-0.6mm}P_i}$, the elements $g_1$ and $g_2$ commute. To this end, it suffices to see that generators commute, which follows from the fact that $P|_{S_1\cap S_2}$ has no relations, and Proposition \[prop:generators\]. Then $\psi$ is a homomorphism, as $$\psi(g_1,g_2)\psi(h_1,h_2)\. = \. g_1g_2h_1h_2=g_1h_1g_2h_2\. = \. \psi(g_1h_1,g_2h_2)\ts , \ \text{ and}$$ $$\psi(g_1,g_2)^{-1}\. = \. g_2^{-1}g_1^{-1}=g_1^{-1}g_2^{-1}\. = \. \psi(g_1^{-1},g_2^{-1})\ts.$$ Whenever $x\prec_P y$, then either $x\prec_{P_1}y$ or $x\prec_{P_2}y$. Therefore, every generator ${E}_{x,y}(\alpha)$ of ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ is either a generator of $U_{\hspace{-0.6mm}P_1}$ or $U_{\hspace{-0.6mm}P_2}$, so $\psi$ is surjective. It follows from the fact that ${\abs{U_{\hspace{-0.6mm}P_1}\times U_{\hspace{-0.6mm}P_2}}=\abs{{\ensuremath{U_{\hspace{-0.6mm}P}}}}}$ that $\psi$ is an isomorphism, proving that $k(P)=k(P_1)\cdot k(P_2)$.
For (3), apply (2) to the poset $P\amalg Q$ with $S_1=P$ and $S_2=Q$.
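These multiplicativity statements can be sanity-checked by brute force for tiny parameters. The sketch below (illustrative only; a poset is passed as its full, transitively closed list of 1-indexed relations) counts conjugacy classes of a pattern group directly and confirms $k(P\amalg Q)=k(P)\cdot k(Q)$ for two disjoint $2$-chains over $\mathbb{F}_2$.

```python
from itertools import product

def identity(n):
    return tuple(tuple(int(r == c) for c in range(n)) for r in range(n))

def mul(A, B, q):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % q
                       for j in range(n)) for i in range(n))

def pattern_group(n, rels, q):
    """All matrices over F_q equal to the identity except on the cells
    (i, j) with i < j in P; rels must be transitively closed."""
    elems = []
    for vals in product(range(q), repeat=len(rels)):
        M = [list(row) for row in identity(n)]
        for (i, j), v in zip(rels, vals):
            M[i - 1][j - 1] = v
        elems.append(tuple(map(tuple, M)))
    return elems

def k(n, rels, q=2):
    """Brute-force number of conjugacy classes of U_P(q)."""
    G = pattern_group(n, rels, q)
    inv = {g: next(h for h in G if mul(g, h, q) == identity(n)) for g in G}
    seen, classes = set(), 0
    for x in G:
        if x not in seen:
            classes += 1
            seen |= {mul(mul(g, x, q), inv[g], q) for g in G}
    return classes

# Two disjoint 2-chains: k(P ⊔ Q) = k(P) · k(Q) = q · q = 4 at q = 2.
assert k(4, [(1, 2), (3, 4)]) == k(2, [(1, 2)]) * k(2, [(1, 2)])
print(k(3, [(1, 2), (1, 3), (2, 3)]))  # the chain C^3: q^2 + q - 1 = 5
```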
Poset systems
-------------
Let $P$ be a subposet of $P'$. Then ${\cal U}_P$ canonically injects into ${\cal U}_{P'}$, and so we obtain a canonical projection $$\pi_{P',P}:{\cal L}_{P'}\to{\cal L}_P.$$ This projection sends $e_{i,j}$ to zero whenever $i\prec_{P'}j$, but $i\not\prec_Pj$. For specific choices of $P$ and $P'$, this map can be used effectively to enumerate $k(P')$.
Fix a maximal element $m\in\max(P)$. Of particular interest will be the poset $P^{(m)}$, defined by $$\operatorname{rel}\left( P^{(m)} \right)=\{(x,m): x\prec_P m\}.$$ To simplify notation, for the remainder of this section, let $Q=P^{(m)}$, and let $\pi=\pi_{P,Q}$. That is, the projection $\pi$ annihilates all $e_{i,j}\in{\cal L}_P$ which are not of the form $e_{m,x}$ for $x\prec_Pm$ (see Figure \[fig:proj\_exmp\]).
(1,2) circle (.1cm); at (1.5,2) [5]{};
(-1,3) circle (.1cm); at (-1.5,3) [4]{};
(-1,2) circle (.1cm); at (-1.5,2) [3]{};
(0,1) circle (.1cm); at (-0.5,1) [2]{};
(0,0) circle (.1cm); at (-0.5,0) [1]{};
(1,2) – (0,1); (-1,3) – (-1,2); (-1,2) – (0,1); (0,1) – (0,0);
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & & \*(gray) & \*(gray) & \*(gray)\
& & & & \*(gray) & \*(gray)\
& & & \*(gray) & \*(gray) & \*(gray)\
& & & & &
$\xrightarrow{\hspace{5mm}\pi\hspace{5mm}}$
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & & \*(gray) & \*(gray) & \*(gray)\
& & & & &
The map $\pi$ induces an action of ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ on ${\cal L}_Q$, the orbits of which are easy to analyze. Define the [*support*]{} of an element $X\in {\cal L}_Q$ to be $$\operatorname{supp}(X):=\{x\in Q: X_{m,x}\ne0\}.$$ Each ${\ensuremath{U_{\hspace{-0.6mm}P}}}$-orbit of ${\cal L}_Q$ contains precisely one element whose support is an anti-chain in $\operatorname{lb}(m)$. We can stratify the ${\ensuremath{U_{\hspace{-0.6mm}P}}}$-orbits of ${\cal L}_P$ by their image in ${\cal L}_Q$ under the map $\pi$. That is, $$\label{eqn:ugly_sum}
k(P)=\sum_X\abs{\pi^{-1}(X)\middle/\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(X)}\tag{$\ast$},$$ where the sum is over all elements in ${\cal L}_{Q}$ with anti-chain support.
Moreover, if $X,Y\in{\cal L}_{Q}$ have the same support $A\in\operatorname{ac}(\operatorname{lb}(m))$, then the corresponding summands for $X$ and for $Y$ in ($\ast$) are equal. This can be seen by allowing the diagonal matrices to act on ${\cal L}_P$ by conjugation, and noting that for an appropriate choice of diagonal matrix $\dz$, we have $$\dz \ts \pi^{-1}(X)\ts \dz^{-1}\. =\. \pi^{-1}(Y)\ts.$$ Furthermore, for the same diagonal matrix $\dz$, we have $$\dz\ts\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(X)\ts \dz^{-1}\. =\. \operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(Y)\ts.$$ Therefore we can sum over a single representative for each anti-chain, and take each summand with multiplicity $(q-1)^{\abs{A}}$. That is, $$\label{eqn:less_ugly_sum}
k(P)=\sum_{A\in\operatorname{ac}(\operatorname{lb}(m))}(q-1)^{\abs{A}}\abs{\pi^{-1}(1_A)\middle/\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(1_A)}\tag{$\ast\ast$}$$ where $1_A=\sum_{a\in A} e_{m,a}$, the indicator of $A$. Pictorially, we are stratifying the ${\ensuremath{U_{\hspace{-0.6mm}P}}}$-orbits of ${\cal L}_P$ by the bottom row in their associated diagram (see Figure \[fig:pictorial\]).
(0,0) circle (.1cm); at (-0.5,0) [1]{};
(-1,1) circle (.1cm); at (-1.5,1) [2]{};
(1,1) circle (.1cm); at (1.5,1) [3]{};
(0,2) circle (.1cm); at (-0.5,2) [4]{};
(0,3) circle (.1cm); at (-0.5,3) [5]{};
(0,2) – (1,1) – (0,0) – (-1,1) – (0,2) – (0,3);
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & & & \*(gray) & \*(gray)\
& 0 & 0 & 0 & 0 & \*(gray)\
& & & & &
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & & & \*(gray) & \*(gray)\
& 1 & 0 & 0 & 0 & \*(gray)\
& & & & &
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & & & \*(gray) & \*(gray)\
& 0 & 1 & 0 & 0 & \*(gray)\
& & & & &
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & & & \*(gray) & \*(gray)\
& 0 & 0 & 1 & 0 & \*(gray)\
& & & & &
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & & & \*(gray) & \*(gray)\
& 0 & 0 & 0 & 1 & \*(gray)\
& & & & &
& \*(gray) & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & \*(gray) & \*(gray) & \*(gray) & \*(gray)\
& & & & \*(gray) & \*(gray)\
& 0 & 1 & 1 & 0 & \*(gray)\
& & & & &
The notation in ($\ast\ast$) is quite cumbersome, even after suppressing some of the subscripts. We make the following definition, which keeps track of the essential data.
A [*poset system*]{} is a triple $(P,m,A)$ consisting of a poset $P$, a maximal element $m\in\max(P)$, and an anti-chain $A\in\operatorname{ac}(\operatorname{lb}(m))$.
Let $S=(P,m,A)$ be a poset system. By a slight abuse of notation, we define $k(S)=k(S;q)$ as follows: $$k(S) \. := \. \abs{\pi^{-1}(1_A)\middle/\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(1_A)}\ts,$$ where $\pi=\pi_{P,Q}$ and $Q=P^{(m)}$ as above. For any poset $P$, and any $m\in\max(P)$, we may rewrite ($\ast\ast$) in this more condensed notation, $$\label{eqn:main_tool}
k(P)=\sum_{A\in\operatorname{ac}(\operatorname{lb}(m))}(q-1)^{\abs A}k(P,m,A).\tag{$\circ$}$$ This relation is our main tool for computing $k(U_n)$. We show that under certain conditions on poset systems $S$, there exists a poset $Q$ for which ${k(S)=k(Q)}$. When such a poset exists, we then recursively apply ($\circ$).
Formally, whenever $k(S)=k(P)$ for a poset $P$ and poset system $S$, we say that $S$ [*reduces to*]{} $P$, and that $S$ is [*reducible*]{}.
If every poset system were reducible, an inductive argument would imply that $k(P)$ is a polynomial in $q$ for every poset $P$. This is certainly not the case, as Halasi and Pálfy have constructed posets for which $k(P)$ is not a polynomial [@HP]. However, by adding suitable constraints, we can guarantee that $S$ is reducible.
\[lem:remove\_max\] Let $S=(P,m,A)$ be a poset system such that there exists no pair of elements $(a,x)\in A\times P$ for which $a\prec x\prec m$. Then $S$ reduces to $P-m$.
We begin by showing that the entire group ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ stabilizes $1_A$. Let $\alpha\in\fq^\times$, and let $E={E}_{x,y}(\alpha)$ be a generator of ${\ensuremath{U_{\hspace{-0.6mm}P}}}$. If $x\not\in A$, then it is easy to see that $K_E(1_A)=1_A$. On the other hand, if $x\in A$, then by assumption $y\not\prec m$. Therefore, we have $K_E(1_A)=1_A-\alpha e_{m,y}=1_A$, as $e_{m,y}$ is trivial in ${\cal L}_Q$.
From Proposition \[prop:generators\], we know that each of the generators ${E}_{x,y}(\alpha)$ of ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ is either a generator of $U_{\hspace{-0.6mm}P-m}$, or of the form ${E}_{x,m}(\alpha)$ for $\alpha\in\fq^\times$. Because, by the hypothesis on $S$, each generator of the form ${E}_{x,m}(\alpha)$ acts trivially on $\pi^{-1}(1_A)$, we have $$k(S)=\abs{\pi^{-1}(1_A)/\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(1_A)}=\abs{\pi^{-1}(1_A)/U_{\hspace{-0.6mm}P-m}}.$$
Now every element of $U_{\hspace{-0.6mm}P-m}$ acts trivially on row $m$ (the $\fq$-linear span of $e_{m,x}$). Simply removing this row yields the co-adjoint action of $U_{\hspace{-0.6mm}P-m}$ on ${\cal L}_{P-m}$, so $$k(S)=\abs{\pi^{-1}(1_A)/U_{\hspace{-0.6mm}P-m}}=\abs{{\cal L}_{P-m}/U_{\hspace{-0.6mm}P-m}}=k(P-m)$$ as desired.
\[lem:normal\_conj\] Let $(P,m,A)$ be a poset system, and suppose that $a,b\in A$ satisfy $$\operatorname{ub}(a)\supseteq\operatorname{ub}(b)\text{ and }\operatorname{lb}(a)\subseteq\operatorname{lb}(b).$$ Then $k(P,m,A)=k(P,m,A-\{b\})$.
Let $\Phi:{\cal L}_P\to{\cal L}_P$ denote conjugation by $E={E}_{a,b}(1)$. Note that ${E}\not\in{\ensuremath{U_{\hspace{-0.6mm}P}}}$, since $a$ and $b$ are incomparable. However, $E$ normalizes ${\ensuremath{U_{\hspace{-0.6mm}P}}}$, and so the map $\Phi$ is well-defined. As a slight abuse of notation, we also use $\Phi$ to denote the conjugation map $\Phi:{\ensuremath{U_{\hspace{-0.6mm}P}}}\to{\ensuremath{U_{\hspace{-0.6mm}P}}}$ given by $\Phi(g)=EgE^{-1}$. It is now a triviality that for $X\in{\cal L}_P$ and $g\in{\ensuremath{U_{\hspace{-0.6mm}P}}}$, we have $$\Phi(K_g(X))=K_{\Phi(g)}(\Phi(X)).$$
Now let $Q=P^{(m)}$ and $\pi=\pi_{P,Q}$. Pushing $\Phi$ through $\pi$ to an action on ${\cal L}_Q$, we have $$\Phi(1_A)=E(1_A)E^{-1}=1_A-e_{m,b}=1_{A-\{b\}}.$$ Moreover, as $\Phi$ commutes with $\pi$, we have $\Phi(\pi^{-1}(1_A))=\pi^{-1}(1_{A-\{b\}})$. Lastly, note that $\Phi(\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(1_A))=\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(1_{A-\{b\}})$. Thus, we have $$k(P,m,A)=\abs{\pi^{-1}(1_A)\middle/\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(1_A)}=\abs{\pi^{-1}(1_{A-\{b\}})\middle/\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(1_{A-\{b\}})}=k(P,m,A-\{b\}),$$ as desired.
The operator ${\cal D}$
-----------------------
Let $S=(P,m,A)$ be a poset system. Define ${\cal D}(S)$ to be a poset obtained from $P$ by removing relations $a\prec x$ whenever the following two criteria hold:
1. $a\in A$, and $a\prec x\prec m$.
2. If $a'\in A$ and $a'\prec x$, then $a'=a$.
Stated more concisely, the set of pairs of related elements in ${\cal D}(S)$ is given by $$\operatorname{rel}({\cal D}(S))=\operatorname{rel}(P)\setminus\{(a,x): a\prec x\prec m,\ \abs{A\cap\operatorname{lb}(x)}=1\}.$$
In Figures \[fig:D\_example1\] and \[fig:D\_example2\], we provide examples of poset systems $S$ and of the result of applying the operator ${\cal D}$. Poset systems are shown graphically as the Hasse diagram of the underlying poset with marked elements. Generic elements of $P$ are denoted by “$\bullet$,” as they normally are in the Hasse diagram of a poset. The elements of the anti-chain $A$ are denoted by “$\circ$.” The maximal element $m$ is denoted by “[$\square$]{}.”
(0,4) – (0,0);
(-0.1,3.9) rectangle (0.1,4.1); at (-0.5,4) [5]{};
(0,3) circle (.1cm); at (-0.5,3) [4]{};
(0,2) circle (.1cm); at (-0.5,2) [3]{};
(0,1) circle (.1cm); at (-0.5,1) [2]{};
(0,0) circle (.1cm); at (-0.5,0) [1]{};
(10,3) circle (.1cm); at (9.5,3) [5]{};
(10,2) circle (.1cm); at (9.5,2) [3]{};
(11,2) circle (.1cm); at (11.5,2) [4]{};
(10,1) circle (.1cm); at (9.5,1) [2]{};
(10,0) circle (.1cm); at (9.5,0) [1]{};
(11,2) – (10,1); (10,3) – (11,2); (10,3) – (10,0); (10,1) – (10,0);
(1,4) – (0,3); (-1,4) – (0,3); (0,3) – (0,0);
(-1.1,3.9) rectangle (-0.9,4.1); at (-1.5,4) [6]{};
(1,4) circle (.1cm); at (1.5,4) [5]{};
(0,3) circle (.1cm); at (-0.5,3) [4]{};
(0,2) circle (.1cm); at (-0.5,2) [3]{};
(0,1) circle (.1cm); at (-0.5,1) [2]{};
(0,0) circle (.1cm); at (-0.5,0) [1]{};
(9,3) circle (.1cm); at (8.5,3) [6]{};
(10,3) circle (.1cm); at (10.5,3) [5]{};
(9,2) circle (.1cm); at (8.5,2) [4]{};
(10,2) circle (.1cm); at (10.5,2) [3]{};
(10,1) circle (.1cm); at (10.5,1) [2]{};
(10,0) circle (.1cm); at (10.5,0) [1]{};
(9,3) – (9,2); (9,3) – (10,2); (9,2) – (10,3); (10,2) – (10,3); (9,2) – (10,1); (10,2) – (10,1); (10,1) – (10,0);
\[lem:apply\_d\] For any poset system $S=(P,m,A)$, we have $k(S)=k({\cal D}(S),m,A)$.
Let $Q=P^{(m)}$. Not only is $Q$ a subposet of $P$, but it is also a subposet of ${\cal D}(S)$. Therefore every element of $A$ is less than $m$ in ${\cal D}(S)$ as well as in $P$, so the poset system $({\cal D}(S),m,A)$ is well-defined.
We first show that $\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(1_A)=\operatorname{stab}_{U_{{\cal D}(S)}}(1_A)$. Clearly $\operatorname{stab}_{U_{{\cal D}(S)}}(1_A)\le\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(1_A)$, so to show equality, it suffices to show that the two stabilizers have the same cardinality. Let $\Omega_P$ denote the ${\ensuremath{U_{\hspace{-0.6mm}P}}}$-orbit of ${\cal L}_Q$ containing $1_A$, and let $\Omega_{{\cal D}(S)}$ denote the $U_{{\cal D}(S)}$-orbit of ${\cal L}_{Q}$ containing $1_A$. By the orbit-stabilizer theorem, it is enough to show that $$\frac{\abs{{\ensuremath{U_{\hspace{-0.6mm}P}}}}}{\abs{U_{{\cal D}(S)}}} = \frac{\abs{\Omega_P}}{\abs{\Omega_{{\cal D}(S)}}}.$$
It is immediate from the definition of pattern groups that $\abs{{\ensuremath{U_{\hspace{-0.6mm}P}}}}=q^{\abs{\operatorname{rel}(P)}}$. Thus, we have $\abs{{\ensuremath{U_{\hspace{-0.6mm}P}}}}/\abs{U_{{\cal D}(S)}}=q^{\abs{R}}$, where $R=\operatorname{rel}(P)\setminus\operatorname{rel}({\cal D}(S))$. We may characterize $R$ in a different way: $$R=\{(a,x)\in A\times \operatorname{lb}(m):\text{$a$ is the unique element of $A$ below $x$}\}.$$ For pairs $(a,x)\in R$, the element $a\in A$ is uniquely defined by $x$, and so $R$ is in bijection with the set $$R'=\{x\prec_P m: \abs{\operatorname{lb}(x)\cap A}=1\}.$$
We now turn to the orbits $\Omega_P$ and $\Omega_{{\cal D}(S)}$ in ${\cal L}_Q$. For $X$ in either orbit, certainly $X_{m,a}=1$ for each $a\in A$, and $X_{m,x}=0$ if $x\not\succ_P a$ for all $a\in A$. If, on the other hand, there does exist some $a\in A$ for which $a\prec_P x$, then by conjugation one can obtain any value at $X_{m,x}$. Specifically, note that for $E={E}_{a,x}(\alpha)$, we have $K_E(X)=X-\alpha e_{m,x}$. It follows that
$$\frac{\abs{\Omega_P}}{\abs{\Omega_{{\cal D}(S)}}}\. = \. q^{|R_1| \ts - \ts |R_2|}\ts, \ \ \. \text{where}$$
$$R_1 \ts = \ts \{x\prec_P m~:~a\prec_P x\text{ for some }a\in A\} \ \, \text{and} \ \, R_2 \ts = \ts
\{x\prec_P m~:~a\prec_{{\cal D}(S)} x\text{ for some }a\in A\}.$$
From the definition of ${\cal D}(S)$, we have $a\prec_{{\cal D}(S)} x$ if and only if there is more than one element of $A$ which is less than $x$ in $P$. Hence, $$\frac{\abs{\Omega_P}}{\abs{\Omega_{{\cal D}(S)}}}=q^{\#\{x~:~\abs{\operatorname{lb}(x)\cap A}=1\}}=q^{\abs{R'}}.$$
This proves that $\operatorname{stab}_{{\ensuremath{U_{\hspace{-0.6mm}P}}}}(1_A)=\operatorname{stab}_{U_{{\cal D}(S)}}(1_A)$. For the remainder of the proof, we let $G$ denote both of these groups. We are now left to show that $\pi_{P,Q}^{-1}(1_A)$ and $\pi_{{\cal D}(S),Q}^{-1}(1_A)$ are isomorphic $G$-sets. To do so, we need to construct a map between these two sets which preserves $G$-orbits. There is a natural choice for such a map: The canonical projection $\pi_{P,{\cal D}(S)}:{\cal L}_P\to{\cal L}_{{\cal D}(S)}$ restricts to $$\rho:\pi_{P,Q}^{-1}(1_A)\longrightarrow\pi_{{\cal D}(S),Q}^{-1}(1_A).$$
We now argue that $\rho$ preserves $G$-orbits. More precisely, we claim that for all $X,Y\in\pi_{P,Q}^{-1}(1_A)$, the elements $X$ and $Y$ belong to the same $G$-orbit if and only if $\rho(X)$ and $\rho(Y)$ belong to the same $G$-orbit.
Because $\rho$ respects the co-adjoint action, it is clear that $\rho(X)$ and $\rho(Y)$ belong to the same $G$-orbit whenever $X$ and $Y$ belong to the same $G$-orbit. In the other direction, suppose $\rho(X)=\rho(K_g(Y))$ for some $g\in G$. Then $X-K_g(Y)\in\ker\rho$. It is easy to see that $$\ker\rho\, = \. \bigoplus_{(a,x)\in R}\fq e_{x,a}\hspace{1mm}.$$ Indeed, the pairs $(a,x)\in R$ are precisely the pairs of elements for which $a\prec_P x$ but $a\not\prec_{{\cal D}(S)}x$, so linear combinations of the $e_{x,a}$ are exactly the elements which are projected away by $\rho$. Now let $(a,x)\in R$, and let $E={E}_{x,m}(\alpha)$. For $Z\in\pi_{P,Q}^{-1}(1_A)$, we have $$K_E(Z)=Z+\alpha e_{x,a}.$$ Thus, if two elements of $\pi_{P,Q}^{-1}(1_A)$ differ by an element of $\ker\rho$, they must belong to the same $G$-orbit. In particular, $X$ and $K_g(Y)$ belong to the same $G$-orbit. This proves $$k(S)=\abs{\pi_{P,Q}^{-1}(1_A)/G}=\abs{\pi_{{\cal D}(S),Q}^{-1}(1_A)/G}=k({\cal D}(S),m,A),$$ which completes the proof.
\[lem:antichain\_chain\] Let $S=(P,m,A)$ be a poset system with $A=\{a_1,\dots,a_k\}$ such that $$\begin{aligned}
\label{eqn:lbub}
\operatorname{lb}_P(a_1)\subseteq\operatorname{lb}_P(a_2)\subseteq\cdots\subseteq\operatorname{lb}_P(a_k)\text{ and}\\
\operatorname{ub}_P(a_1)\subseteq\operatorname{ub}_P(a_2)\subseteq\cdots\subseteq\operatorname{ub}_P(a_k).\hspace{5mm}
\end{aligned}$$ Further suppose that $m$ is the unique maximum above $a_1$. Then $S$ is reducible.
We proceed by induction on $\abs{A}$. If $A=\emp$, then $k(S)=k(P-m)$ by Lemma \[lem:remove\_max\]. If $\abs A=1$, then $k(S)=k({\cal D}(S)-m)$ by lemmas \[lem:apply\_d\] and \[lem:remove\_max\] applied in succession.
Now suppose the result holds whenever the anti-chain has fewer than $k$ elements, and let $\abs{A}=k$. Applying Lemma \[lem:apply\_d\], we have $k(S)=k({\cal D}(S),m,A)$. Let $$R \. := \. \operatorname{rel}(P)\setminus\operatorname{rel}({\cal D}(S))\ts,$$ and note that because $m$ is the unique maximum in $P$, we have $$R \. = \. \left\{(a,x)\in A\times P: \operatorname{lb}(x)\cap A=\{a\}\right\}\ts.$$ If $(a_i,x)\in R$, then $a_i\prec_P x$, and for all $j\ne i$, it must be that $a_j\not\prec_P x$. Thus, if $(a_i,x)\in R$, it must be that $i=k$, and so $$R=\{(a_k,x): x\in\operatorname{ub}_P(a_k)\setminus\operatorname{ub}_P(a_{k-1})\}.$$ Therefore $\operatorname{ub}_{{\cal D}(S)}(a_k)=\operatorname{ub}_{{\cal D}(S)}(a_{k-1})$, and so $({\cal D}(S),m,A)$ satisfies the hypotheses of Lemma \[lem:normal\_conj\]. This tells us that $k({\cal D}(S),m,A)=k({\cal D}(S),m,A-\{a_k\})$. By inductive hypothesis, there exists a poset $Q$ for which $k({\cal D}(S),m,A-\{a_k\})=k(Q)$. Stringing these equalities together yields $k(S)=k(Q)$, as desired.
Reduction of [$\mathrm{\mathbf{Y}}$]{}-posets
---------------------------------------------
With suitable constraints on the poset, we may obtain a recurrence relation for the number of conjugacy classes in its pattern group. One such constraint is as follows. Define the poset ${\ensuremath{\mathrm{\mathbf{Y}}}}$ as in Figure \[fig:y\_poset\]. Recall that a poset is [*[$\mathrm{\mathbf{Y}}$]{}-free*]{} if it does not have the poset ${\ensuremath{\mathrm{\mathbf{Y}}}}$ as an induced subposet.
(0,1) – (0,2); (-1,0) – (0,1) – (1,0);
(0,2) circle (.1cm); (0,1) circle (.1cm); (1,0) circle (.1cm); (-1,0) circle (.1cm);
\[thm:y\_free\] Let $P$ be a [$\mathrm{\mathbf{Y}}$]{}-free poset, and let $m\in\max(P)$. Then $$k(P)=\sum_{S=(P,m,A)}(q-1)^{\abs{A}}k({\cal D}(S)-m).$$
Let $S=(P,m,A)$ be a poset system. In light of ($\circ$), it suffices to show that $$k(S)=k({\cal D}(S)-m).$$ By Lemma \[lem:apply\_d\], we see that $k(S)=k({\cal D}(S),m,A)$. We claim that if $P$ is [$\mathrm{\mathbf{Y}}$]{}-free, then ${\cal D}(S)$ has no element $x$ for which $a\prec_{{\cal D}(S)} x\prec_{{\cal D}(S)} m$ for some $a\in A$. Once this claim is established, Lemma \[lem:remove\_max\] proves that ${k(S)=k({\cal D}(S)-m)}$. Suppose for the sake of contradiction that there exist $x\in{\cal D}(S)$ and $a\in A$ such that $${a\prec_{{\cal D}(S)}x\prec_{{\cal D}(S)}m}.$$ Because ${\cal D}(S)$ is obtained from $P$ by removing relations, certainly $a\prec_P x\prec_P m$. Moreover, because $a\prec_P x$ was not removed, we know that $\abs{A\cap\operatorname{lb}(x)}>1$. Thus, there must be some other $b\in A$ with $b\prec_P x$. Now $\{a,b,x,m\}$ induces a copy of [$\mathrm{\mathbf{Y}}$]{} in $P$, which is a contradiction.
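As a small illustration of this recurrence (a routine check against the classical value $k(U_3)=q^2+q-1$), take $P={\cal C^{3}}$, the chain $1\prec2\prec3$, with $m=3$. The anti-chains in $\operatorname{lb}(3)=\{1,2\}$ are $\emptyset$, $\{1\}$, and $\{2\}$. For $A=\emptyset$ and $A=\{2\}$, no relation is removed, so ${\cal D}(S)-m={\cal C^{2}}$ and the contributions are $k({\cal C^{2}})=q$ and $(q-1)\ts q$, respectively. For $A=\{1\}$, the relation $1\prec2$ is removed, so ${\cal D}(S)-m$ is a two-element anti-chain with trivial pattern group, contributing $(q-1)\cdot1$. Altogether, $$k(U_3)\.=\.q+(q-1)q+(q-1)\.=\.q^2+q-1\ts,$$ as expected.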
Theorem \[thm:y\_free\] did not use the full strength of the [$\mathrm{\mathbf{Y}}$]{}-freeness condition. It is only necessary that $P$ be [$\mathrm{\mathbf{Y}}$]{}-free below a single maximal element. Hence, we have the following strengthening of the theorem.
\[thm:y\_free\_stronger\] Let $P$ be a poset, and suppose that there exists some $m\in\max(P)$ such that the poset induced on $\{x: x{\preccurlyeq}_P m\}$ is [$\mathrm{\mathbf{Y}}$]{}-free. Then $$k(P)=\sum_{S=(P,m,A)}(q-1)^{\abs{A}}k({\cal D}(S)-m).$$
Interval posets
---------------
In a different direction, we consider interval posets. Given a collection of closed intervals $I_k=[\ell_k,r_k]$ in $\rr$, one can define a partial order called the [*interval order*]{} on $\{I_k\}$ by declaring $I_j{\preccurlyeq}I_k$ whenever $r_j\le \ell_k$. An [*interval poset*]{} is a poset which is the interval order of some family of intervals on a line. The class of interval posets is well studied (see e.g. [@Tro]), and has several equivalent characterizations. For our purposes, the important properties of interval posets will be items (3) and (4) in the following theorem.
\[thm:interval\_equiv\] For a poset $P$, the following are equivalent:
1. $P$ is an interval poset,
2. $P$ is $({\cal C^{2}}\amalg{\cal C^{2}})$-free,
3. the collection of sets $\operatorname{ub}(x)$ for $x\in P$ is totally ordered by inclusion, and
4. the collection of sets $\operatorname{lb}(x)$ for $x\in P$ is totally ordered by inclusion.
From here we have the following positive result.
Every interval poset with a unique maximal element is reducible.
From ($\circ$), it suffices to show that every poset system $S=(P,m,A)$ is reducible. We do so by induction on $\abs{A}$. If $\abs{A}=0$ then the result follows from Lemma \[lem:remove\_max\]. If $\abs{A}=1$, the result follows from lemmas \[lem:apply\_d\] and \[lem:remove\_max\] applied in succession. Otherwise, suppose that $\abs{A}\ge 2$. If there exist $a,b\in A$ which satisfy the conditions of Lemma \[lem:normal\_conj\], then $k(S)=k(P,m,A-\{b\})$, and the inductive hypothesis proves the claim. We may therefore assume that for every $a,b\in A$, whenever $\operatorname{lb}(a)\subseteq\operatorname{lb}(b)$ we also have $\operatorname{ub}(b)\nsubseteq\operatorname{ub}(a)$. Recall that in an interval poset, the sets $\operatorname{lb}(x)$ are totally ordered by inclusion, so we order the elements of $A=\{a_1,\dots,a_k\}$ such that $$\operatorname{lb}(a_1)\subseteq\operatorname{lb}(a_2)\subseteq\cdots\subseteq\operatorname{lb}(a_k).$$ For each $i<j$, we know that $\operatorname{ub}(a_j)\nsubseteq\operatorname{ub}(a_i)$. However, in an interval poset, the sets $\operatorname{ub}(x)$ are also totally ordered by inclusion. We conclude that $$\operatorname{ub}(a_1)\subseteq\operatorname{ub}(a_2)\subseteq\cdots\subseteq\operatorname{ub}(a_k),$$ and the result follows from Lemma \[lem:antichain\_chain\].
Embedding {#sec:embedding}
=========
Embedding sequences
-------------------
Consider an attempt to compute $k(U_n)=k({\cal C^{n}})$ by recursively applying ($\circ$) along with the other tools developed in Section \[sec:combo\_tools\]. If a poset system $S$ appears in a computation and is reducible to a poset $P$, we can replace $k(S)$ with $k(P)$, and compute $k(P)$, applying ($\circ$) again. We show that for every poset $P$, one can take $n$ sufficiently large so that $k(P)$ appears in the recursive expansion of $k(U_n)$. With the following definition, we make this statement precise in Theorem \[thm:chain\_univ\].
We say that a poset $P$ [*strongly embeds*]{}[^2] into a poset $Q$ if there exists a sequence of poset systems $S_1,\dots, S_n$ with $S_i=(P_i,m_i,\{a_i\})$, such that
1. $P_0=P$,
2. $P_n=Q$,
3. for $0\le i<n$, we have $P_i\cong{\cal D}(S_{i+1})-m_{i+1}$.
When $P$ strongly embeds into $Q$, we write $P\rightsquigarrow Q$. The sequence $$P=P_0\rightsquigarrow P_1\rightsquigarrow\cdots\rightsquigarrow P_{n-1}\rightsquigarrow P_n=Q$$ is called a [*strong embedding sequence*]{}. When we wish to signify that the strong embedding sequence has length $n$, we write $P\overset{n}{\rightsquigarrow}Q$.
Note that the antichains in each poset system are required to have exactly one element. Thus, lemmas \[lem:remove\_max\] and \[lem:apply\_d\] can be applied, and $k(P_i)=k(S_{i+1})$. The following observations regarding strong embedding are easy.
Let $P$, $Q$, and $R$ be posets such that $P{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } Q$. Then
1. $R+P { { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } R+Q$, and
2. $R\amalg P { { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } R\amalg Q$.
The next few lemmas are technical, so we provide an outline of our methods for showing that every poset strongly embeds into a chain. First, Lemma \[lem:add\_top\] tells us that if we have a poset $P$ sitting inside a larger poset $P+{\cal C^{k}}$, it is safe to focus just on $P$. That is, any strong embedding of $P$ into a chain can be transformed into a strong embedding of $P+{\cal C^{k}}$ into an even larger chain. With this in mind, we may safely assume that $P$ does not have a unique maximum.
Next, Lemma \[lem:max\_el\] proves that we can take a maximal element $m$ of $P$ and connect it to each of the other elements in $P$. The result will be a poset which has a chain sitting atop it which can safely be ignored.
Finally, the content of Theorem \[thm:chain\_univ\] applies Lemma \[lem:max\_el\] inductively, proving that each poset strongly embeds into a chain.
\[lem:add\_top\] Let $P$, $Q$, and $R$ denote posets, and suppose $P{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } Q$. Then we have $$P+R{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 2k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } Q+R+{\cal C^{k}}.$$
We proceed by induction on $k$. We first show the result for $P{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 1$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }Q$. Let $(Q,m,\{a\})$ be a poset system for which $P\cong {\cal D}(Q,m,\{a\})-m$. Then $$\operatorname{rel}(P)=\operatorname{rel}(Q-m)\setminus\{(a,x): a\prec_Q x\prec_Q m\}.$$ We define poset systems $S_1$ and $S_2$ to yield a strong embedding sequence for $P+R{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }Q+R+{\cal C^{1}}$. We work backwards from $Q+R+{\cal C^{1}}$, first defining $S_2$, then defining $S_1$ in terms of $S_2$.
Let $m'$ denote the unique maximal element in $Q+R+{\cal C^{1}}$, and define $$\begin{aligned}
S_2 &= (Q+R+{\cal C^{1}}, m', \{m\})\text{ and}\\
S_1 &= ({\cal D}(S_2)-m', m, \{a\}).
\end{aligned}$$ We aim to show that ${\cal D}(S_1)-m\cong P+R$. To this end, we begin with $Q+R+{\cal C^{1}}$ and follow backwards through the strong embedding sequence to determine which relations were removed. First, for ${\cal D}(S_2)-m'$, the relations removed were all the relations of the form $(m,r)$ for $r\in R$. It follows that $m$ is maximal in ${\cal D}(S_2)-m'$.
Next ${\cal D}(S_1)-m$ removes all of the relations $(a,x)$ where $a\prec_Q x\prec_Q m$. The result is that $P+R$ and ${\cal D}(S_1)-m$ have precisely the same relations and are therefore isomorphic posets. This proves that $P+R{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 2$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }Q+R+{\cal C^{1}}$, which concludes the base case.
Assume that for all posets $P$, $Q$, and $R$, whenever $P{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } Q$ we have $P+R{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 2k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } Q+R+{\cal C^{k}}$. Suppose we have posets $P$ and $Q$ for which $P{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle k+1$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } Q$. Write the strong embedding sequence $$P=P_0{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } P_k{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 1$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } Q.$$ By the inductive hypothesis $P+R{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 2k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }P_k+R+{\cal C^{k}}$. Furthermore, because $P_k{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 1$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } Q$, the base case shows us that $$P_k+(R+{\cal C^{k}}){ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 2$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }Q+(R+{\cal C^{k}})+{\cal C^{1}}=Q+R+{\cal C^{k+1}}.$$ Together, we have $P+R{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 2k+2$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }Q+R+{\cal C^{k+1}}$, which completes the induction.
\[lem:max\_el\] Let $P$ be a poset, and let $m\in\max(P)$. Then $$P{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } (P-m)+{\cal C^{k+1}},$$ where $k=\abs{P}-\abs{\operatorname{lb}_P(m)}-1$.
Let $X=\{x: x\not{\preccurlyeq}_P m\}$, and note that $\abs{X}=k$. Order the elements of $X$ as $x_1,\dots,x_k$ according to some reverse linear extension of $P$, so that if $x_i{\preccurlyeq}_P x_j$, then $i\ge j$.
Let $Q_0=(P-m)+{\cal C^{\abs{X}+1}}$, and label the elements in $${\cal C^{\abs{X}+1}}=\{m<p_k<p_{k-1}<\cdots< p_1\}.$$ For $1\le i\le k$, define $Q_i$ recursively as $Q_i={\cal D}(Q_{i-1},p_i,\{x_i\})-p_i$.
The relations removed in passing from $Q_{i-1}$ to $Q_i$ are simple to describe: $$\operatorname{rel}(Q_{i-1})\setminus\operatorname{rel}(Q_i)=\{(x_i,p_j): i+1\le j\le k\}\cup\{(x_i,m)\}.$$ Note that the elements $p_1,\dots,p_k$ are all absent from $Q_k$, so the removal of the relations $\{(x_i,p_j): i+1\le j\le k\}$ is immaterial. However, we did remove $(x_i,m)$ for each $i$. By the definition of $X$, we have the equality $Q_k=P$. Thus, we have constructed a strong embedding sequence $$P=Q_k{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }Q_{k-1}{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }\cdots{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }Q_0=(P-m)+{\cal C^{\abs{X}+1}},$$ which proves the result.
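To make the count in Lemma \[lem:max\_el\] concrete: the quantity $k=\abs{P}-\abs{\operatorname{lb}_P(m)}-1$ is exactly $\abs{X}$, since $P$ is partitioned into $m$, the strict lower bounds of $m$, and $X$. The following Python sketch (our own encoding of a toy poset, unrelated to the released `C++` code) checks this identity:

```python
# Toy poset on {a, b, c, m}: a < m and a < b; c is isolated.
# rel is the set of strict relations, already transitively closed.
elements = {"a", "b", "c", "m"}
rel = {("a", "m"), ("a", "b")}

m = "m"
lb = {x for x in elements if (x, m) in rel}                # lb_P(m) = {a}
X = {x for x in elements if x != m and (x, m) not in rel}  # x with x !<= m

assert lb == {"a"} and X == {"b", "c"}
assert len(X) == len(elements) - len(lb) - 1               # k = |P| - |lb_P(m)| - 1
```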
\[thm:chain\_univ\] Every poset strongly embeds into a chain. Specifically, $P{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }{\cal C^{\abs{P}^2-2\abs{\operatorname{rel}(P)}}}$.
Let $F(P)$ denote the set of elements which are not comparable to every element in $P$; that is, $F(P)$ consists of the elements that are incomparable to at least one element of $P$. We proceed by induction on $\abs{F(P)}$. If $F(P)=\emp$, then $P$ is a chain and the result is trivial.
Otherwise, let $m\in F(P)$ be maximal amongst elements of $F(P)$. As every element of $\operatorname{ub}(m)$ is comparable to every element in $P$, the elements in $\operatorname{ub}(m)$ are totally ordered. Thus, we may decompose $P$ into $$P=P_0+{\cal C^{\ell}},$$ where $\ell=\abs{\operatorname{ub}(m)}$, and where $m\in\max(P_0)$.
By Lemma \[lem:max\_el\], we know that $$P_0{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} } (P_0-m)+{\cal C^{k+1}},$$ where $k=\abs{P_0}-\abs{\operatorname{lb}_P(m)}-1$. Applying Lemma \[lem:add\_top\], we see that $$P=P_0+{\cal C^{\ell}}{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 2k$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }(P_0-m)+{\cal C^{2k+\ell+1}}.$$ Let $Q=(P_0-m)+{\cal C^{2k+1+\ell}}$. Note that $F(Q)=F(P)\setminus\{m\}$, and so by inductive hypothesis, $$P{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }Q{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }{\cal C^{\abs{Q}^2-2\abs{\operatorname{rel}(Q)}}}.$$ It now suffices to show that $\abs{Q}^2-\abs{P}^2=2\abs{\operatorname{rel}(Q)}-2\abs{\operatorname{rel}(P)}$. To this end, note that $\abs{Q}=\abs{P}+2k$, and so $\abs{Q}^2-\abs{P}^2=4k(k+\abs{P})$.
We now express both $\abs{\operatorname{rel}(Q)}$ and $\abs{\operatorname{rel}(P)}$ in terms of $\abs{\operatorname{rel}(P_0-m)}$ by conditioning each pair of related elements on whether or not each element of the pair is contained in $P_0-m$. We have $$\begin{aligned}
2\abs{\operatorname{rel}(P)} &= 2\abs{\operatorname{rel}(P_0-m)}+2\abs{\operatorname{lb}_P(m)}+2\ell\abs{P_0}+\ell(\ell-1),\text{ and}\\
2\abs{\operatorname{rel}(Q)} &= 2\abs{\operatorname{rel}(P_0-m)}+2(2k+\ell+1)(\abs{P_0}-1)+(2k+\ell+1)(2k+\ell).
\end{aligned}$$
Recalling that $\abs{\operatorname{lb}_P(m)}=\abs{P_0}-k-1$ and simplifying, we have $$2\abs{\operatorname{rel}(Q)}-2\abs{\operatorname{rel}(P)} = 4k(\abs{P_0}+\ell+k)=4k(k+\abs{P}),$$ which completes the proof.
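The chain length in Theorem \[thm:chain\_univ\] is easy to sanity-check. The following Python sketch (our own helper, not part of the released code) confirms that $\abs{P}^2-2\abs{\operatorname{rel}(P)}$ evaluates to $n$ for a chain ${\cal C^{n}}$, to $n^2$ for an antichain ${\cal I^{n}}$, and is consistent with the strong embedding of the 13-element poset $P_\diamond$ into ${\cal C^{97}}$ noted in §\[subsec:un\_consequences\], which forces $\abs{\operatorname{rel}(P_\diamond)}=36$:

```python
def target_chain_length(n_elements, n_relations):
    # |P|^2 - 2|rel(P)|: the length of the chain in Theorem [thm:chain_univ].
    return n_elements ** 2 - 2 * n_relations

# A chain C^n has n(n-1)/2 strict relations and should land in itself:
for n in range(1, 20):
    assert target_chain_length(n, n * (n - 1) // 2) == n

# An antichain I^n has no relations, giving a chain of length n^2:
for n in range(1, 20):
    assert target_chain_length(n, 0) == n * n

# P_diamond has 13 elements and strongly embeds into C^97, so |rel| = 36:
assert target_chain_length(13, 36) == 97
```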
Consequences for $U_n$ {#subsec:un_consequences}
----------------------
Recall that Halasi and Pálfy proved the existence of a poset $P$ for which $k(P)$ is not a polynomial [@HP]. Modifying their construction, we obtained the 13-element poset $P_\diamond$ shown in Figure \[fig:non\_poly\_poset\], for which $k(P_\diamond)$ is not a polynomial in $q$ (cf. §\[ssec:fin-rems-posets\]). Using Lemma 3.1 of [@HP], we have computed $k(P_\diamond)$ explicitly.
(-3,0) circle (.1cm); (-1,0) circle (.1cm); (1,0) circle (.1cm); (3,0) circle (.1cm);
(-5,2) circle (.1cm); (-3,2) circle (.1cm); (-1,2) circle (.1cm); (1,2) circle (.1cm); (3,2) circle (.1cm); (5,2) circle (.1cm);
(-2,4) circle (.1cm); (0,4) circle (.1cm); (2,4) circle (.1cm);
(5,2) – (3,0); (5,2) – (1,0); (5,2) – (2,4); (5,2) – (0,4);
(-1,2) – (-1,0); (-1,2) – (1,0); (-1,2) – (-2,4); (-1,2) – (0,4);
(1,2) – (-3,0); (1,2) – (1,0); (1,2) – (-2,4); (1,2) – (2,4);
(-3,2) – (-3,0); (-3,2) – (-1,0); (-3,2) – (2,4); (-3,2) – (0,4);
(-5,2) – (-3,0); (-5,2) – (3,0); (-5,2) – (-2,4); (-5,2) – (0,4);
(3,2) – (-1,0); (3,2) – (3,0); (3,2) – (-2,4); (3,2) – (2,4);
Let $P_\diamond$ denote the poset shown in Figure \[fig:non\_poly\_poset\]. Then $$\begin{aligned}
k(P_\diamond) \, & = \, 1 + 36 \ts t + 582\ts t^2 + 5628\ts t^3 + 36601\ts t^4 + 170712\ts t^5 + 594892\ts t^6\\
&\, \hspace{5mm} + 1593937\ts t^7 + 3355488\ts t^8 + 5646608\ts t^9 + 7705410\ts t^{10}\\
&\, \hspace{5mm} + 8631900\ts t^{11} + 8023776\ts t^{12} + 6248381\ts t^{13} + 4111322\ts t^{14}\\
&\, \hspace{5mm} + 2302222\ts t^{15} + 1102490\ts t^{16} + 451836\ts t^{17} + 157555\ts t^{18}\\
&\, \hspace{5mm} + 46042\ts t^{19} + 10971\ts t^{20} + 2040\ts t^{21} + 276\ts t^{22} + 24\ts t^{23} +\ts t^{24}\\
&\, \hspace{5mm} + \delta(q)\cdot t^{12}(t+2)^6\,,
\end{aligned}$$ where $t=q-1$ and $$\delta(q)=\begin{cases}
2 & \text{if $q$ is odd\ts,}\\
1 & \text{otherwise\ts.}
\end{cases}$$
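Since the only non-polynomial dependence on $q$ sits in the $\delta(q)$ term, the odd-$q$ and even-$q$ branches of the formula for $k(P_\diamond)$ differ by exactly $(2-1)\cdot t^{12}(t+2)^6=(q-1)^{12}(q+1)^6$, which is nonzero for every $q>1$. A quick numeric check in Python (helper name is ours):

```python
def branch_gap(q):
    # (2 - 1) * t^12 * (t + 2)^6 with t = q - 1: the difference between the
    # odd-q and even-q branches of the formula for k(P_diamond).
    t = q - 1
    return t ** 12 * (t + 2) ** 6

# The gap never vanishes for q > 1, so no single polynomial in q can agree
# with k(P_diamond) on both parities.
assert branch_gap(3) == 2 ** 12 * 4 ** 6  # = 16777216
assert all(branch_gap(q) > 0 for q in range(2, 50))
```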
This proposition proves Theorem \[thm:hp-new\]. Now, it follows from Theorem \[thm:chain\_univ\] that $P_\diamond{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }{\cal C^{97}}$. However, the strong embedding sequence can be made more efficient by weakening our definition.
\[defn:embed\] A poset $P$ [*embeds*]{} into a poset $Q$ if there exists a sequence of poset systems $S_1,\dots,S_n$ with $S_i=(P_i,m_i,A_i)$, such that
1. $P_0=P$,
2. $P_n=Q$, and
3. for $0\le i < n$, we have $k(P_i)=k(S_{i+1})$.
When $P$ embeds into $Q$, we write $P\wkemb Q$. The sequence $$P=P_0\wkemb P_1\wkemb\cdots\wkemb P_{n-1}\wkemb P_n=Q$$ is called an [*embedding sequence*]{}. When we wish to signify that the embedding sequence has length $n$, we write $P{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle n$};
\path[draw,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }Q$.
Note that if $P{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }Q$, then $P\wkemb Q$ as well. One tool available for embeddings, but not for the stricter notion of strong embeddings, is the identity $k(P^*)=k(P)$. We will exploit this fact to show that $P_\diamond\wkemb{\cal C^{59}}$ in Proposition \[prop:more\_efficient\].
\[lem:two\_chains\] For nonnegative integers $a$ and $b$, we have ${\cal C^{a}}\amalg{\cal C^{b}}{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }{\cal C^{2a+b}}$.
We proceed by induction on $a$. When $a=0$, the result is trivial. Otherwise, let $P={\cal C^{1}}+({\cal C^{a-1}}\amalg{\cal C^{b+1}})$, let $m$ be the maximal element in ${\cal C^{b+1}}$, and let $\hat0$ be the unique minimal element of $P$. By inductive hypothesis, we know that ${\cal C^{a-1}}\amalg{\cal C^{b+1}}{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }{\cal C^{2a+b-1}}$, and so $P{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }{\cal C^{2a+b}}$. Note that ${\cal D}(P,m,\{\hat0\})-m$ is isomorphic to ${\cal C^{a}}\amalg {\cal C^{b}}$, so $${\cal C^{a}}\amalg{\cal C^{b}}{{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }P {{ \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle \hspace{.5ex}$\hspace{.5ex} };
\path[draw,double,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }{\cal C^{2a+b}},$$ proving the result.
\[prop:more\_efficient\] Let $P_\diamond$ denote the poset shown in Figure \[fig:non\_poly\_poset\]. Then $P_\diamond\wkemb{\cal C^{59}}$.
We use the techniques in the proof of Lemma \[lem:max\_el\] to attach each of the maximal elements to each of the non-maximal elements. Most of these relations are already present: for each maximal element, we need only add two relations. Next, we dualize and apply the same process to the newly maximal elements (the elements which were minimal in $P_\diamond$); for each of these, we need only add three relations. The resulting poset is shown in Figure \[fig:p0\_smart\_embed\]. Symbolically, it can be described as $$P'=({\cal C^{3}}\amalg{\cal C^{3}}\amalg{\cal C^{3}})+{\cal I^{6}}+({\cal C^{4}}\amalg{\cal C^{4}}\amalg{\cal C^{4}}\amalg{\cal C^{4}}).$$
(0,4) circle (0.1cm); (2,4) circle (0.1cm); (4,4) circle (0.1cm); (6,4) circle (0.1cm); (0,4.5) circle (0.1cm); (2,4.5) circle (0.1cm); (4,4.5) circle (0.1cm); (6,4.5) circle (0.1cm); (0,5) circle (0.1cm); (2,5) circle (0.1cm); (4,5) circle (0.1cm); (6,5) circle (0.1cm); (0,5.5) circle (0.1cm); (2,5.5) circle (0.1cm); (4,5.5) circle (0.1cm); (6,5.5) circle (0.1cm); (-2,2.5) circle (0.1cm); (0,2.5) circle (0.1cm); (2,2.5) circle (0.1cm); (4,2.5) circle (0.1cm); (6,2.5) circle (0.1cm); (8,2.5) circle (0.1cm); (0,0) circle (0.1cm); (3,0) circle (0.1cm); (6,0) circle (0.1cm); (0,0.5) circle (0.1cm); (3,0.5) circle (0.1cm); (6,0.5) circle (0.1cm); (0,1) circle (0.1cm); (3,1) circle (0.1cm); (6,1) circle (0.1cm);
(0,0) – (0, 5.5); (3,0) – (3, 1); (6,0) – (6, 5.5); (2,2.5) – (2, 5.5); (4,2.5) – (4, 5.5); (-2,2.5) – (0,4); (-2,2.5) – (2,4); (-2,2.5) – (4,4); (-2,2.5) – (6,4); (0,2.5) – (0,4); (0,2.5) – (2,4); (0,2.5) – (4,4); (0,2.5) – (6,4); (2,2.5) – (0,4); (2,2.5) – (2,4); (2,2.5) – (4,4); (2,2.5) – (6,4); (4,2.5) – (0,4); (4,2.5) – (2,4); (4,2.5) – (4,4); (4,2.5) – (6,4); (6,2.5) – (0,4); (6,2.5) – (2,4); (6,2.5) – (4,4); (6,2.5) – (6,4); (8,2.5) – (0,4); (8,2.5) – (2,4); (8,2.5) – (4,4); (8,2.5) – (6,4);
(-2,2.5) – (0,1); (-2,2.5) – (3,1); (-2,2.5) – (6,1); (0,2.5) – (0,1); (0,2.5) – (3,1); (0,2.5) – (6,1); (2,2.5) – (0,1); (2,2.5) – (3,1); (2,2.5) – (6,1); (4,2.5) – (0,1); (4,2.5) – (3,1); (4,2.5) – (6,1); (6,2.5) – (0,1); (6,2.5) – (3,1); (6,2.5) – (6,1); (8,2.5) – (0,1); (8,2.5) – (3,1); (8,2.5) – (6,1);
Using Lemma \[lem:two\_chains\] and dualizing, we obtain that $$P'\wkemb{\cal C^{28}}+{\cal I^{6}}+{\cal C^{15}}.$$ Finally, because ${\cal I^{6}}{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 5$};
\path[draw,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }{\cal C^{11}}$, we know that ${\cal C^{28}}+{\cal I^{6}}{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 5$};
\path[draw,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }{\cal C^{39}}$. Applying Lemma \[lem:add\_top\] yields $$P_\diamond\wkemb P'\wkemb{\cal C^{28}}+{\cal I^{6}}+{\cal C^{15}}{ { \stepcounter{sarrow} \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,-0.5ex) $ )}]
\node[inner sep=.5ex] (\thesarrow) {$\scriptstyle 10$};
\path[draw,<-,decorate,
decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}]
(\thesarrow.south east) -- (\thesarrow.south west);
\end{tikzpicture}}} }{\cal C^{59}},$$ which completes the proof.
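The chain sizes in the proof above come from repeated application of Lemma \[lem:two\_chains\], which merges ${\cal C^{a}}\amalg{\cal C^{b}}$ into ${\cal C^{2a+b}}$. A small Python sketch of this bookkeeping (function name is ours) reproduces the constants $15$, $28$, $11$, and $59$:

```python
from functools import reduce

def merge_chains(lengths):
    # Repeatedly apply C^a u C^b -> C^{2a+b} (Lemma [lem:two_chains]),
    # folding each next chain in as the "a" of the rule.
    return reduce(lambda b, a: 2 * a + b, lengths)

assert merge_chains([3, 3, 3]) == 15        # three C^3's merge into C^15
assert merge_chains([4, 4, 4, 4]) == 28     # four C^4's merge into C^28
assert merge_chains([1] * 6) == 11          # I^6 merges into C^11
# C^28 + I^6 gives C^{28+11} = C^39; Lemma [lem:add_top] with k = 5 then
# appends an extra C^5 above the C^15 part:
assert 39 + 15 + 5 == 59
```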
\[rmk:heuristic\] As a consequence of the preceding result, one can express $k(U_{59})$ as a $\zz[q]$-linear combination of terms of the form $k(P)$ and $k(S)$, for posets $P$ and poset systems $S$, such that one of these terms is $k(P_\diamond)$. It seems implausible that the remaining terms would conspire to cancel the contribution of $k(P_\diamond)$ and render $k(U_{59})$ a polynomial; hence our Conjecture \[conj:false-59\]. Unfortunately, explicit computation of $k(U_{59})$ is well beyond the capabilities of any modern computer, and is likely to remain so for the foreseeable future (cf. §\[ssec:fin-rems-time\]).
Algorithm and Experimental results {#sec:experimental}
==================================
Algorithm
---------
Given a poset $P$, to test whether or not $k(P)$ is a polynomial in $q$, we apply the following recursive algorithm. Pick a maximal element $m$ of $P$ and iterate through all poset systems of the form $S=(P,m,A)$. If we can apply the equivalences given in lemmas \[lem:remove\_max\], \[lem:normal\_conj\], and \[lem:apply\_d\] to obtain a reduction of $S$ to some poset $Q$, then recursively compute $k(Q)$. If there is even one poset system $S$ which cannot be reduced via these methods, we try another maximal element. If we exhaust all maximal elements in this way, we try the same procedure on the dual poset $P^*$. If this also fails, we fall back on a slower approach to compute the values $k(S)$ which the algorithm otherwise failed to compute. This slower approach is a modification of the algorithm discussed in [@VA1; @VA2; @VA3]. We call this modification the [*VLA-algorithm*]{}, and give a brief description of the necessary adaptations.
Order the cells in ${\cal L}_P$ from bottom to top, reading each row left to right. This is the ordering induced from $$(n,1)<(n,2)<\cdots<(n,n-1)<(n-1,1)<\cdots<(3,1)<(3,2)<(2,1).$$ The computation starts at the least cell in the ordering and iterates through the cells recursively, branching when necessary. When the algorithm reaches a cell, it attempts to conjugate the cell to zero while fixing all previously seen cells. If this is possible, the cell is called [*inert*]{}; the algorithm sets the cell to zero and continues on to the next cell in the ordering. If the cell is not inert, it is called a [*ramification cell*]{}. The algorithm will branch into two cases: one where the cell contains a zero, and one where the cell does not.
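The cell ordering described above is simple to generate; a one-line Python helper (our own, not part of the released code) lists the cells $(i,j)$ with $j<i$ in this order:

```python
def cell_order(n):
    # Cells (i, j) with j < i, bottom row first, each row left to right:
    # (n,1) < (n,2) < ... < (n,n-1) < (n-1,1) < ... < (3,1) < (3,2) < (2,1).
    return [(i, j) for i in range(n, 1, -1) for j in range(1, i)]

assert cell_order(4) == [(4, 1), (4, 2), (4, 3), (3, 1), (3, 2), (2, 1)]
```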
It often happens that some cells will be inert or ramification cells depending on algebraic conditions on the previously visited cells. For example, it may be the case that cell $(5,2)$ will be a ramification cell if $X_{5,1}=X_{6,2}$, and inert otherwise, where $X_{i,j}$ denotes the value in cell $(i,j)$. In such instances, the algorithm will branch into three different cases:
1. the condition to be inert holds, and the cell is set to zero,
2. the condition to be inert fails (so the cell is a ramification cell), but the cell happens to be zero anyway,
3. the condition to be inert fails (so the cell is a ramification cell), and the cell is non-zero.
Determining how often the algebraic conditions hold is done with several simple techniques, which handle the vast majority of the algebraic varieties that arise in practice.
To apply the VLA-algorithm to a poset system $S=(P,m,A)$, rather than starting at the beginning of the ordering, we start with some seeded data. Specifically, we start with the value $1$ in each cell $(m,a)$, where $a\in A$, and the value $0$ in each cell $(m,x)$ for $x\not\in A$.
In the two subsequent boxed figures, we provide pseudocode for our algorithm (excluding the VLA-algorithm). For further details, we refer the reader to our `C++` source code, which is available at <http://www.math.ucla.edu/~asoffer/content/pgcc.zip>.
**Input:** A poset $P$.\
**Output:** The function $k(P)$.
------------------------------------------------------------------------
function compute_poset( P ):
output = 0
let m in max(P)
for each A in antichains(P) below m:
Q = compute_poset_system(P, m, A)
if Q is not "FAILURE":
output = output + (q-1)^size(A) * compute_poset(Q)
else:
if have not tried some max m':
restart with m'
else if have not tried P*:
return compute_poset( P* )
else:
output = output + VLA_algorithm(P, m, A)
return output
**Input:** A poset system $(P, m, A)$.\
**Output:** A poset $Q$ with $k(Q)=k(P,m,A)$, or “FAILURE” if none can be found.
------------------------------------------------------------------------
function compute_poset_system( P, m, A ):
while P is changing:
P = D(P, m, A)
for a,b in A:
if above(a) contains above(b) and
below(b) contains below(a):
A = A - b
if no element below m and above member of A:
return P
else:
return "FAILURE"
Small posets
------------
Gann and Proctor maintain a list of posets with 9 or fewer elements on their website [@CP]. We use their lists of connected posets in our verification. Without using the VLA-algorithm, our code verifies that $k(P)\in\zz[q]$ for every poset $P$ with 7 or fewer elements. Furthermore, using the VLA-algorithm when necessary as described above, our code verifies that $k(P)\in\zz[q]$ for every poset $P$ with 9 or fewer elements. Moreover, for each such poset $P$, we have $k(P)\in\nn[q-1]$. This proves Theorem \[thm:hp-small\]. A text file containing all posets on 9 or fewer elements along with their associated polynomials is available at <http://www.math.ucla.edu/~asoffer/kunq/posets.txt>.
Chains
------
Our code computes $k(U_n)$ for every $n\le 11$ without needing to employ the VLA-algorithm. For $12\le n\le 16$, our code verifies the polynomiality modulo the computation of several “exceptional poset systems” which are tackled with the VLA-algorithm. This verifies the results of Arregi and Vera-López in [@VA3], and extends the computation to all $n\le 16$.
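For very small $n$ and $q$ these polynomial values can also be spot-checked by brute force. The sketch below (our own illustration, far less efficient than the paper's `C++` code) counts conjugacy classes of $U_n(q)$ directly for a prime $q$; it reproduces $k(U_3)=1+3t+t^2=q^2+q-1$ (values $5$ and $11$ at $q=2,3$) and $k(U_4)(2)=16$ from Appendix \[sec:app\]:

```python
from itertools import product

def count_classes(n, p):
    """Brute-force number of conjugacy classes of U_n(F_p), p prime."""
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]
    iden = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

    def mul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(n)) % p
                 for j in range(n)] for i in range(n)]

    def inv(a):
        # (I + N)^(-1) = I - N + N^2 - ... terminates since N is nilpotent
        nil = [[(a[i][j] - iden[i][j]) % p for j in range(n)] for i in range(n)]
        res, power, sign = [row[:] for row in iden], iden, 1
        for _ in range(1, n):
            power, sign = mul(power, nil), -sign
            res = [[(res[i][j] + sign * power[i][j]) % p for j in range(n)]
                   for i in range(n)]
        return res

    def mat(vals):
        m = [row[:] for row in iden]
        for (i, j), v in zip(idx, vals):
            m[i][j] = v
        return m

    group = [mat(v) for v in product(range(p), repeat=len(idx))]
    key = lambda m: tuple(map(tuple, m))
    seen, classes = set(), 0
    for g in group:
        if key(g) not in seen:
            classes += 1
            seen.update(key(mul(mul(h, g), inv(h))) for h in group)
    return classes

print(count_classes(3, 2), count_classes(3, 3), count_classes(4, 2))  # → 5 11 16
```

Since $|U_n(q)| = q^{\binom{n}{2}}$, this enumeration is only feasible for tiny cases; it serves as an independent sanity check, not a computational method.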
As $n$ grows, the number of exceptional poset systems which require the use of the VLA-algorithm grows quickly, as shown in Figure \[fig:exceptional\_posets\] below.[^3] The polynomials $k(U_n)$ for $n \le 16$ are given in Appendix \[sec:app\] and prove Theorem \[thm:n16\].
  $n$       Exceptional poset systems   Computation time (sec.)
  --------- --------------------------- --------------------------
  $\le11$   0                           $\le 0.2$
  12        1                           0.5
  13        8                           4.4
  14        64                          120.7 ($\sim 2$ minutes)
  15        485                         4456 ($\sim 1.2$ hours)
  16        3550                        164557 ($\sim 46$ hours)
Final Remarks {#sec:fin-rems}
=============
Our approach is motivated by the philosophy of Kirillov’s orbit method (see [@K1]). In the case of $U_n(\rr)$, the orbit method provides a correspondence between the unitary representations of $U_n(\rr)$ and the co-adjoint orbits. Moreover, the co-adjoint orbits enjoy the structure of a symplectic manifold. The unitary characters can actually be recovered by integrating a particular form against the corresponding orbit.
Over finite fields, a manifold structure is not possible, but some of the philosophy of the orbit method seems to still be relevant and some formulas translate without difficulty. For example, the number of conjugacy classes (and therefore irreducible representations) is equal to the number of co-adjoint orbits (Lemma \[lem:adcoad\]). However, the naturally analogous character formula does not hold [@IK].
Note also that the proof of Lemma \[lem:adcoad\] makes little use of the structure of pattern groups. In fact, the theorem holds for any [*algebra group*]{}, defined in [@Isa]. The proof is in fact an extension of the proof for $U_n(\fq)$ defined in [@Isa] (cf. [@DI]).
In [@Isa], Isaacs introduced pattern groups and explained that one can count characters in $U_n(q)$ by counting characters in stabilizers of a certain group action (see also [@DT]). These stabilizers are themselves pattern groups, and lend themselves to a similar recursion, but over characters, rather than coadjoint orbits. There is more than a superficial difference between the recursion in [@Isa] and in this paper. In fact, it follows from [@IK] that the characters cannot correspond to coadjoint orbits via the natural analogue of Kirillov’s orbit method.
{#fin_rems_va}
Higman originally stated Conjecture \[conj:higman\] in the form of an open problem [@H1]; it received the name “Higman’s Conjecture” more recently. He checked that the conjecture holds for $n\le 5$. The calculation of the number of conjugacy classes was later extended to $n\le 8$ by Gudivok et al. in [@G+] (their computation for $n=9$ contained a mistake). The authors used a variation of the brute force algorithm.
Later, Arregi and Vera-López verified Higman’s conjecture for $n\le 13$ in [@VA3] by a clever application of a brute force algorithm for counting adjoint orbits. They also proved that the number of conjugacy classes of cardinality $q^s$ is polynomial for $s\le n-3$ [@VA2]. Moreover, they verified that, as a polynomial in $(q-1)$, the number of conjugacy classes of cardinality $q^s$ has non-negative integral coefficients (for $s\le n-3$). For other partial results on Higman’s conjecture see also [@ABT; @Isa; @Mar]. We refer to [@Sof-thesis] for a broad survey of the literature on the conjugacy classes of $U_n(q)$, both the algebraic and combinatorial aspects.
{#section-2}
In recent years, much effort has been made to improve Higman’s upper bounds for the asymptotics of $k(U_n(q))$, as $n\to \infty$, see [@Mar; @Sof; @VA0]. For a fixed $q$, it is conjectured that $$k(U_n(q)) \, = \, q^{\frac{n^2}{12}\ts (1+o(1))} \quad \text{as} \ \. n\to \infty\ts.\tag{$\lozenge$}$$ The lower bound is known and due to Higman in the original paper [@H1], while the best upper bound is due to the second author [@Sof] :
$$q^{\frac{n^2}{12}\ts (1+o(1))} \le k(U_n(q)) \, \le \,
q^{\frac{7}{44}\ts n^2\ts (1+o(1))} \quad \text{as} \ \. n\to \infty\ts.$$
The above asymptotics have a curious connection to this work. Arregi and Vera-L[ó]{}pez conjectured in [@VA3] a refinement of Higman’s Conjecture \[conj:higman\] stating that the degrees of the polynomials $k(U_n)$ are equal to $\floor{n(n+6)/12}$. If true, this would confirm the asymptotics $(\lozenge)$ as well. While we do not believe Higman’s conjecture, the degree formula continues to hold for new values, so it is now known for all $n\le 16$.
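The degree formula is easy to check against the explicit polynomials listed in Appendix \[sec:app\]; the sketch below hard-codes the degrees read off those expansions:

```python
# Degrees of k(U_n) as polynomials in t = q - 1, read off Appendix A for n = 1..16
degrees = [0, 1, 2, 3, 4, 6, 7, 9, 11, 13, 15, 18, 20, 23, 26, 29]

# The conjectured degree formula floor(n(n+6)/12) matches every known value
assert all(d == n * (n + 6) // 12 for n, d in enumerate(degrees, start=1))
print("degree formula verified for n <= 16")
```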
{#ssec:fin-rems-posets}
In [@HP], Halasi and Pálfy exhibit a pattern group for which the number of conjugacy classes is not a polynomial in the size of the field. Though they do not provide explicit bounds, their construction yields a 5,592,412-element poset. We obtained the 13-element poset $P_\diamond$ shown in Figure \[fig:non\_poly\_poset\] by modifying their construction.
It would be interesting to see if the poset $P_\diamond$ is in fact the smallest poset with a non-polynomial $k\bigl(U_{P_\diamond}\bigr)$. By Theorem \[thm:hp-small\], such a poset would have to have at least 10 elements. Unfortunately, even this computation might be difficult since the total number of connected posets is rather large. For example, there are about $1.06 \cdot 10^9$ connected posets on 12 elements, see e.g. [@BM] and [@OEIS A000608].
{#ssec:fin-rems-parallel}
When our algorithm falls back on the VLA-algorithm, the poset systems it must compute share almost no computational state. For this reason, our technique lends itself well to parallelization. This, along with several optimization techniques, we believe could be used to compute $k(U_{17}(q))$ and $k(U_{18}(q))$. However, due to the super-exponential growth rate of $k(U_n(q))$, pushing the computation significantly further will likely require different techniques.
{#ssec:fin-rems-time}
Based on our computations, one can try to give a conservative lower bound on the cost of computing $k(U_{59}(q))$. Extrapolating the current rate of increase in running time, we estimate our algorithm would need about $10^{66}$ years of CPU time. Alternatively, if we assume Moore’s law[^4] will continue to hold indefinitely, this computation will not become feasible until the year 2343.
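The year estimate follows from a back-of-the-envelope extrapolation. A sketch of the arithmetic, assuming one performance doubling per 18 months and a base year of roughly 2014 (the base year is our assumption, not stated in the text):

```python
import math

cpu_years = 1e66                   # estimated cost of k(U_59(q)) in CPU-years
doublings = math.log2(cpu_years)   # ~219 performance doublings needed
wait = doublings * 1.5             # 1.5 years per doubling (Moore's law)
print(round(2014 + wait))          # → 2343
```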
{#ssec:fin-rems-59}
There are two directions in which the bound $n\ge 59$ in Conjecture \[conj:false-59\] can be decreased. First, it is perhaps possible that $P_\diamond$ embeds into a smaller chain. This is a purely combinatorial problem which perhaps also lends itself to a computational solution. We would be interested to see if such an improvement is possible.
Second, it is conceivable and perhaps likely that there are posets $P$ with more than 13 elements which embed into $C_n$ with $n<59$, and have non-polynomial $k(U_P)$. Since $P_\diamond$ really encodes the variety $x^2=1$, it would be natural to consider other algebraic varieties which have different point counts depending on the characteristic. This is a large project which goes beyond the scope of this work.
{#ssec:fin-kirillov}
In [@K2; @K3], Kirillov made two conjectures on the values of $k(U_n(q))$ for small $q$.
\[conj:kirillov-seq\] For all $n\ge 1$, we have $k\bigl(U_n(2)\bigr) \ge \ra_{n+1}$ and $k\bigl(U_n(3)\bigr) \ge \rb_{n+1}$, where $\{\ra_n\}$ is the Euler sequence and $\{\rb_n\}$ is the Springer sequence.
Here the *Euler sequence* $\{\ra_n\}$ counts the number of *alternating permutations* $\si\in S_n$; it has an elegant generating function and asymptotics: $$\sum_{n=0}^\infty \. \ra_n \. \frac{x^n}{n!} \, = \, \sec(x) + \tan(x)\., \qquad
\ra_n \. \sim \. \frac{4}{\pi} \. \left(\frac{2}{\pi}\right)^n n!,$$ see [@OEIS A000111]. Similarly, the *Springer sequence* $\{\rb_n\}$ counts the number of *alternating signed permutations* in the hyperoctahedral group $C_n$; it has an elegant generating function and asymptotics: $$\sum_{n=0}^\infty \. \rb_n \. \frac{x^n}{n!} \, = \, \frac{1}{\cos(x) - \sin(x)}\., \qquad
\rb_n \. \sim \. \frac{2\sqrt{2}}{\pi} \. \left(\frac{4}{\pi}\right)^n n!,$$ see [@OEIS A001586].
Kirillov observed that there is a remarkable connection between the sequences (see Appendix \[sec:app-kirillov\]), and made further conjectures related to them. It is easy to see that the asymptotics imply the conjecture for large $n$. By using exact values of $\{\ra_n\}$ and $\{\rb_n\}$, and technical improvements on the lower bounds by Higman, it is easy to show that the bounds in the conjecture hold for $n\ge 43$ and $n\ge 30$, respectively [@Sof-thesis]. Our results confirm the conjecture for $n\le 16$, leaving it open only for the intermediate values in both cases.
{#ssec:fin-rems-alperin}
Recall that by the Halasi–Pálfy theorem, the functions $k({\ensuremath{U_{\hspace{-0.6mm}P}}}(q))$ can be as bad as any algebraic variety [@HP]. Theorem \[thm:chain\_univ\] suggests that $k(U_n(q))$ is also this bad. This would be in line with other universality results in algebra and geometry, see e.g. [@BB; @Mnev; @Vak].
In a different direction, Alperin showed that the action of $U_n$ by conjugation on $\GL_n$ does have polynomial behavior [@Alp]. Specifically, he showed $$\abs{\GL_n/U_n}\in\zz[q]$$ for all $n>0$. Moreover, because $U_n$ acts by conjugation on each cell of the Bruhat decomposition of $\GL_n$, we have $$\abs{\GL_n/U_n}\, = \, \sum_{w\in S_n}\. \abs{B_nwB_n/U_n}\ts.$$ The term in the summation corresponding to the identity element of $S_n$ is $\abs{B_n/U_n}$, which bears resemblance to $k(U_n)$. Complementary to our heuristic in Remark \[rmk:heuristic\], Alperin noted that it seems unlikely that the summation on the right-hand side has even one non-polynomial term, given that the left-hand side is a polynomial.
For another similar phenomenon, let us mention that there are many moduli spaces which satisfy *Murphy’s law*, a version of Mnëv’s Universality Theorem [@Vak]. Over $\fq$, these moduli spaces have a non-polynomial number of points. But of course, when summed over all possible configurations these functions of $q$ add up to a polynomial, the size of the Grassmannian or other flag varieties.
To reconcile these examples with our main approach, think of them as different examples of counting points on orbifolds. Apparently, both the Grassmannian and Alperin’s actions are *nice*, while conjugation on $U_n(\fq)$ is not. This is not very surprising. For example, both binomial coefficients $\binom{n}{k}$ and the number of integer partitions $p(n)$ count the orbits of certain combinatorial actions (see the *twelvefold way* [@Sta]). However, while the former are “nice” indeed, the latter are notoriously complicated. Despite a large body of work on partitions, from Euler to modern times, little is known about divisibility of $p(n)$; for example, the *Erdős conjecture* that every prime $s$ is a divisor of some $p(n)$ remains wide open (see e.g. [@AO]).[^5] This suggests that certain numbers of orbits are so wild, that even proving that they are wild is a great challenge.
{#section-3}
There is little hope of finding interesting classes of posets for which $k(U_P)$ is always a polynomial. If such a class $\cal P$ contains posets of arbitrary height, then one can show that all chains embed in some member of $\cal P$. As embedding is a transitive property, the family $\cal P$ shares the same universality properties as $\{{\cal C^{n}}\}$. Even posets of height no more than three can be as bad as arbitrary algebraic varieties [@HP]. Of course, if $P$ is a poset of height two, then $U_P$ is abelian and therefore $k(U_P)$ is a polynomial in $q$; however, this family is not interesting.
{#section-4}
Pattern groups and pattern algebras are closely related to incidence algebras of posets. In fact, the incidence algebra $I(P)$ of a poset $P$ contains, up to isomorphism, both ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ and ${\cal U}_P$ as substructures. This perhaps provides a more natural setting for the group ${\ensuremath{U_{\hspace{-0.6mm}P}}}$, as it is independent of any total ordering we assign to the elements of $P$. For more information on incidence algebras, see [@Rota; @Sta].
{#section-5}
Many group-theoretic constructions have combinatorial interpretations when applied to pattern groups. For instance, the intersection of two pattern groups ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ and $U_Q$ is the pattern group defined on the poset $P\cap Q$. For a less trivial example, the commutator subgroup of ${\ensuremath{U_{\hspace{-0.6mm}P}}}$ is a pattern group $U_{\hspace{-0.6mm}P'}$, where $P'$ is the subposet consisting of the non-covering relations in $P$. Normalizers (in $U_n$) are also pattern groups, and can be expressed combinatorially. However, for normalizers, the precise injection of $U_P$ into $U_n$ is relevant. As such, the normalizer depends not only on the poset $P$ but also its linear extension.
[**Acknowledgements.**]{} We are very grateful to a number of people for many interesting conversations and helpful remarks: Karim Adiprasito, Persi Diaconis, Scott Garrabrant, Robert Guralnick, Martin Isaacs, Alexandre Kirillov, Eric Marberg, Brendan McKay, Alejandro Morales, Peter M. Neumann, Greta Panova, Raphaël Rouquier, and Antonio Vera-López. The first author was partially supported by the NSF.
M. Aguiar, N. Bergeron and N. Thiem, Hopf monoids from class functions on unitriangular matrices, *Algebra Number Theory* **7** (2013), 1743–1779.
S. Ahlgren and K. Ono, Addition and counting: the arithmetic of partitions, *Notices AMS* **48** (2001), 978–984.
J. Alperin, Unipotent conjugacy in general linear groups, [*Comm. Algebra*]{} **34** (2006), 889–891.
P. Belkale and P. Brosnan, Matroids, motives, and a conjecture of Kontsevich, *Duke Math. J.* **116** (2003), 147–188.
S. R. Blackburn, P. Neumann and G. Venkataraman, Enumeration of finite groups, Cambridge Univ. Press, Cambridge, 2007.
G. Brinkmann and B. D. McKay, Posets on up to 16 Points, *Order* **19** (2002), 147–179.
P. Diaconis and I. M. Isaacs, Supercharacters and superclasses for algebra groups, *Trans. AMS* **360** (2008), 2359–2392.
P. Diaconis and N. Thiem, Supercharacter formulas for pattern groups, [*Trans. AMS*]{} **361** (2009), 3501–3533.
M. du Sautoy and M. Vaughan-Lee, Non-PORC behaviour of a class of descendant p-groups, *J. Algebra* **361** (2012), 287–312.
C. Gann and R. Proctor, [*Chapel Hill Poset Atlas*]{}.
P. Gudivok, Yu. Kapitonova, S. Polyak, V. Rud[$'$]{}ko and A. Tsitkin, Classes of conjugate elements of a unitriangular group, [*Kibernetika*]{} **1** (1990), 40–48.
Z. Halasi and P. Pálfy, The number of conjugacy classes in pattern groups is not a polynomial function, [*J. Group Theory*]{} **14** (2011), 841–854.
G. Higman, Enumerating p-groups. I. Inequalities, [*Proc. LMS*]{} **3** (1960), 24–30.
G. Higman, Enumerating p-groups. II. Problems whose solution is PORC, [*Proc. LMS*]{} **3** (1960), 566–582.
I. Isaacs, Counting characters of upper triangular groups, [*J. Algebra*]{} **315** (2007), 698–719.
I. Isaacs and D. Karagueuzian, Conjugacy in groups of upper triangular matrices, [*J. Algebra*]{} **202** (1998), 704–711.
A. Kirillov, [*Lectures on the orbit method*]{}, AMS, Providence, RI, 2004.
A. Kirillov, On the combinatorics of coadjoint orbits, [*Funct. Anal. Appl.*]{} **27** (1993), 62–64.
A. Kirillov, Variations on the triangular theme, in [*Lie groups and Lie algebras*]{}, AMS, Providence, RI, 1995, 43–73.
A. Kirillov, Two more variations on the triangular theme, in [*The Orbit Method in Geometry and Physics*]{}, Birkhäuser, Boston, MA, 2003, 243–258.
C. R. Leedham-Green and M. F. Newman, Space groups and groups of prime power order I, [*Arch. Math. (Basel)*]{} **35** (1980), 293–302.
E. Marberg, Combinatorial methods of character enumeration for the unitriangular group, *J. Algebra* **345** (2011), 295–323.
N. E. Mnëv, The universality theorems on the classification problem of configuration varieties and convex polytopes varieties, in *Lecture Notes in Math.* **1346**, Springer, Berlin, 1988, 527–543.
G.-C. Rota, On the foundations of combinatorial theory. I. Theory of M[ö]{}bius functions, [*Z. Wahrsch.*]{} **2** (1964), 340–368.
N. J. A. Sloane, *The On-Line Encyclopedia of Integer Sequences*.
A. Soffer, Upper bounds on the number of conjugacy classes in unitriangular groups, [arXiv:]{}[1411.5389]{}.
A. Soffer, Ph.D. thesis, UCLA, in preparation.
R. Stanley, [*Enumerative Combinatorics*]{}, Vol. 1 (Second Ed.), Cambridge Univ. Press, Cambridge, UK, 2012.
J. G. Thompson, $k(U_n(\mathbb{F}_q))$, preprint (2004), 84 pp.; available at [<http://tinyurl.com/m2h24nm>]{}.
W. Trotter, New perspectives on interval orders and interval graphs, in [*Surveys in Combinatorics*]{}, Cambridge Univ. Press, Cambridge, UK, 1997, 237–286.
R. Vakil, Murphy’s law in algebraic geometry: badly-behaved deformation spaces, *Invent. Math.* **164** (2006), 569–590.
M. Vaughan-Lee, Graham Higman’s PORC conjecture, *Jahresber. Dtsch. Math.-Ver.* **114** (2012), 89–106.
A. Vera-L[ó]{}pez and J. M. Arregi, Conjugacy classes in unitriangular matrices, *J. Algebra* **152** (1992), 1–19.
A. Vera-L[ó]{}pez and J. M. Arregi, Some algorithms for the calculation of conjugacy classes in the [S]{}ylow [$p$]{}-subgroups of [${\rm GL}(n,q)$]{}, [*J. Algebra*]{} **177** (1995), 899–925.
A. Vera-L[ó]{}pez and J. M. Arregi, Polynomial properties in unitriangular matrices, [*J. Algebra*]{} **244** (2001), 343–351.
A. Vera-L[ó]{}pez and J. M. Arregi, Conjugacy classes in unitriangular matrices, [*Linear Algebra Appl.*]{} **370** (2003), 85–124.
A. J. Weir, Sylow $p$-subgroups of the general linear groups over finite fields of characteristic $p$, [*Proc. AMS*]{} **6** (1955), 454–464.
Polynomials $k\bigl(U_n(q)\bigr)$, $q=t+1$ {#sec:app}
==========================================
[$$\begin{aligned}
k(U_{1}) =\ & 1\\
k(U_{2}) =\ & 1 + \ts t\\
k(U_{3}) =\ & 1 + 3\ts t + \ts t^2\\
k(U_{4}) =\ & 1 + 6\ts t + 7\ts t^2 + 2\ts t^3\\
k(U_{5}) =\ & 1 + 10\ts t + 25\ts t^2 + 20\ts t^3 + 5\ts t^4 \\
k(U_{6}) =\ & 1 + 15\ts t + 65\ts t^2 + 105\ts t^3 + 70\ts t^4 + 18\ts t^5 + \ts t^6\\
k(U_{7}) =\ & 1 + 21\ts t + 140\ts t^2 + 385\ts t^3 + 490\ts t^4 + 301\ts t^5 + 84\ts t^6 + 8\ts t^7\\
k(U_{8}) =\ & 1 + 28\ts t + 266\ts t^2 + 1120\ts t^3 + 2345\ts t^4 + 2604\ts t^5 + 1568\ts t^6 + 496\ts t^7 + 74\ts t^8 + 4\ts t^9\\
k(U_{9}) =\ & 1 + 36\ts t + 462\ts t^2 + 2772\ts t^3 + 8715\ts t^4 + 15372\ts t^5 + 15862\ts t^6 + 9720\ts t^7 + 3489\ts t^8\\
& + 701\ts t^9 + 72\ts t^{10} + 3\ts t^{11}\\
k(U_{10}) =\ & 1 + 45\ts t + 750\ts t^2 + 6090\ts t^3 + 26985\ts t^4 + 69825\ts t^5 + 110530\ts t^6 + 110280\ts t^7\\
& + 70320\ts t^8 + 28640\ts t^9 + 7362\ts t^{10} + 1170\ts t^{11} + 110\ts t^{12} + 5\ts t^{13}\\
k(U_{11}) =\ & 1 + 55\ts t + 1155\ts t^2 + 12210\ts t^3 + 72765\ts t^4 + 261261\ts t^5 + 592207\ts t^6 + 877030\ts t^7\\
& + 868725\ts t^8 + 583550\ts t^9 + 267542\ts t^{10} + 83909\ts t^{11} + 18007\ts t^{12} + 2618\ts t^{13}\\
& + 242\ts t^{14} + 11\ts t^{15}\\
k(U_{12}) =\ & 1 + 66\ts t + 1705\ts t^2 + 22770\ts t^3 + 176055\ts t^4 + 841302\ts t^5 + 2600983\ts t^6 + 5387646\ts t^7\\
& + 7680310\ts t^8 + 7684820\ts t^9 + 5473050\ts t^{10} + 2803182\ts t^{11} + 1042181\ts t^{12} + 284109\ts t^{13}\\
& + 57256\ts t^{14} + 8484\ts t^{15} + 890\ts t^{16} + 60\ts t^{17} + 2\ts t^{18}\\
k(U_{13}) =\ & 1 + 78\ts t + 2431\ts t^2 + 40040\ts t^3 + 390390\ts t^4 + 2403258\ts t^5 + 9766471\ts t^6 + 27116232\ts t^7 \\
& + 52873678\ts t^8 + 74012653\ts t^9 + 75670881\ts t^{10} + 57294120\ts t^{11} + 32515314\ts t^{12}\\
& + 14000495\ts t^{13} + 4635125\ts t^{14} + 1195116\ts t^{15} + 241436\ts t^{16} + 37778\ts t^{17} + 4381\ts t^{18} \\
& + 338\ts t^{19} + 13\ts t^{20}\\
k(U_{14}) =\ & 1 + 91\ts t + 3367\ts t^2 + 67067\ts t^3 + 805805\ts t^4 + 6225219\ts t^5 + 32296264\ts t^6 + 116332645\ts t^7\\
& + 298956658\ts t^8 + 560602042\ts t^9 + 781499719\ts t^{10} + 822549728\ts t^{11} + 662497381\ts t^{12}\\
& + 413509705\ts t^{13} + 202666910\ts t^{14} + 79124292\ts t^{15} + 24968979\ts t^{16} + 6441876\ts t^{17}\\
& + 1362732\ts t^{18} + 233758\ts t^{19} + 31542\ts t^{20} + 3159\ts t^{21} + 210\ts t^{22} + 7\ts t^{23}\\
k(U_{15}) =\ & 1 + 105\ts t + 4550\ts t^2 + 107835\ts t^3 + 1566565\ts t^4 + 14864850\ts t^5 + 96136040\ts t^6 + 437680815\ts t^7\\
& + 1440259535\ts t^8 + 3502779995\ts t^9 + 6416611201\ts t^{10} + 8998108665\ts t^{11} + 9796436195\ts t^{12}\\
& + 8387410675\ts t^{13} + 5718426690\ts t^{14} + 3145744973\ts t^{15} + 1416179446\ts t^{16} + 529371274\ts t^{17}\\
& + 166405370\ts t^{18} + 44325415\ts t^{19} + 9997955\ts t^{20} + 1887955\ts t^{21} + 291345\ts t^{22} + 35270\ts t^{23}\\
& + 3130\ts t^{24} + 180\ts t^{25} + 5\ts t^{26}\\
k(U_{16}) =\ & 1 + 120\ts t + 6020\ts t^2 + 167440\ts t^3 + 2894710\ts t^4 + 33137104\ts t^5 + 261929668\ts t^6\\
& + 1475199440\ts t^7 + 6072906125\ts t^8 + 18674026800\ts t^9 + 43703418616\ts t^{10}\\
& + 79124540872\ts t^{11} + 112420822696\ts t^{12} + 126975887444\ts t^{13} + 115398765556\ts t^{14}\\
& + 85415064915\ts t^{15} + 52146190588\ts t^{16} + 26615252562\ts t^{17} + 11515549082\ts t^{18}\\
& + 4278222573\ts t^{19} + 1378103758\ts t^{20} + 386616800\ts t^{21} + 94259304\ts t^{22} + 19784488\ts t^{23}\\
& + 3513854\ts t^{24} + 514128\ts t^{25} + 59504\ts t^{26} + 5104\ts t^{27} + 288\ts t^{28} + 8\ts t^{29}\end{aligned}$$ ]{}
Known values for the Kirillov sequences {#sec:app-kirillov}
======================================
Here we present tables with the Kirillov sequences $\bigl\{k(U_{n}(2))\bigr\}$ and $\bigl\{k(U_{n}(3))\bigr\}$ discussed in $\S$\[ssec:fin-kirillov\]. New values computed in this paper are shown in red. Note that the sequences coincide at the beginning, and the ratios increase to $k(U_{16}(2))/\ra_{17} \approx 3.2$ and $k(U_{16}(3))/\rb_{16} \approx 23.6$, suggesting that both parts of Kirillov’s Conjecture \[conj:kirillov-seq\] are likely to be true.
$n$ $\ra_{n+1}$ $k(U_{n}(2))$
----- -------------- -------------------
1 1 1
2 2 2
3 5 5
4 16 16
5 61 61
6 272 275
7 1385 1430
8 7936 8506
9 50521 57205
10 353792 432113
11 2702765 3641288
12 22368256 34064872
13 199360981 352200229
14 1903757312 [ 4010179157]{}
15 19391512145 [ 50124636035]{}
16 209865342976 [ 685996839568]{}
$n$ $\rb_n$ $k(U_n(3))$
----- ----------------- ------------------------
1 1 1
2 3 3
3 11 11
4 57 57
5 361 361
6 2763 2891
7 24611 27555
8 250737 315761
9 2873041 4246737
10 36581523 66999699
11 512343611 1226296635
12 7828053417 26011112361
13 129570724921 635526804025
14 2309644635483 [ 17881012846299]{}
15 44110959165011 [ 577907517043923]{}
16 898621108880097 [ 21474199259637473]{}
[^1]: Note that ${\cal U}_P$ depends not only on the isomorphism class of the poset $P$, but also on a specific linear extension of $P$. This definition is purely one of convenience. One could define ${\cal U}_P$ abstractly in terms of generators and relations in such a way as to make it clear that if $P$ and $Q$ are isomorphic posets, then ${\cal U}_P\cong{\cal U}_Q$. We use this isomorphism throughout the paper without further mention.
[^2]: We define a weaker notion which we call *embedding* later on in Definition \[defn:embed\].
[^3]: Computations made with an Intel Xeon CPU X5650 2.67GHz and 50GB of RAM.
[^4]: Moore’s law is the observation that the number of transistors per square inch on an integrated circuit has been doubling roughly every 18 months. Quite roughly, this can be interpreted as a corresponding increase in computer performance.
[^5]: Naturally, one would assume that asymptotically, we have $s\ts|\ts p(n)$ for a positive fraction of $n$. This is known for some primes $s$, such as $5, 7$ and $11$ due to *Ramanujan’s congruences*, but is open for $2$ and $3$, see e.g. [@AO].
Introduction
============
The Kondo lattice model (KLM) describes a many-body system of two distinct types of degrees of freedom, itinerant electrons and localized spins which are arranged on a regular lattice. This model can be considered as an extension of the single impurity Kondo model where electrons interact with a single localized spin. The Hamiltonian of the KLM consists of two parts, the kinetic energy of the itinerant conduction electrons and the local exchange interaction between electron spin $ {\bf S}_{\rm c} $ and localized spin $ {\bf S}_{\rm f}
$, both spin 1/2 degrees of freedom,
$${\cal H}_{\rm KLM} = -t \sum_{\langle i,j \rangle} \sum_s (
c^{\dag}_{is} c_{js} + c^{\dag}_{js} c_{is}) + J \sum_i {\bf S}_{{\rm
c}i} \cdot {\bf S}_{{\rm f}i}$$
where the operator $ c_{is} $ ($ c^{\dag}_{is} $) annihilates (creates) a conduction electron on site $ i $ with spin $ s $ ($=
\uparrow, \downarrow $) ($ S^{\mu}_{{\rm c}i} = (\hbar / 2)
\sum_{s,s'} c^{\dag}_{is} \sigma^{\mu}_{ss'} c_{is'} $) and the sum in the first term runs over all nearest neighbor bonds $ \langle i,j
\rangle $. Furthermore, $ t $ denotes the hopping matrix element and $ J $ is the antiferromagnetic exchange coupling [@TSUNE1].
In this model the conduction electrons and the localized spins are, taken separately, uncorrelated; correlation appears only through the exchange interaction. It is important to realize that the understanding of the single impurity Kondo problem does not simply extend to the lattice case. Indeed, research during recent years has shown that complicated correlation effects occur within the lattice model, yielding a variety of physical phenomena beyond the single impurity picture. Although the KLM is certainly of interest in its own right as a generic model of strongly correlated electrons, studies of this model are also motivated by real materials such as the so-called heavy fermion compounds or the Kondo insulators. Both are systems in which correlation effects between itinerant and localized electrons clearly dominate the low-energy physics.
In this article we discuss the extension of the KLM obtained by including the direct interaction among the conduction electrons. We use the most simple form for the interaction, which acts only between electrons on the same site,
$${\cal H}_{\rm int} = U \sum_i c^{\dag}_{i \uparrow} c_{i \uparrow}
c^{\dag}_{i \downarrow} c_{i \downarrow}.$$
We call $ {\cal H} = {\cal H}_{\rm KLM} + {\cal H}_{\rm int} $ the Kondo-Hubbard model (KHM). This model was recently also considered in connection with the ferromagnetic ground state away from half filling [@YANAGI]. In the following we will focus on the one-dimensional (1D) system with a half-filled electron band.
The half-filled KLM is considered as a good starting point to understand the Kondo insulators, a class of materials which show a spin and a charge gap at low temperatures. In contrast to ordinary band insulators the two gaps are different, indicating a separation of the spin and charge degrees of freedom due to correlation effects. The half-filled KLM indeed shows this type of behavior. The properties of the spins are dominated by short-ranged antiferromagnetic correlations, i.e. this state can be considered as a spin liquid. A particular feature of the 1D KLM is that the spin liquid state exists for all finite values of $ J $. This could be shown numerically using exact diagonalization [@TSUNE1] or, more recently, the density matrix renormalization group technique [@YU] and the mapping to a non-linear sigma model [@TSVELICK]. (Note that in higher dimensions a transition between the spin liquid and an antiferromagnetically ordered state is expected at a critical value of $ J $. This has, however, not been established so far either by analytical or numerical methods.) For weak coupling, $ J \ll t $, one finds that the spin gap ($ \Delta_{\rm s} $) and charge gap ($
\Delta_{\rm c} $) depend in a very different way on $ J $,
$$\begin{array}{l}
\Delta_{\rm s} \propto \exp (- 1/ \alpha \rho J) \\ \\
\Delta_{\rm c} \propto J \\
\end{array}$$
where $ \rho $ is the density of states at the Fermi level for the free electrons. The spin gap energy gives an energy scale formally related to the Kondo temperature $ T_K $ of a single localized spin ($ \alpha = 1 $). For the lattice of localized spins $ \alpha $ is enhanced to a value of about 1.4 obtained from numerical simulations [@SHIBATA1]. The similarity with the Kondo energy scale indicates that the ground state of the half-filled KLM corresponds to a singlet bound state like the Kondo singlet state. However, correlation effects among the localized spins tend to increase the binding energy through the formation of a collective singlet state. On the other hand, the charge gap has no counterpart in the single impurity case (where the system forms a Fermi liquid). The charge gap proportional to $ J $ originates from the strong antiferromagnetic correlation of the localized spins. Although short-ranged, it provides a staggered background for the electron motion, which yields the features of a doubled unit cell as found in a spin density wave state [@REVIEW]. Thus the energy scale is set by the exchange coupling between the electrons and the localized spins. Also in the strong coupling limit ($ J \gg t $) we find a clear distinction between the two excitations, as we will see below.
The spin liquid state of the KLM can be characterized as the formation of a collective Kondo singlet involving the conduction electrons and localized spins. The singlet formation is optimized if configurations of conduction electrons with doubly occupied and empty sites are suppressed. Both configurations “remove” the electron spin degree of freedom so that the exchange term cannot be active. Therefore strong charge fluctuations of the conduction electrons tend to weaken the spin liquid state. The inclusion of a repulsive interaction between the conduction electrons as in Eq.(2) with $ U >0 $ leads to a suppression of the charge fluctuations. It is a well-known fact that the Hubbard model with repulsive interaction develops a charge gap at half-filling for any finite $ U $ in one dimension, while the spin excitations remain gapless. Consequently, we expect that the spin liquid state is further stabilized by the repulsive interaction. This is clearly seen in numerical calculations where, in particular, in the limit of $ J \ll t $ the spin gap is enhanced through an increase of the factor $ \alpha $ in the exponent Eq.(3) [@SHIBATA1]. While the spin gap goes to zero for $ J \to 0 $, the charge gap remains finite if $ U > 0 $.
We now ask what will happen if the interaction among the conduction electrons is attractive. There is a relation between the positive and negative $ U $ Hubbard model due to particle-hole symmetry at half filling. The change of sign for $ U $ leads effectively to an exchange of the charge and spin degrees of freedom. Indeed the charge excitations can be described as isospins completely analogous to the spins, as we will see below [@AUERBACH]. For $ U < 0 $ the spin sector of the conduction electrons has a gap while the charge (isospin) excitations are gapless. Obviously this spin gap weakens the spin liquid phase characterized by the formation of the singlets between localized and electron spins. Therefore a competition arises between the spin and the charge fluctuations, whose outcome is determined by the relative strength of the coupling constants $ J $ and $ U $.
It is the goal of this paper to investigate this competition for the case of $ U < 0 $. By the analysis of two limiting cases we will demonstrate that the attractive interaction leads to a phase transition where the character of the ground state and the excitations change qualitatively. The state which is in competition with the spin liquid phase may be characterized by the property that it has quasi-long-range order in the spin and charge sectors. The dominant correlations are those of an antiferromagnet for the localized spins and of a charge density wave for the conduction electrons, i.e. a spin-charge density wave (SCDW). Additionally we find that within the spin liquid a phase can occur where the spin and charge excitations have the same energy scale and behave similarly to those of a band insulator. In the following we will analyze these properties, first by characterizing the states in the limit $ t \ll |U|, J $. In a second step we will investigate the phase transition and the phase diagram, $ U $ versus $
J $, by means of numerical simulation using the density matrix renormalization group method (DMRG).
Phases in two limiting cases
============================
In this section we show that there are two distinct phases for the KHM with attractive interaction. To this end it is helpful to consider two limiting cases which allow a simple analysis of the model. These limits are $ t \ll J \ll - U $ and $ t \ll - U < J $. At the very beginning let us start with $ t=0 $, the atomic limit. Then the states can be represented most easily in a real space basis. For $
U=0 $ the ground state $ | \Psi_{\rm s} \rangle $ is a product of onsite singlets, i.e. on each site we find one conduction electron which forms a spin singlet with the localized spin ($ |\Psi_{\rm s}
\rangle = (c^{\dag}_{i \uparrow} |\downarrow_i \rangle - c^{\dag}_{i
\downarrow} | \uparrow_i \rangle) / \sqrt{2} $; $ |s_i \rangle $ is the electron vacuum with the localized spin $ s_i $ on the site $ i
$). This state is [*non-degenerate*]{} [@TSUNE2]. The lowest spin excitation $
|\Psi_{\rm t} \rangle $ ($ = c^{\dag}_{i \uparrow} | \uparrow_i
\rangle $) corresponds to the spin triplet configuration on one site. Since one singlet had to be replaced by a triplet the excitation energy is $ J $. This state is highly degenerate ($ 3 N $) due to the freedom of position and the three triplet orientations ($ N $ : number of lattice sites). The lowest charge excitation $ |\Psi_{\rm c}
\rangle $ consists of a doubly-occupied and an empty site, which we call doublon and holon, respectively ($ |\Psi_{\rm c} \rangle = |s_i
\rangle + c^{\dag}_{j \uparrow} c^{\dag}_{j \downarrow} |s_j \rangle
$). The excitation energy is $ 3 J /2 $, because two singlets are destroyed to generate this state. The doublon and holon are fermionic particles with a spin 1/2 degree of freedom, which combine to a spin singlet for a pure charge excitation. Then the degeneracy is $ N(N-1)
$. The combination of spin and charge configuration corresponds to the triplet configuration of the doublon-holon spins with a degeneracy of $ 3 N(N-1) $.
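The degeneracy counting above is elementary combinatorics and can be made concrete by direct enumeration. The following sketch (ours, not part of the paper) enumerates the excitation labels for a small chain:

```python
N = 4  # number of lattice sites

# Spin-triplet excitation: choose a site and one of the three S_z
# orientations of the triplet -> degeneracy 3N.
triplet_states = [(i, m) for i in range(N) for m in (-1, 0, 1)]
assert len(triplet_states) == 3 * N

# Pure charge excitation: an ordered (doublon site, holon site) pair on
# distinct sites, with the two spin-1/2's combined into a singlet
# -> degeneracy N(N-1).
charge_singlets = [(i, j) for i in range(N) for j in range(N) if i != j]
assert len(charge_singlets) == N * (N - 1)

# Combined spin-charge excitation: the same pair with the doublon-holon
# spins in a triplet -> degeneracy 3N(N-1).
charge_triplets = [(i, j, m) for (i, j) in charge_singlets for m in (-1, 0, 1)]
assert len(charge_triplets) == 3 * N * (N - 1)
```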
Turning on $ U $ we can now change the level scheme (Fig. \[level\]). The relative position of the singlet and triplet states, $ |\Psi_{\rm s} \rangle $ and $ | \Psi_{\rm t} \rangle $, is unchanged. However, the charge (doublon-holon) excitation is shifted according to $ 3 J /2 + U $. For positive $ U $ no qualitative change occurs in the level scheme. Negative $ U $, however, leads to a rearrangement of the onsite energy levels for sufficiently large $ |U|
$. For $ U = - J/2 $ the charge excitation passes the triplet state and for $ U = - 3 J /2 $ the singlet state. This indicates that the attractive interaction between electrons yields a qualitative modification of the system. Now let us discuss the situation where $ t
$ is finite and the above mentioned degeneracies are lifted.
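The two level crossings follow from simple arithmetic on the $ t=0 $ excitation energies. A minimal check (Python; the function names are ours, not from the paper):

```python
# Atomic-limit (t = 0) excitation energies measured from the
# onsite-singlet ground state |Psi_s> (E = 0 by convention):
#   spin triplet : E_t = J          (one singlet replaced by a triplet)
#   doublon-holon: E_c = 3J/2 + U   (two singlets destroyed, plus U)
def E_triplet(J, U):
    return J

def E_charge(J, U):
    return 1.5 * J + U

J = 1.0
# For U > -J/2 the doublon-holon level lies above the triplet ...
assert E_charge(J, 0.0) > E_triplet(J, 0.0)
# ... it crosses the triplet at U = -J/2 ...
assert abs(E_charge(J, -0.5 * J) - E_triplet(J, -0.5 * J)) < 1e-12
# ... and crosses the singlet ground state at U = -3J/2.
assert abs(E_charge(J, -1.5 * J)) < 1e-12
```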
The spin liquid state: $ |U| \ll J $
------------------------------------
For small $ t $ we can now use perturbation theory to describe the effect of the hopping of the conduction electrons. A detailed discussion of this type of perturbation theory can be found in Ref.. For the limit $ |U| \ll J $ the singlet state $ |\Psi_{\rm s} \rangle $ remains the ground state [@TSUNE2]. Its energy acquires a correction in second order due to the polarization of the onsite singlet, i.e. the neighboring electron and localized spin become involved in the formation of the singlet, which becomes more extended.
The hopping term leads to the mobility of the spin triplet and the doublon-holon excitations. The former behaves as a single quasiparticle with an excitation energy
$$E_{\rm t} (q) = J + \frac{4 t^2}{3J + 2U} - \frac{4t^2}{J+2U} (1 -
\cos q)$$
where $ q $ is the momentum (lattice constant $ a=1 $). The excitation energy for the doublon-holon state has the form of two-particle excitation with an energy depending on two momenta $ k $ and $ q $,
$$\begin{array}{ll}
E_{\rm c} (k,q) = & \displaystyle \frac{3J}{2} + U + \frac{8t^2}{3
J +2U} - \frac{3 t^2}{J} \\ & \\ & \displaystyle + t \{
\cos(k+q) + \cos(k) \} \\ & \\ & \displaystyle + \frac{t^2}{3 J +
2 U} \{ \cos(2(k+q)) + \cos(2k) \}. \\ &
\end{array}$$
Thus, the doublon-holon excitations form a continuum. The spin triplet excitation may also be understood as a bound state of a doublon and a holon with their spins in the triplet configuration. In this sense the triplet excitation has the character of an “exciton” lying within the gap between the singlet ground state and the doublon-holon continuum.
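As a numerical illustration (ours, not part of the paper), one can scan the two dispersion formulas above and confirm that for small $ t $ the triplet branch indeed lies inside the gap below the bottom of the doublon-holon continuum:

```python
import math

def E_t(q, t, J, U):
    # spin-triplet dispersion, Eq. (4)
    return J + 4*t**2/(3*J + 2*U) - 4*t**2/(J + 2*U) * (1 - math.cos(q))

def E_c(k, q, t, J, U):
    # doublon-holon two-particle energy, Eq. (5)
    return (1.5*J + U + 8*t**2/(3*J + 2*U) - 3*t**2/J
            + t*(math.cos(k + q) + math.cos(k))
            + t**2/(3*J + 2*U)*(math.cos(2*(k + q)) + math.cos(2*k)))

t, J, U = 0.1, 1.0, 0.0
grid = [i * math.pi / 100 for i in range(-100, 101)]
triplet_min = min(E_t(q, t, J, U) for q in grid)
continuum_min = min(E_c(k, q, t, J, U) for k in grid for q in grid)

# Gapped exciton below the continuum: 0 < min E_t < min E_c.
assert 0 < triplet_min < continuum_min
```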
The discussion of the level scheme for $ t =0 $ suggests that there is some change in the properties of the spin liquid phase as we turn $ U
$ from 0 towards negative values. There is a critical value $ U = - J
/ 2 $ where the doublon-holon state falls below the triplet excitation (for $ t=0 $). If $ U $ is smaller than $ - J / 2 $ the triplet excitation discussed above is absorbed into the spin-triplet channel of the doublon-holon continuum. Therefore the lowest spin and charge excitations both have two-particle character, like a particle-hole excitation. This state still has a finite gap, and the structure of the excitations is essentially similar to that of a band insulator. The quasiparticles, the doublons and holons, are spin-1/2 fermions composed of conduction electrons and localized spins.
At the second critical value of $ U $ ($ = - 3 J/2 $) the doublon-holon state passes the singlet ground state. Here the character of the ground state has to change, so this critical value should correspond to a phase transition. In the following we want to discuss the properties of the new state.
The perturbative treatment of the different states given above does not allow the evaluation of the transition points for finite $ t $. Close to the transition points a more complicated degenerate perturbation theory would be necessary. This, however, already leads to a complicated many-body problem. Thus, we leave the discussion of the phase boundary lines to the numerical part of our paper.
The SCDW state: $ |U| \gg J $
-----------------------------
In the limit $ |U| \gg J $ the ground state is highly degenerate if we assume $ t = 0 $. A complete basis of these degenerate states is given by all real-space configurations of holons and doublons, with an equal number of each (half filling). Both onsite singlets $ |\Psi_{\rm s}
\rangle $ or triplets $ | \Psi_{\rm t} \rangle $ are now excited states which should always occur in pairs, due to the conservation of the electron number, with energies $ E_{\rm ss} = -3 J/2 -U $, $
E_{\rm st} = -J/2 -U $ and $ E_{\rm tt} = J/2 -U $ for a singlet-singlet, singlet-triplet and a triplet-triplet pair. All these states are highly degenerate.
In order to lift the degeneracy of the ground state we now include the hopping process, assuming a small but finite $ t $. We can then generate an effective Hamiltonian in the Hilbert subspace of doublons and holons only. At this point it is convenient to introduce the notion of isospin $ {\bf I} $ ($I=1/2$) for the charge degrees of freedom of the holon and doublon. We identify a holon with isospin up and a doublon with isospin down. Like the ordinary spin, the isospin transforms according to the SU(2) symmetry group. Additionally we introduce a phase convention by dividing the lattice into two sublattices $ A $ and $ B $; the real-space basis functions are then multiplied by $ \prod_{i \in B} \exp (i \pi I_z) $. In terms of the electron operators the isospin components read
$$\begin{array}{l}
I^{+}_i = s_i c^{\dag}_{i \uparrow} c^{\dag}_{i \downarrow} \\ \\
I^{-}_i = s_i c_{i \downarrow} c_{i \uparrow} \\ \\ \displaystyle
I^z_i = \frac{1}{2}( c^{\dag}_{i \uparrow} c_{i \uparrow} +
c^{\dag}_{i \downarrow} c_{i \downarrow} -1) \\
\end{array}$$
where $ s_i = +1 $ if $ i \in A $ and $ s_i = -1 $ if $ i \in B $. With this notation the effective Hamiltonian which lifts the degeneracy in lowest order perturbation theory has the form,
$${\cal H}_{\rm eff} = \sum_{\langle i,j \rangle} \left[ {\bf I}_i
\cdot {\bf I}_j - \frac{1}{4} \right] \left[ a {\bf S}_{fi} \cdot
{\bf S}_{fj} + b \right].$$
where the two constants are
$$\begin{array}{l} \displaystyle
a = -\frac{ 2 t^2}{U+\frac{J}{2}} + \frac{t^2}{U-\frac{J}{2}} +
\frac{t^2}{U+ \frac{3 J}{2}} , \\ \\ \displaystyle b =
-\frac{9t^2/4}{U-\frac{J}{2}} - \frac{ 3 t^2/2}{U+ \frac{J}{2}} -
\frac{t^2/4}{U+ \frac{3 J}{2}} \\
\end{array}$$
(derived in the Appendix). In this formulation the SO(4)-symmetry of both the spin and isospin is obvious (SO(4) = SU(2) $ \times $ SU(2)). The spin degrees of freedom are only due to the localized spins while the conduction electron spin is completely quenched in the effective model. Thus the conduction electrons only appear as charge degrees of freedom, i.e. isospins. It is easy to see that the coupling constant $
a $ is negative for $ U<-3J/2 $ and vanishes for $ J \to 0 $, because this leads to a complete decoupling of the localized spins from the conduction electrons. Note that the positive constant $ b $ determining the spectrum of the conduction electrons remains finite for $ J = 0 $ so that the isospin degrees of freedom remain coupled. The electron spins appear only in states including onsite singlet or triplet components which are much higher in energy.
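The stated properties of the couplings — $ a < 0 $ for $ U < -3J/2 $, $ a \to 0 $ as $ J \to 0 $, while $ b $ stays positive and finite at $ J = 0 $ — are easy to verify numerically from Eq.(8). A sketch (the helper names are ours):

```python
def a_coupling(t, J, U):
    # first constant of Eq. (8)
    return -2*t**2/(U + J/2) + t**2/(U - J/2) + t**2/(U + 1.5*J)

def b_coupling(t, J, U):
    # second constant of Eq. (8)
    return (-9*t**2/4/(U - J/2) - 1.5*t**2/(U + J/2)
            - t**2/4/(U + 1.5*J))

t = 1.0
# a is negative throughout the SCDW regime U < -3J/2 ...
assert all(a_coupling(t, 1.0, u) < 0 for u in (-2.0, -5.0, -20.0))
# ... and vanishes as J -> 0 (localized spins decouple; a = O(J^2)),
assert abs(a_coupling(t, 1e-6, -2.0)) < 1e-5
# while b stays positive and finite: at J = 0, b = -4 t^2 / U.
assert abs(b_coupling(t, 0.0, -2.0) - 2.0) < 1e-12
assert b_coupling(t, 1.0, -5.0) > 0
```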
For this type of Hamiltonian the following two exact statements are known. (1) For a finite system we can show that the ground state is a singlet in both the spin and the isospin channel and it is non-degenerate. This can be proven using a generalization of Marshall’s theorem. (2) In the thermodynamic limit the excitations in both channels are either gapless or the ground state is degenerate with spontaneously broken parity. This can be demonstrated using a generalization of the Lieb-Schultz-Mattis theorem as given by Affleck and Lieb [@LSM; @AFFLIEB]. There is no need to reproduce the proofs of these two statements here, since they are completely analogous to those applied to the Heisenberg spin system.
The latter statement proves that in the limit $ |U| \gg J $ the phase is different from the spin liquid state. While the spin liquid phase has a unique ground state and a gap in all excitations, this phase has either a degenerate ground state or gapless excitations. In the case of gapless excitations we expect that the system has quasi-long-range order in both spin and isospin analogous to the 1D Heisenberg model. The dominant correlation of both degrees of freedom is “antiferromagnetic”, which corresponds to the usual charge density wave correlation for the isospin part. Due to the qualitative difference of the two ground states, we conclude that there must be a phase transition between the two limits, depending on $ U $ and $ J $. In the following we will use numerical methods to demonstrate that the state for $ U \ll - J $ has actually gapless excitations.
The phase diagram by DMRG
=========================
In order to determine the transition line in the phase diagram, $ U $ versus $ J $, and the character of the different phases, we calculate the excitation energies and the ground-state correlation functions of the KHM numerically. For this purpose it is necessary to treat sufficiently long systems such that the transition between the different phases can be observed reliably. We use the density matrix renormalization group (DMRG) method [@WHITE], which allows us to study long chains by iteratively enlarging the system size and to obtain ground-state wave functions with only small systematic errors.
We calculate the elementary excitations for various $U$ to identify the three phases discussed in Sec. II. The DMRG scheme is designed to obtain a very good approximation of the ground state of a model. For the evaluation of excited states and their energies we consider the model in those Hilbert-subspaces which contain these states as lowest energy states. The real ground state is found for the subspace with $S_{\rm tot}^z=0$ and $N_{\rm c}=L$ ($ N_{\rm c} $: number of conduction electrons). The spin excited state is obtained for $
S_{\rm tot}^z =1 $ and $N_{\rm c}=L$ and the charge excited state for $S_{\rm tot}^z=0$ and $ N_{\rm c} = L \pm 2 $. In these sectors the lowest energy state is calculated for various system sizes up to $L=48$ with the finite system algorithm using open boundary conditions. The extrapolation to the bulk limit is obtained from the scaling laws. For the gapped spin liquid phase the form $\Delta(L)=\Delta(\infty)+\beta L^{-2}+O(L^{-4})$ is assumed, since the lowest excited state generally corresponds to the bottom of an excitation spectrum which can be expanded in terms of the square of the momentum, $k^2$. On the other hand, for the region which is supposed to possess the gapless phase the bulk limit value of the excitation energies is estimated simply using $1/L$ scaling and $1/\log{L}$ scaling.
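The gapped-phase extrapolation described above amounts to a linear least-squares fit of $\Delta(L)$ against $1/L^2$, with the intercept giving the bulk-limit gap. A minimal sketch (synthetic data standing in for the DMRG gaps, not the actual values):

```python
import numpy as np

# Synthetic finite-size gaps obeying Delta(L) = Delta_inf + beta / L^2.
delta_inf, beta = 0.30, 2.0
L = np.array([8, 16, 24, 32, 40, 48], dtype=float)
gaps = delta_inf + beta / L**2

# Linear fit in the variable x = 1/L^2; the intercept is Delta(inf).
slope, intercept = np.polyfit(1.0 / L**2, gaps, 1)
assert abs(intercept - delta_inf) < 1e-10
assert abs(slope - beta) < 1e-8
```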
First let us discuss the results for the case of strong exchange coupling $ J=10.0t$. The data for the excitation gaps are shown in Fig. \[exci10\]. It is not difficult to identify the three phases as we scan $ U $ from 0 to $ -2 J $: the ordinary spin liquid phase $\Delta_{\rm c}>\Delta_{\rm s}>0$ for $U/J>-0.6$, the spin liquid phase with identical spin and charge excitations $\Delta_{\rm
c}=\Delta_{\rm s}>0$ for $-0.8>U/J>-1.4$, and the gapless phase (SCDW) $\Delta_{\rm c}=\Delta_{\rm s}=0$ for $U/J<-1.8$. In the intermediate regime, located between $U/J=-0.8$ and $-1.2$, the difference of $\Delta_{\rm c}$ and $\Delta_{\rm s}$ vanishes in the bulk limit within the accuracy of the present calculation. This confirms the existence of the intermediate phase where the spin and charge excitations have the same energy scale and are essentially of two-particle type, as anticipated in the previous section.
In addition, the single-quasiparticle excitation gap $\Delta_{\rm qp}$ is shown in Fig. \[exci10\]. The quasiparticles originate from doublons or holons and therefore have fermionic character. $\Delta_{\rm qp}$ is obtained from the difference of the lowest energies in the Hilbert spaces with $S_{\rm tot}^z=0$, $N_{\rm c}=L$ and $S_{\rm tot}^z=\pm 1/2$, $N_{\rm c}=L\pm 1$. $\Delta_{\rm qp}$ is half of $\Delta_{\rm c}$ for $U/J>-1.4$, because the effective interaction between the quasiparticles is repulsive in that region. On the other hand, for $U/J<-1.8$, $\Delta_{\rm qp}$ increases with growing $-U/J$ while $\Delta_{\rm c}$ is zero (within our accuracy). Hence, we may interpret this as a switching from a repulsive to an attractive quasiparticle interaction, indicated by the minimum of $\Delta_{\rm qp}$ around $U/J=-1.6$. Since the character of the excitation changes here, we expect that this switching coincides with the phase transition to the gapless phase. Obviously, in the thermodynamic limit all gaps should disappear at the transition point. Despite a careful scaling analysis it is difficult to determine the exact position of the transition from the present DMRG study, because close to the transition point the truncation error in the density matrix becomes large and the convergence of the RG iteration is rather slow.
Next we turn to a weaker exchange coupling. The results for $J=2.0t$ are shown in Fig. \[exci02\]. In contrast to the strong coupling case we cannot find any indication of the intermediate regime with $ \Delta_{\rm c} = \Delta_{\rm s} > 0 $. This means that this regime is either absent or confined to a very small region close to the transition point, which we locate at around $U/J \approx -2.0$. For $U/J < -2.0$ the two gaps essentially coincide and are practically zero. Corresponding to this change, the quasiparticle gap shows a minimum around the critical value of $ U $.
Similar calculations are carried out for $J/t=4,6,8$ and the phase diagram is obtained. In Fig. \[JU-phase\] we mark the three phases, as determined numerically, by different dots. The crosses in Fig. \[JU-phase\] represent the estimated minimum points of the quasiparticle gap, where we expect the phase transition to be located. There are technical limitations on the details of the phase diagram which do not allow us to determine clearly the extension of the intermediate phase ($ \Delta_{\rm c} = \Delta_{\rm s} > 0 $).
A definite advantage of the DMRG scheme is that we can obtain very good approximations of the ground-state wave functions for rather large systems. This allows the study of various correlation functions, at least sufficiently far from the transition line of the phase diagram. Here we have analyzed the spin-spin and charge-charge correlation functions, which show characteristic features of the phases. The correlation functions can be observed through the perturbing effect of the boundaries. This induces an oscillating disturbance into the wave function, leading to a charge or spin density modulation analogous to the Friedel oscillations around an impurity. For example, if $\Delta_{\rm c}$ is finite then the density-density correlation has an exponential form, and the charge density Friedel oscillations induced by an impurity potential show the same exponential decay. The length scale is related to the ratio between the charge velocity and the gap, $v_{\rm c}/\Delta_{\rm c}$. On the other hand, if $\Delta_{\rm c}$ is zero, then a power law decay of the correlation function, similar to that of a Tomonaga-Luttinger liquid, is expected. This power law decay occurs naturally in the Friedel oscillations of both spin and charge.
In Fig. \[exp-dec\] we show the charge and spin density Friedel oscillations, $\delta\rho(x)$ and $\delta\sigma(x)$, in the intermediate phase with $J/t=10$ and $U/J=-1.4$. The charge density oscillations are naturally induced by the open boundary conditions, while for generating the spin density oscillations we have to apply a local magnetic field, $H_{\rm local}=2h(S_1^z-s_1^z-S_L^z+s_L^z)$, coupling to the spins at both ends of the finite system. Obviously both the charge and the spin density oscillations decay exponentially, and our analysis shows that their correlation lengths are essentially the same. This means that not only the excitation gaps but also the velocities of the excitations are identical, as we expect for excitations of the particle-hole type. This clearly indicates that the picture of this phase as a kind of band insulator is appropriate.
The charge and spin density Friedel oscillations in the gapless phase are shown in Fig. \[chrge-osc\] and \[spin-osc\], respectively, by the solid line. In contrast to the exponential decay in the gapped phase, represented by the broken line, a power law decay is observed here, $ \sim (-1)^r / r^{\eta_{\rm c,s}} $. Considering the insets of Fig. \[chrge-osc\] and \[spin-osc\] we find that the powers $
\eta_{\rm c,s} $ of decay are different for spin and charge (for the charge $ \eta_{\rm c} \approx 1.15 $ and for the spin $ \eta_{\rm s}
\approx 2.4 $). This demonstrates the separation of spin and charge excitations, as anticipated from the effective Hamiltonian in Eq.(7). In addition, the staggered correlations of both the spin and the charge densities clearly dominate, in agreement with the characterization of this phase as a spin and charge density wave with quasi-long-range order. From this result it is clear that, of the two possibilities the Lieb-Schultz-Mattis theorem allows for the phase $ -U \gg J $, the gapless state is the correct choice.
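The exponents $ \eta_{\rm c,s} $ can be read off from the slope of the oscillation envelope on a log-log scale. A sketch with synthetic data (ours; the decay constant below is illustrative, not the measured value):

```python
import numpy as np

eta_true = 1.15
r = np.arange(2, 40)
# Staggered, algebraically decaying density modulation (-1)^r / r^eta.
delta_rho = (-1.0) ** r / r ** eta_true

# Fit the envelope |delta_rho(r)| on a log-log scale; the slope is -eta.
slope, _ = np.polyfit(np.log(r), np.log(np.abs(delta_rho)), 1)
eta_fit = -slope
assert abs(eta_fit - eta_true) < 1e-8
```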
Conclusion
==========
We have seen that the 1D half-filled Kondo-Hubbard lattice with attractive electron-electron interaction has three phases: two gapped phases (spin liquid and band insulator-like) with short-range correlations and one gapless phase with quasi-long-range ordered spin and charge density waves (SCDW). Note that for positive $ U $ only the spin liquid exists. An indication of the difference between positive and negative $ U $ can be found in the small-$ J $ limit. One may be tempted to lift the large spin degeneracy found for $ J =0 $ by perturbation theory, i.e. by introducing the RKKY interaction among the localized spins. In lowest order one separates the system into free conduction electrons and interacting localized spins. A simple calculation shows, however, that this perturbation concept fails for the 1D system, because the effective interaction between the spins diverges at the wave vector $ q = 2 k_F = \pi $ for all $ U \geq 0 $. This indicates that there is no separation between these degrees of freedom, and numerical calculations suggest that for any finite $ J $ the ground state has spin liquid properties. In contrast, for negative $ U $ the perturbation converges for all wave vectors and one can derive a sensible RKKY model [@schoeller]. Under this condition the localized spins and the conduction electrons can separately exhibit gapless phases. Note, however, that the electron spin excitation has a gap for negative $ U $ and only the charge part is gapless, with a tendency towards an enhancement of charge density wave correlations. The effective Hamiltonian (7) describes this kind of system in the extreme limit ($ |U| \gg J $).
An interesting change also occurs in the nature of the excitations. If we scan from $ U = 0 $ towards $ -U \gg J $ for fixed values of $ t $ and $ J $, then close to the transition ($ U \approx - 3 J/2 $) we find a region where the lowest spin and charge excitations are both doublon-holon excitations. Therefore, the two excitations have the same spectrum as in a band insulator. We would like to point out, however, that the fermions building these excitations, the holons and doublons, are quasiparticles composed of the itinerant electronic and the localized spin degrees of freedom. For finite $ t $ they are real dressed quasiparticles involving complex correlation effects. Thus it is not an entirely trivial feature of this strongly correlated system to mimic the properties of a simple band insulator. On the other hand, in the spin liquid phase for $ |U| \ll J $ the spin and charge excitations have a different nature. The spin triplet excitation has excitonic character, i.e. it is a bound state of a doublon and a holon.
Also in the gapless phase the spin and charge excitations are separated as is seen in the difference of the correlation functions. However, the origin of the spin-charge separation is different. The conduction electron and localized spins provide nearly independent degrees of freedom in close connection to the RKKY-model mentioned above.
The KHM with attractive interaction is an example of a strongly correlated electron system which possesses a number of phases of rather different character. These phases depend only on the ratios of the different coupling constants. Several of the features mentioned can be transferred to higher-dimensional systems. The gapless ground state very likely has long-range order in both the charge and the spin density wave correlations. However, it should be noted that the spin liquid phase for $ U \geq 0 $ might have antiferromagnetic long-range order if $ J $ is sufficiently small. This has not been proven so far. However, this long-range ordered state has a charge gap (the gap of a spin density wave state) and the spin excitations are due to both the electron and the localized spin. This is in contrast to the SCDW state we discussed above. Thus in higher dimensions we may expect to see a richer phase diagram.
We would like to thank H. Tsunetsugu, K. Ueda and H. Schoeller for helpful discussions. This work is financially supported by the Swiss Nationalfonds. M.S. is grateful for a PROFIL fellowship of the Swiss Nationalfonds.
Effective Hamiltonian in the limit $|U|\gg J\gg t $
====================================================
The low-energy Hilbert subspace $ \Lambda $ in the limit of large attractive interaction consists of all real-space configurations of holons and doublons, which appear in equal number at half-filling. All other states, containing onsite singlets or triplets, are higher in energy by a multiple of order $ |U| $. For convenience we introduce the notion of isospin $ \tilde{{\bf I}} $ as an additional local SU(2) degree of freedom besides the localized spins. The doublon and the holon are mapped to isospin up and down, respectively (Eq.(6)). Thus each site has four basis states: $ | \tilde{I}^z, S^z \rangle = \{ |+
, \uparrow \rangle , |+ , \downarrow \rangle, |- , \uparrow \rangle ,
|- , \downarrow \rangle \} $. For vanishing hopping $ t $ the whole subspace $ \Lambda $ is disconnected leading to complete degeneracy. We would now like to generate the effective Hamiltonian which lifts this degeneracy and determines the low-energy physics. We use second order perturbation theory in the hopping where the higher energy states outside of $ \Lambda $ appear as intermediate states. In this way the effective Hamiltonian will only have nearest neighbor coupling and be of the generic form,
$$\begin{array}{ll}
{\cal H}_{\rm eff} = \sum_i & \displaystyle \{ a \tilde{I}^z_i
\tilde{I}^z_{i+1} + \frac{b}{2} ( \tilde{I}^+_i \tilde{I}^-_{i+1}
+ \tilde{I}^-_i \tilde{I}^+_{i+1} ) +c \} \\ & \\ & \quad
\displaystyle \times \{ a' S^z_i S^z_{i+1} + \frac{b'}{2} ( S^+_i
S^-_{i+1} + S^-_i S^+_{i+1} )+ c' \}.
\end{array}$$
We consider now the coupling on a single bond, say between site 1 and 2, where we represent the local states by $ | \tilde{I}^z_1,
\tilde{I}^z_2 ; S^z_1, S^z_2 \rangle $. There are three different cases to take into account:
0.3 cm
1\) $ | \tilde{I}, \tilde{I} ; S ,\pm S \rangle $: If the $ z
$-components of the isospins on a bond are the same, then the hopping is ineffective and this state is not connected to any other via this bond.
0.3 cm
2\) $ | \tilde{I},- \tilde{I} ; S, S \rangle $: We consider the example $ | + ,-; \uparrow \uparrow \rangle $ which leads to the matrix elements,
$$\begin{array}{ll}
m_1 & \displaystyle
= \langle + , - ; \uparrow \uparrow | {\cal H}_{\rm eff} | +
, - ; \uparrow \uparrow \rangle =
(-\frac{a}{4}+c)(\frac{a'}{4}+c')
\\ & \\ & \displaystyle = \langle + , - ;
\uparrow \uparrow | {\cal H}_{\rm eff} | - , + ; \uparrow \uparrow
\rangle = \frac{b}{2}(\frac{a'}{4}+c')
\\ & \\ & \displaystyle = \frac{t^2}{U-\frac{J}{2}} +
\frac{t^2}{U+\frac{J}{2}} < 0 \\ &
\end{array}$$
where the intermediate states consist of two triplets ($ E = -U +
J/2 $) or of one singlet and one triplet ($ E= -U - J/2 $).
0.3 cm
3\) $ | \tilde{I},- \tilde{I} ; S, -S \rangle $: It is sufficient to consider the example $ | + ,-; \uparrow \downarrow \rangle $,
$$\begin{array}{ll}
m_2 & \displaystyle
= \langle + , - ; \uparrow \downarrow | {\cal H}_{\rm eff} |
+ , - ; \uparrow \downarrow \rangle = (- \frac{a}{4} +c)(-
\frac{a'}{4} + c')
\\ & \\ & \displaystyle = \langle + , - ;
\uparrow \downarrow | {\cal H}_{\rm eff} | - , + ; \uparrow
\downarrow \rangle = \frac{b}{2} (- \frac{a'}{4} + c')
\\ & \\ & \displaystyle = \frac{1}{4} \left[
\frac{5t^2}{U-\frac{J}{2}} + \frac{2t^2}{U+\frac{J}{2}} +
\frac{t^2}{U+ \frac{3 J}{2}} \right] < 0 \\ &
\end{array}$$
and
$$\begin{array}{ll}
m_3 & \displaystyle
= \langle + , - ; \uparrow \downarrow | {\cal H}_{\rm eff} |
+ , - ; \downarrow \uparrow \rangle = (- \frac{a}{4}+c) \frac{b'}{2}
\\ & \\ & \displaystyle = \langle + , - ;
\uparrow \downarrow | {\cal H}_{\rm eff} | - , + ; \downarrow
\uparrow \rangle = \frac{b}{2} \frac{b'}{2}
\\ & \\ & \displaystyle = - \frac{1}{4} \left[
\frac{t^2}{U-\frac{J}{2}} - \frac{2t^2}{U+\frac{J}{2}} +
\frac{t^2}{U+ \frac{3 J}{2}} \right] > 0. \\ &
\end{array}$$
Note that $ m_1 - m_2 = m_3 $. Now we can determine the coefficients in Eq.(7). Fixing $ a = 1 $ we find $ b = -1 $, $ c = -1/4 $, $ a' =
b' $ and
$$\begin{array}{l}
a'= 4 (m_2 - m_1) < 0 \\ \\ c'= -(m_1 + m_2) > 0. \\
\end{array}$$
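The identity $ m_1 - m_2 = m_3 $ noted above, together with the stated signs of the matrix elements, can be checked directly from the second-order expressions (a sketch; the helper functions are ours):

```python
def m1(t, J, U):
    return t**2/(U - J/2) + t**2/(U + J/2)

def m2(t, J, U):
    return 0.25*(5*t**2/(U - J/2) + 2*t**2/(U + J/2) + t**2/(U + 1.5*J))

def m3(t, J, U):
    return -0.25*(t**2/(U - J/2) - 2*t**2/(U + J/2) + t**2/(U + 1.5*J))

t = 1.0
# Test points in the regime U < -3J/2 where the perturbation theory applies.
for J, U in [(1.0, -5.0), (1.0, -3.0), (2.0, -8.0)]:
    assert abs(m1(t, J, U) - m2(t, J, U) - m3(t, J, U)) < 1e-12
    assert m1(t, J, U) < 0 and m2(t, J, U) < 0 and m3(t, J, U) > 0
```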
Next, we rotate the isospin on every second site by $ \pi $ around the $ z $ -axis, $ \tilde{I}^{x,y} \to I^{x,y} = - \tilde{I}^{x,y} $ and $
\tilde{I}^z \to I^z = \tilde{I}^z $ by applying the phase factor $
\exp (i \pi \tilde{I}^z_i) $. With this phase convention on the basis states $ b \to - b $ and the effective Hamiltonian reaches its apparently SU(2) rotational symmetric form in both spin and isospin space as given in Eq.(7).
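The effect of the $ \pi $ rotation about the $ z $-axis used in this last step can be verified on the isospin-1/2 matrices (a small numerical sketch, not from the paper): conjugation by $ \exp(i\pi \tilde{I}^z) $ flips the sign of the transverse components while leaving $ \tilde{I}^z $ invariant.

```python
import numpy as np

# Spin-1/2 (isospin) matrices.
Ix = np.array([[0.0, 0.5], [0.5, 0.0]])
Iy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
Iz = np.array([[0.5, 0.0], [0.0, -0.5]])

# Rotation by pi about z on a B-sublattice site: R = exp(i*pi*Iz).
R = np.diag(np.exp(1j * np.pi * np.diag(Iz)))

# I^{x,y} -> -I^{x,y}, I^z -> I^z under R M R^dagger.
for M, sign in [(Ix, -1.0), (Iy, -1.0), (Iz, +1.0)]:
    assert np.allclose(R @ M @ R.conj().T, sign * M)
```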
On leave from Institute of Solid State Physics, University of Tokyo, Roppongi 7-22-1, Minato-ku, Tokyo 106, Japan.
H. Tsunetsugu, Y. Hatsugai, K. Ueda and M. Sigrist, Phys. Rev. B[**46**]{}, 3175 (1992).
T. Yanagisawa and K. Harigaya, Phys. Rev. B[**50**]{}, 9577 (1994).
A.M. Tsvelik, Phys. Rev. Lett. [**72**]{}, 1048 (1994).
C.C. Yu and S.R. White, Phys. Rev. Lett. [**71**]{}, 3866 (1993).
N. Shibata, T. Nishino, K. Ueda and C. Ishii, Phys. Rev. B[**53**]{}, R8828 (1996).
H. Tsunetsugu, M. Sigrist and K. Ueda, to be published in Rev. Mod. Phys.
A. Auerbach, [*Interacting Electrons and Quantum Magnetism*]{}, Springer, 1994.
H. Tsunetsugu, Phys. Rev. B[**55**]{}, 3042 (1997).
E.H. Lieb, T.D. Schultz and D.C. Mattis, Ann. Phys. [ **16**]{}, 407 (1961).
I. Affleck and E.H. Lieb, Lett. Math. Phys. [ **12**]{}, 12 (1986).
S.R. White, Phys. Rev. Lett. [**69**]{}, 2863 (1992).
R. Egger and H. Schoeller, Phys. Rev. B[**54**]{}, 16337 (1996).
---
abstract: |
Pomsets are a model of concurrent computations introduced by Pratt. They can provide a syntax-oblivious description of semantics of coordination models based on asynchronous message-passing, such as Message Sequence Charts (MSCs). In this paper, we study conditions that ensure a specification expressed as a set of pomsets can be faithfully realised via communicating automata.
Our main contributions are (i) the definition of a realisability condition accounting for termination soundness, (ii) conditions for global specifications with participants, and (iii) the definition of realisability conditions that can be decided directly over pomsets. A positive by-product of our approach is the efficiency gain in the verification of the realisability conditions obtained when restricting to specific classes of choreographies characterisable in terms of behavioural types.
author:
- Roberto Guanciale
- Emilio Tuosto
bibliography:
- 'bib.bib'
title: 'Realisability of Pomsets via Communicating Automata[^1] '
---
Introduction {#sec:intro}
============
Pomsets and message-sequence charts {#sec:msc}
===================================
Realisability and termination soundness of pomsets {#sec:realisability}
==================================================
Pomset based verification conditions {#sec:pomsets}
====================================
Discussion on the pomset based conditions {#sec:implementation}
=========================================
Related work {#sec:related}
============
Concluding remarks {#sec:conc}
==================
[^1]: Research partly supported by the EU H2020-RISE-2017 project BehAPI and the EU COST Action IC1405. The authors thank the anonymous reviewers for their comments and the interesting discussions on the forum of ICE18 .
---
abstract: 'For a general quantum many-body system, we show that its ground-state entanglement imposes a fundamental constraint on the low-energy excitations. For two-dimensional systems, our result implies that any system that supports anyons must have a nonvanishing topological entanglement entropy. We demonstrate the generality of this argument by applying it to three-dimensional quantum many-body systems, and showing that there is a pair of ground state topological invariants that are associated to their physical boundaries. From the pair, one can determine whether the given boundary can or cannot absorb point-like or line-like excitations.'
author:
- 'Isaac H. Kim'
- 'Benjamin J. Brown'
title: 'Ground state entanglement constrains low-energy excitations'
---
Introduction
============
The exotic features of topological phases of matter such as fractional statistics [@Arovas1984], and genus-dependent ground-state degeneracy [@Wen1990], are intimately linked to their long-range ground-state entanglement. Indeed, by calculating [*topological entanglement entropy*]{} [@Hamma2005a; @Levin2006; @Kitaev2006], we can extract data of the emergent topological quantum field theory for a given Hamiltonian. The study of such systems is not only of significant fundamental interest, but topological systems also offer a promising route towards quantum information processing in an intrinsically fault-tolerant manner [@Kitaev2003].
While two-dimensional topological phases are well understood, their three-dimensional [@Hamma2005b; @Levin2005a; @Walker2011; @Vishwanath2013; @Metlitski2013], and higher-dimensional [@Hastings2015] counterparts remain a largely unexplored area of research [@Keyserlingk2012; @Bullivant2015]. Of recent interest are the boundaries of three-dimensional phases. Specifically, it has been shown that two-dimensional chiral topological phases, e.g., the semion model, exist on the boundaries of certain three-dimensional phases [@Keyserlingk2012]. Conversely, in some topological phases, we have boundaries that condense particle-like or line-like excitations, which are well studied in two dimensions [@Bravyi1998; @Kitaev2012; @Levin2013; @Barkeshli2013]. Here we develop tools to probe the boundaries of three-dimensional topologically ordered phases using entropic quantities.
The topological entanglement entropy, $\gamma$, is the constant correction term of the entanglement entropy formula for the ground state of a system $$S(\rho_A) = \alpha l - n\gamma + \cdots,\label{eq:TEE}$$ where $S(\rho_A) = -{\textrm{Tr}}(\rho_A \log \rho_A)$ is the von Neumann entropy of subsystem $A$, constant $\alpha$ depends on the microscopic details of the system, $l$ is the length of the boundary that separates subsystem $A$ from its complement and $n$ is the number of disconnected components of the boundary [@Levin2006; @Kitaev2006]. Assuming that the low-energy physics is described by a topological quantum field theory, $\gamma$ reveals information about the underlying field theory [@Levin2006; @Kitaev2006], as well as the data of individual anyonic quasiparticles [@Kitaev2006; @Dong2008; @Brown2013], and their braiding statistics [@Zhang2011].
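Since Eq.\[eq:TEE\] is linear in the boundary length $l$, the constant term can in principle be read off as the (negated) intercept of a fit of $S(\rho_A)$ against $l$ for regions with a single boundary component ($n=1$). The following is a minimal numerical sketch of that extraction; the data, the value $\alpha = 0.8$, and the use of natural-log units are illustrative assumptions, not taken from any particular model:

```python
import numpy as np

def extract_gamma(lengths, entropies):
    """Fit S = alpha * l - gamma (Eq. TEE with n = 1 boundary component)
    by linear least squares; the negated intercept estimates gamma."""
    alpha, intercept = np.polyfit(lengths, entropies, 1)
    return alpha, -intercept

# Synthetic entropies obeying the area law with gamma = log 2
# (the toric-code value), purely for illustration.
lengths = np.array([8.0, 12.0, 16.0, 20.0])
entropies = 0.8 * lengths - np.log(2)

alpha, gamma = extract_gamma(lengths, entropies)
```

In practice one would use several region sizes and check that the intercept is stable under deformations of the region, since finite-size corrections (the $\cdots$ in Eq.\[eq:TEE\]) contaminate small regions.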
In this Manuscript, we prove a no-go theorem that illuminates the excitation structure of a system without using any prior assumptions about an underlying topological quantum field theory. Instead, we make assumptions only about the support of creation operators of its quasiparticle excitations. Our results extend existing theorems that give conditions for which topological ground-state degeneracy can or cannot be present [@Hastings2004a; @Kim2013]. Novel to this work is that our theorem constrains low-energy excitations, not the ground state degeneracy.
More precisely, we prove that the low-energy excitations of a local gapped Hamiltonian are topologically trivial in the case that constant term $\gamma$ vanishes. We show this by proving the following expression $$\| UV {\left|\psi_0\right\rangle} -VU {\left|\psi_0\right\rangle} \|\leq O(\gamma^{\frac{1}{2}}), \label{eq:main_result}$$ for ground state ${\left|\psi_0\right\rangle}$, where $\| {\left|\psi\right\rangle}\|={\left\langle\psi | \psi\right\rangle}^{1/2} $ is the norm of the vector. Unitary operator $U$ creates excitations from the vacuum, and $V$ represents a unitary process of (i) creating particles, (ii) performing some non-trivial monodromy operation with a quasiparticle created with $U$, and (iii) annihilating the particles created by $V$.
The result of Eq.\[eq:main\_result\] may seem unsurprising in view of rigorously studied two-dimensional (2D) topological phases [@Kitaev2003; @Levin2005]. However, the novelty of our method is that we obtain this result without making any assumptions that depend on the microscopic details of the Hamiltonian. We only assume that we can perform a monodromy operation between particles using operators $U$ and $V$. This generality enables us to perform a similar analysis in more complicated settings, which in turn allows us to find new topological invariants. Another point worth noting is that our method can be easily extended to higher-dimensional systems. Indeed, we explicitly demonstrate the power of our framework by proving that certain linear combinations of entanglement entropies cannot vanish on the boundary of certain three-dimensional (3D) topologically ordered systems that support topological excitations. Specifically, we find a pair of topological invariants that are defined on the boundary, each of which represents the long-range entanglement associated to the point-like and line-like excitations. If the invariant for the point-like excitations is zero, all point-like excitations can be condensed at the boundary. Similarly, if the invariant for the line-like excitations is zero, all such excitations can be condensed at the boundary. We give evidence that these numbers are universal by explicit analytical calculation using the boundaries of the 3D toric code [@Hamma2005b]. Moreover, we expect these diagnostics to be useful for analyzing Walker-Wang models [@Walker2011; @Keyserlingk2012], as we are able to prove that the invariants must attain a nonzero value for these models. This is surprising since previous analyses have shown that bulk topological entanglement entropy gives null results for the modular variants of the Walker-Wang models [@Keyserlingk2012; @Bullivant2015].
Further, our results extend the work of Grover [*et al.*]{} [@Grover2011], where they seek entropic topological invariants in higher-dimensional phases. In their work they show that there is only one invariant in the [*bulk*]{} of three-dimensional topologically ordered systems. Our results show that the entanglement structure at the boundary of a topological phase can potentially be richer than that of the bulk, as we find two distinct diagnostics that provide information about different types of low-energy excitations at the boundary of a model.
The remainder of this Manuscript is structured as follows; In Sec. \[Sec:TwoDimensions\] we prove that a vanishing topological entanglement entropy is a sufficient condition to show that a phase is topologically trivial. For clarity, we present the proof together with the explicit example of a two-dimensional phase. In Sec. \[Sec:ThreeDimensions\] we modify our proof for the boundaries of three-dimensional systems. We identify two entropic invariants for identifying different particle types. In Sec. \[Sec:ThreeDimensionalToric\] we demonstrate our three-dimensional invariants by consideration of the different boundaries of the three-dimensional toric code before giving some concluding remarks. Technical details of calculations made in Sec. \[Sec:ThreeDimensionalToric\] are given in App. \[App:EntropyCalculations\].
Two-dimensional topological phases {#Sec:TwoDimensions}
==================================
Let us first sketch the proof that $\gamma$ must be nonvanishing for a two-dimensional model to give rise to anyonic excitations. We begin by considering the creation of two quasiparticles by a string-like operator $U$. Then we identify a condition on $U$ that ought to be satisfied for any anyon model. This condition, which shall be explained shortly, implies that the action of $U$ on the ground state can be approximated by a unitary operator $U'$ which lies only in the vicinity of the quasiparticles, with an approximation error that scales as $O(\gamma^{\frac{1}{2}})$. We show this using the fact that $U'$ has no common support with $V$ and thus commutes with it. The inequality of Eq.\[eq:main\_result\] follows from this observation.
These arguments make use of the well-known concepts in quantum information theory, and as such, we set the relevant terminology and definitions first. We use two different distance measures between quantum states $\rho$ and $\sigma$, the fidelity, $F(\rho,\sigma) = \|\rho^{\frac{1}{2}}\sigma^{\frac{1}{2}}\|_1$, and the trace distance, $D(\rho, \sigma) = \frac{1}{2}\|\rho-\sigma \|_1$. These two measures can be used interchangeably, due to their well-known relation [@Nielsen2000]: $$1-F(\rho,\sigma) \leq D(\rho,\sigma) \leq \sqrt{1-F(\rho,\sigma)^2}.\nonumber$$
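As a concrete sanity check of these definitions and of the inequalities quoted above, both measures can be computed directly for small density matrices. The sketch below uses only NumPy; matrix square roots are taken by eigendecomposition, which is valid here because density matrices are Hermitian and positive semidefinite. The example states $|0\rangle$ and $|+\rangle$ are our own choice of illustration:

```python
import numpy as np

def _sqrtm_psd(m):
    # Square root of a positive-semidefinite Hermitian matrix.
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    # F(rho, sigma) = || rho^{1/2} sigma^{1/2} ||_1  (sum of singular values)
    return np.linalg.svd(_sqrtm_psd(rho) @ _sqrtm_psd(sigma),
                         compute_uv=False).sum()

def trace_distance(rho, sigma):
    # D(rho, sigma) = (1/2) || rho - sigma ||_1
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

# Example: |0><0| versus |+><+|, for which F = 1/sqrt(2).
rho = np.array([[1, 0], [0, 0]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
sigma = np.outer(plus, plus.conj())

F, D = fidelity(rho, sigma), trace_distance(rho, sigma)
# Check 1 - F <= D <= sqrt(1 - F^2), up to floating-point tolerance.
assert 1 - F <= D + 1e-12
assert D <= np.sqrt(1 - F**2) + 1e-12
```

For the pure states chosen here the upper bound is saturated, $D = \sqrt{1-F^2}$, as expected for pure states.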
Now we go through the details of each step. Let us begin by stating the most crucial part of the argument, which is pictorially represented in FIG.\[fig:secondstep\]. To be more specific, consider a pair of quasiparticles created out of the vacuum state ${\left|\psi_0\right\rangle}$ by a string-like unitary operator $U$. We show that $$\|U{\left|\psi_0\right\rangle} - U'{\left|\psi_0\right\rangle} \|\leq O(\gamma^{\frac{1}{2}}), \label{eq:main_result_intermediate}$$ for some $U'$ that lies in the vicinity of the particles, if $U$ is *freely deformable*; we say that $U$ is freely deformable if the particles can be created by another string-like unitary operator $U_{\text{def}}$ whose support can be continuously deformed into that of $U$. This is a natural assumption that is expected to hold for many anyon models. When $\gamma \approx 0$, the above assertion implies that $U{\left|\psi_0\right\rangle} \approx U'{\left|\psi_0\right\rangle}$. In short, the effective support of $U$ is reduced. We refer to such a process as the *cleaning process* [^1].
The cleaning process relies upon two facts about general quantum states. We first lay out these observations and later explain how they can be applied to anyon models. First, any two bipartite pure states ${\left|\psi_1\right\rangle}$ and ${\left|\psi_2\right\rangle}$ that have identical density matrices over a subsystem can be mapped onto one another by applying a unitary operation only on the complementary subsystem. Second, there is a condition under which one can check the equivalence of two states from their local subsystems [@Kim2014]. In this paper, we use the second observation to argue that $U{\left|\psi_0\right\rangle}$ and ${\left|\psi_0\right\rangle}$ have the same density matrices over the complement of the support of $U'$ if $\gamma$ is small. Then we use the first observation to argue that there exists a unitary $U'$ which is supported on a smaller region, as explained in FIG. \[fig:secondstep\]. We now elaborate on these observations.
The first observation follows from the celebrated Uhlmann’s theorem [@Uhlmann1976], which asserts that $F(\rho,\sigma)$ is equal to the maximum overlap over their purifications: $$F(\rho,\sigma) = \max_{{\left|\psi_{\sigma}\right\rangle}} |{\left\langle\psi_{\sigma} | \psi_{\rho}\right\rangle} |.$$ In our context, we envision $\rho$ and $\sigma$ to be the reduced states that are inherited from some bipartite pure states ${\left|\psi_{\rho}\right\rangle}$ and ${\left|\psi_{\sigma}\right\rangle}$. If the fidelity between $\rho$ and $\sigma$ is $1$, the above relation implies that there exists a purification of $\sigma$ that has a unit overlap with ${\left|\psi_{\rho}\right\rangle}$. In particular, it would imply the existence of a unitary operator acting on the complement of the support of $\rho$, such that it maps ${\left|\psi_{\rho}\right\rangle}$ to ${\left|\psi_{\sigma}\right\rangle}$ and vice versa.
![(Color online) The division of the system into the relevant subsystems. The particles (red circles) live on $PQ$.\[fig:consistent\_regions\] ](Regions.pdf)
The second observation relies on a recently discovered fact: two locally equivalent many-body quantum states are globally equivalent under a certain condition. If $\rho_{ABC}$ and $\sigma_{ABC}$ are consistent over $AB$ and $BC$, i.e., $\rho_{AB}=\sigma_{AB}$ and $\rho_{BC}=\sigma_{BC}$, the following inequality holds: $$D(\rho_{ABC},\sigma_{ABC})^2 \leq I(A:C|B)_{\rho} + I(A:C|B)_{\sigma}, \label{eq:LE_GE}$$ where $I(A:C|B)_{\rho} = S(\rho_{AB}) + S(\rho_{BC})- S(\rho_{B}) - S(\rho_{ABC})$ is the conditional mutual information for density matrix $\rho$ [@Kim2014].
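The conditional mutual information appearing in Eq.\[eq:LE\_GE\] is straightforward to evaluate numerically from reduced density matrices. As a toy illustration (the three-qubit states below are our own assumed examples, not states of any particular Hamiltonian), the GHZ state gives $I(A:C|B)=\log 2$, while a product state gives zero:

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy in natural-log units.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def reduced(psi, keep):
    # Reduced density matrix of a 3-qubit pure state on the qubits in `keep`.
    t = psi.reshape(2, 2, 2)
    out = [i for i in range(3) if i not in keep]
    rho = np.tensordot(t, t.conj(), axes=(out, out))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def cmi(psi):
    # I(A:C|B) = S(AB) + S(BC) - S(B) - S(ABC), with qubits A=0, B=1, C=2.
    return (entropy(reduced(psi, [0, 1])) + entropy(reduced(psi, [1, 2]))
            - entropy(reduced(psi, [1])) - entropy(reduced(psi, [0, 1, 2])))

ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1 / np.sqrt(2)
product = np.zeros(8, dtype=complex); product[0] = 1.0

I_ghz, I_product = cmi(ghz), cmi(product)
```

Here the GHZ state violates the smallness of the right-hand side of Eq.\[eq:LE\_GE\], which is consistent with the fact that its local reduced states do not determine the global state.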
So far we have discussed two general facts about quantum states. The natural course is to explain what these facts imply for anyon models. Without loss of generality, let us choose $\rho$ to be the ground state, i.e., $\rho = {\left|\psi_0\right\rangle} \! \! {\left\langle \psi_0 \right|}$ and $\sigma$ to be the excited state, i.e., $U\rho U^{\dagger} = U_{\text{def}} \rho {U_{\text{def}}}^{\dagger}$. We divide the systems into the regions shown in FIG.\[fig:consistent\_regions\], for reasons that will soon become apparent. It should be noted that $\rho$ and $\sigma$ must have the same density matrices over $AB$ and $BC$ since $U$ can be deformed to have a support complementary to these regions. Importantly, this implies that we can use Eq.\[eq:LE\_GE\].
We estimate the right-hand side of Eq.\[eq:LE\_GE\] for the choices we have just made. Quantum entropy obeys strong subadditivity [@Lieb1972], which implies that $I(A:C|B)_{\rho}\leq I(APQ:C|B)_{\rho}$. Recall that the entanglement entropy over a region is equal to the entanglement entropy over its complement if the global state is pure. Therefore, the right-hand side of Eq.\[eq:LE\_GE\] can be bounded by the sum of $S(\rho_{BC}) + S(\rho_{CD}) - S(\rho_{B}) - S(\rho_{D})$ and $S(\sigma_{BC}) + S(\sigma_{CD}) - S(\sigma_{B}) - S(\sigma_{D})$. Since $U$ can be freely deformed to be supported on the complement of $BCD$, we have that $S(\rho_R) = S(\sigma_R)$ for $R =B,\, D,\, BC$ and $CD$. We therefore obtain the bound $$D(\rho_{ABC},\sigma_{ABC})^2 \leq 2\left[ S(\rho_{BC}) + S(\rho_{CD}) - S(\rho_{B}) - S(\rho_{D}) \right].$$
Having obtained an upper bound for $D(\rho_{ABC},\sigma_{ABC})^2$ that depends only on the ground state $\rho$, it can be evaluated for topologically ordered states using Eq.\[eq:TEE\]. We arrive at the conclusion that $\rho_{ABC}$ can be approximated by $\sigma_{ABC}$ with an approximation error of $2\gamma^{\frac{1}{2}}$, i.e., $D(\rho_{ABC}, \sigma_{ABC}) \leq 2 \gamma^{\frac{1}{2}}.$ If $\gamma \approx 0$, $\rho_{ABC} \approx \sigma_{ABC}$. By Uhlmann’s theorem, this would imply that $U{\left|\psi_0\right\rangle}$ can be mapped into ${\left|\psi_0\right\rangle}$ by applying a unitary operator on the complement of $ABC$, thus proving Eq.\[eq:main\_result\_intermediate\].
Intuitively, this leads to a contradiction if the particle carries a nontrivial topological charge. This is due to the defining characteristics of such particles: that they cannot be created or annihilated locally. We use two simple facts to show this concretely. First, $V{\left|\psi_0\right\rangle}= e^{i\phi} {\left|\psi_0\right\rangle}$. This means that the process $V$ acts trivially on the ground state. Second, $V$ commutes with $U'$. This is due to the fact that the support of $U'$ lies only in the vicinity of the quasiparticles, whereas the support of $V$ can be made to be far away from the quasiparticles. Since the norm is invariant under unitary rotation, $$\|VU{\left|\psi_0\right\rangle} - VU' {\left|\psi_0\right\rangle} \| = \|U{\left|\psi_0\right\rangle} - U'{\left|\psi_0\right\rangle} \| \leq O(\gamma^{\frac{1}{2}}).$$ It should be noted that $VU' {\left|\psi_0\right\rangle}$ is actually equal to $U'V{\left|\psi_0\right\rangle}$ due to the commutation relation. Since $V$ acts trivially on the ground state, $$\|U'V{\left|\psi_0\right\rangle} - UV{\left|\psi_0\right\rangle} \| = \|U'{\left|\psi_0\right\rangle} -U{\left|\psi_0\right\rangle} \|\leq O(\gamma^{\frac{1}{2}}).$$ Applying the triangle inequality to the above two inequalities, we arrive at Eq.\[eq:main\_result\].
Three-dimensional topological phases {#Sec:ThreeDimensions}
====================================
So far we have explained why $\gamma$ must attain a nonzero value if anyons exist in two-dimensional systems; otherwise any excitation can be created locally from the vacuum. This is an instructive example which demonstrates the fundamental connection between the ground-state entanglement and the properties of the low-energy excitations. This intuition can be extended to systems of higher dimension to probe the nature of different types of quasiparticle excitations. Further, we can develop our intuition to study the boundaries of topological phases, where the physics of a system will change.
Near the boundary, certain topologically nontrivial excitations can be created locally out of the vacuum. This is because certain boundaries are capable of absorbing, or ‘condensing’ certain topological excitations [@Bravyi1998; @Kitaev2012; @Levin2013; @Barkeshli2013]. As such, the aforementioned argument can be modified accordingly to identify boundaries that condense topological charges. Conversely, it follows from our argument that phases that support topological excitations on their boundaries necessarily have nonzero topological entanglement entropy.
![(Color online) Regions that define $\gamma_{\text{point}}$. The green surface in (a) is the physical boundary. (b) and (c) show regions $CD$, and $B$, respectively. The red dots in (d) represent the point-like excitations we wish to examine. \[fig:obstructions\_3D\_point\]](PointParticleEntropy.pdf){width="\columnwidth"}
Remarkably, in 3D, topological phases can host exotic line-like quasiparticle excitations that carry nontrivial topological charge, as well as point-like excitations. To this end we can construct topological invariants to identify both point-like and line-like topological excitations by consideration of the support of their creation operators.
We give two topological invariants that are applicable to the boundaries of three-dimensional topological phases. The first, the [*point topological entanglement entropy*]{}, is designed to learn the nature of point-like particles near a boundary. The second, the [*line topological entanglement entropy*]{}, achieves a null value for boundaries where all line-like excitations are topologically trivial. Our invariants are obtained following an argument similar to that given in the previous Section.
Point topological entanglement entropy
--------------------------------------
We define the point topological entanglement entropy, $\gamma_{\text{point}}$, as $$\gamma_{\text{point}} = S(\rho_{BC}) + S(\rho_{CD}) - S(\rho_{B}) - S(\rho_{D}), \label{Eqn:GammaPoint}$$ where regions $B$, $C$, and $D$ are shown in FIG.\[fig:obstructions\_3D\_point\]. Region $A$ is the complementary subsystem of region $BCD$. The regions in FIG.\[fig:obstructions\_3D\_point\] are labeled such that they perform analogous roles to the regions with the same labels in FIG.\[fig:consistent\_regions\] in the 2D argument given in the previous Section. For brevity, we have not shown the regions $P$ and $Q$ we have used in the previous Section. These regions are implicitly included in region $A$ adjacent to the parts of region $D$ where the quasiparticles are created. Drawing this analogy allows us to generalize the 2D argument in a natural way to study point excitations on the boundaries of 3D systems. At a high level, one can imagine creating a pair of quasiparticles by applying a deformable string-like operator $U$ that is supported on subsystem $CD$ that, importantly, includes part of the boundary. If $ \gamma_{\text{point}}$ is small, the action of $U$ on the ground state can be approximated by $U'$ which lies in the vicinity of the quasiparticles. Such $U'$ exists only if the quasiparticles can be created locally near to a boundary. Conversely, if there are any point-like excitations that cannot be created by such $U'$, $ \gamma_{\text{point}}$ cannot vanish.
As we did in the 2D case, let us compare two states, the vacuum state, $\rho$, and an excited state with two point-like excitations, $\sigma$; see FIG.\[fig:obstructions\_3D\_point\](d). By our assumption that $U$ is freely deformable, both states have identical density matrices over $BC$ and $CD$. With $A$ the complement of the regions depicted in FIG.\[fig:obstructions\_3D\_point\](a), the trace distance $D(\rho_{ABC}, \sigma_{ABC})$ is upper bounded by ${\gamma_{\text{point}}}^{1/2}$. If $\gamma_{\text{point}}$ is $0$, Eq.\[eq:LE\_GE\] implies that $\rho$ and $\sigma$ are identical over $ABC$. By invoking Uhlmann’s theorem, we conclude that there must exist a unitary operator in the complement of $ABC$ that maps $\rho$ to $\sigma$. Since this region is in the vicinity of the particles, we conclude that the particles can be annihilated or created locally. If there are point-like excitations that cannot be condensed at the boundary, $\gamma_{\text{point}}$ cannot vanish.
Line topological entanglement entropy
-------------------------------------
![(Color online) Regions that define $\gamma_{\text{line}}$. (a) The green surface is the physical boundary. (b) and (c) show individually the annular region $D$ and spherical region $B$, respectively. The red line in (d) represents the line-like excitation we wish to identify.\[fig:obstructions\_3D\_line\]](LineParticleEntropy.pdf){width="\columnwidth"}
A similar argument can be carried out for the line-like excitations. We define the [*line topological entanglement entropy*]{}, $\gamma_{\text{line}}$, by the equation $$\gamma_{\text{line}} = S(\rho_{BC})+S(\rho_{CD})-S(\rho_{B})-S(\rho_{D}), \label{Eqn:GammaLine}$$ where regions $B$, $C$, and $D$ are shown in FIG.\[fig:obstructions\_3D\_line\]. Again, subsystem $A$ is the complement of subsystem $BCD$.
Once again our previous argument holds; we imagine creating a line-like excitation by applying a unitary operator $U$ that has nontrivial support on subsystem $CD$, as is shown in FIG.\[fig:obstructions\_3D\_line\](d). If $\gamma_{\text{line}} $ is small, the action of $U$ on the ground state can be approximated by $U'$ which lies in the vicinity of the line-like excitations. Such $U'$ exists only if either the system has no topologically nontrivial line particles, or if the boundary can absorb all the line-like excitations of the system. As before, it is also true that if there are any line-like excitations that cannot be created by some $U'$, the quantity $\gamma_{\text{line}} $ cannot vanish.
For completeness we explicitly make the argument explaining why the line topological entanglement entropy is a topological invariant. We compare two states, the vacuum state, $\rho$, and an excited state with a loop-like excitation, $\sigma$. By our assumption that $U$ is freely deformable, both states have identical density matrices over $BC$ and $CD$. As we did previously, we denote $A$ as the complement of the regions depicted in FIG.\[fig:obstructions\_3D\_line\](a). The trace distance $D(\rho_{ABC}, \sigma_{ABC})$ for such regions is upper bounded by ${\gamma_{\text{line}}}^{1/2}$. If $\gamma_{\text{line}}$ is $0$, Eq.\[eq:LE\_GE\] implies that $\rho$ and $\sigma$ are identical over $ABC$. Uhlmann’s theorem then implies that there must exist a unitary operator in the complement of $ABC$ that maps these two states. Since this region is a solid torus that surrounds the loop-like excitation, vanishing line topological entanglement entropy implies that the loop-like excitation can be condensed at the boundary.
Universality
------------
In the 2D case, the linear combination was concocted in such a way that the area terms in Eq.\[eq:TEE\] cancel each other out. Based on a general physical intuition that the leading term is due to the short-range entanglement across the cut, we expect a similar behavior for the regions in FIG.\[fig:obstructions\_3D\_point\] and FIG.\[fig:obstructions\_3D\_line\]. It should be noted that the physical boundary does not contribute to such short-range entanglement, since the vacuum that lies beyond the physical boundary is not entangled with the medium. Assuming such a behavior indeed holds, one can easily see that the contributions from the short-range entanglement are canceled out.
The remaining term is invariant under smooth deformation of the regions. Therefore, we expect it to be a topological invariant that characterizes the phase. In particular, we have shown that the point (line) topological entanglement entropy becomes $0$ only if all the point-like (line-like) excitations can be condensed at the given boundary. Moreover, our arguments show that we expect positive values for $\gamma_\text{point}$ and $\gamma_\text{line}$ if the studied boundaries support nontrivial point-like or line-like excitations, respectively. This is surprising given the recent results in Refs. [@Keyserlingk2012; @Bullivant2015], where it is shown that certain topological phases of matter with topological excitations on the boundary do not give rise to positive topological order parameters when one studies the bulk of the system. In contrast, our argument proves that the point topological entanglement entropy must be nontrivial for modular Walker-Wang models [@Walker2011; @Keyserlingk2012]. We point out that while our diagnostics give rise to positive values for boundaries where topological excitations are realized, we have not shown that a nonzero value guarantees a system with topological excitations at the boundary. It seems unlikely that one could give such a proof, as examples of topologically trivial systems that show nontrivial topological behaviour with respect to certain entropic invariants are known [@Bravyi_Counterexample2012]. To this end, one must be wary when using our entropic invariants, or indeed any entropic invariants, to identify topological order.
Analyzing the three-dimensional toric code {#Sec:ThreeDimensionalToric}
==========================================
In Sec. \[Sec:ThreeDimensions\] we have introduced two ground-state topological invariants, and we have argued they will give nonzero values for models that give rise to topological excitations on their boundary. In this Section we use the point topological entanglement entropy and the line topological entanglement entropy to examine the different boundaries of the well understood model, the 3D toric code. In particular, we show that our invariants can be used to determine properties of different boundaries with respect to the types of excitations they are able to absorb.
The 3D toric code [@Hamma2005b] in the bulk has two-types of excitations; one point-like excitation and one line-like excitation, as shown in FIG. \[ToricExcitations\] (a) and (b) respectively. Point-like excitations are created in pairs at the endpoints of string-like creation operators, and line-like excitations form closed loops around the boundary of membrane-like creation operators. The model acquires an $e^{\text{i}\pi}$ phase if a point excitation is moved through a closed line excitation and returned to its initial position, as shown in FIG.\[ToricExcitations\](c).
![(a) Two point-like excitations created at the end points of a string operator. (b) A line-like excitation that is created on the boundary of a membrane operator. (c) A point-like excitation braided through the line-like excitation and returned to its initial position introduces a non-trivial $-1$ phase to the system.\[ToricExcitations\]](ToricExcitations.pdf)
The toric code has two types of boundary, a [*rough boundary*]{} and a [*smooth boundary*]{}. The 3D boundaries generalize straightforwardly from the 2D case [@Bravyi1998]. Close to a boundary, the excitations of the model change non-trivially. A rough boundary absorbs point-like excitations. Therefore, in the vicinity of a rough boundary, we find only line-like excitations are topologically nontrivial. Conversely, a smooth boundary absorbs line-like excitations. We see that the presented diagnostics can distinguish these different boundaries for the considered example.
The von Neumann entropy for subsystems of the three-dimensional toric code
--------------------------------------------------------------------------
To employ our topological invariants, we must first find a general formula for the von Neumann entropy of subsystems of the 3D toric code where subsystems may include qubits at either a rough or a smooth boundary.
The bulk entanglement entropy of region $R$ for the 3D toric code [@Castelnovo2008; @Grover2011; @Keyserlingk2012] is $$S(\rho_R) = A_R - n_R \log 2, \label{ToricEnt}$$ where $A_R$ is the surface area of the boundary of region $R$, denoted $\partial R$. The term $n_R$ is the number of disjoint connected surfaces, $\partial R_j$, of $\partial R$, such that $\partial R = \partial R_1 \sqcup \partial R_2 \sqcup \dots \sqcup \partial R_n$. To calculate $\gamma_{\text{point}}$ and $\gamma_{\text{line}}$, we generalize Eq.\[ToricEnt\] for the toric code to regions that include boundary qubits. These calculations are found explicitly using the method of [@Fattal2004] in App. \[App:EntropyCalculations\]. To summarize App. \[App:EntropyCalculations\], we find that the topological contribution from a boundary component $\partial R_j$ that bounds qubits from a smooth boundary is unchanged. Therefore the boundary component $\partial R_j$ contributes a single unit to the topological term. In contrast, we find that each boundary component, $\partial R_j$, that bounds any qubits of the rough boundary will contribute nothing to the topological term. We therefore arrive at the general formula $$S(\rho_R) = A_R - N_R \log 2, \label{Eqn:GeneralFormula}$$ with $A_R$ the surface area contribution of the boundary of region $R$, and $N_R$ the number of disjoint boundary components $\partial R_j$ that enclose no qubits from a rough boundary.
The smooth boundary of the three-dimensional toric code
-------------------------------------------------------
We can apply Eq.\[Eqn:GeneralFormula\] to find $\gamma_{\text{point}} $ and $\gamma_{\text{line}} $ for the smooth boundaries of the 3D toric code. For the regions given in FIG.\[fig:obstructions\_3D\_point\] we have $$N_{BC} = 1, \, N_{CD} = 1, \, N_{B} = 1, \, N_{D} = 2.$$ Similarly, for the regions given in FIG.\[fig:obstructions\_3D\_line\] we have $$N_{BC} = 1, \, N_{CD} = 1, \,N_{B} = 1, \, N_{D} = 1.$$
Given that the local contributions for the terms in Eq.\[Eqn:GammaPoint\] and Eq.\[Eqn:GammaLine\] cancel, we obtain $$\gamma_{\text{point}} = \log 2, \quad \gamma_{\text{line}}= 0 ,\label{Eqn:GammaPoint_3DTC}$$ at a smooth boundary of the 3D toric code. As predicted, this result is indicative of the existence of topological point particles that cannot be absorbed at the boundary. The null result for $\gamma_{\text{line}}$ shows that all line-like excitations are absorbed by the smooth boundary.
The rough boundary of the three-dimensional toric code
------------------------------------------------------
We finally evaluate $\gamma_{\text{point}} $ and $\gamma_{\text{line}} $ for the rough boundary of the 3D toric code. We find that $N_R = 0$ for all regions used in Eq.\[Eqn:GammaPoint\], as all the disjoint components of the boundaries of the regions in FIG.\[fig:obstructions\_3D\_point\] enclose qubits in the rough boundary. Conversely, we have that $N_B = 1$ for region $B$ in FIG.\[fig:obstructions\_3D\_line\], as region $B$ does not touch the boundary. Otherwise we have $N_R = 0$ for all $R \not= B$ that are used to find $\gamma_\text{line}$ in Eq.\[Eqn:GammaLine\]. We thus obtain $$\gamma_{\text{point}} = 0, \quad \gamma_{\text{line}}= \log 2. \label{Eqn:GammaLine_3DTC}$$ Once again, these are the expected results given that the rough boundary absorbs all the point-like excitations of the 3D toric code, but does not absorb line-like excitations. This result, together with Eq.\[Eqn:GammaPoint\_3DTC\], demonstrates that we can identify boundaries that condense point-like or line-like excitations using our invariants. This is indicated by the null values of $\gamma_{\text{point}}$ or $\gamma_{\text{line}}$.
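The bookkeeping above is mechanical enough to script. The sketch below is a direct transcription of Eqs.\[Eqn:GammaPoint\] and \[Eqn:GammaLine\] using Eq.\[Eqn:GeneralFormula\]; since the area contributions $A_R$ cancel between the four regions, only the topological counts $N_R$ survive, and the counts used are those quoted in the text for the smooth and rough boundaries:

```python
import numpy as np

LOG2 = np.log(2)

def gamma(N_BC, N_CD, N_B, N_D):
    """gamma = S(BC) + S(CD) - S(B) - S(D), with S(R) = A_R - N_R log 2
    (Eq. GeneralFormula). The area terms A_R cancel by construction of
    the regions, leaving only the topological counts."""
    return (N_B + N_D - N_BC - N_CD) * LOG2

# Smooth boundary: counts read off from the regions of
# FIG. obstructions_3D_point and FIG. obstructions_3D_line.
gamma_point_smooth = gamma(N_BC=1, N_CD=1, N_B=1, N_D=2)  # log 2
gamma_line_smooth = gamma(N_BC=1, N_CD=1, N_B=1, N_D=1)   # 0

# Rough boundary: every component enclosing rough-boundary qubits has
# N = 0; only region B of the line construction avoids the boundary.
gamma_point_rough = gamma(N_BC=0, N_CD=0, N_B=0, N_D=0)   # 0
gamma_line_rough = gamma(N_BC=0, N_CD=0, N_B=1, N_D=0)    # log 2
```

The four values reproduce Eqs.\[Eqn:GammaPoint\_3DTC\] and \[Eqn:GammaLine\_3DTC\]: each boundary type yields a null value precisely for the excitation type it condenses.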
Conclusion
==========
By considering the support of quasiparticle creation operators, we have shown that we can obtain new entropic invariants for local gapped Hamiltonians using information-theoretic arguments. We have used these methods to find two new order parameters for the boundary theories of 3D topological models. We have demonstrated that the proposed measures are effective by studying the boundaries of the 3D toric code. This result is remarkable given that we cannot distinguish between different excitation types in the bulk of 3D topological phases using entropic diagnostics [@Grover2011].
One might consider using the proposed topological invariants to interrogate the structure of more general classes of topologically ordered systems [@Walker2011] with exotic surface theories, where perhaps the bulk topological entanglement contribution is zero [@Keyserlingk2012]. It will be interesting to find a quantitative expression for more general theories of boundary excitations using our methods. Another class of models of recent interest in this respect is bosonic topological insulators with surface anyon theories [@Vishwanath2013; @Metlitski2013]. One might also consider using the present general proof to find new topological invariants for other interesting phases, such as fractal topological quantum field theories [@Haah2011; @Yoshida2013].
IK’s research at Perimeter Institute is supported in part by the Government of Canada through NSERC and by the Province of Ontario through MRI. BJB is supported by the EPSRC.
The von Neumann entropy of the three-dimensional toric code {#App:EntropyCalculations}
===========================================================
![A star operator, shown in red. The star operator for vertex $v$ supports a Pauli-X operator on each of the edges incident to vertex $v$. Some plaquette operators are shown in blue on different planes. A plaquette supports a Pauli-Z operator on each of the edges that bound its face of the square lattice. \[Stabilizers\]](Stabilizers.pdf)
Here we study the bipartite entanglement between simple regions of the ground state of the 3D toric code lattice. We use the method given in Ref. [@Fattal2004] to find the entanglement entropy of a ball-shaped region in the bulk, and ball-shaped regions that enclose some of the qubits in a rough and a smooth boundary.
The 3D toric code is defined on a square lattice with qubits arranged on its edges. Its degenerate ground space, spanned by basis vectors ${\left|\psi_j\right\rangle}$, is described using the stabilizer formalism [@Gottesman1997]. Specifically, it is described by its (Abelian) stabilizer group $\mathcal{S}$, the group of Pauli operators $S$ satisfying $S{\left|\psi_j\right\rangle} = {\left|\psi_j\right\rangle}$ for all $j$. The stabilizer group for the 3D toric code contains two types of stabilizers: star and plaquette operators, shown in FIG.\[Stabilizers\].
We use the method of Fattal [*et al.*]{} [@Fattal2004] to find the entanglement entropy between two subsystems, $A$ and $B$, which we briefly summarize. We consider an independent generating set of the stabilizer group with elements $S_j \in \mathcal{S}$. We write the generators $S_j = S_j^A \otimes S_j^B$, where $S_j^A$ is supported on subsystem $A$ and $S_j^B$ is supported on subsystem $B$. We then study the restriction of the generating set to one of the subsystems of interest. Without loss of generality, we consider the restriction of the stabilizer group to subsystem $A$.
The restricted stabilizer generators, $S_j^A$, do not in general commute. The method of Fattal [*et al.*]{} looks to find a generating set where each restricted generator either commutes with all other restricted generators, or anti-commutes with exactly one other restricted generator. Specifically, we look for $2k$ elements of the restricted generating set that satisfy $$\left\{ S^A_{2j-1}, S^A_{2j} \right\} = 0,$$ for all $1 \le j \le k$. The state described by $\mathcal{S}$ then shares $k$ ebits of entanglement between subsystems $A$ and $B$. Generating sets where we are able to count [*pairs*]{} of anti-commuting operators when restricted to a subsystem are said to be in [*canonical form*]{}. The result of Fattal [*et al.*]{} shows that it is always possible to find a generating set in canonical form for any bipartition of the stabilizer group.
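A minimal sketch of this pair counting: for an independent generating set, the number of canonical anti-commuting pairs equals half the GF(2) rank of the matrix of symplectic products between the restricted generators. The function names below are our own, not from Ref. [@Fattal2004]; Pauli strings are written as pairs of X and Z bit vectors:

```python
import numpy as np

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

def ebits(generators, region):
    """Ebits across a bipartition: half the GF(2) rank of the matrix of
    symplectic products between generators restricted to `region`.
    Each generator is a pair (x_bits, z_bits) over all qubits."""
    n = len(generators)
    gram = np.zeros((n, n), dtype=np.uint8)
    for i, (xi, zi) in enumerate(generators):
        for j, (xj, zj) in enumerate(generators):
            gram[i, j] = sum((xi[q] & zj[q]) ^ (zi[q] & xj[q]) for q in region) % 2
    return gf2_rank(gram) // 2

# Bell pair, stabilized by XX and ZZ: restricting to qubit 0 leaves the
# anti-commuting pair X, Z, i.e. one shared ebit.
bell = [([1, 1], [0, 0]), ([0, 0], [1, 1])]
print(ebits(bell, region=[0]))  # 1
```

The symplectic Gram matrix always has even rank, so the division by two is exact.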
We must find a generating set of the stabilizer group of the 3D toric code that is in canonical form under a given bipartition. This enables us to count the ebits shared between two subsystems. Importantly, the generating set is overcomplete if we include $B_f$ operators for all the faces. This is seen by taking the product of all the plaquette operators corresponding to the faces that bound a cube. This product is the identity, showing that the generating set is overcomplete: the eigenvalue of any one of these stabilizers is determined by the others.
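This dependency can be checked directly: writing each plaquette as its GF(2) support vector over the twelve edges of a single cube, the six face operators sum to zero mod 2, because every edge of the cube bounds exactly two of its faces. A small sketch (the edge labeling is our own convention):

```python
import itertools
import numpy as np

# Label the 12 edges of a single cube by (direction, base vertex).
edges = [(d, v) for v in itertools.product([0, 1], repeat=3)
         for d in range(3) if v[d] == 0]
edge_index = {e: i for i, e in enumerate(edges)}

def face_support(normal, offset):
    """GF(2) support vector of the plaquette on the cube face with the
    given normal direction and offset (0 or 1)."""
    vec = np.zeros(len(edges), dtype=np.uint8)
    for (d, v), i in edge_index.items():
        # An edge lies in the face when it is perpendicular to the
        # normal and sits at the face's offset along the normal.
        if d != normal and v[normal] == offset:
            vec[i] = 1
    return vec

faces = [face_support(n, o) for n in range(3) for o in (0, 1)]
total = sum(faces) % 2

print(all(f.sum() == 4 for f in faces))  # True: each plaquette has 4 edges
print(total.any())  # False: the six plaquettes multiply to the identity
```

Since the product is the identity, dropping any one plaquette per cube leaves the generated group unchanged.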
We choose an independent generating set that includes all plaquette operators that lie parallel to the $xy$ and $yz$ planes, and we only take the plaquette operators parallel to the $xz$ plane in a single plane at some fixed $y$. We are free to choose which plane, and for simplicity we always take this plane to be far away from the region of interest in the entropy calculation. For this reason, in all the calculations we make, it is sufficient to consider only the plaquette generators parallel to the $xy$ and $yz$ planes.
Similarly, we point out that we need not account for the logical operators that may appear in the generating set of the stabilizer group. Logical operators can always be deformed away from the regions of interest on the lattice, and as such they never contribute to the entanglement in any of the bipartitions we study. Moreover, our results are independent of the choice of ground state.
The von Neumann entropy of a ball in the bulk
---------------------------------------------
We now consider the entropy of a ball in the 3D toric code, see FIG.\[Jelly\]. On the left of this figure, we show the corner of a region, where the region is filled with transparent green ‘jelly’. We show some examples of the restriction of star and plaquette operators outside the green jelly. We seek a canonical generating set.
![\[Jelly\] (Left) The corner of a ball-shaped region, labeled $A$, of the 3D toric code. We show the support of one star and two plaquette operators on region $B$ by Pauli-X and Pauli-Z operators. (Right) We represent operators with non-trivial support on regions $A$ and $B$ as a graph. Star operators are represented as vertices, and plaquette operators are represented as edges. Every edge incident to a vertex represents a plaquette that anti-commutes with the corresponding star operator. Clearly, the natural generating set is not in canonical form.](Jelly.pdf)
![The graph for a cuboid-like, ball-shaped region in the bulk of the 3D toric code with double edges removed. The top face differs from the side faces due to the anisotropic generating set. The entanglement entropy does not depend on the choice of generating set, as will become apparent as we progress through the calculation. \[Cuboid\]](Cuboid.pdf)
We simplify FIG.\[Jelly\] by representing the restricted stabilizers on a graph of vertices and edges. Vertices are denoted by a single index, $a$, and edges take the index of two vertices, $(a,b)$, where $ a \not= b$ and $(a,b) = (b,a)$. We show the graph that corresponds to the corner of the region on the right of FIG.\[Jelly\]. In this graph, vertices represent the restriction of star operators to region $A$, and each edge represents the restriction of an independent plaquette operator. An edge that is incident to a vertex represents a restricted plaquette operator that anti-commutes with the restricted star operator represented by the adjacent vertex. The graph is not in canonical form, as there are many edges incident to each vertex.
We show the full graph for the restriction of a ball-shaped region in FIG.\[Cuboid\], where any double edges connecting two vertices are removed. We will see why we are free to replace double edges with single edges shortly.
![(Left) Double edges can be replaced by single edges, without any contribution to the entanglement of the region. (Middle) We replace restricted generator edge 4 with the product of all edges 1, 2, 3 and 4, allowing us to remove edge 4 from the graph. (Right) In general, we can always remove an edge from a circuit due to the circuit rule. \[Rules\]](Rules.pdf)
![We use the rules we have introduced to show that a face with a square grid of edges is equivalent to a face that contains only vertical edges. The right equality is obtained with further use of the circuit rule. \[Faces\]](CuboidFaces.pdf)
We face the task of finding edges that we are allowed to remove from the graph to find a canonical generating set while still generating $\mathcal{S}$. We complete the entropy calculation by introducing rules that enable us to find canonical form and count the ebits of entanglement shared between the region and its complement.
For a ball-shaped region in the bulk of the lattice, $R$, we recover the known result $$S(\rho_R) = A_R - 1, \label{Eqn:BulkEntropyResult}$$ where $A_R$ is the number of star operators with nontrivial support on both subsystem $A$ and subsystem $B$. This is equal to the number of vertices in the graph. We will observe that all but one vertex operator contribute to the entanglement, which recovers the result obtained in the literature, given our definition of the surface area.
We now look to find a canonical generating set. In the first step, we remove double edges, as we have already done in FIG.\[Cuboid\]. We are free to do this due to the [*circuit rule*]{}. Before introducing the circuit rule, we first define a [*series*]{}, and a [*circuit*]{} of edges.
A series of length $x$ is a set of edges $e_j = (a_j, b_j)$ for $1 \le j \le x$ such that $b_j = a_{j+1}$ for $ 1 \le j \le x-1 $. Moreover, each vertex appears in no more than two edges of the series.
We also define a circuit, which is a special case of a series of edges:
A circuit is a series of $x$ edges $e_j = (a_j, b_j)$ such that $b_x = a_1$.
Having introduced a series and a circuit, we are able to introduce the circuit rule:
We can remove a single edge from a circuit without affecting the entanglement of the partition.
We give examples of circuits and the circuit rule in FIG.\[Rules\]. To show the circuit rule, we consider the explicit examples of the (Left) and (Middle) cases of circuits shown in FIG.\[Rules\]. For (Left), we see two edges, $e_1$ and $e_2$. Their corresponding restricted generators anti-commute with the star operators represented by the two vertices adjacent to $e_1$ and $e_2$. To remove the generator corresponding to $e_2$, we replace it with the product of the generators represented by $e_1$ and $e_2$. This effectively removes $e_2$, as the new restricted generator commutes with all the star operators shown on the graph.
Similarly, as shown in FIG.\[Rules\](Middle), we can remove a single edge from four edges, $e_1$, $e_2$, $e_3$ and $e_4$, bounding a square face. We replace the plaquette operator represented by $e_4$ with the product of all the stabilizer generators corresponding to the edges bounding the square such that the new restricted generator commutes with all the vertices of the graph. We thus effectively remove the edge from the graph. This rule trivially generalizes to any circuit. We show this generalization in FIG.\[Rules\](Right).
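The generalization rests on a parity argument: every vertex of a circuit is incident to exactly two of its edges, so the product of the restricted plaquettes along the circuit commutes with every restricted star. A sketch for the length-four circuit of FIG.\[Rules\](Middle), using an edge-vertex incidence matrix to record anticommutation:

```python
import numpy as np

# The length-four circuit of FIG. Rules (Middle), on graph vertices 0..3.
circuit = [(0, 1), (1, 2), (2, 3), (3, 0)]
n_vertices = 4

# incidence[e, v] = 1 when the restricted plaquette of edge e
# anti-commutes with the restricted star of vertex v.
incidence = np.zeros((len(circuit), n_vertices), dtype=np.uint8)
for e, (a, b) in enumerate(circuit):
    incidence[e, a] = incidence[e, b] = 1

# The product of the four restricted plaquettes anti-commutes with a star
# exactly where the mod-2 sum of the incidence rows is 1: nowhere, since
# every vertex of a circuit touches exactly two of its edges.
product = incidence.sum(axis=0) % 2
print(product)  # [0 0 0 0]
```

Replacing the generator of one edge by this product therefore detaches that edge from every vertex of the graph.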
![A loose end of $x$ black edges equates to $x$ ebits of entanglement. \[OpenEnd\]](OpenEnd.pdf)
We apply the circuit rule to the different faces of the graph shown in FIG.\[Cuboid\]. Using this rule, we obtain the equality shown in FIG.\[Faces\] between different faces of the cuboid. We reduce all the faces of the cuboid to the form on the right-hand side of the equality in FIG.\[Faces\] for the next step in the calculation. The new face we find in FIG.\[Faces\] has [*loose ends*]{} in the graph:
A loose end is a series of $x$ edges $e_j = (a_j, b_j)$ such that the only edge incident to vertex $a_1$ is $e_1$, and the only edges incident to vertices $b_j$ are $e_j$ and $e_{j+1}$ for $1\le j \le x-1$.
The length of a loose end is proportional to its entropy contribution by the loose-end rule:
![The entanglement of a face of the cuboid graph. Each of the $A$ internal vertices on the left hand side of the equality that are removed from the graph in the right hand side of the equality correspond to $A$ ebits of entanglement due to the loose-end rule. \[FaceEnt\]](FaceEntanglement.pdf)
A loose end of $x$ edges denotes $x$ ebits of entanglement shared under the bipartition.
We show this rule pictorially in FIG.\[OpenEnd\]. We verify the loose-end rule rigorously by enumerating the restricted stabilizers, $S_j$, along the loose end. Here $S_j$ for odd $j$ are restricted $A_v$ operators, represented by vertices in the graph, and restricted $B_f$ operators, edges, have even $j$. The indices take values $1 \le j \le 2x+1$, and $2x+1$ indexes the operator corresponding to the black vertex at the end of the blue string. We have that $\{ S^A_j, \, S^A_{j+1} \} = 0$ for $1 \le j \le 2x$. We find a canonical form for the edges and vertices of the loose end by making the replacement $S_j \rightarrow S_j' = \prod_{\text{odd } k \le j} S_k$ for all odd $j$, and $S_j \rightarrow S_j' = S_j$ for even $j$. With this replacement we have $ \{ S_{2j-1}'^A, S_{2j}'^A \} = 0 $ for $1 \le j \le x$. We thus identify $x$ ebits of entanglement for a loose end of length $x$.
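This counting can be checked numerically. The $2x+1$ restricted operators along a loose end anti-commute in a path pattern, and the number of canonical pairs equals half the GF(2) rank of their anticommutation matrix, a standard fact for stabilizer canonical forms. A sketch under that assumption:

```python
import numpy as np

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

def path_gram(n):
    """Anticommutation matrix of n restricted operators along a loose
    end: operator j anti-commutes with operator j + 1 only."""
    g = np.zeros((n, n), dtype=np.uint8)
    for j in range(n - 1):
        g[j, j + 1] = g[j + 1, j] = 1
    return g

# A loose end of length x has 2x + 1 restricted operators and x ebits.
for x in range(1, 6):
    print(x, gf2_rank(path_gram(2 * x + 1)) // 2)  # x, x
```

The odd-length path matrix has a one-dimensional kernel, so exactly one operator is left unpaired, matching the explicit canonicalization above.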
![We repeatedly use the circuit rule and the loose-end rule to find the entanglement represented by the faces of a cuboid graph, where $X$ is the number of vertices removed from all the faces of the cuboid on the right hand side of the equality. \[CubeEntAndFrame\]](CubeFaceEntanglement.pdf)
We now identify the entanglement of a face of a cuboid, as shown in FIG.\[FaceEnt\]. We use the loose-end rule to see that all the vertices in each face of the graph contribute a single unit of entanglement to the calculation, and thus that all the vertices contribute to the area term of the entropy. We extend this to all the faces of the cube, as shown in FIG.\[CubeEntAndFrame\].
![\[OneLoopLeft\] We remove $Y$ vertices from the graph on the left hand side of the equation using the circuit rule and the loose-end rule. Each vertex removed contributes a single unit of entanglement to the calculation; we thus identify $Y$ ebits of entanglement on the right hand side of the equality.](CubePlusLoopEntanglement.pdf)
We can continue to use the circuit rule and the loose-end rule to arrive at the result of FIG.\[OneLoopLeft\] where only a single loop of edges remains in the graph. Importantly, all the vertices that have been removed from the graph have contributed one unit to the entanglement entropy. To complete the calculation we must assess the entanglement of the single loop of edges that remains in the graph.
A loop of length $x$ is a circuit of $x$ edges, $e_j = (a_j, b_j)$, such that the only edges of the graph incident to vertices $b_j$ are $e_{j}$ and $e_{j+1}$ for all $j$ where edge $e_{x+1} = e_{1}$.
Given the definition of a loop, we are now able to introduce the loop rule:
A loop of length $x$ denotes $x-1$ ebits of entanglement under the bipartition.
![\[ClosedLoopRule\] A loop of length $x$ denotes $x-1$ ebits of entanglement.](ClosedLoopRule.pdf)
We show the loop rule graphically in FIG.\[ClosedLoopRule\]. We consider the case of a loop carefully. As with the loose-end rule, we denote the restricted stabilizer generators in the loop by $S^A_j$, where the indices take values $1 \le j \le 2x$, with even $j$ indexing restricted plaquette operators, edges, and odd $j$ indexing restricted star operators, vertices. Initially, every $S^A_j$ anti-commutes with two other restricted generators, $S^A_{j-1}$ and $S^A_{j+1}$, where $S_{2x+1} = S_1$ to accommodate the periodic structure of the loop.
To obtain canonical form, we must replace a single star operator with the product of all the star operators in the loop, $S_1 \rightarrow S_1' = \prod_{\text{odd } k} S_k$, such that ${S_1'}^A$ commutes with all the $B_f^A$ denoted by edges in the loop. Similarly, we make the substitution $S_2 \rightarrow S_2' = \prod_{\text{even } k} S_k$, such that ${S_2'}^A$ commutes with all the other restricted operators in the loop. Having removed one edge and one vertex from the loop, we can reduce the remaining $S_j$ with $j > 2$ into canonical form using the loose-end rule. We thus identify $x-1$ ebits of entanglement.
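The loop rule admits the same numerical check: the $2x$ restricted operators of a loop anti-commute in a cycle pattern, and the cycle's anticommutation matrix has GF(2) rank $2x-2$, giving $x-1$ canonical pairs under the standard assumption that pairs are counted by half the rank. A sketch:

```python
import numpy as np

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

def cycle_gram(n):
    """Anticommutation matrix of n restricted operators around a loop:
    operator j anti-commutes with its two neighbours j - 1 and j + 1."""
    g = np.zeros((n, n), dtype=np.uint8)
    for j in range(n):
        g[j, (j + 1) % n] = g[(j + 1) % n, j] = 1
    return g

# A loop of length x has 2x restricted operators and x - 1 ebits.
for x in range(2, 7):
    print(x, gf2_rank(cycle_gram(2 * x)) // 2)  # x, x - 1
```

The even-length cycle matrix has a two-dimensional kernel, which is the origin of the single uncounted vertex and hence of the topological correction.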
The loop rule removes a single vertex from the graph without contributing to the entanglement, thus giving the universal topological contribution in the calculation, Eq.\[Eqn:BulkEntropyResult\].
![We show a graph of the restricted generators for a ball-shaped region pressed against a smooth surface, where the smooth surface is at the bottom of the cuboid. We find the result $S(\rho_R) = A_R - 1$, as in the bulk case. \[SmoothSurfaceBall\]](SmoothFaceBall.pdf)
One can check that this method extends to any region with a connected boundary, such as an annulus. Ultimately, the calculation will always reduce the graph to a loop. We are then able to remove a single vertex without contributing to the entanglement, thus always giving the desired result for a connected boundary. In general, for regions that include multiple disjoint boundaries, every connected boundary can be reduced to a single loop, enabling us to remove one vertex of the graph per connected boundary without contributing to the entanglement. The topological correction will therefore scale with the number of connected boundaries that enclose the region.
A ball on a smooth surface
--------------------------
![The graph of the restricted stabilizers for a region touching a rough face. The bottom of the graph terminates with edges, not with vertices, so we cannot use the loose-end rule to measure the entanglement here. \[RoughFaceBall\]](RoughFaceBall.pdf)
![ \[RoughEndCircuit\] The graph on the left side of the equality shows an extended circuit of black edges. Both ends of the series terminate at an edge, not a vertex. The right hand side of the equality shows a single edge removed from the extended circuit, as is permitted by the extended-circuit rule. ](RoughEndCircuit.pdf)
![\[OneLegRoughFace\] We repeatedly apply the extended-circuit rule to detach loose ends from the graph. Each of the vertices of these loose ends contributes a single unit of entanglement to the calculation by the loose-end rule.](OneLegRoughFace.pdf)
The entanglement entropy of a ball-shaped region that includes qubits from a smooth boundary gives the same topological contribution as the case we previously considered in the bulk. We show the graph of restricted generators of such a region in FIG.\[SmoothSurfaceBall\]. The result $S(\rho_R) = A_R- 1$ is obtained using the rules we have already established, where $A_R$ is the number of star operators cut by the boundary. Star operators cut near the surface are not treated differently from those cut in the bulk.
A ball on a rough surface
-------------------------
We now consider the entanglement entropy for the case where the region touches the rough face. Contrary to the cases we have considered previously, we do not find a topological contribution to the entanglement entropy. As such, we describe this calculation in detail. We show a picture of the restricted stabilizer graph in FIG.\[RoughFaceBall\]. Unlike the previous graphs we have considered, here we have edges that have only one adjacent vertex. We denote such an edge as $e_j = (a_j)$, where $a_j$ is the single vertex to which edge $e_j$ is incident. These edges represent restricted plaquette operators that anti-commute with only one restricted star operator, denoted by their single incident vertex in the graph. Such plaquette operators are found at the rough boundary of the 3D toric code.
To calculate the entanglement of this region, we introduce the extended-circuit rule. We first define an [*extended circuit*]{}:
An extended circuit is a series of $x$ edges $e_j = (a_j, b_j)$ for $ 2 \le j \le x-1$ and where edges $e_1 = (b_1)$ and $e_x = (a_x)$ contain a single vertex.
We are thus able to give the [*extended-circuit rule*]{} we require to complete the calculation:
We can remove a single edge from an extended circuit without modifying the entanglement shared across the bipartition.
We show the extended-circuit rule in FIG.\[RoughEndCircuit\]. The restricted plaquette operator represented by the missing edge on the right hand side of this equality has been replaced by the product of all restricted plaquette operators represented by solid edges on the left hand side of the Figure, such that the new generator commutes with all the vertices of the graph.
We apply the extended-circuit rule many times, together with the circuit rule and the loose-end rule, to arrive at the graph shown in FIG.\[OneLegRoughFace\].
![\[LonelyString\] The remaining graph of $x$ vertices will contribute $x$ ebits of entanglement due to the loose-end rule.](LonelyString.pdf)
The new graph has many loose ends. Repeated application of the loose-end rule, together with one use of the circuit rule, gives the graph shown in FIG.\[LonelyString\]. We complete this entropy calculation with one final application of the loose-end rule. Unlike the previous calculations, all the vertices of the graph have been removed using the loose-end rule, and not once have we obtained a loop, which previously gave the topological contribution to the entropy calculation. We thus obtain $$S(\rho_R) = A_R,$$ for the case where $R$ includes qubits of a rough boundary of the 3D toric code.
[^1]: This method of reducing the support of an operator is in the spirit of the ‘cleaning lemma’, presented for stabilizers in [@Bravyi2008].